Sample records for simple correlation-based model

  1. Geometrical correlations in the nucleosomal DNA conformation and the role of the covalent bonds rigidity

    PubMed Central

    Ghorbani, Maryam; Mohammad-Rafiee, Farshid

    2011-01-01

    We develop a simple elastic model to study the conformation of DNA in the nucleosome core particle. The model accounts for the changes in the energy of the covalent bonds that connect the base pairs of each strand of the DNA double helix, as well as the lateral displacements and rotations of adjacent base pairs. We show that, because of the rigidity of the covalent bonds in the sugar-phosphate backbones, the base pair parameters are highly correlated; in particular, a strong twist-roll-slide correlation is clearly visible in the calculated conformation of the nucleosomal DNA. This simple model succeeds in accounting for the detailed features of the nucleosomal DNA structure, particularly its most important base pair parameters, roll and slide, in good agreement with the experimental results. PMID:20972223

  2. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes, from thousands or tens of thousands, to construct accurate prediction models. We screen highly discriminative genes and gene pairs to create simple prediction models based on single genes or gene pairs, using a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. They are also interpretable, because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
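    As a minimal illustration of the decision-rule style of model described above (the gene name, threshold, and expression values are hypothetical, not taken from the paper), a single-gene rule is just a threshold test:

```python
def make_rule(threshold, above_label, below_label):
    """Build a one-gene decision rule: classify by thresholding expression."""
    def rule(expression):
        return above_label if expression >= threshold else below_label
    return rule

# Hypothetical rule: high expression of "GENE_X" indicates tumor tissue.
classify = make_rule(threshold=250.0, above_label="tumor", below_label="normal")

samples = {"patient_a": 410.5, "patient_b": 120.3}  # hypothetical expression levels
predictions = {name: classify(expr) for name, expr in samples.items()}
```

    Rule-based models of this form trade raw accuracy for interpretability: the full decision logic is visible in a single comparison.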

  3. Impact of correlated magnetic noise on the detection of stochastic gravitational waves: Estimation based on a simple analytical model

    NASA Astrophysics Data System (ADS)

    Himemoto, Yoshiaki; Taruya, Atsushi

    2017-07-01

    After the first direct detection of gravitational waves (GWs), detection of the stochastic background of GWs is an important next step, and the first GW event suggests that it is within the reach of the second-generation ground-based GW detectors. Such a GW signal is typically tiny and can be detected by cross-correlating the data from two spatially separated detectors, provided the detector noise is uncorrelated. It has been pointed out, however, that the global magnetic fields in the Earth-ionosphere cavity produce environmental disturbances at low frequencies, known as Schumann resonances, which can couple with GW detectors. In this paper, we present a simple analytical model to estimate their impact on the detection of stochastic GWs. The model depends crucially on the geometry of the detector pair through the directional coupling, and we investigate the basic properties of the correlated magnetic noise based on the analytic expressions. The model reproduces the major trend of the recently measured global correlation between GW detectors via magnetometers, and the estimated impact of the correlated noise matches the values obtained from the measurement. Finally, we discuss the implications for the detection of stochastic GWs with upcoming detectors, KAGRA and LIGO-India. The model suggests that the LIGO Hanford-Virgo and Virgo-KAGRA pairs are possibly less sensitive to the correlated noise and can achieve a better sensitivity to the stochastic GW signal in the most pessimistic case.

  4. A simple-source model of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.

    2010-10-01

    The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study, whose purpose is to develop a simple-source model of jet noise that can be compared to measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has since been extended to a Rayleigh distribution of partially correlated monopoles, which fits the measured F-22 data significantly better. The results and basis for the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and at lower engine conditions.

  5. Comparing Free-Free and Shaker Table Model Correlation Methods Using Jim Beam

    NASA Technical Reports Server (NTRS)

    Ristow, James; Smith, Kenneth Wayne, Jr.; Johnson, Nathaniel; Kinney, Jackson

    2018-01-01

    Finite element model correlation as part of a spacecraft program has always been a challenge. For any NASA mission, the coupled system response of the spacecraft and launch vehicle can be determined analytically through a Coupled Loads Analysis (CLA), as it is not possible to test the spacecraft and launch vehicle coupled system before launch. The value of the CLA is highly dependent on the accuracy of the frequencies and mode shapes extracted from the spacecraft model. NASA standards require the spacecraft model used in the final Verification Loads Cycle to be correlated by either a modal test or by comparison of the model with Frequency Response Functions (FRFs) obtained during the environmental qualification test. Due to budgetary and time constraints, most programs opt to correlate the spacecraft dynamic model during the environmental qualification test, conducted on a large shaker table. For any model correlation effort, the key has always been finding a proper definition of the boundary conditions. This paper is a correlation case study to investigate the difference in responses of a simple structure using a free-free boundary, a fixed boundary on the shaker table, and a base-drive vibration test, all using identical instrumentation. The NAVCON Jim Beam test structure, featured in the IMAC round robin modal test of 2009, was selected as a simple, well recognized and well characterized structure to conduct this investigation. First, a free-free impact modal test of the Jim Beam was done as an experimental control. Second, the Jim Beam was mounted to a large 20,000 lbf shaker, and an impact modal test in this fixed configuration was conducted. Lastly, a vibration test of the Jim Beam was conducted on the shaker table. The free-free impact test, the fixed impact test, and the base-drive test were used to assess the effect of the shaker modes, evaluate the validity of fixed-base modeling assumptions, and compare final model correlation results between these boundary conditions.

  6. Predictive Analytics In Healthcare: Medications as a Predictor of Medical Complexity.

    PubMed

    Higdon, Roger; Stewart, Elizabeth; Roach, Jared C; Dombrowski, Caroline; Stanberry, Larissa; Clifton, Holly; Kolker, Natali; van Belle, Gerald; Del Beccaro, Mark A; Kolker, Eugene

    2013-12-01

    Children with special healthcare needs (CSHCN) require health and related services that exceed those required by most hospitalized children. A small but growing and important subset of the CSHCN group includes medically complex children (MCCs). MCCs typically have comorbidities and disproportionately consume healthcare resources. To enable strategic planning for the needs of MCCs, simple screens to identify potential MCCs rapidly in a hospital setting are needed. We assessed whether the number of medications used and the class of those medications correlated with MCC status. Retrospective analysis of medication data from the inpatients at Seattle Children's Hospital found that the numbers of inpatient and outpatient medications significantly correlated with MCC status. Numerous variables based on counts of medications, use of individual medications, and use of combinations of medications were considered, resulting in a simple model based on three different counts of medications: outpatient and inpatient drug classes and individual inpatient drug names. The combined model was used to rank the patient population for medical complexity. As a result, simple, objective admission screens for predicting the complexity of patients based on the number and type of medications were implemented.

  7. Hierarchical lattice models of hydrogen-bond networks in water

    NASA Astrophysics Data System (ADS)

    Dandekar, Rahul; Hassanali, Ali A.

    2018-06-01

    We develop a graph-based model of the hydrogen-bond network in water, with a view toward quantitatively modeling the molecular-level correlational structure of the network. The networks formed are studied by constructing the model on two infinite-dimensional lattices. Our models are built bottom up, based on microscopic information coming from atomistic simulations, and we show that the predictions of the model are consistent with known results from ab initio simulations of liquid water. We show that simple entropic models can predict the correlations and clustering of local-coordination defects around tetrahedral waters observed in the atomistic simulations. We also find that orientational correlations between bonds are longer ranged than density correlations, determine the directional correlations within closed loops, and show that the patterns of water wires within these structures are also consistent with previous atomistic simulations. Our models show the existence of density and compressibility anomalies, as seen in the real liquid, and the phase diagram of these models is consistent with the singularity-free scenario previously proposed by Sastry and coworkers [Phys. Rev. E 53, 6144 (1996), 10.1103/PhysRevE.53.6144].

  8. Constrained range expansion and climate change assessments

    Treesearch

    Yohay Carmel; Curtis H. Flather

    2006-01-01

    Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...

  9. A simple, approximate model of parachute inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macha, J.M.

    1992-11-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  10. A simple, approximate model of parachute inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macha, J.M.

    1992-01-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  11. IVGTT-based simple assessment of glucose tolerance in the Zucker fatty rat: Validation against minimal models.

    PubMed

    Morettini, Micaela; Faelli, Emanuela; Perasso, Luisa; Fioretti, Sandro; Burattini, Laura; Ruggeri, Piero; Di Nardo, Francesco

    2017-01-01

    For the assessment of glucose tolerance from IVGTT data in the Zucker rat, minimal model methodology is reliable but time-consuming and costly. This study aimed to validate, for the first time in the Zucker rat, simple surrogate indexes of insulin sensitivity and secretion against the glucose-minimal-model insulin sensitivity index (SI) and against the first- (Φ1) and second-phase (Φ2) β-cell responsiveness indexes provided by the C-peptide minimal model. Validation of the surrogate insulin sensitivity index (ISI) and of two sets of coupled insulin-based indexes for insulin secretion, differing in the cut-off time between phases (FPIR3-SPIR3, t = 3 min; FPIR5-SPIR5, t = 5 min), was carried out in a population of ten Zucker fatty rats (ZFR) and ten Zucker lean rats (ZLR). Considering the whole rat population (ZLR+ZFR), ISI showed a significant strong correlation with SI (Spearman's correlation coefficient, r = 0.88; P<0.001). Both FPIR3 and FPIR5 showed a significant (P<0.001) strong correlation with Φ1 (r = 0.76 and r = 0.75, respectively), and both SPIR3 and SPIR5 with Φ2 (r = 0.85 and r = 0.83, respectively). ISI is able to detect (P<0.001) the well-recognized reduction in insulin sensitivity in ZFRs compared to ZLRs, and the insulin-based indexes of insulin secretion are able to detect in ZFRs (P<0.001) the compensatory increase of first- and second-phase secretion associated with the insulin-resistant state. The ability of the surrogate indexes to describe glucose tolerance in the ZFRs was confirmed by the Disposition Index analysis. The model-based validation performed in the present study supports the use of low-cost, insulin-based indexes for the assessment of glucose tolerance in the Zucker rat, a reliable animal model of the human metabolic syndrome.
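    For intuition about the validation statistic used above, a minimal Spearman rank correlation can be computed with NumPy alone; the synthetic indexes below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the ranks
    (simple version assuming no ties)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic stand-in for the validation: a surrogate index that is a noisy
# monotone transform of a model-based index should show a strong rank
# correlation, as reported for ISI vs. SI.
rng = np.random.default_rng(3)
model_index = rng.uniform(0.5, 5.0, size=20)            # stand-in for minimal-model SI
surrogate = np.log(model_index) + 0.1 * rng.normal(size=20)
rho = spearman_rho(model_index, surrogate)
```

    Because Spearman's ρ depends only on ranks, it is insensitive to monotone (here, logarithmic) distortion, which is one reason it suits surrogate-index validation.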

  12. Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.

    PubMed

    Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicles (UAVs), a simple form of the UAV space-time-frequency channel correlation function, which includes the LOS, SPE, and DIF components, is presented. By means of channel matrix decomposition and coefficient normalization, the analytic formula of the UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of the UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can comprehensively describe the changes of UAV-MIMO channel characteristics under different parameter settings. This analysis method provides a theoretical basis for improving the transmission performance of the UAV-MIMO channel and shows the practical value of MIMO technology in the field of UAV communication.

  13. Characteristic Analysis on UAV-MIMO Channel Based on Normalized Correlation Matrix

    PubMed Central

    Xi jun, Gao; Zi li, Chen; Yong Jiang, Hu

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicles (UAVs), a simple form of the UAV space-time-frequency channel correlation function, which includes the LOS, SPE, and DIF components, is presented. By means of channel matrix decomposition and coefficient normalization, the analytic formula of the UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of the UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can comprehensively describe the changes of UAV-MIMO channel characteristics under different parameter settings. This analysis method provides a theoretical basis for improving the transmission performance of the UAV-MIMO channel and shows the practical value of MIMO technology in the field of UAV communication. PMID:24977185

  14. Dynamical graph theory networks techniques for the analysis of sparse connectivity networks in dementia

    NASA Astrophysics Data System (ADS)

    Tahmassebi, Amirhessam; Pinker-Domenig, Katja; Wengert, Georg; Lobbes, Marc; Stadlbauer, Andreas; Romero, Francisco J.; Morales, Diego P.; Castillo, Encarnacion; Garcia, Antonio; Botella, Guillermo; Meyer-Bäse, Anke

    2017-05-01

    Graph network models have become an important computational technique in neuroscience for studying fundamental organizational principles of brain structure and function in neurodegenerative diseases such as dementia. The graph connectivity is reflected in the connectome, the complete set of structural and functional connections of the graph network, which is mostly based on simple Pearson correlation links. In contrast to simple Pearson correlation networks, partial correlations (PC) identify only direct correlations, while indirect associations are eliminated. In addition, the state-of-the-art techniques in brain research are based on static graph theory, which is unable to capture the dynamic behavior of brain connectivity as it alters with disease evolution. We propose a new research avenue in neuroimaging connectomics based on combining dynamic graph network theory and modeling strategies at different time scales. We present the theoretical framework for area aggregation and time-scale modeling in brain networks as they pertain to disease evolution in dementia. This paradigm is powerful, since we can derive both static parameters pertaining to node and area parameters, as well as dynamic parameters, such as the system's eigenvalues. By implementing and analyzing, dynamically, both disease-driven PC networks and regular concentration networks, we reveal differences in the structure of these networks that play an important role in the temporal evolution of the disease. The described research is key to advancing biomedical research on novel disease prediction trajectories and dementia therapies.
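    The Pearson-versus-partial-correlation distinction can be sketched with the standard precision-matrix identity (a toy three-variable chain, not the neuroimaging data): two nodes linked only through a third show a sizable Pearson correlation but a near-zero partial correlation.

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the precision (inverse covariance) matrix.

    data: (n_samples, n_vars) array. Off-diagonal entry (i, j) is the
    correlation between variables i and j after controlling for all others.
    """
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy chain: x and z are linked only through y, so their Pearson correlation
# is sizable (an indirect association) while their partial correlation is
# near zero (no direct link).
rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x = y + 0.5 * rng.normal(size=5000)
z = y + 0.5 * rng.normal(size=5000)
data = np.column_stack([x, y, z])
pearson = np.corrcoef(data, rowvar=False)
pcorr = partial_correlations(data)
```

    This is exactly the pruning effect the abstract attributes to PC networks: indirect associations are eliminated while direct links survive.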

  15. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because the mechanisms responsible for map development drive receptive fields (RFs) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first explaining how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067

  16. Estimating Phenomenological Parameters in Multi-Assets Markets

    NASA Astrophysics Data System (ADS)

    Raffaelli, Giacomo; Marsili, Matteo

    Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such a feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameter can be estimated in real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.

  17. Nonequilibrium Green's functions and atom-surface dynamics: Simple views from a simple model system

    NASA Astrophysics Data System (ADS)

    Boström, E.; Hopjan, M.; Kartsev, A.; Verdozzi, C.; Almbladh, C.-O.

    2016-03-01

    We employ nonequilibrium Green's functions (NEGF) to describe the real-time dynamics of an adsorbate-surface model system exposed to ultrafast laser pulses. For a finite number of electronic orbitals, the system is solved exactly and within different levels of approximation. Specifically, (i) the full exact quantum mechanical solution for electron and nuclear degrees of freedom is used to benchmark (ii) the Ehrenfest approximation (EA) for the nuclei, with the electron dynamics still treated exactly. Then, using the EA, electronic correlations are treated with NEGF within (iii) the 2nd Born approximation and (iv) a recently introduced hybrid scheme, which mixes 2nd Born self-energies with non-perturbative, local exchange-correlation potentials of Density Functional Theory (DFT). Finally, the effect of a semi-infinite substrate is considered: we observe that a macroscopic number of de-excitation channels can hinder desorption. While very preliminary in character and based on a simple and rather specific model system, our results clearly illustrate the large potential of NEGF to investigate atomic desorption and, more generally, the nonequilibrium dynamics of material surfaces subject to ultrafast laser fields.

  18. Correlation of recent fission product release data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kress, T.S.; Lorenz, R.A.; Nakamura, T.

    For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas-phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used of these correlations is CORSOR-M, which is the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data have arisen because of the better time resolution of the more recent data compared to the data base that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab.

  19. Statistical analysis of strait time index and a simple model for trend and trend reversal

    NASA Astrophysics Data System (ADS)

    Chen, Kan; Jayaprakash, C.

    2003-06-01

    We analyze the daily closing prices of the Strait Time Index (STI) as well as the individual stocks traded in Singapore's stock market from 1988 to 2001. We find that the Hurst exponent is approximately 0.6 for both the STI and individual stocks, while the normal correlation functions show the random walk exponent of 0.5. We also investigate the conditional average of the price change in an interval of length T given the price change in the previous interval. We find strong correlations for price changes larger than a threshold value proportional to T; this indicates that there is no uniform crossover to Gaussian behavior. A simple model based on short-time trend and trend reversal is constructed. We show that the model exhibits statistical properties and market swings similar to those of the real market.
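    The Hurst exponent reported above can be estimated with a simple scaling fit; this generic increment-scaling estimator (E|x(t+T) − x(t)| ~ T^H) is an illustration, not necessarily the estimator the authors used:

```python
import numpy as np

def hurst_exponent(series, lags=range(2, 100)):
    """Estimate the Hurst exponent from the scaling of mean absolute
    increments: E|x(t+T) - x(t)| ~ T**H. The slope of the log-log fit
    of mean absolute increment versus lag gives H."""
    lags = np.asarray(list(lags))
    tau = [np.mean(np.abs(series[lag:] - series[:-lag])) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return H

# A memoryless random walk should give H close to 0.5; persistent series
# such as the STI in the study give H around 0.6.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=20000))
H = hurst_exponent(walk)
```

    H > 0.5 indicates persistence (trends tend to continue), H < 0.5 anti-persistence, which is why the study's H ≈ 0.6 is read as short-time trending behavior.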

  20. Power-Laws and Scaling in Finance: Empirical Evidence and Simple Models

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe

    We discuss several models that may explain the origin of power-law distributions and power-law correlations in financial time series. From an empirical point of view, the exponents describing the tails of the price increment distribution and the decay of the volatility correlations are rather robust and suggest universality. However, many of the models that appear naturally (for example, to account for the distribution of wealth) contain some multiplicative noise, which generically leads to nonuniversal exponents. Recent progress in the empirical study of the volatility suggests that the volatility results from some sort of multiplicative cascade. A convincing `microscopic' (i.e. trader-based) model that explains this observation is, however, not yet available. We discuss a rather generic mechanism for long-ranged volatility correlations based on the idea that agents constantly switch between active and inactive strategies depending on their relative performance.

  1. Predictability of Seasonal Rainfall over the Greater Horn of Africa

    NASA Astrophysics Data System (ADS)

    Ngaina, J. N.

    2016-12-01

    The El Niño-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in the GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria: the coefficient of determination (R2), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations and contingency tables. A test for field significance, accounting for the finiteness and interdependence of the spatial grid, was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion; the complex criteria (FIA followed by BIC) performed better than the simple approaches (R2 and AIC). Notably, operational seasonal rainfall prediction over the GHA makes use of simple model selection procedures, e.g. R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons: El Niño typically leads to wetter conditions during OND and drier conditions during MAM, and the correlations of ENSO indices with rainfall are statistically significant for both seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with ENSO indices derived from the Pacific and Indian Ocean sea surfaces showing significant improvement during the OND season. The predictability based on ENSO is more robust on a decadal scale for OND rainfall than for MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over the GHA. This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase (El Niño) leads to enhanced wet conditions.

  2. Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model

    NASA Astrophysics Data System (ADS)

    Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.

    2017-12-01

    Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly in areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representation is particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue for improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple one-layer energy balance snow model driven by gridded, bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model, with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. The interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to perform better than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05. The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and in four out of five physiographic regions.

  3. Statistical Mechanics of the US Supreme Court

    NASA Astrophysics Data System (ADS)

    Lee, Edward D.; Broedersz, Chase P.; Bialek, William

    2015-07-01

We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The maximum entropy model consistent with the observed pairwise correlations among justices' votes, an Ising spin glass, agrees quantitatively with the data. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering the intuition that ideologically opposite justices negatively influence one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, organizing the voting patterns in a relatively simple "energy landscape." Besides unanimity, other energy minima in this landscape, or maxima in probability, correspond to prototypical voting states, such as the ideological split or a tightly correlated, conservative core. The model correctly predicts the correlation of justices with the majority and gives us a measure of their influence on the majority decision. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context.
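The pairwise maximum entropy (Ising) construction can be illustrated with a short sketch. The votes below are synthetic random spins, and the fields h and couplings J passed to the energy function are placeholders, not values fitted to Supreme Court data.

```python
import numpy as np

def ising_energy(s, h, J):
    """E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j for spins s_i = +/-1."""
    s = np.asarray(s, dtype=float)
    pair = np.triu(np.outer(s, s), k=1)          # s_i s_j for i < j
    return -np.dot(h, s) - np.sum(np.triu(J, k=1) * pair)

rng = np.random.default_rng(0)
votes = rng.choice([-1, 1], size=(1000, 9))      # synthetic votes of 9 "justices"
corr = votes.T @ votes / len(votes)              # observed pairwise correlations <s_i s_j>
```

In the actual model, h and J would be chosen so the model's pairwise correlations match `corr`; here only the energy function and the empirical correlations are shown.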

  4. Atomic Dynamics in Simple Liquid: de Gennes Narrowing Revisited

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Iwashita, Takuya; Egami, Takeshi

    2018-03-01

The de Gennes narrowing phenomenon is frequently observed by neutron or x-ray scattering measurements of the dynamics of complex systems, such as liquids, proteins, colloids, and polymers. The characteristic slowing down of dynamics in the vicinity of the maximum of the total scattering intensity is commonly attributed to enhanced cooperativity. In this Letter, we present an alternative view on its origin through the examination of the time-dependent pair correlation function, the van Hove correlation function, for a model liquid in two, three, and four dimensions. We find that the relaxation time increases monotonically with distance and the dependence on distance varies with dimension. We propose a heuristic explanation of this dependence based on a simple geometrical model. This finding sheds new light on the interpretation of the de Gennes narrowing phenomenon and the α-relaxation time.
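The self part of the van Hove correlation function, which tracks how far particles move over a time lag, can be estimated with a few lines of numpy. The trajectory here is a synthetic random walk, not a simulated liquid, and the bin settings are arbitrary.

```python
import numpy as np

def self_van_hove(positions, lag, bins=20, r_max=5.0):
    """Histogram of particle displacement magnitudes over a given time lag."""
    disp = np.linalg.norm(positions[lag:] - positions[:-lag], axis=-1)
    hist, edges = np.histogram(disp.ravel(), bins=bins, range=(0.0, r_max), density=True)
    return hist, 0.5 * (edges[:-1] + edges[1:])   # G_s(r, t) and bin centers

rng = np.random.default_rng(1)
# 200 time steps, 50 particles, 3 dimensions: a stand-in trajectory
traj = np.cumsum(rng.normal(scale=0.1, size=(200, 50, 3)), axis=0)
g_s, r = self_van_hove(traj, lag=10)
```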

  5. Correlation Imaging Reveals Specific Crowding Dynamics of Kinesin Motor Proteins

    NASA Astrophysics Data System (ADS)

    Miedema, Daniël M.; Kushwaha, Vandana S.; Denisov, Dmitry V.; Acar, Seyda; Nienhuis, Bernard; Peterman, Erwin J. G.; Schall, Peter

    2017-10-01

    Molecular motor proteins fulfill the critical function of transporting organelles and other building blocks along the biopolymer network of the cell's cytoskeleton, but crowding effects are believed to crucially affect this motor-driven transport due to motor interactions. Physical transport models, like the paradigmatic, totally asymmetric simple exclusion process (TASEP), have been used to predict these crowding effects based on simple exclusion interactions, but verifying them in experiments remains challenging. Here, we introduce a correlation imaging technique to precisely measure the motor density, velocity, and run length along filaments under crowding conditions, enabling us to elucidate the physical nature of crowding and test TASEP model predictions. Using the kinesin motor proteins kinesin-1 and OSM-3, we identify crowding effects in qualitative agreement with TASEP predictions, and we achieve excellent quantitative agreement by extending the model with motor-specific interaction ranges and crowding-dependent detachment probabilities. These results confirm the applicability of basic nonequilibrium models to the intracellular transport and highlight motor-specific strategies to deal with crowding.
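A minimal TASEP sketch makes the exclusion interaction concrete: a particle hops one site to the right only if that site is empty. Lattice size, particle number, and the periodic (ring) geometry are illustrative choices, not the motor-specific extensions described in the abstract.

```python
import random

def tasep_step(lattice, p_hop=1.0):
    """One random-sequential-update sweep of a TASEP on a ring."""
    L = len(lattice)
    for i in random.sample(range(L), L):
        if lattice[i] == 1 and lattice[(i + 1) % L] == 0 and random.random() < p_hop:
            lattice[i], lattice[(i + 1) % L] = 0, 1   # hop right into the empty site

random.seed(42)
lattice = [1] * 10 + [0] * 30        # 10 "motors" on 40 sites
for _ in range(100):
    tasep_step(lattice)
density = sum(lattice) / len(lattice)
```

The exclusion rule conserves particle number, so the density stays fixed while the occupancy pattern relaxes.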

  6. Two-particle correlation function and dihadron correlation approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vechernin, V. V., E-mail: v.vechernin@spbu.ru; Ivanov, K. O.; Neverov, D. I.

It is shown that, in the case of asymmetric nuclear interactions, the application of the traditional dihadron correlation approach to determining a two-particle correlation function C may lead to a form distorted in relation to the canonical pair correlation function C₂. This result was obtained both by means of exact analytic calculations of correlation functions within a simple string model for proton–nucleus and deuteron–nucleus collisions and by means of Monte Carlo simulations based on employing the HIJING event generator. It is also shown that the method based on studying multiplicity correlations in two narrow observation windows separated in rapidity makes it possible to determine correctly the canonical pair correlation function C₂ for all cases, including the case where the rapidity distribution of product particles is not uniform.
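One common normalization of the two-window multiplicity correlation can be sketched as below; conventions for C₂ vary between analyses, so this particular definition is an assumption. The shared Poisson component is a stand-in for a common "string" contribution that correlates the two windows.

```python
import numpy as np

def c2(n_f, n_b):
    """Normalized pair correlation of multiplicities in two windows (one convention):
    C2 = <n_F n_B> / (<n_F><n_B>) - 1."""
    n_f, n_b = np.asarray(n_f, float), np.asarray(n_b, float)
    return np.mean(n_f * n_b) / (np.mean(n_f) * np.mean(n_b)) - 1.0

rng = np.random.default_rng(2)
common = rng.poisson(5.0, 10000)            # shared source feeding both windows
n_forward = common + rng.poisson(3.0, 10000)
n_backward = common + rng.poisson(3.0, 10000)
c2_obs = c2(n_forward, n_backward)          # positive: the windows are correlated
```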

  7. Friend suggestion in social network based on user log

    NASA Astrophysics Data System (ADS)

    Kaviya, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.

    2017-11-01

Simple friend recommendation algorithms based on similarity, popularity, and social aspects are the basic ingredients that must be explored to methodically build high-performance social friend recommendation. In the proposed system, we use an algorithm for network correlation-based social friend recommendation (NC-based SFR). It incorporates user activities, such as where one lives and works, and recommends friends on the basis of network correlation by considering the effect of different social roles. To model the correlation between different networks, we develop a method that aligns these networks through important feature selection. By preserving the network structure, the method significantly improves the accuracy of friend recommendation.

  8. Single Canonical Model of Reflexive Memory and Spatial Attention

    PubMed Central

    Patel, Saumil S.; Red, Stuart; Lin, Eric; Sereno, Anne B.

    2015-01-01

    Many neurons in the dorsal and ventral visual stream have the property that after a brief visual stimulus presentation in their receptive field, the spiking activity in these neurons persists above their baseline levels for several seconds. This maintained activity is not always correlated with the monkey’s task and its origin is unknown. We have previously proposed a simple neural network model, based on shape selective neurons in monkey lateral intraparietal cortex, which predicts the valence and time course of reflexive (bottom-up) spatial attention. In the same simple model, we demonstrate here that passive maintained activity or short-term memory of specific visual events can result without need for an external or top-down modulatory signal. Mutual inhibition and neuronal adaptation play distinct roles in reflexive attention and memory. This modest 4-cell model provides the first simple and unified physiologically plausible mechanism of reflexive spatial attention and passive short-term memory processes. PMID:26493949

  9. The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement

    ERIC Educational Resources Information Center

    Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.

    2012-01-01

    This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…

  10. Modeling of Momentum Correlations in Heavy Ion Collisions

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude; Sharma, Monika

    2010-02-01

Measurements of transverse momentum (pt) correlations and fluctuations in heavy ion collisions (HIC) are of interest because they provide information on the collision dynamics not readily available from number correlations. For instance, pt fluctuations are expected to diverge for a system near its tri-critical point [1]. Integral momentum correlations may also be used to estimate the shear viscosity of the quark gluon plasma produced in HIC [2]. Integral correlations measured over large fractions of the particle phase space average out several dynamical contributions and as such may be difficult to interpret. It is thus of interest to seek extensions of integral correlation variables that may provide more detailed information about the collision dynamics. We introduce a variety of differential momentum correlations and discuss their basic properties in the light of simple toy models. We also present theoretical predictions based on the PYTHIA, HIJING, AMPT, and EPOS models. Finally, we discuss the interplay of various dynamical effects that may play a role in the determination of the shear viscosity based on the broadening of momentum correlations measured as function of collision centrality. [1] L. Stodolsky, Phys. Rev. Lett. 75 (1995) 1044. [2] S. Gavin and M. A. Aziz, Phys. Rev. Lett. 97 (2006) 162302.

  11. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks

    PubMed Central

    Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.

    2011-01-01

Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R²adj), followed by position (28 ± 24% of R²adj) and speed (11 ± 19% of R²adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach.
These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616

  12. Method of frequency dependent correlations: investigating the variability of total solar irradiance

    NASA Astrophysics Data System (ADS)

    Pelt, J.; Käpylä, M. J.; Olspert, N.

    2017-04-01

    Context. This paper contributes to the field of modeling and hindcasting of the total solar irradiance (TSI) based on different proxy data that extend further back in time than the TSI that is measured from satellites. Aims: We introduce a simple method to analyze persistent frequency-dependent correlations (FDCs) between the time series and use these correlations to hindcast missing historical TSI values. We try to avoid arbitrary choices of the free parameters of the model by computing them using an optimization procedure. The method can be regarded as a general tool for pairs of data sets, where correlating and anticorrelating components can be separated into non-overlapping regions in frequency domain. Methods: Our method is based on low-pass and band-pass filtering with a Gaussian transfer function combined with de-trending and computation of envelope curves. Results: We find a major controversy between the historical proxies and satellite-measured targets: a large variance is detected between the low-frequency parts of targets, while the low-frequency proxy behavior of different measurement series is consistent with high precision. We also show that even though the rotational signal is not strongly manifested in the targets and proxies, it becomes clearly visible in FDC spectrum. A significant part of the variability can be explained by a very simple model consisting of two components: the original proxy describing blanketing by sunspots, and the low-pass-filtered curve describing the overall activity level. The models with the full library of the different building blocks can be applied to hindcasting with a high level of confidence, Rc ≈ 0.90. The usefulness of these models is limited by the major target controversy. Conclusions: The application of the new method to solar data allows us to obtain important insights into the different TSI modeling procedures and their capabilities for hindcasting based on the directly observed time intervals.
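The Gaussian-transfer-function filtering that underlies the FDC method can be sketched in the frequency domain as below. The cutoff frequency and the white-noise input are arbitrary illustrative choices.

```python
import numpy as np

def gaussian_lowpass(x, cutoff, dt=1.0):
    """Low-pass filter a series with a Gaussian transfer function in frequency space."""
    freqs = np.fft.rfftfreq(len(x), d=dt)
    transfer = np.exp(-0.5 * (freqs / cutoff) ** 2)   # Gaussian roll-off, unity at DC
    return np.fft.irfft(np.fft.rfft(x) * transfer, n=len(x))

rng = np.random.default_rng(7)
series = rng.normal(size=1024)
smooth = gaussian_lowpass(series, cutoff=0.02)        # keeps only slow variations
```

A band-pass version, as used to isolate correlating and anticorrelating frequency regions, would simply be the difference of two such low-pass filters with different cutoffs.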

  13. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
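The intra-class correlation (ICC) at issue can be illustrated with a rough moment-based estimate on synthetic clustered data (5 hypothetical animals, 10 neurons each); in practice one would fit a mixed model, e.g. with statsmodels' MixedLM.

```python
import numpy as np

rng = np.random.default_rng(3)
n_animals, n_neurons = 5, 10
animal_effect = rng.normal(0.0, 1.0, size=n_animals)   # shared per-animal shift
# each row is one animal; neurons within a row share that animal's effect
data = animal_effect[:, None] + rng.normal(0.0, 0.5, size=(n_animals, n_neurons))

# crude intra-class correlation: between-animal variance / total variance
between = np.var(data.mean(axis=1), ddof=1)
within = np.mean(np.var(data, axis=1, ddof=1))
icc = between / (between + within)
```

A nonzero ICC means observations within an animal are not independent, which is exactly what a simple linear model wrongly assumes.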

  14. Bacterial genomes lacking long-range correlations may not be modeled by low-order Markov chains: the role of mixing statistics and frame shift of neighboring genes.

    PubMed

    Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian

    2014-12-01

We examine the relationship between exponential correlation functions and Markov models in a bacterial genome in detail. Despite the well known fact that Markov models generate sequences with correlation function that decays exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order), and treating the DNA sequence as being homogeneous all fail to predict the value of exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences by packing CDSs with out-of-phase spacers, as well as altering CDS length distribution by imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and the decay of correlation is due to the possible out-of-phase between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as by non-coding sequences. These show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure which is not suitable for modeling by a Markov chain over a homogeneous sequence. Other results include: use of the second largest eigenvalue (in absolute value) to represent the 16 correlation functions and the prediction of a 10-11 base periodicity from the hexamer frequencies. Copyright © 2014 Elsevier Ltd. All rights reserved.
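The link between a Markov chain and exponentially decaying correlations can be sketched directly: for a first-order chain, correlation functions decay like λ₂^k, where λ₂ is the second-largest-magnitude eigenvalue of the transition matrix. The 4-state matrix below is made up for illustration, not estimated from a genome.

```python
import numpy as np

# Row-stochastic transition matrix over 4 "nucleotide" states (invented numbers).
P = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.2, 0.4, 0.3, 0.1],
              [0.1, 0.2, 0.4, 0.3],
              [0.3, 0.1, 0.2, 0.4]])

# Eigenvalue magnitudes, sorted descending; the largest is always 1 for a
# stochastic matrix, and the second sets the correlation decay rate.
magnitudes = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
decay_rate = magnitudes[1]      # correlations ~ decay_rate**k at separation k
```

The abstract's point is that this single geometric decay cannot reproduce the slow decay seen in real genomes, because the real sequence mixes heterogeneous codon-position statistics.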

  15. Atomic Dynamics in Simple Liquid: de Gennes Narrowing Revisited

    DOE PAGES

    Wu, Bin; Iwashita, Takuya; Egami, Takeshi

    2018-03-27

The de Gennes narrowing phenomenon is frequently observed by neutron or x-ray scattering measurements of the dynamics of complex systems, such as liquids, proteins, colloids, and polymers. The characteristic slowing down of dynamics in the vicinity of the maximum of the total scattering intensity is commonly attributed to enhanced cooperativity. In this Letter, we present an alternative view on its origin through the examination of the time-dependent pair correlation function, the van Hove correlation function, for a model liquid in two, three, and four dimensions. We find that the relaxation time increases monotonically with distance and the dependence on distance varies with dimension. We propose a heuristic explanation of this dependence based on a simple geometrical model. Furthermore, this finding sheds new light on the interpretation of the de Gennes narrowing phenomenon and the α-relaxation time.

  16. Atomic Dynamics in Simple Liquid: de Gennes Narrowing Revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Bin; Iwashita, Takuya; Egami, Takeshi

The de Gennes narrowing phenomenon is frequently observed by neutron or x-ray scattering measurements of the dynamics of complex systems, such as liquids, proteins, colloids, and polymers. The characteristic slowing down of dynamics in the vicinity of the maximum of the total scattering intensity is commonly attributed to enhanced cooperativity. In this Letter, we present an alternative view on its origin through the examination of the time-dependent pair correlation function, the van Hove correlation function, for a model liquid in two, three, and four dimensions. We find that the relaxation time increases monotonically with distance and the dependence on distance varies with dimension. We propose a heuristic explanation of this dependence based on a simple geometrical model. Furthermore, this finding sheds new light on the interpretation of the de Gennes narrowing phenomenon and the α-relaxation time.

  17. Decoding spike timing: the differential reverse correlation method

    PubMed Central

    Tkačik, Gašper; Magnasco, Marcelo O.

    2009-01-01

    It is widely acknowledged that detailed timing of action potentials is used to encode information, for example in auditory pathways; however the computational tools required to analyze encoding through timing are still in their infancy. We present a simple example of encoding, based on a recent model of time-frequency analysis, in which units fire action potentials when a certain condition is met, but the timing of the action potential depends also on other features of the stimulus. We show that, as a result, spike-triggered averages are smoothed so much they do not represent the true features of the encoding. Inspired by this example, we present a simple method, differential reverse correlations, that can separate an analysis of what causes a neuron to spike, and what controls its timing. We analyze with this method the leaky integrate-and-fire neuron and show the method accurately reconstructs the model's kernel. PMID:18597928
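A conventional spike-triggered average, the baseline the differential method improves on, can be sketched as follows. The stimulus, filter kernel, and spike threshold are all synthetic choices for illustration.

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, window):
    """Average the stimulus segments immediately preceding each spike."""
    snippets = [stimulus[t - window:t] for t in spike_times if t >= window]
    return np.mean(snippets, axis=0)

rng = np.random.default_rng(6)
stim = rng.normal(size=5000)
kernel = np.exp(-np.arange(20) / 5.0)                 # toy causal filter
drive = np.convolve(stim, kernel, mode="full")[:len(stim)]
spikes = np.where(drive > 3.0)[0]                     # threshold-crossing "spikes"
sta = spike_triggered_average(stim, spikes, window=20)
```

As the abstract notes, when spike timing also depends on other stimulus features, this average smooths over them; the differential reverse correlation method is designed to separate what triggers a spike from what shifts its time.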

  18. Associative memory in phasing neuron networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nair, Niketh S; Bochove, Erik J.; Braiman, Yehuda

    2014-01-01

We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose neural networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule where spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.

  19. Molecular basis of LFER. Modeling of the electronic substituent effect using fragment quantum self-similarity measures.

    PubMed

    Gironés, Xavier; Carbó-Dorca, Ramon; Ponec, Robert

    2003-01-01

A new approach allowing the theoretical modeling of the electronic substituent effect is proposed. The approach is based on the use of fragment Quantum Self-Similarity Measures (MQS-SM) calculated from domain averaged Fermi Holes as new theoretical descriptors allowing for the replacement of Hammett sigma constants in QSAR models. To demonstrate the applicability of this new approach, its formalism was applied to the description of the substituent effect on the dissociation of a broad series of meta and para substituted benzoic acids. The accuracy and the predicting power of this new approach were tested by comparison with a recent exhaustive study by Sullivan et al. It has been shown that the accuracy and the predicting power of both procedures are comparable, but, in contrast to the five-parameter correlation equation necessary to describe the data in that study, our approach is simpler: only a one-parameter correlation equation is required.

  20. Deep Correlated Holistic Metric Learning for Sketch-Based 3D Shape Retrieval.

    PubMed

    Dai, Guoxian; Xie, Jin; Fang, Yi

    2018-07-01

How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is quite straightforward but nontrivial, since people cannot always have the desired 3D query model at hand. Recently, wide-screen electronic devices have become prevalent in our daily lives, which makes sketch-based 3D shape retrieval a promising candidate due to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketch and 3D shape. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between the sketch and 3D shape domains. The proposed DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations that map features from both domains into a new feature space. The proposed loss, including a discriminative loss and a correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the deep transformed features and maximizes the inter-class distance of the deep transformed features to a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across different domains. Different from existing deep metric learning methods with loss only at the output layer, our proposed DCHML is trained with losses at both the hidden layer and the output layer to further improve the performance by encouraging features in the hidden layer to also have the desired properties. Our proposed method is evaluated on three benchmarks, including the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks, and the experimental results demonstrate the superiority of our proposed method over the state-of-the-art methods.
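The two loss ingredients described, a within-domain discriminative (margin) term and a cross-domain correlation term, can be caricatured in a few lines. This is not the paper's DCHML implementation: the correlation term below only matches domain means, and the margin value is arbitrary.

```python
import numpy as np

def correlation_loss(feat_a, feat_b):
    """Penalize the discrepancy between the two domains' mean features."""
    return float(np.sum((feat_a.mean(axis=0) - feat_b.mean(axis=0)) ** 2))

def contrastive_loss(feats, labels, margin=1.0):
    """Pull same-class pairs together; push different-class pairs past the margin."""
    loss = 0.0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d = float(np.sum((feats[i] - feats[j]) ** 2))
            loss += d if labels[i] == labels[j] else max(0.0, margin - d)
    return loss
```

In a real training loop, both terms would be applied to the outputs (and, per the abstract, the hidden layers) of the two domain networks and minimized jointly.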

  1. Topology of correlation-based minimal spanning trees in real and model markets

    NASA Astrophysics Data System (ADS)

    Bonanno, Giovanni; Caldarelli, Guido; Lillo, Fabrizio; Mantegna, Rosario N.

    2003-10-01

    We compare the topological properties of the minimal spanning tree obtained from a large group of stocks traded at the New York Stock Exchange during a 12-year trading period with the one obtained from surrogated data simulated by using simple market models. We find that the empirical tree has features of a complex network that cannot be reproduced, even as a first approximation, by a random market model and by the widespread one-factor model.
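The standard recipe for a correlation-based MST maps correlations to distances via d_ij = sqrt(2(1 − ρ_ij)) (Mantegna's metric) and then runs a spanning-tree algorithm; the sketch below uses synthetic returns and a small Prim's algorithm.

```python
import numpy as np

def mst_edges(dist):
    """Prim's algorithm on a dense symmetric distance matrix; returns n-1 edges."""
    n = len(dist)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist[e[0], e[1]])
        in_tree.add(j)
        edges.append((i, j))
    return edges

rng = np.random.default_rng(4)
returns = rng.normal(size=(250, 6))                      # 250 days, 6 synthetic stocks
rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))    # correlation distance
edges = mst_edges(dist)
```

On real market data, the topology of these n − 1 edges (e.g. the degree distribution) is what distinguishes the empirical tree from the trees produced by random or one-factor models.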

  2. General order parameter based correlation analysis of protein backbone motions between experimental NMR relaxation measurements and molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Qing; Shi, Chaowei; Yu, Lu

Internal backbone dynamic motions are essential for different protein functions and occur on a wide range of time scales, from femtoseconds to seconds. Molecular dynamics (MD) simulations and nuclear magnetic resonance (NMR) spin relaxation measurements are valuable tools to gain access to fast (nanosecond) internal motions. However, there exist few reports on correlation analysis between MD and NMR relaxation data. Here, backbone relaxation measurements of ¹⁵N-labeled SH3 (Src homology 3) domain proteins in aqueous buffer were used to generate general order parameters (S²) using a model-free approach. Simultaneously, 80 ns MD simulations of SH3 domain proteins in a defined hydrated box at neutral pH were conducted and the general order parameters (S²) were derived from the MD trajectory. Correlation analysis using the Gromos force field indicated that S² values from NMR relaxation measurements and MD simulations were significantly different. MD simulations were performed on models with different charge states for three histidine residues, and with different water models, namely the SPC (simple point charge) water model and the SPC/E (extended simple point charge) water model. S² parameters from MD simulations with charges for all three histidines and with the SPC/E water model correlated well with S² calculated from the experimental NMR relaxation measurements, in a site-specific manner. - Highlights: • Correlation analysis between NMR relaxation measurements and MD simulations. • General order parameter (S²) as common reference between the two methods. • Different protein dynamics with different histidine charge states at neutral pH. • Different protein dynamics with different water models.

  3. What Is a Simple Liquid?

    NASA Astrophysics Data System (ADS)

    Ingebrigtsen, Trond S.; Schrøder, Thomas B.; Dyre, Jeppe C.

    2012-01-01

This paper is an attempt to identify the real essence of simplicity of liquids in John Locke’s understanding of the term. Simple liquids are traditionally defined as many-body systems of classical particles interacting via radially symmetric pair potentials. We suggest that a simple liquid should be defined instead by the property of having strong correlations between virial and potential-energy equilibrium fluctuations in the NVT ensemble. There is considerable overlap between the two definitions, but also some notable differences. For instance, in the new definition simplicity is not a direct property of the intermolecular potential because a liquid is usually only strongly correlating in part of its phase diagram. Moreover, not all simple liquids are atomic (i.e., with radially symmetric pair potentials) and not all atomic liquids are simple. The main part of the paper motivates the new definition of liquid simplicity by presenting evidence that a liquid is strongly correlating if and only if its intermolecular interactions may be ignored beyond the first coordination shell (FCS). This is demonstrated by NVT simulations of the structure and dynamics of several atomic and three molecular model liquids with a shifted-forces cutoff placed at the first minimum of the radial distribution function. The liquids studied are inverse power-law systems (r^(-n) pair potentials with n = 18, 6, 4), Lennard-Jones (LJ) models (the standard LJ model, two generalized Kob-Andersen binary LJ mixtures, and the Wahnstrom binary LJ mixture), the Buckingham model, the Dzugutov model, the LJ Gaussian model, the Gaussian core model, the Hansen-McDonald molten salt model, the Lewis-Wahnstrom ortho-terphenyl model, the asymmetric dumbbell model, and the single-point charge water model. The final part of the paper summarizes properties of strongly correlating liquids, emphasizing that these are simpler than liquids in general.
Simple liquids, as defined here, may be characterized in three quite different ways: (1) chemically by the fact that the liquid’s properties are fully determined by interactions from the molecules within the FCS, (2) physically by the fact that there are isomorphs in the phase diagram, i.e., curves along which several properties like excess entropy, structure, and dynamics, are invariant in reduced units, and (3) mathematically by the fact that throughout the phase diagram the reduced-coordinate constant-potential-energy hypersurfaces define a one-parameter family of compact Riemannian manifolds. No proof is given that the chemical characterization follows from the strong correlation property, but we show that this FCS characterization is consistent with the existence of isomorphs in strongly correlating liquids’ phase diagram. Finally, we note that the FCS characterization of simple liquids calls into question the physical basis of standard perturbation theory, according to which the repulsive and attractive forces play fundamentally different roles for the physics of liquids.

  4. Rates of profit as correlated sums of random variables

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2013-10-01

    Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.

  5. Possible biomechanical origins of the long-range correlations in stride intervals of walking

    NASA Astrophysics Data System (ADS)

    Gates, Deanna H.; Su, Jimmy L.; Dingwell, Jonathan B.

    2007-07-01

    When humans walk, the time duration of each stride varies from one stride to the next. These temporal fluctuations exhibit long-range correlations. It has been suggested that these correlations stem from higher nervous system centers in the brain that control gait cycle timing. Existing proposed models of this phenomenon have focused on neurophysiological mechanisms that might give rise to these long-range correlations, and generally ignored potential alternative mechanical explanations. We hypothesized that a simple mechanical system could also generate similar long-range correlations in stride times. We modified a very simple passive dynamic model of bipedal walking to incorporate forward propulsion through an impulsive force applied to the trailing leg at each push-off. Push-off forces were varied from step to step by incorporating both “sensory” and “motor” noise terms that were regulated by a simple proportional feedback controller. We generated 400 simulations of walking, with different combinations of sensory noise, motor noise, and feedback gain. The stride time data from each simulation were analyzed using detrended fluctuation analysis to compute a scaling exponent, α. This exponent quantified how each stride interval was correlated with previous and subsequent stride intervals over different time scales. For different variations of the noise terms and feedback gain, we obtained short-range correlations (α<0.5), uncorrelated time series (α=0.5), long-range correlations (0.5<α<1.0), or Brownian motion (α>1.0). Our results indicate that a simple biomechanical model of walking can generate long-range correlations and thus perhaps these correlations are not a complex result of higher level neuronal control, as has been previously suggested.
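
    Detrended fluctuation analysis can be implemented in a few lines. The sketch below is standard first-order DFA (not the authors' code): it computes the scaling exponent α for a white-noise series, for which α should fall near 0.5.

```python
import numpy as np

def dfa_alpha(x, scales):
    """First-order detrended fluctuation analysis: slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        # linearly detrend each window, collect squared residuals
        f2 = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2) for s in segs]
        fluct.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(1)
alpha = dfa_alpha(rng.normal(size=2**14), scales=[16, 32, 64, 128, 256])
print(alpha)  # near 0.5 for uncorrelated noise
```

    Long-range correlated series would instead yield 0.5 < α < 1.0, the regime the walking simulations reproduce.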

  6. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    NASA Astrophysics Data System (ADS)

    Akasaka, Ryo

    This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures for which experimental data are limited. Vapor-liquid equilibrium (VLE) of the following seven mixtures has been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations in calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for design and simulations of heat pumps and refrigeration systems using the mixtures as working fluids.

  7. The Webcam system: a simple, automated, computer-based video system for quantitative measurement of movement in nonhuman primates.

    PubMed

    Togasaki, Daniel M; Hsu, Albert; Samant, Meghana; Farzan, Bijan; DeLanney, Louis E; Langston, J William; Di Monte, Donato A; Quik, Maryka

    2005-06-30

    Investigations using models of neurologic disease frequently involve quantifying animal motor activity. We developed a simple method for measuring motor activity using a computer-based video system (the Webcam system) consisting of an inexpensive video camera connected to a personal computer running customized software. Images of the animals are captured at half-second intervals and movement is quantified as the number of pixel changes between consecutive images. The Webcam system allows measurement of motor activity of the animals in their home cages, without devices affixed to their bodies. Webcam quantification of movement was validated by correlation with measures simultaneously obtained by two other methods: measurement of locomotion by interruption of infrared beams; and measurement of general motor activity using portable accelerometers. In untreated squirrel monkeys, correlations of Webcam and locomotor activity exceeded 0.79, and correlations with general activity counts exceeded 0.65. Webcam activity decreased after the monkeys were rendered parkinsonian by treatment with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), but the correlations with the other measures of motor activity were maintained. Webcam activity also correlated with clinical ratings of parkinsonism. These results indicate that the Webcam system is reliable under both untreated and experimental conditions and is an excellent method for quantifying motor activity in animals.
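
    The core measurement, counting pixels that change between consecutive frames, is simple to reproduce. A minimal sketch with an illustrative threshold and synthetic frames (not the Webcam system's actual code):

```python
import numpy as np

def movement_score(frames, threshold=10):
    """Count pixels whose intensity changes between consecutive frames."""
    frames = np.asarray(frames, dtype=np.int16)   # signed, so diffs don't wrap
    diffs = np.abs(np.diff(frames, axis=0))
    return (diffs > threshold).sum(axis=(1, 2))   # one count per frame pair

# two synthetic 8x8 grayscale frames: a bright 2x2 block shifts one pixel right
f0 = np.zeros((8, 8)); f0[2:4, 2:4] = 255
f1 = np.zeros((8, 8)); f1[2:4, 3:5] = 255
counts = movement_score([f0, f1])
print(counts)  # [4]: one column of the block turns off, one turns on
```

    Summing these counts over half-second intervals gives an activity time series comparable to beam-break or accelerometer counts.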

  8. Local density approximation in site-occupation embedding theory

    NASA Astrophysics Data System (ADS)

    Senjean, Bruno; Tsuchiizu, Masahisa; Robert, Vincent; Fromager, Emmanuel

    2017-01-01

    Site-occupation embedding theory (SOET) is a density functional theory (DFT)-based method which aims at modelling strongly correlated electrons. It is in principle exact and applicable to model and quantum chemical Hamiltonians. The theory is presented here for the Hubbard Hamiltonian. In contrast to conventional DFT approaches, the site (or orbital) occupations are deduced in SOET from a partially interacting system consisting of one (or more) impurity site(s) and non-interacting bath sites. The correlation energy of the bath is then treated implicitly by means of a site-occupation functional. In this work, we propose a simple impurity-occupation functional approximation based on the two-level (2L) Hubbard model which is referred to as two-level impurity local density approximation (2L-ILDA). Results obtained on a prototypical uniform eight-site Hubbard ring are promising. The extension of the method to larger systems and more sophisticated model Hamiltonians is currently in progress.

  9. Modified signal-to-noise: a new simple and practical gene filtering approach based on the concept of projective adaptive resonance theory (PART) filtering method.

    PubMed

    Takahashi, Hiro; Honda, Hiroyuki

    2006-07-01

    Considering the recent advances in and the benefits of DNA microarray technologies, many gene filtering approaches have been employed for the diagnosis and prognosis of diseases. In our previous study, we developed a new filtering method, namely, the projective adaptive resonance theory (PART) filtering method. This method was effective in subclass discrimination. In the PART algorithm, genes with a low variance in gene expression in either class, not both classes, were selected as important genes for modeling. Based on this concept, we developed novel simple filtering methods, such as modified signal-to-noise (S2N'), in the present study. The discrimination models constructed using these methods showed higher accuracy and higher reproducibility than many conventional filtering methods, including the t-test, S2N, NSC and SAM. The reproducibility of prediction was evaluated based on the correlation between the sets of U-test p-values on randomly divided datasets. For leukemia, lymphoma and breast cancer, the correlation was high; the model constructed with <50 genes selected by S2N' improved the correlation by >0.13. The improvement was greater for smaller gene sets than that observed when the t-test, NSC and SAM were used. These results suggest that these modified methods, such as S2N', have high potential to function as new methods for marker gene selection in cancer diagnosis using DNA microarray data. Software is available upon request.
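
    The exact form of the modified statistic S2N' is not given in the abstract, but the classical signal-to-noise score it modifies is (mu1 − mu2) / (sd1 + sd2) per gene. A minimal ranking sketch on toy data:

```python
import numpy as np

def signal_to_noise(expr, labels):
    """Classical S2N score per gene: (mu1 - mu2) / (sd1 + sd2)."""
    expr = np.asarray(expr, dtype=float)
    g1 = expr[:, labels == 0]
    g2 = expr[:, labels == 1]
    return (g1.mean(axis=1) - g2.mean(axis=1)) / (g1.std(axis=1) + g2.std(axis=1))

# toy data: gene 0 discriminates the two classes, gene 1 does not
labels = np.array([0, 0, 0, 1, 1, 1])
expr = np.array([[5.0, 5.2, 4.8, 1.0, 1.1, 0.9],   # informative
                 [3.0, 2.9, 3.2, 3.1, 3.0, 2.8]])  # uninformative
scores = np.abs(signal_to_noise(expr, labels))
print(scores.argmax())  # gene 0 ranks first
```

    Filtering then keeps the top-scoring genes; the PART-inspired modification additionally favors genes with low within-class variance in at least one class.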

  10. Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition

    NASA Astrophysics Data System (ADS)

    Kesrarat, Darun; Patanavijit, Vorapoj

    2017-02-01

    In optical flow for motion estimation, the reliability of the resulting motion vectors (MVs) is an important issue, and noisy conditions can make the output of optical flow algorithms unreliable. We find that many classical optical flow algorithms produce better results under noisy conditions when combined with a modern optimization model. This paper introduces robust optical flow models that apply an adaptive Lorentzian norm influence function to simple spatial-temporal optical flow algorithms. Experiments confirm that our proposed models improve the noise tolerance of the computed MVs when applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We evaluate the models on several typical sequences with different foreground and background movement speeds, where the sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels in decibels (dB). The results, measured by peak signal-to-noise ratio (PSNR), show that the proposed models are highly noise tolerant.
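
    The Lorentzian norm is a standard robust penalty; the Black-Anandan form is assumed in the sketch below. Its influence function (the derivative of the penalty) saturates, so large constraint violations caused by noise contribute little to the flow estimate.

```python
import math

def lorentzian_rho(x, sigma):
    """Lorentzian robust penalty (Black-Anandan form): log(1 + x^2 / (2 sigma^2))."""
    return math.log(1.0 + x * x / (2.0 * sigma * sigma))

def lorentzian_psi(x, sigma):
    """Influence function: d(rho)/dx, which saturates and decays for outliers."""
    return 2.0 * x / (2.0 * sigma * sigma + x * x)

sigma = 1.0
inlier = lorentzian_psi(0.5, sigma)    # small residual: near-linear influence
outlier = lorentzian_psi(50.0, sigma)  # large residual: heavily down-weighted
print(inlier, outlier)
```

    By contrast, a quadratic penalty gives the 50-unit outlier one hundred times the influence of the 0.5-unit inlier, which is why least-squares flow degrades badly under AWGN.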

  11. Free-space optical channel simulator for weak-turbulence conditions.

    PubMed

    Bykhovsky, Dima

    2015-11-01

    Free-space optical (FSO) communication may be severely influenced by the inevitable turbulence effect that results in channel gain fluctuations and fading. The objective of this paper is to provide a simple and effective simulator of the weak-turbulence FSO channel that emulates the influence of the temporal covariance effect. Specifically, the proposed model is based on lognormally distributed samples with a corresponding correlation time. The simulator is based on the solution of a first-order stochastic differential equation (SDE). The results of the provided SDE analysis reveal its efficacy for turbulent channel modeling.
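
    A minimal sketch of this kind of simulator, with assumed parameter values: the log-amplitude follows an Ornstein-Uhlenbeck first-order SDE (discretized with Euler-Maruyama), so the exponentiated samples are lognormally distributed with an exponentially decaying temporal correlation.

```python
import numpy as np

def lognormal_channel(n, dt, tau, sigma_x, rng):
    """Lognormal fading gains with correlation time tau.

    Log-amplitude X follows the OU SDE  dX = -(X/tau) dt + sqrt(2 sigma_x^2/tau) dW,
    discretized with Euler-Maruyama; the channel gain is h = exp(X).
    """
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma_x)
    k = np.sqrt(2.0 * sigma_x**2 / tau * dt)
    for i in range(1, n):
        x[i] = x[i - 1] * (1.0 - dt / tau) + k * rng.normal()
    return np.exp(x)

rng = np.random.default_rng(2)
h = lognormal_channel(50_000, dt=0.001, tau=0.01, sigma_x=0.3, rng=rng)
logh = np.log(h)
print(logh.std())  # near 0.3, the stationary log-amplitude std
```

    The step ratio dt/tau sets the lag-1 autocorrelation (here about 0.9), which is how the correlation time of the weak-turbulence channel is emulated.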

  12. Accurate calculation and modeling of the adiabatic connection in density functional theory

    NASA Astrophysics Data System (ADS)

    Teale, A. M.; Coriani, S.; Helgaker, T.

    2010-04-01

    Using a recently implemented technique for the calculation of the adiabatic connection (AC) of density functional theory (DFT) based on Lieb maximization with respect to the external potential, the AC is studied for atoms and molecules containing up to ten electrons: the helium isoelectronic series, the hydrogen molecule, the beryllium isoelectronic series, the neon atom, and the water molecule. The calculation of AC curves by Lieb maximization at various levels of electronic-structure theory is discussed. For each system, the AC curve is calculated using Hartree-Fock (HF) theory, second-order Møller-Plesset (MP2) theory, coupled-cluster singles-and-doubles (CCSD) theory, and coupled-cluster singles-doubles-perturbative-triples [CCSD(T)] theory, expanding the molecular orbitals and the effective external potential in large Gaussian basis sets. The HF AC curve includes a small correlation-energy contribution in the context of DFT, arising from orbital relaxation as the electron-electron interaction is switched on under the constraint that the wave function is always a single determinant. The MP2 and CCSD AC curves recover the bulk of the dynamical correlation energy and their shapes can be understood in terms of a simple energy model constructed from a consideration of the doubles-energy expression at different interaction strengths. Differentiation of this energy expression with respect to the interaction strength leads to a simple two-parameter doubles model (AC-D) for the AC integrand (and hence the correlation energy of DFT) as a function of the interaction strength. The structure of the triples-energy contribution is considered in a similar fashion, leading to a quadratic model for the triples correction to the AC curve (AC-T). 
From a consideration of the structure of a two-level configuration-interaction (CI) energy expression of the hydrogen molecule, a simple two-parameter CI model (AC-CI) is proposed to account for the effects of static correlation on the AC. When parametrized in terms of the same input data, the AC-CI model offers improved performance over the corresponding AC-D model, which is shown to be the lowest-order contribution to the AC-CI model. The utility of the accurately calculated AC curves for the analysis of standard density functionals is demonstrated for the BLYP exchange-correlation functional and the interaction-strength-interpolation (ISI) model AC integrand. From the results of this analysis, we investigate the performance of our proposed two-parameter AC-D and AC-CI models when a simple density functional for the AC at infinite interaction strength is employed in place of information at the fully interacting point. The resulting two-parameter correlation functionals offer a qualitatively correct behavior of the AC integrand with much improved accuracy over previous attempts. The AC integrands in the present work are recommended as a basis for further work, generating functionals that avoid spurious error cancellations between exchange and correlation energies and give good accuracy for the range of densities and types of correlation contained in the systems studied here.

  13. Modeling Simple Driving Tasks with a One-Boundary Diffusion Model

    PubMed Central

    Ratcliff, Roger; Strayer, David

    2014-01-01

    A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
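
    A one-boundary diffusion model can be simulated directly: evidence accumulates with drift v and unit-variance noise until it crosses a single boundary a, and the first-passage time plus a non-decision time t0 gives the RT. The parameter values below are hypothetical; the mean RT should approach t0 + a/v (the mean of the resulting Wald distribution).

```python
import numpy as np

def one_boundary_rts(n, drift, boundary, t0, dt, rng):
    """First-passage times of a drift-diffusion process to a single boundary."""
    rts = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while x < boundary:
            x += drift * dt + rng.normal(0.0, np.sqrt(dt))  # Euler-Maruyama step
            t += dt
        rts[i] = t0 + t  # add non-decision time
    return rts

rng = np.random.default_rng(3)
rts = one_boundary_rts(500, drift=2.0, boundary=1.0, t0=0.3, dt=0.001, rng=rng)
print(rts.mean())  # near t0 + boundary/drift = 0.8 s
```

    In this framing, distraction lowering the drift rate or raising the boundary both lengthen the simulated RT distribution's tail, matching the fitted-parameter interpretation above.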

  14. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks.

    PubMed

    Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J

    2011-11-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R(adj)(2)), followed by position (28 ± 24% of R(adj)(2)) and speed (11 ± 19% of R(adj)(2)). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R(adj)(2) values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. 
These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.

  15. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
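
    The two techniques are connected: for simple linear regression, the squared Pearson correlation equals the fraction of variance explained by the fitted line. A minimal worked example on synthetic data:

```python
import numpy as np

# toy bivariate data with a known linear trend plus noise
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=x.size)

r = np.corrcoef(x, y)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(x, y, 1)  # least-squares simple linear regression
r2 = r ** 2                             # variance explained by the fit

print(slope, intercept, r2)  # slope near 2, intercept near 1, r^2 near 1
```

    Inference on the slope (e.g., a t-test of slope = 0) is equivalent to testing r = 0, one of the regression/ANOVA connections the chapter develops.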

  16. Deterministic diffusion in flower-shaped billiards.

    PubMed

    Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre

    2002-08-01

    We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
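
    The simple random walk approximation referenced here treats transport as a sequence of uncorrelated hops, so the diffusion coefficient follows from the mean-squared displacement, MSD = 4Dt in two dimensions. A generic sketch of that estimate (not the billiard model itself):

```python
import numpy as np

rng = np.random.default_rng(5)

def diffusion_from_msd(n_walkers, n_steps, step, dt):
    """Estimate D from the mean-squared displacement of uncorrelated 2D walks."""
    angles = rng.uniform(0, 2 * np.pi, size=(n_walkers, n_steps))
    dx = step * np.cos(angles)
    dy = step * np.sin(angles)
    r2 = dx.sum(axis=1) ** 2 + dy.sum(axis=1) ** 2
    return r2.mean() / (4.0 * n_steps * dt)   # MSD = 4 D t in 2D

D = diffusion_from_msd(n_walkers=20_000, n_steps=1_000, step=1.0, dt=1.0)
print(D)  # near step**2 / (4 * dt) = 0.25
```

    The billiard's memory effects show up precisely as deviations of the true D from such a memoryless estimate, which is what the higher-order corrections and Green-Kubo schemes capture.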

  17. Markov Decision Process Measurement Model.

    PubMed

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
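
    A Markov decision process is specified by transition probabilities P(s' | s, a) and rewards R(s, a), and its optimal policy can be recovered by value iteration. A minimal sketch on a hypothetical two-state, two-action game (not the paper's measurement model, which additionally links these quantities to latent traits):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Solve a small MDP: P[a, s, s'] transition probs, R[a, s] rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V          # Q[a, s] = R + gamma * E[V(s')]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# toy chain: state 1 is rewarding; action 1 moves there, action 0 stays put
P = np.array([[[1.0, 0.0], [0.0, 1.0]],    # action 0: stay
              [[0.0, 1.0], [0.0, 1.0]]])   # action 1: go to state 1
R = np.array([[0.0, 1.0],
              [0.0, 1.0]])
V, policy = value_iteration(P, R)
print(policy)  # optimal policy takes action 1 in state 0
```

    In the measurement model, observed action sequences are compared against such policies to infer how closely a student's within-task decisions track the optimal ones.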

  18. On one-parametric formula relating the frequencies of twin-peak quasi-periodic oscillations

    NASA Astrophysics Data System (ADS)

    Török, Gabriel; Goluchová, Kateřina; Šrámková, Eva; Horák, Jiří; Bakala, Pavel; Urbanec, Martin

    2018-01-01

    Twin-peak quasi-periodic oscillations (QPOs) are observed in several low-mass X-ray binary systems containing neutron stars. Timing analysis of the X-ray fluxes of more than a dozen such systems reveals remarkable correlations between the frequencies of two characteristic peaks present in the power density spectra. The individual correlations clearly differ, but they roughly follow a common pattern. High values of measured QPO frequencies and strong modulation of the X-ray flux both suggest that the observed correlations are connected to orbital motion in the innermost part of an accretion disc. Several attempts to model these correlations with simple geodesic orbital models or phenomenological relations have failed in the past. We find and explore a surprisingly simple analytic relation that reproduces individual correlations for a group of several sources through a single parameter. When an additional free parameter is considered within our relation, it reproduces the data of a large group of 14 sources well. The very existence and form of this simple relation support the hypothesis of the orbital origin of QPOs and provide the key for further development of QPO models. We discuss a possible physical interpretation of our relation's parameters and their links to concrete QPO models.

  19. Locating the quantum critical point of the Bose-Hubbard model through singularities of simple observables.

    PubMed

    Łącki, Mateusz; Damski, Bogdan; Zakrzewski, Jakub

    2016-12-02

    We show that the critical point of the two-dimensional Bose-Hubbard model can be easily found through studies of either on-site atom number fluctuations or the nearest-neighbor two-point correlation function (the expectation value of the tunnelling operator). Our strategy to locate the critical point is based on the observation that the derivatives of these observables with respect to the parameter that drives the superfluid-Mott insulator transition are singular at the critical point in the thermodynamic limit. Performing the quantum Monte Carlo simulations of the two-dimensional Bose-Hubbard model, we show that this technique leads to the accurate determination of the position of its critical point. Our results can be easily extended to the three-dimensional Bose-Hubbard model and different Hubbard-like models. They provide a simple experimentally-relevant way of locating critical points in various cold atomic lattice systems.

  20. Elucidation of spin echo small angle neutron scattering correlation functions through model studies.

    PubMed

    Shew, Chwen-Yang; Chen, Wei-Ren

    2012-02-14

    Several single-modal Debye correlation functions that approximate part of the overall Debye correlation function of liquids are closely examined to elucidate their behavior in the corresponding spin echo small angle neutron scattering (SESANS) correlation functions. We find that the maximum length scale of a Debye correlation function is identical to that of its SESANS correlation function. For discrete Debye correlation functions, the peak of the SESANS correlation function emerges at their first discrete point, whereas for continuous Debye correlation functions with greater width, the peak position shifts to a greater value. In both cases, the intensity and shape of the peak of the SESANS correlation function are determined by the width of the Debye correlation functions. Furthermore, we mimic the intramolecular and intermolecular Debye correlation functions of liquids composed of interacting particles based on a simple model to elucidate their competition in the SESANS correlation function. Our calculations show that the first local minimum of a SESANS correlation function can be either negative or positive. By adjusting the spatial distribution of the intermolecular Debye function in the model, the calculated SESANS spectra exhibit profiles consistent with those of hard-sphere and sticky-hard-sphere liquids predicted by more sophisticated liquid state theory and computer simulation. © 2012 American Institute of Physics.

  1. A Single Mechanism Can Account for Human Perception of Depth in Mixed Correlation Random Dot Stereograms

    PubMed Central

    Cumming, Bruce G.

    2016-01-01

    In order to extract retinal disparity from a visual scene, the brain must match corresponding points in the left and right retinae. This computationally demanding task is known as the stereo correspondence problem. The initial stage of the solution to the correspondence problem is generally thought to consist of a correlation-based computation. However, recent work by Doi et al suggests that human observers can see depth in a class of stimuli where the mean binocular correlation is 0 (half-matched random dot stereograms). Half-matched random dot stereograms are made up of an equal number of correlated and anticorrelated dots, and the binocular energy model—a well-known model of V1 binocular complex cells—fails to signal disparity here. This has led to the proposition that a second, match-based computation must be extracting disparity in these stimuli. Here we show that a straightforward modification to the binocular energy model—adding a point output nonlinearity—is by itself sufficient to produce cells that are disparity-tuned to half-matched random dot stereograms. We then show that a simple decision model using this single mechanism can reproduce psychometric functions generated by human observers, including reduced performance to large disparities and rapidly updating dot patterns. The model makes predictions about how performance should change with dot size in half-matched stereograms and temporal alternation in correlation, which we test in human observers. We conclude that a single correlation-based computation, based directly on already-known properties of V1 neurons, can account for the literature on mixed correlation random dot stereograms. PMID:27196696

  2. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model, rather than a simple persistent random walk, correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
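
    A BCRW of this kind can be sketched as two alternating movement modes, each with its own turn-angle concentration and its own exponential flight-length scale. All parameter values below are illustrative, not fitted to the MCF-10A data:

```python
import numpy as np

rng = np.random.default_rng(6)

def bcrw(n_steps, p_switch=0.1, kappa_dir=8.0, kappa_reo=0.5,
         mean_step_dir=2.0, mean_step_reo=0.5):
    """Bimodal correlated random walk: directional and re-orientation phases
    alternate, each with its own turning concentration and step-length scale."""
    pos = np.zeros((n_steps + 1, 2))
    heading, directional = 0.0, True
    for i in range(n_steps):
        if rng.random() < p_switch:
            directional = not directional        # switch movement mode
        kappa = kappa_dir if directional else kappa_reo
        mean_step = mean_step_dir if directional else mean_step_reo
        heading += rng.vonmises(0.0, kappa)      # correlated turning
        step = rng.exponential(mean_step)        # exponential flight length
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

path = bcrw(5000)
net = np.linalg.norm(path[-1] - path[0])
total = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
print(net, total)  # net displacement is a fraction of total path length
```

    Mean-squared displacement computed over an ensemble of such paths grows faster than linearly at intermediate times, the super-diffusive signature the authors report.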

  3. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    NASA Astrophysics Data System (ADS)

    Dhabal, Debdas; Nguyen, Andrew Huy; Singh, Murari; Khatua, Prabir; Molinero, Valeria; Bandyopadhyay, Sanjoy; Chakravarty, Charusita

    2015-10-01

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by Strip, is also studied. 
Strip is a good entropy estimator for liquids where pair and triplet correlations are important such as Ge and Si, but loses accuracy for purely pair-dominated liquids, like LJ fluid, or near the crystallization temperature (Tthr). Since local tetrahedral order is compatible with both liquid and crystalline states, the reorganisation of tetrahedral liquids is accompanied by a clear rise in the pair, triplet, and thermodynamic contributions to the heat capacity, resulting in the heat capacity anomaly. In contrast, the pair-dominated liquids show increasing dominance of triplet correlations on approaching crystallization but no sharp rise in either the pair or thermodynamic heat capacities.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in; Nguyen, Andrew Huy

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW{sub 16}). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW{sub 20}), silicon (SW{sub 21}), and water (SW{sub 23.15} or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. Themore » tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. 
The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by S{sub trip}, is also studied. S{sub trip} is a good entropy estimator for liquids where pair and triplet correlations are important, such as Ge and Si, but loses accuracy for purely pair-dominated liquids, like the LJ fluid, or near the crystallization temperature (T{sub thr}). Since local tetrahedral order is compatible with both liquid and crystalline states, the reorganisation of tetrahedral liquids is accompanied by a clear rise in the pair, triplet, and thermodynamic contributions to the heat capacity, resulting in the heat capacity anomaly. In contrast, the pair-dominated liquids show increasing dominance of triplet correlations on approaching crystallization but no sharp rise in either the pair or thermodynamic heat capacities.
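The pair term in the multiparticle correlation expansion mentioned above is conventionally computed from the radial distribution function g(r). A minimal numerical sketch of that two-body entropy integral (not the authors' code; the model g(r) below is invented for illustration):

```python
import numpy as np

def pair_entropy(r, g, rho):
    """Two-body excess entropy per particle, in units of k_B:
    S2 = -2*pi*rho * integral of [g ln g - g + 1] r^2 dr."""
    r, g = np.asarray(r, float), np.asarray(g, float)
    safe_g = np.where(g > 0.0, g, 1.0)           # g ln g -> 0 as g -> 0
    integrand = np.where(g > 0.0, safe_g * np.log(safe_g) - safe_g + 1.0, 1.0) * r**2
    trap = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(r))
    return -2.0 * np.pi * rho * trap

r = np.linspace(1e-6, 5.0, 2000)

# Ideal gas, g(r) = 1 everywhere: S2 vanishes.
s2_ideal = pair_entropy(r, np.ones_like(r), rho=0.8)

# Crude liquid-like g(r): excluded core plus one coordination peak, so S2 < 0.
g_liq = np.where(r < 1.0, 0.0, 1.0 + 0.5 * np.exp(-((r - 1.1) ** 2) / 0.02))
s2_liquid = pair_entropy(r, g_liq, rho=0.8)
```

The integrand g ln g - g + 1 is non-negative, so any structure in g(r) lowers the entropy relative to the ideal gas.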

  5. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results affirm the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. 
We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305

  7. Modeling Electronic-Nuclear Interactions for Excitation Energy Transfer Processes in Light-Harvesting Complexes.

    PubMed

    Lee, Mi Kyung; Coker, David F

    2016-08-18

    An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
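The MD-based route described here rests on Fourier-transforming a classical energy-gap autocorrelation function into a spectral density. A schematic sketch under one common convention for the harmonic prefactor (the exponentially decaying correlation function and the prefactor choice are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def spectral_density(c_t, dt, beta):
    """Spectral density from a classical gap autocorrelation C(t) via a
    cosine transform with the harmonic prefactor J(w) = (beta*w/2) * C~(w).
    (One common convention; prefactor choices vary in the literature.)"""
    n = len(c_t)
    t = np.arange(n) * dt
    omega = np.arange(n) * 2.0 * np.pi / (n * dt)
    cos_tf = np.array([np.sum(c_t * np.cos(w * t)) * dt for w in omega])
    return omega, 0.5 * beta * omega * cos_tf

# Exponentially decaying correlation -> Lorentzian-like spectral density
dt, tau, beta = 0.01, 0.5, 1.0
t = np.arange(2048) * dt
omega, J = spectral_density(np.exp(-t / tau), dt, beta)
```

For this input the transform has the analytic form tau / (1 + (omega*tau)^2), which the discrete sum reproduces to about a percent.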

  8. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques for calculating confidence intervals and conducting hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on a within-subjects design with 32 cells and 33 participants.
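In spirit, the proposed estimator averages each unit's cell means and resamples units for the naive percentile bootstrap. A hedged sketch with synthetic data (the data-generating choices, 20 participants and 2 conditions, are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unbalanced data: 20 participants, 2 conditions, and a
# varying number of observations (1-5) per participant per cell.
n_subj, n_cond = 20, 2
data = [[rng.normal(loc=c, size=rng.integers(1, 6)) for c in range(n_cond)]
        for _ in range(n_subj)]

def cell_mean_estimate(data):
    """Average each subject's cell means, then average across subjects --
    a simple estimator needing no model of the error covariance structure."""
    per_subj = np.array([[obs.mean() for obs in subj] for subj in data])
    return per_subj.mean(axis=0), per_subj

est, per_subj = cell_mean_estimate(data)

# Naive percentile bootstrap CI for the condition difference (resample subjects)
diffs = per_subj[:, 1] - per_subj[:, 0]
boot = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                 for _ in range(2000)])
ci = np.percentile(boot, [2.5, 97.5])
```

Because each subject contributes one cell mean per condition, the unbalanced counts never enter the estimator itself.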

  9. Financial model calibration using consistency hints.

    PubMed

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US dollar yield market.

  10. Simple, empirical approach to predict neutron capture cross sections from nuclear masses

    NASA Astrophysics Data System (ADS)

    Couture, A.; Casten, R. F.; Cakirli, R. B.

    2017-12-01

    Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, to modeling nuclear reactor design and performance, and to a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple of hundred keV. We show that regional cross sections in medium and heavy mass nuclei are compactly correlated with the two-neutron separation energy. These correlations readily allow prediction of unknown cross sections, often converting the usual extrapolations into more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. 
The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections, extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.
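The kind of regional correlation described here can be sketched as a linear fit of the log cross section against the two-neutron separation energy. The numbers below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical regional data: two-neutron separation energies S2n (MeV)
# and Maxwellian-averaged capture cross sections (mb) for nearby nuclei.
s2n = np.array([14.2, 13.5, 12.9, 12.1, 11.6, 10.8])
sigma = np.array([950.0, 610.0, 420.0, 245.0, 170.0, 95.0])

# Fit log10(sigma) linearly in S2n within the region
slope, intercept = np.polyfit(s2n, np.log10(sigma), 1)

def predict_sigma(s2n_new):
    """Interpolate (or cautiously extrapolate) a capture cross section
    from the two-neutron separation energy alone."""
    return 10.0 ** (intercept + slope * s2n_new)
```

A measured S2n for an unmeasured nucleus then yields a cross-section estimate directly from the regional fit.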

  11. Modeling Surface Climate in US Cities Using Simple Biosphere Model Sib2

    NASA Technical Reports Server (NTRS)

    Zhang, Ping; Bounoua, Lahouari; Thome, Kurtis; Wolfe, Robert; Imhoff, Marc

    2015-01-01

    We combine Landsat- and the Moderate Resolution Imaging Spectroradiometer (MODIS)-based products in the Simple Biosphere model (SiB2) to assess the effects of urbanized land on the continental US (CONUS) surface climate. Using National Land Cover Database (NLCD) Impervious Surface Area (ISA), we define more than 300 urban settlements and their surrounding suburban and rural areas over the CONUS. The SiB2 modeled Gross Primary Production (GPP) over the CONUS of 7.10 PgC (1 PgC = 10^15 grams of carbon) is comparable to the MODIS improved GPP of 6.29 PgC. At the state level, SiB2 GPP is highly correlated with MODIS GPP, with a correlation coefficient of 0.94. An increasing horizontal GPP gradient is seen from the urban core out to the rural area, with rural areas fixing, on average, 30% more GPP than urban ones. Cities built in forested biomes have a stronger urban heat island (UHI) magnitude than those built in short vegetation with low biomass. Mediterranean-climate cities have a stronger UHI in the wet season than in the dry season. Our results also show that for urban areas built within forests, 39% of the precipitation is discharged as surface runoff during summer, versus 23% in rural areas.

  12. Data requirements to model creep in 9Cr-1Mo-V steel

    NASA Technical Reports Server (NTRS)

    Swindeman, R. W.

    1988-01-01

    Models for creep behavior are helpful in predicting the response of components experiencing stress redistribution due to cyclic loads, and often the analyst would like information that correlates strain rate with history, assuming simple hardening rules such as those based on time or strain. Much progress has been made in the development of unified constitutive equations that include both hardening and softening through the introduction of state variables whose evolution is history dependent. Although it is difficult to estimate specific data requirements for general application, there are several simple measurements that can be made in the course of creep testing and reported in databases. The issue is whether such data could be helpful in developing unified equations and, if so, how they should be reported. Data produced on a martensitic 9Cr-1Mo-V-Nb steel were examined with these issues in mind.

  13. Hydrogeomorphology explains acidification-driven variation in aquatic biological communities in the Neversink Basin, USA

    USGS Publications Warehouse

    Harpold, Adrian A.; Burns, Douglas A.; Walter, M.T.; Steenhuis, Tammo S.

    2013-01-01

    Describing the distribution of aquatic habitats and the health of biological communities can be costly and time-consuming; therefore, simple, inexpensive methods to scale observations of aquatic biota to watersheds that lack data would be useful. In this study, we explored the potential of a simple “hydrogeomorphic” model to predict the effects of acid deposition on macroinvertebrate, fish, and diatom communities in 28 sub-watersheds of the 176-km2 Neversink River basin in the Catskill Mountains of New York State. The empirical model was originally developed to predict stream-water acid neutralizing capacity (ANC) using the watershed slope and drainage density. Because ANC is known to be strongly related to aquatic biological communities in the Neversink, we speculated that the model might correlate well with biotic indicators of ANC response. The hydrogeomorphic model was strongly correlated to several measures of macroinvertebrate and fish community richness and density, but less strongly correlated to diatom acid tolerance. The model was also strongly correlated to biological communities in 18 sub-watersheds independent of the model development, with the linear correlation capturing the strongly acidic nature of small upland watersheds (2). Overall, we demonstrated the applicability of geospatial data sets and a simple hydrogeomorphic model for estimating aquatic biological communities in areas with stream-water acidification, allowing estimates where no direct field observations are available. Similar modeling approaches have the potential to complement or refine expensive and time-consuming measurements of aquatic biota populations and to aid in regional assessments of aquatic health.

  14. Statistical Mechanics of US Supreme Court

    NASA Astrophysics Data System (ADS)

    Lee, Edward; Broedersz, Chase; Bialek, William; Biophysics Theory Group Team

    2014-03-01

    We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The least structured, or maximum entropy, model that is consistent with the observed pairwise correlations among justices' votes is equivalent to an Ising spin glass. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering some of our intuition that justices on opposite sides of the ideological spectrum should have a negative influence on one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, and this agrees quantitatively with the data. The model shows that voting patterns are organized in a relatively simple ``energy landscape,'' correctly predicts the extent to which each justice is correlated with the majority, and gives us a measure of the influence that justices exert on one another. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context. Funded by National Science Foundation Grants PHY-0957573 and CCF-0939370, WM Keck Foundation, Lewis-Sigler Fellowship, Burroughs Wellcome Fund, and Winston Foundation.
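The pairwise maximum entropy model referred to here is an Ising model whose state probabilities follow a Boltzmann distribution over vote configurations. A small-scale sketch with hypothetical couplings and a five-member "court" (not the fitted Supreme Court parameters, which the authors infer from voting data):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 5                                    # small hypothetical voting body
h = rng.normal(0.0, 0.2, size=n)         # individual voting biases
J = np.triu(rng.normal(0.3, 0.3, size=(n, n)), 1)  # couplings, both signs

# Enumerate all 2^n vote configurations, s_i = -1 or +1
states = np.array(list(itertools.product([-1, 1], repeat=n)))

def energy(s):
    """Ising energy: E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i."""
    return -s @ J @ s - h @ s

E = np.array([energy(s) for s in states])
p = np.exp(-E)
p /= p.sum()                             # Boltzmann distribution

# Pairwise vote correlations <s_i s_j> under the model
corr = (states.T * p) @ states
# Probability of a unanimous vote (all -1 or all +1)
p_unanimous = p[0] + p[-1]
```

With predominantly positive couplings the unanimity probability is far larger than the 2/2^n a uniform model would give, mirroring the tendency the abstract describes.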

  15. The impact of electrostatic correlations on Dielectrophoresis of Non-conducting Particles

    NASA Astrophysics Data System (ADS)

    Alidoosti, Elaheh; Zhao, Hui

    2017-11-01

    The dipole moment of a charged, dielectric, spherical particle under a uniform alternating electric field is computed theoretically and numerically by solving the modified continuum Poisson-Nernst-Planck (PNP) equations, accounting for the ion-ion electrostatic correlations that become important in concentrated electrolytes (Phys. Rev. Lett. 106, 2011). The dependence on frequency, zeta potential, electrostatic correlation length, and double-layer thickness is thoroughly investigated. In the limit of thin double layers, we carry out asymptotic analysis to develop simple models which are in good agreement with the modified PNP model. Our results suggest that electrostatic correlations have a complicated impact on the dipole moment. As the electrostatic correlation length increases, the dipole moment decreases initially, reaches a minimum, and then increases, since the surface conduction first decreases and then increases due to ion-ion correlations. The modified PNP model can improve the theoretical predictions, particularly at low frequencies where the simple model cannot qualitatively predict the dipole moment. This work was supported, in part, by NIH R15GM116039.

  16. A simple approach to quantitative analysis using three-dimensional spectra based on selected Zernike moments.

    PubMed

    Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li

    2013-01-21

    A very simple approach to quantitative analysis is proposed based on the technology of digital image processing, using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As region-based shape features of a grayscale image, Zernike moments, with their inherent invariance properties, were employed to establish linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained. The correlation coefficients (R(2)) for the training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.

  17. United3D: a protein model quality assessment program that uses two consensus based methods.

    PubMed

    Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko

    2012-01-01

    In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program, United3D, that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates quality scores (Qscore) for predicted protein models that are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment. In CASP9, United3D showed the lowest average loss of GDT_TS (5.3) among the participating QA methods, indicating that it identified high-quality models among those predicted by CASP9 servers on 116 targets more reliably than the other QA methods tested. United3D also produced high average Pearson correlation coefficients (0.93) and acceptable Kendall rank correlation coefficients (0.68) between Qscore and GDT_TS. This performance was competitive with the other top-ranked QA methods tested in CASP9. These results indicate that United3D is a useful tool for selecting high-quality models from the many candidate structures provided by various modeling methods, and that it will improve the accuracy of protein structure prediction.

  18. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it to test the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
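The univariate time-rescaling test mentioned above transforms spike times through the integrated intensity and compares the rescaled intervals against a unit-rate exponential with a KS statistic. A minimal single-train sketch (the simulated rates and grid are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

def rescale_times(spikes, lam, dt):
    """Time-rescaling: z_k = Lambda(t_k) - Lambda(t_{k-1}), where
    Lambda(t) = integral of lambda(u) du from 0 to t. Under a correct
    rate model the z_k are i.i.d. Exp(1)."""
    Lam = np.cumsum(lam) * dt
    idx = np.rint(np.asarray(spikes) / dt).astype(int)
    return np.diff(np.concatenate(([0.0], Lam[idx])))

def ks_stat_exp1(z):
    """Kolmogorov-Smirnov distance of rescaled intervals from Exp(1)."""
    u = np.sort(1.0 - np.exp(-np.asarray(z)))
    k = np.arange(1, u.size + 1)
    return np.max(np.maximum(k / u.size - u, u - (k - 1) / u.size))

# Homogeneous Poisson train, 5 Hz for 200 s, simulated on a fine grid
dt, T, rate = 0.001, 200.0, 5.0
t_grid = np.arange(0.0, T, dt)
spikes = t_grid[rng.random(t_grid.size) < rate * dt]

z_true = rescale_times(spikes, np.full(t_grid.size, rate), dt)         # correct model
z_wrong = rescale_times(spikes, np.full(t_grid.size, 2.0 * rate), dt)  # misspecified
```

The correct rate model yields a small KS distance; a misspecified rate is flagged. The paper's point is that for populations this per-train test alone can miss neglected couplings.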

  19. A new modelling approach for zooplankton behaviour

    NASA Astrophysics Data System (ADS)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models—random walk and correlated walk models—as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
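The non-"intelligent" baselines named here, random and correlated walks, are easy to sketch; persistence in the turning angle is the only difference between them (step counts and the persistence value below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def correlated_walk(n_steps, persistence, step=1.0):
    """2-D correlated random walk: each turning angle is drawn around the
    previous heading; persistence=0 reduces to a plain random walk."""
    heading = rng.uniform(0.0, 2.0 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += (1.0 - persistence) * rng.uniform(-np.pi, np.pi)
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

# More persistent walkers travel farther on average
d_rand = np.mean([np.linalg.norm(correlated_walk(200, 0.0)[-1]) for _ in range(50)])
d_corr = np.mean([np.linalg.norm(correlated_walk(200, 0.9)[-1]) for _ in range(50)])
```

The net displacement statistics of such baselines are what behaviour models like the adaptive network are judged against.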

  20. Read-only high accuracy volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2011-10-01

    A read-only volume holographic correlator (VHC) is proposed. After all of the correlation database pages are recorded by angular multiplexing, a stand-alone read-only high-accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Since separate lasers are employed for recording and readout, the alignment of the laser illumination on the SLM (spatial light modulator) is very sensitive. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. An experimental demonstration of the proposed read-only VHC is introduced and discussed.

  1. A Bayes linear Bayes method for estimation of correlated event rates.

    PubMed

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
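Under the homogeneous Poisson assumption used here, a gamma prior on the event rate is conjugate, which is what makes the full Bayesian comparator tractable in closed form. A minimal single-rate sketch (the prior values and observed counts are illustrative; the paper's multivariate gamma prior couples several rates):

```python
def poisson_gamma_update(alpha, beta, events, exposure):
    """Conjugate update: a Gamma(alpha, beta) prior on a Poisson rate,
    after observing `events` in `exposure` time, gives a
    Gamma(alpha + events, beta + exposure) posterior."""
    a_post = alpha + events
    b_post = beta + exposure
    return a_post, b_post, a_post / b_post   # posterior mean rate

# Prior belief: rate around 0.5/yr (alpha=1, beta=2); observe 7 events in 10 yr
a, b, rate = poisson_gamma_update(1.0, 2.0, 7, 10.0)
```

The posterior mean (alpha + n) / (beta + T) shrinks the raw rate n/T toward the prior mean, which is the behaviour the Bayes linear Bayes method approximates more cheaply across correlated rates.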

  2. Volatility and correlation-based systemic risk measures in the US market

    NASA Astrophysics Data System (ADS)

    Civitarese, Jamil

    2016-10-01

    This paper deals with the problem of how to use simple systemic risk measures to assess portfolio risk characteristics. Using three simple examples taken from previous literature, one based on raw and partial correlations, another based on the eigenvalue decomposition of the covariance matrix, and the last based on an eigenvalue entropy, a Granger-causation analysis revealed that some of them are not always good measures of risk in the S&P 500 and the VIX. The selected measures do not Granger-cause the VIX index in all selected windows; therefore, in the sense of risk as volatility, the indicators are not always suitable. Nevertheless, their results with respect to returns are similar to those of previous works that accept them. A deeper analysis showed that any symmetric measure based on the eigenvalue decomposition of correlation matrices is, however, not useful as a measure of "correlation" risk. An empirical check of this proposition showed that negative correlations are usually small and therefore do not heavily distort the behavior of the indicator.
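An eigenvalue-entropy measure of the kind examined here can be sketched in a few lines; the synthetic return panels below are invented to show the contrast between weakly and strongly correlated assets:

```python
import numpy as np

def eigenvalue_entropy(returns):
    """Shannon entropy of the normalized eigenvalues of the correlation
    matrix. Low entropy means variance concentrated in few modes, one
    common reading of elevated 'systemic' co-movement."""
    c = np.corrcoef(returns, rowvar=False)
    lam = np.linalg.eigvalsh(c)
    p = lam / lam.sum()
    p = p[p > 1e-12]                 # drop numerically zero eigenvalues
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(4)
uncorrelated = rng.normal(size=(500, 10))          # 10 independent assets
common = rng.normal(size=(500, 1))                 # one shared market factor
correlated = 0.9 * common + 0.1 * rng.normal(size=(500, 10))
```

Independent assets give entropy near ln(10); a dominant common factor collapses it toward zero, which is the property such indicators track.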

  3. Modeling velocity space-time correlations in wind farms

    NASA Astrophysics Data System (ADS)

    Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael

    2016-11-01

    Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer. In this context, it is important to derive simple physics-based models. The so-called Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis is used with an additional mean wind velocity to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. First results comparing the model to large eddy simulations will be presented, and the potential of the approach to characterize power-output fluctuations of wind farms will be discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
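Under the random sweeping hypothesis with a mean wind, the space-time correlation is the spatial correlation convolved with a Gaussian decorrelation kernel, which has a closed form when the spatial correlation is itself taken as Gaussian. A sketch under that assumption (the functional forms and parameter values are illustrative, not the authors' model fit):

```python
import numpy as np

def space_time_corr(r, t, L=1.0, U=5.0, v_rms=1.0):
    """Space-time correlation under random sweeping plus mean advection:
    a Gaussian spatial correlation exp(-r^2 / 2L^2) convolved with the
    Gaussian sweeping kernel of width v_rms * t, evaluated at r - U*t.
    L: correlation length, U: mean wind, v_rms: sweeping velocity scale."""
    sigma2 = L ** 2 + (v_rms * t) ** 2
    return (L / np.sqrt(sigma2)) * np.exp(-((r - U * t) ** 2) / (2.0 * sigma2))
```

The correlation peak advects with the mean wind (maximum at r = U*t) while random sweeping both broadens the correlation and lowers its amplitude over time.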

  4. Forecasting plant phenology: evaluating the phenological models for Betula pendula and Padus racemosa spring phases, Latvia.

    PubMed

    Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta

    2015-02-01

    A historical phenological record and meteorological data for the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and the beginning of flowering for two tree species, silver birch Betula pendula and bird cherry Padus racemosa, in Latvia. Model stability is estimated by performing multiple model-fitting runs, using half of the data for model training and the other half for evaluation. Correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using a sigmoidal development rate-temperature relationship and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model-fitting runs of the simple degree-day based models. Large variation of the model parameters between different model-fitting runs in the case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by the DDcos model, for B. pendula leaf unfolding is 5.6 °C and for the start of flowering 6.7 °C; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
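A DDcos-style calculation accumulates degree-days from a resolved diurnal temperature course rather than from the daily mean alone. A hedged sketch (the cosine diurnal course and the toy spring warming series are illustrative assumptions, not the paper's calibration; only the 5.6 °C base temperature is taken from the abstract):

```python
import numpy as np

def degree_days_cos(t_min, t_max, t_base):
    """Daily degree-day contribution assuming a cosine diurnal course
    between t_min and t_max (the idea behind DDcos-style models):
    the average of max(T(hour) - t_base, 0) over the day, in deg C * day."""
    hours = np.linspace(0.0, 24.0, 241)
    mean, amp = (t_max + t_min) / 2.0, (t_max - t_min) / 2.0
    temp = mean - amp * np.cos(2.0 * np.pi * hours / 24.0)  # minimum at hour 0
    return np.maximum(temp - t_base, 0.0).mean()

def predict_phenophase(t_min_series, t_max_series, t_base, heat_sum):
    """First day on which the accumulated degree-days reach heat_sum."""
    acc = np.cumsum([degree_days_cos(lo, hi, t_base)
                     for lo, hi in zip(t_min_series, t_max_series)])
    hit = np.flatnonzero(acc >= heat_sum)
    return int(hit[0]) if hit.size else None

# Toy warming spring, with the B. pendula base temperature from the abstract
t_min = np.linspace(0.0, 10.0, 60)
t_max = np.linspace(5.0, 18.0, 60)
day = predict_phenophase(t_min, t_max, 5.6, 50.0)
```

Resolving the diurnal cycle lets days whose mean is below the base temperature still contribute warm afternoon hours, which a mean-temperature degree-day model would miss.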

  5. Correlation between cystatin C-based formulas, Schwartz formula and urinary creatinine clearance for glomerular filtration rate estimation in children with kidney disease.

    PubMed

    Safaei-Asl, Afshin; Enshaei, Mercede; Heydarzadeh, Abtin; Maleknejad, Shohreh

    2016-01-01

    Assessment of glomerular filtration rate (GFR) is an important tool for monitoring renal function. Given the limitations of available methods, we aimed to calculate GFR with cystatin C (Cys C)-based formulas and determine their correlation with current methods. We studied 72 children (38 boys and 34 girls) with renal disorders. The 24-hour urinary creatinine (Cr) clearance was the gold-standard method. GFR was measured with the Schwartz formula and with Cys C-based formulas (Grubb, Hoek, Larsson and Simple). Using the Pearson correlation coefficient, a significant positive correlation between all formulas and the standard method was seen (R(2) for the Schwartz, Hoek, Larsson, Grubb and Simple formulas was 0.639, 0.722, 0.705, 0.712 and 0.722, respectively) (P<0.001). Cys C-based formulas could predict the variance of the standard method results with high power. These formulas had an intermediate correlation with the Schwartz formula (R(2) 0.62-0.65). Linear regression with a constant (y-intercept) revealed that the Larsson, Hoek and Grubb formulas estimate GFR with no statistical difference from the standard method, whereas the Schwartz and Simple formulas overestimate GFR. This study shows that Cys C-based formulas have a strong relationship with 24-hour urinary Cr clearance. Hence, they can determine GFR in children with kidney injury more easily and with sufficient accuracy, helping the physician diagnose renal disease at an early stage and improving the prognosis.
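The formulas compared in this study are simple algebraic expressions. The versions below use commonly cited coefficients (bedside Schwartz k = 0.413; Hoek; Larsson), which may differ from the exact variants used by the authors:

```python
def gfr_schwartz(height_cm, scr_mg_dl, k=0.413):
    """Bedside Schwartz estimate (mL/min/1.73 m^2): k * height / serum Cr."""
    return k * height_cm / scr_mg_dl

def gfr_hoek(cysc_mg_l):
    """Hoek formula: GFR = -4.32 + 80.35 / cystatin C."""
    return -4.32 + 80.35 / cysc_mg_l

def gfr_larsson(cysc_mg_l):
    """Larsson formula: GFR = 77.24 * (cystatin C)^-1.2623."""
    return 77.24 * cysc_mg_l ** (-1.2623)

# Hypothetical child: height 120 cm, serum creatinine 0.5 mg/dL, Cys C 1.0 mg/L
estimates = {
    "Schwartz": gfr_schwartz(120.0, 0.5),
    "Hoek": gfr_hoek(1.0),
    "Larsson": gfr_larsson(1.0),
}
```

Note that the creatinine-based Schwartz estimate depends on height, while the Cys C formulas need only the single serum marker, part of their practical appeal in children.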

  6. Monte Carlo simulations of lattice models for single polymer systems

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping

    2014-10-01

    Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ˜ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
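
    As context for the lattice models, a self-avoiding walk on the simple cubic lattice can be generated by naive rejection sampling. The study instead uses the pruned-enriched Rosenbluth method, which is essential at N ~ 10^4; plain rejection as sketched here is only feasible for short chains.

```python
import random

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def try_saw(n_steps, rng):
    """Attempt one n-step self-avoiding walk; return site list, or None on self-intersection."""
    pos = (0, 0, 0)
    visited = {pos}
    walk = [pos]
    for _ in range(n_steps):
        dx, dy, dz = rng.choice(MOVES)
        pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        if pos in visited:
            return None  # walk intersects itself: reject the whole attempt
        visited.add(pos)
        walk.append(pos)
    return walk

def sample_saw(n_steps, rng):
    """Retry until an attempt succeeds (attrition grows exponentially with n)."""
    while True:
        walk = try_saw(n_steps, rng)
        if walk is not None:
            return walk

walk = sample_saw(20, random.Random(42))
print(len(walk), len(set(walk)))  # 21 sites, all distinct
```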

  7. Imprints of spherical nontrivial topologies on the cosmic microwave background.

    PubMed

    Niarchou, Anastasia; Jaffe, Andrew

    2007-08-24

    The apparent low power in the cosmic microwave background (CMB) temperature anisotropy power spectrum derived from the Wilkinson Microwave Anisotropy Probe motivated us to consider the possibility of a nontrivial topology. We focus on simple spherical multiconnected manifolds and discuss their implications for the CMB in terms of the power spectrum, maps, and the correlation matrix. We perform a Bayesian model comparison against the fiducial best-fit cold dark matter model with a cosmological constant based both on the power spectrum and the correlation matrix to assess their statistical significance. We find that the first-year power spectrum shows a slight preference for the truncated cube space, but the three-year data show no evidence for any of these spaces.

  8. A study on assimilating potential vorticity data

    NASA Astrophysics Data System (ADS)

    Li, Yong; Ménard, Richard; Riishøjgaard, Lars Peter; Cohn, Stephen E.; Rood, Richard B.

    1998-08-01

    The correlation that exists between the potential vorticity (PV) field and the distribution of chemical tracers such as ozone suggests the possibility of using tracer observations as proxy PV data in atmospheric data assimilation systems. Especially in the stratosphere, where there are plentiful tracer observations but a general lack of reliable wind observations, the correlation is most pronounced. The issue investigated in this study is how model dynamics would respond to the assimilation of PV data. First, numerical experiments of identical-twin type were conducted with a simple univariate nudging algorithm and a global shallow water model based on PV and divergence (PV-D model). All model fields are successfully reconstructed through the insertion of complete PV data alone if an appropriate value for the nudging coefficient is used. A simple linear analysis suggests that slow modes are recovered rapidly, at a rate nearly independent of spatial scale. In a more realistic experiment, appropriately scaled total ozone data from the NIMBUS-7 TOMS instrument were assimilated as proxy PV data into the PV-D model over a 10-day period. The resulting model PV field matches the observed total ozone field relatively well on large spatial scales, and the PV, geopotential and divergence fields are dynamically consistent. These results indicate the potential usefulness that tracer observations, as proxy PV data, may offer in a data assimilation system.
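
    Univariate nudging amounts to relaxing the model state toward observations at a rate set by the nudging coefficient. A scalar sketch (illustrative only; the study applies this to PV in a shallow-water PV-D model, not to the toy variable used here):

```python
def nudge_step(x_model, x_obs, g, dt):
    """One relaxation step of the model state toward the observation.

    g is the nudging coefficient (1/time); larger g pulls harder toward the data.
    """
    return x_model + g * (x_obs - x_model) * dt

x, obs = 0.0, 1.0
for _ in range(50):
    x = nudge_step(x, obs, g=0.5, dt=0.2)
print(round(x, 4))  # -> 0.9948, i.e. 1 - 0.9**50: geometric relaxation toward obs
```

    Too small a g leaves the state unconstrained; too large a g (relative to dt) overshoots, which is why an appropriate coefficient value matters in the experiments above.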

  9. Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments

    PubMed Central

    Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria

    2015-01-01

    Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous medium, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
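
    The quantity all FCS-type analyses start from is the normalized intensity autocorrelation G(tau) = <dF(t) dF(t+tau)> / <F>^2. The sketch below computes it for a synthetic fluorescence trace (an AR(1) surrogate, purely illustrative; real FCS traces come from photon counting):

```python
import random

def fcs_autocorr(f, tau):
    """Normalized autocorrelation of intensity fluctuations at lag tau."""
    n = len(f)
    mean = sum(f) / n
    cov = sum((f[i] - mean) * (f[i + tau] - mean) for i in range(n - tau)) / (n - tau)
    return cov / mean ** 2

# Synthetic trace: mean intensity 100 with temporally correlated noise.
rng = random.Random(3)
trace, s = [], 0.0
for _ in range(5000):
    s = 0.9 * s + rng.gauss(0.0, 1.0)
    trace.append(100.0 + s)
print(fcs_autocorr(trace, 1) > fcs_autocorr(trace, 20))  # correlation decays with lag
```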

  10. Trait-Dependent Biogeography: (Re)Integrating Biology into Probabilistic Historical Biogeographical Models.

    PubMed

    Sukumaran, Jeet; Knowles, L Lacey

    2018-06-01

    The development of process-based probabilistic models for historical biogeography has transformed the field by grounding it in modern statistical hypothesis testing. However, most of these models abstract away biological differences, reducing species to interchangeable lineages. We present here the case for reintegration of biology into probabilistic historical biogeographical models, allowing a broader range of questions about biogeographical processes beyond ancestral range estimation or simple correlation between a trait and a distribution pattern, as well as allowing us to assess how inferences about ancestral ranges themselves might be impacted by differential biological traits. We show how new approaches to inference might cope with the computational challenges resulting from the increased complexity of these trait-based historical biogeographical models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  12. Four-body correlation embedded in antisymmetrized geminal power wave function.

    PubMed

    Kawasaki, Airi; Sugino, Osamu

    2016-12-28

    We extend Coleman's antisymmetrized geminal power (AGP) to develop a wave function theory that can incorporate up to four-body correlation in a region of strong correlation. To facilitate the variational determination of the wave function, the total energy is rewritten in terms of the traces of geminals. This novel trace formula is applied to a simple model system consisting of a one-dimensional Hubbard ring with a site of strong correlation. Our scheme significantly improves the result obtained by the AGP-configuration interaction scheme of Uemura et al. and also achieves more efficient compression of the degrees of freedom of the wave function. We regard the result as a step toward a first-principles wave function theory for a strongly correlated point defect or adsorbate embedded in an AGP-based mean-field medium.

  13. Optical depth in particle-laden turbulent flows

    NASA Astrophysics Data System (ADS)

    Frankel, A.; Iaccarino, G.; Mani, A.

    2017-11-01

    Turbulent clustering of particles causes an increase in the radiation transmission through gas-particle mixtures. Attempts to capture the ensemble-averaged transmission lead to a closure problem called the turbulence-radiation interaction. A simple closure model based on the particle radial distribution function is proposed to capture the effect of turbulent fluctuations in the concentration on radiation intensity. The model is validated against a set of particle-resolved ray tracing experiments through particle fields from direct numerical simulations of particle-laden turbulence. The form of the closure model is generalizable to arbitrary stochastic media with known two-point correlation functions.

  14. Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

    NASA Astrophysics Data System (ADS)

    Halbrügge, Marc

    2010-12-01

    This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is proposed instead. This measurement, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
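
    A standard rank-based statistic illustrates the direction the authors take. The sketch below is plain Spearman rank correlation (without tie handling), not the paper's specific rank-order/sequence-matching measurement:

```python
def ranks(xs):
    """Rank positions of the values in xs, starting at 1 (no tie handling)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    """Spearman rank correlation via the classical 1 - 6*sum(d^2)/(n(n^2-1)) formula."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))  # 0.8
```

    Unlike Pearson correlation, this depends only on orderings, which is what makes rank-based fit measures robust to the non-normal behavioral data described above.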

  15. EIT Noise Resonance Power Broadening: a probe for coherence dynamics

    NASA Astrophysics Data System (ADS)

    Crescimanno, Michael; O'Leary, Shannon; Snider, Charles

    2012-06-01

    EIT noise correlation spectroscopy holds promise as a simple, robust method for performing high resolution spectroscopy used in devices as diverse as magnetometers and clocks. One useful feature of these noise correlation resonances is that they do not power broaden with the EIT window. We report on measurements of the eventual power broadening (at higher optical powers) of these resonances and a simple, quantitative theoretical model that relates the observed power broadening slope with processes such as two-photon detuning gradients and coherence diffusion. These processes reduce the ground state coherence relative to that of a homogeneous system, and thus the power broadening slope of the EIT noise correlation resonance may be a simple, useful probe for coherence dynamics.

  16. Regimes of stability and scaling relations for the removal time in the asteroid belt: a simple kinetic model and numerical tests

    NASA Astrophysics Data System (ADS)

    Cubrovic, Mihailo

    2005-02-01

    We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called “stable chaos” (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.

  17. Crack Growth Modeling in an Advanced Powder Metallurgy Alloy

    DTIC Science & Technology

    1980-07-01

    [Figure captions garbled in the source scan: specimen configurations and core material qualification experiments; legible dimensions include 15.9 mm (0.625 inch) and 12.7 mm (0.50 inch).] ...best simple correlation of hold time and stress ratio (R = 0.05 through 0.8) effects on Inconel 718 at 650 °C (1200 °F) was by the maximum stress... in the work done in another study(22) on Inconel 718. Based on these room-temperature studies, the interpolative model was expected to have a

  18. Temporal correlation functions of concentration fluctuations: an anomalous case.

    PubMed

    Lubelski, Ariel; Klafter, Joseph

    2008-10-09

    We calculate, within the framework of the continuous time random walk (CTRW) model, multiparticle temporal correlation functions of concentration fluctuations (CCF) in systems that display anomalous subdiffusion. The subdiffusion stems from the nonstationary nature of the CTRW waiting times, which also lead to aging and ergodicity breaking. Due to aging, a system of diffusing particles tends to slow down as time progresses, and therefore, the temporal correlation functions strongly depend on the initial time of measurement. As a consequence, time averages of the CCF differ from ensemble averages, displaying therefore ergodicity breaking. We provide a simple example that demonstrates the difference between these two averages, a difference that might be amenable to experimental tests. We focus on the case of ensemble averaging and assume that the preparation time of the system coincides with the starting time of the measurement. Our analytical calculations are supported by computer simulations based on the CTRW model.
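
    The aging behavior described here comes from power-law waiting times. A minimal 1-D CTRW sketch (parameters illustrative; Pareto-type waiting times with exponent alpha < 1 have infinite mean, which is what produces the subdiffusion and ergodicity breaking):

```python
import random

def ctrw_position(t_max, alpha, rng):
    """Walker position at time t_max under Pareto(alpha) waiting times (minimum 1)."""
    t, x = 0.0, 0
    while True:
        u = 1.0 - rng.random()       # u in (0, 1]
        wait = u ** (-1.0 / alpha)   # heavy-tailed waiting time, always >= 1
        if t + wait > t_max:
            return x                 # next jump would land beyond the horizon
        t += wait
        x += rng.choice((-1, 1))     # unbiased unit jump

print(ctrw_position(1000.0, alpha=0.7, rng=random.Random(1)))
```

    Averaging trajectories started at different measurement times (rather than at preparation time) is where the ensemble/time-average discrepancy discussed above shows up.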

  19. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
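
    The diagnostic at issue, serial correlation in residuals, is easy to compute. The sketch below simulates AR(1) residuals and estimates their lag-1 autocorrelation (simulation parameters are illustrative):

```python
import random

def lag1_autocorr(r):
    """Sample lag-1 autocorrelation of a residual series."""
    n = len(r)
    mean = sum(r) / n
    num = sum((r[i] - mean) * (r[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in r)
    return num / den

# Simulated AR(1) residuals with true coefficient 0.7:
rng = random.Random(0)
resid, e = [], 0.0
for _ in range(2000):
    e = 0.7 * e + rng.gauss(0.0, 1.0)
    resid.append(e)
ac = lag1_autocorr(resid)
print(round(ac, 2))  # near 0.7, far from the zero correlation simple models assume
```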

  20. Ultrasound hepatic/renal ratio and hepatic attenuation rate for quantifying liver fat content.

    PubMed

    Zhang, Bo; Ding, Fang; Chen, Tian; Xia, Liang-Hua; Qian, Juan; Lv, Guo-Yi

    2014-12-21

    To establish and validate a simple quantitative assessment method for nonalcoholic fatty liver disease (NAFLD) based on a combination of the ultrasound hepatic/renal ratio and hepatic attenuation rate. A total of 170 subjects were enrolled in this study. All subjects were examined by ultrasound and (1)H-magnetic resonance spectroscopy ((1)H-MRS) on the same day. The ultrasound hepatic/renal echo-intensity ratio and ultrasound hepatic echo-intensity attenuation rate were obtained from ordinary ultrasound images using the MATLAB program. Correlation analysis revealed that the ultrasound hepatic/renal ratio and hepatic echo-intensity attenuation rate were significantly correlated with (1)H-MRS liver fat content (ultrasound hepatic/renal ratio: r = 0.952, P = 0.000; hepatic echo-intensity attenuation rate: r = 0.850, P = 0.000). The equation for predicting liver fat content by ultrasound (quantitative ultrasound model) is: liver fat content (%) = 61.519 × ultrasound hepatic/renal ratio + 167.701 × hepatic echo-intensity attenuation rate - 26.736. Spearman correlation analysis revealed that the liver fat content ratio of the quantitative ultrasound model was positively correlated with serum alanine aminotransferase, aspartate aminotransferase, and triglyceride, but negatively correlated with high density lipoprotein cholesterol. Receiver operating characteristic curve analysis revealed that the optimal point for diagnosing fatty liver was 9.15% in the quantitative ultrasound model. Furthermore, in the quantitative ultrasound model, fatty liver diagnostic sensitivity and specificity were 94.7% and 100.0%, respectively, showing that the quantitative ultrasound model was better than conventional ultrasound methods or the combined ultrasound hepatic/renal ratio and hepatic echo-intensity attenuation rate.
If the (1)H-MRS liver fat content had a value < 15%, the sensitivity and specificity of the ultrasound quantitative model would be 81.4% and 100%, which still shows that using the model is better than the other methods. The quantitative ultrasound model is a simple, low-cost, and sensitive tool that can accurately assess hepatic fat content in clinical practice. It provides an easy and effective parameter for the early diagnosis of mild hepatic steatosis and evaluation of the efficacy of NAFLD treatment.
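
    The fitted equation reported in the abstract can be wrapped directly as a function (the coefficients are taken verbatim from the text; the example inputs are hypothetical):

```python
def liver_fat_percent(hepatic_renal_ratio, attenuation_rate):
    """Quantitative ultrasound model from the abstract:
    liver fat (%) = 61.519 * H/R ratio + 167.701 * attenuation rate - 26.736
    """
    return 61.519 * hepatic_renal_ratio + 167.701 * attenuation_rate - 26.736

# Hypothetical measurements for illustration only:
print(round(liver_fat_percent(1.2, 0.05), 3))  # 55.472
```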

  1. Density Driven Removal of Sediment from a Buoyant Muddy Plume

    NASA Astrophysics Data System (ADS)

    Rouhnia, M.; Strom, K.

    2014-12-01

    Experiments were conducted to study the effect of settling-driven instabilities on sediment removal from hypopycnal plumes. Traditional approaches scale removal rates with particle settling velocity; however, it has been suggested that removal from buoyant suspensions happens at higher rates. The enhancement of removal is likely due to gravitational instabilities, such as fingering, at the two-fluid interface. Previous studies have all sought to suppress flocculation, and no simple model exists to predict the removal rates under the effect of such instabilities. This study examines whether or not flocculation hampers instability formation and presents a simple removal rate model accounting for gravitational instabilities. A buoyant suspension of flocculated kaolinite overlying a base of clear saltwater was investigated in a laboratory tank. Concentration was continuously measured in both layers with a pair of OBS sensors, and the interface was monitored with digital cameras. Snapshots from the video were used to measure finger velocity. Samples of flocculated particles at the interface were extracted to retrieve floc size data using a floc camera. Flocculation did not stop the creation of settling-driven fingers. A simple cylinder-based force balance model was capable of predicting finger velocity. The analogy of the fingering process in fine-grained suspensions to thermal plume formation, together with the concept of the Grashof number, enabled us to model finger spacing as a function of initial concentration. Finally, from geometry, the effective cross-sectional area was correlated to finger spacing. The outward flux expression was reformulated by substituting finger velocity for particle settling velocity and finger area for total area. A box model along with the proposed outward flux was used to predict the SSC in the buoyant layer. The model quantifies removal flux based on the initial SSC and is in good agreement with the experimental data.

  2. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    NASA Astrophysics Data System (ADS)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
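
    The parent-Gaussian idea can be illustrated with a toy sketch: simulate a correlated Gaussian AR(1) series, then push it through the normal CDF and a target inverse CDF, so the marginal becomes exponential while temporal dependence is inherited from the Gaussian parent. (The paper's actual contribution, the parametric correlation transformation functions that let one prescribe the correlation of the target process itself, is not reproduced here.)

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def exp_inv_cdf(u, rate=1.0):
    """Inverse CDF of the exponential distribution (mean 1/rate)."""
    return -math.log(1.0 - u) / rate

def simulate(n, rho, rng):
    """Exponential-marginal series driven by a 'parent' Gaussian AR(1) process."""
    z = rng.gauss(0.0, 1.0)
    out = []
    for _ in range(n):
        z = rho * z + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append(exp_inv_cdf(phi(z)))
    return out

x = simulate(5000, rho=0.8, rng=random.Random(0))
print(round(sum(x) / len(x), 2))  # sample mean near the exponential mean of 1
```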

  3. A theoretically based determination of Bowen-ratio fetch requirements

    USGS Publications Warehouse

    Stannard, D.I.

    1997-01-01

    Determination of fetch requirements for accurate Bowen-ratio measurements of latent- and sensible-heat fluxes is more involved than for eddy-correlation measurements because Bowen-ratio sensors are located at two heights, rather than just one. A simple solution to the diffusion equation is used to derive an expression for Bowen-ratio fetch requirements, downwind of a step change in surface fluxes. These requirements are then compared to eddy-correlation fetch requirements based on the same diffusion equation solution. When the eddy-correlation and upper Bowen-ratio sensor heights are equal, and the available energy upwind and downwind of the step change is constant, the Bowen-ratio method requires less fetch than does eddy correlation. Differences in fetch requirements between the two methods are greatest over relatively smooth surfaces. Bowen-ratio fetch can be reduced significantly by lowering the lower sensor, as well as the upper sensor. The Bowen-ratio fetch model was tested using data from a field experiment where multiple Bowen-ratio systems were deployed simultaneously at various fetches and heights above a field of bermudagrass. Initial comparisons were poor, but improved greatly when the model was modified (and operated numerically) to account for the large roughness of the upwind cotton field.

  4. Low Speed and High Speed Correlation of SMART Active Flap Rotor Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi B. R.

    2010-01-01

    Measured, open loop and closed loop data from the SMART rotor test in the NASA Ames 40- by 80- Foot Wind Tunnel are compared with CAMRAD II calculations. One open loop high-speed case and four closed loop cases are considered. The closed loop cases include three high-speed cases and one low-speed case. Two of these high-speed cases include a 2 deg flap deflection at 5P case and a test maximum-airspeed case. This study follows a recent, open loop correlation effort that used a simple correction factor for the airfoil pitching moment Mach number. Compared to the earlier effort, the current open loop study considers more fundamental corrections based on advancing blade aerodynamic conditions. The airfoil tables themselves have been studied. Selected modifications to the HH-06 section flap airfoil pitching moment table are implemented. For the closed loop condition, the effect of the flap actuator is modeled by increased flap hinge stiffness. Overall, the open loop correlation is reasonable, thus confirming the basic correctness of the current semi-empirical modifications; the closed loop correlation is also reasonable considering that the current flap model is a first generation model. Detailed correlation results are given in the paper.

  5. Grading apical vertebral rotation without a computed tomography scan: a clinically relevant system based on the radiographic appearance of bilateral pedicle screws.

    PubMed

    Upasani, Vidyadhar V; Chambers, Reid C; Dalal, Ali H; Shah, Suken A; Lehman, Ronald A; Newton, Peter O

    2009-08-01

    Bench-top and retrospective analysis to assess vertebral rotation based on the appearance of bilateral pedicle screws in patients with adolescent idiopathic scoliosis (AIS). To develop a clinically relevant radiographic grading system for evaluating postoperative thoracic apical vertebral rotation that would correlate with computed tomography (CT) measures of rotation. The 3-column vertebral body control provided by bilateral pedicle screws has enabled scoliosis surgeons to develop advanced techniques of direct vertebral derotation. Our ability to accurately quantify spinal deformity in the axial plane, however, continues to be limited. Trigonometry was used to define the relationship between the position of bilateral pedicle screws and vertebral rotation. This relationship was validated using digital photographs of a bench-top model. The mathematical relationships were then used to calculate vertebral rotation from standing postoperative, posteroanterior radiographs in AIS patients and correlated with postoperative CT measures of rotation. Fourteen digital photographs of the bench-top model were independently analyzed twice by 3 coauthors. The mathematically calculated degree of rotation was found to correlate significantly with the actual degree of rotation (r = 0.99; P < 0.001) and the intra- and interobserver reliability for these measurements were both excellent (kappa = 0.98 and kappa = 0.97, respectively). In the retrospective analysis of 17 AIS patients, the average absolute difference between the radiographic measurement of rotation and the CT measure was only 1.9 degrees +/- 2.0 degrees (r = 0.92; P < 0.001). Based on these correlations a simple radiographic grading system for postoperative apical vertebral rotation was developed. An accurate assessment of vertebral rotation can be performed radiographically, using screw lengths and screw tip-to-rod distances of bilateral segmental pedicle screws and a trigonometric calculation. 
These data support the use of a simple radiographic grading system to approximate apical vertebral rotation in AIS patients treated with bilateral apical pedicle screws.

  6. Model validation of simple-graph representations of metabolism

    PubMed Central

    Holme, Petter

    2009-01-01

    The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
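
    The two best-performing reductions can be made concrete with a toy reaction list (reaction encoding assumed here: a pair of substrate and product sets; the paper works from database reaction lists):

```python
from itertools import combinations

# Toy reaction system for illustration (glycolysis-flavored names, hypothetical list):
reactions = [
    ({"glucose", "ATP"}, {"G6P", "ADP"}),
    ({"G6P"}, {"F6P"}),
]

def substrate_product_edges(rxns):
    """Directed edges from each substrate to each product of the same reaction."""
    edges = set()
    for subs, prods in rxns:
        for s in subs:
            for p in prods:
                edges.add((s, p))
    return edges

def substance_edges(rxns):
    """Undirected edges between all substances participating in a reaction."""
    edges = set()
    for subs, prods in rxns:
        for a, b in combinations(sorted(subs | prods), 2):
            edges.add((a, b))
    return edges

print(len(substrate_product_edges(reactions)))  # 5
print(len(substance_edges(reactions)))          # 7
```

    The substance network is denser by construction (every co-participating pair is linked), which is one reason the two reductions can encode modular structure differently.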

  7. Internal (Annular) and Compressible External (Flat Plate) Turbulent Flow Heat Transfer Correlations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence; Smith, Justin

    Here we provide a discussion of the applicability of a family of traditional heat-transfer-correlation-based models for several (unit level) heat transfer problems associated with flight heat transfer estimates and with internal flow heat transfer associated with an experimental simulation design (Dobranich 2014). Variability between semi-empirical free-flight models suggests relative differences for heat transfer coefficients on the order of 10%, while for the internal annular flow behavior the differences are larger, on the order of 20%. We emphasize that these expressions are strictly valid only for the geometries for which they have been derived, e.g. fully developed annular flow or simple external flow problems. Although the application of flat-plate skin friction estimates to cylindrical bodies is a traditional procedure to estimate skin friction and heat transfer, an over-prediction bias is often observed when using these approximations for missile-type bodies. As a correction for this over-estimate trend, we discuss a simple scaling reduction factor for flat-plate turbulent skin friction and heat transfer solutions (correlations) applied to blunt bodies of revolution at zero angle of attack. The method estimates the ratio between the axisymmetric and 2-d stagnation-point skin friction and Stanton number solution expressions for sub-turbulent Reynolds numbers < 1x10^4. This factor is assumed to also directly influence the flat plate results applied to the cylindrical portion of the flow, and the flat plate correlations are modified by this factor.
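
    The textbook flat-plate building blocks of this correlation family can be sketched as follows (standard forms only; the report's specific expressions and its axisymmetric reduction factor are not reproduced here):

```python
def cf_turbulent_flat_plate(re_x):
    """Local turbulent flat-plate skin-friction coefficient, 0.0592 * Re_x**(-1/5)."""
    return 0.0592 * re_x ** -0.2

def stanton_colburn(cf, pr):
    """Colburn analogy relating heat transfer to skin friction: St = (cf/2) * Pr**(-2/3)."""
    return 0.5 * cf * pr ** (-2.0 / 3.0)

cf = cf_turbulent_flat_plate(1.0e6)   # local Reynolds number 10^6
st = stanton_colburn(cf, pr=0.7)      # Prandtl number typical of air
print(round(st, 6))
```

    A scaling reduction factor of the kind discussed above would multiply these flat-plate results before applying them to a body of revolution.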

  8. Influence of free-stream disturbances on boundary-layer transition

    NASA Technical Reports Server (NTRS)

    Harvey, W. D.

    1978-01-01

    Considerable experimental evidence exists which shows that free stream disturbances (the ratio of root-mean-square pressure fluctuations to mean values) in conventional wind tunnels increase with increasing Mach number at low supersonic to moderate hypersonic speeds. In addition to local conditions, the free stream disturbance level influences transition behavior on simple test models. Based on this observation, existing noise transition data obtained in the same test facility were correlated for a large number of reference sharp cones and flat plates and are shown to collapse along a single curve. This result is a significant improvement over previous attempts to correlate noise transition data.

  9. Quantifying predictability in a model with statistical features of the atmosphere

    PubMed Central

    Kleeman, Richard; Majda, Andrew J.; Timofeyev, Ilya

    2002-01-01

    The Galerkin truncated inviscid Burgers equation has recently been shown by the authors to be a simple model with many degrees of freedom, with many statistical properties similar to those occurring in dynamical systems relevant to the atmosphere. These properties include long time-correlated, large-scale modes of low frequency variability and short time-correlated “weather modes” at smaller scales. The correlation scaling in the model extends over several decades and may be explained by a simple theory. Here a thorough analysis of the nature of predictability in the idealized system is developed by using a theoretical framework developed by R.K. This analysis is based on a relative entropy functional that has been shown elsewhere by one of the authors to measure the utility of statistical predictions precisely. The analysis is facilitated by the fact that most relevant probability distributions are approximately Gaussian if the initial conditions are assumed to be so. Rather surprisingly this holds for both the equilibrium (climatological) and nonequilibrium (prediction) distributions. We find that in most cases the absolute difference in the first moments of these two distributions (the “signal” component) is the main determinant of predictive utility variations. Contrary to conventional belief in the ensemble prediction area, the dispersion of prediction ensembles is generally of secondary importance in accounting for variations in utility associated with different initial conditions. This conclusion has potentially important implications for practical weather prediction, where traditionally most attention has focused on dispersion and its variability. PMID:12429863

  10. A new adaptive estimation method of spacecraft thermal mathematical model with an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Akita, T.; Takaki, R.; Shima, E.

    2012-04-01

    An adaptive estimation method for a spacecraft thermal mathematical model is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state space equations of the thermal mathematical model are derived, where both the temperatures and the uncertain thermal characteristic parameters are treated as state variables. In the method, the thermal characteristic parameters are automatically estimated as outputs of the filtered state variables, whereas in the usual thermal model correlation they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
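A minimal sketch of the idea, assuming a hypothetical one-node thermal model dT/dt = -k (T - T_env) with the unknown parameter k appended to the state vector; none of the values below come from the paper:

```python
# Joint state/parameter estimation with an ensemble Kalman filter (EnKF)
# on a toy one-node thermal model. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
k_true, T_env, dt, obs_std = 0.3, 20.0, 0.1, 0.2

def step(T, k):
    """One explicit Euler step of dT/dt = -k (T - T_env)."""
    return T + dt * (-k * (T - T_env))

# Ensemble of augmented states [T, k]; k starts from a broad prior.
N = 200
ens = np.column_stack([rng.normal(50.0, 5.0, N), rng.normal(0.5, 0.2, N)])

T = 50.0
for _ in range(300):
    T = step(T, k_true)
    y = T + rng.normal(0.0, obs_std)            # noisy temperature reading
    ens[:, 0] = step(ens[:, 0], ens[:, 1])      # forecast each member
    # Analysis: Kalman gain from ensemble covariances, observing T only.
    cov = np.cov(ens.T)
    K = cov[:, 0] / (cov[0, 0] + obs_std**2)
    perturbed = y + rng.normal(0.0, obs_std, N)  # perturbed observations
    ens += np.outer(perturbed - ens[:, 0], K)

k_est = ens[:, 1].mean()
```

Because each member propagates with its own k, the ensemble develops a temperature–parameter cross-covariance, and the Kalman gain uses it to update k from temperature measurements alone.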

  11. Ab initio optimization principle for the ground states of translationally invariant strongly correlated quantum lattice models.

    PubMed

    Ran, Shi-Ju

    2016-05-01

    In this work, a simple and fundamental numeric scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in a tensor network (TN) form. (2) In the sense of TN, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in the opposite way, by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of different well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, density matrix embedding theory, etc., providing a unified perspective that was previously missing in this field. (4) AOP and TRD also offer new implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG that is based on the infinite projected entangled pair state. This paper focuses on one-dimensional quantum models to present AOP. The benchmark is given on a transverse Ising chain and the 2D classical Ising model, showing the remarkable efficiency and accuracy of AOP.

  12. Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models.

    PubMed

    Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro

    2017-05-01

    Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues concerning the use of simple heuristics and the underlying psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  13. On the simple random-walk models of ion-channel gate dynamics reflecting long-term memory.

    PubMed

    Wawrzkiewicz, Agata; Pawelek, Krzysztof; Borys, Przemyslaw; Dworakowska, Beata; Grzywna, Zbigniew J

    2012-06-01

    Several approaches to modelling ion-channel gating have been proposed. Although many models describe the dwell-time distributions correctly, they are incapable of predicting and explaining the long-term correlations between the lengths of adjacent openings and closings of a channel. In this paper we propose two simple random-walk models of the gating dynamics of voltage- and Ca(2+)-activated potassium channels which qualitatively reproduce the dwell-time distributions and describe the experimentally observed long-term memory quite well. Biological interpretation of both models is presented. In particular, the origin of the correlations is associated with fluctuations of channel mass density. The long-term memory effect, as measured by Hurst R/S analysis of experimental single-channel patch-clamp recordings, is close to the behaviour predicted by our models. The flexibility of the models enables their use as templates for other types of ion channel.
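The Hurst R/S statistic mentioned above can be sketched as follows; this is a generic textbook estimator with arbitrary window sizes, not the authors' exact procedure:

```python
# Rescaled-range (R/S) analysis: a generic estimator of the Hurst exponent.
# For an uncorrelated series H is near 0.5; persistent long-term memory
# pushes it above 0.5. Window sizes here are illustrative.
import numpy as np

def rs(series):
    """Rescaled range R/S of one window."""
    x = np.asarray(series, dtype=float)
    y = np.cumsum(x - x.mean())      # cumulative deviation profile
    return (y.max() - y.min()) / x.std()

def hurst(series, sizes=(16, 32, 64, 128, 256)):
    """Slope of log(R/S) vs log(window size), averaged over windows."""
    x = np.asarray(series, dtype=float)
    logs = []
    for n in sizes:
        windows = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        logs.append(np.log(np.mean([rs(w) for w in windows])))
    return np.polyfit(np.log(sizes), logs, 1)[0]

rng = np.random.default_rng(1)
h_iid = hurst(rng.normal(size=4096))   # memoryless baseline
```

Note that for short windows the R/S estimator is biased slightly above 0.5 even for white noise, which is why careful studies compare against a memoryless baseline rather than against 0.5 directly.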

  14. Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses

    NASA Astrophysics Data System (ADS)

    Rutkowska, Eva; Baker, Colin; Nahum, Alan

    2010-04-01

    A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on characteristics of the dose distributions and on organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model with dose distributions typical of serial organs than with those typical of parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical of serial and parallel organs, and it may help investigators interpret the results from clinical studies.
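The stem-cell kill rule referenced above (the LQ model) reduces to a one-liner; the alpha, beta and stem-cell-count values here are illustrative placeholders, not the paper's parameters:

```python
# Linear-quadratic (LQ) cell kill: per-cell survival after n fractions of
# dose d each is exp(-n (alpha d + beta d^2)). Parameter values are
# illustrative, not from the paper.
import math

def surviving_fraction(dose_per_fraction, n_fractions, alpha=0.3, beta=0.03):
    """LQ model: probability a single stem cell survives the whole course."""
    d = dose_per_fraction
    return math.exp(-n_fractions * (alpha * d + beta * d * d))

def fsu_death_probability(dose_per_fraction, n_fractions, n_stem=100):
    """An FSU is lost only when all of its stem cells are killed."""
    sf = surviving_fraction(dose_per_fraction, n_fractions)
    return (1.0 - sf) ** n_stem

sf = surviving_fraction(2.0, 30)          # a conventional 30 x 2 Gy course
p_dead = fsu_death_probability(2.0, 30)
```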

  15. Some properties of correlations of quantum lattice systems in thermal equilibrium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fröhlich, Jürg, E-mail: juerg@phys.ethz.ch; Ueltschi, Daniel, E-mail: daniel@ueltschi.org

    Simple proofs of uniqueness of the thermodynamic limit of KMS states and of the decay of equilibrium correlations are presented for a large class of quantum lattice systems at high temperatures. New quantum correlation inequalities for general Heisenberg models are described. Finally, a simplified derivation of a general result on power-law decay of correlations in 2D quantum lattice systems with continuous symmetries is given, extending results of McBryan and Spencer for the 2D classical XY model.

  16. Algebraic perturbation theory for dense liquids with discrete potentials

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2007-06-01

    A simple theory for the leading-order correction g1(r) to the structure of a hard-sphere liquid with discrete (e.g., square-well) potential perturbations is proposed. The theory makes use of a general approximation that effectively eliminates four-particle correlations from g1(r) with good accuracy at high densities. For the particular case of discrete perturbations, the remaining three-particle correlations can be modeled with a simple volume-exclusion argument, resulting in an algebraic and surprisingly accurate expression for g1(r) . The structure of a discrete “core-softened” model for liquids with anomalous thermodynamic properties is reproduced as an application.

  17. Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages.

    PubMed

    Choi, Youn-Kyung; Kim, Jinmi; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Kim, Yong-Il

    2016-01-01

    This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained estimates of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebral bodies from 102 Japanese patients (54 women and 48 men, 5-18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the largest R-squared indicated the fourth-cervical-vertebra volume as an independent variable, with a variance inflation factor less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application, as it enables a simple and quantitative analysis to evaluate skeletal maturation level.

  18. Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages

    PubMed Central

    Choi, Youn-Kyung; Kim, Jinmi; Maki, Koutaro; Ko, Ching-Chang

    2016-01-01

    This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained estimates of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebral bodies from 102 Japanese patients (54 women and 48 men, 5–18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the largest R-squared indicated the fourth-cervical-vertebra volume as an independent variable, with a variance inflation factor less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application, as it enables a simple and quantitative analysis to evaluate skeletal maturation level. PMID:27340668

  19. Statistical fluctuations in pedestrian evacuation times and the effect of social contagion

    NASA Astrophysics Data System (ADS)

    Nicolas, Alexandre; Bouzat, Sebastián; Kuperman, Marcelo N.

    2016-08-01

    Mathematical models of pedestrian evacuation and the associated simulation software have become essential tools for the assessment of the safety of public facilities and buildings. While a variety of models is now available, their calibration and test against empirical data are generally restricted to global averaged quantities; the statistics compiled from the time series of individual escapes ("microscopic" statistics) measured in recent experiments are thus overlooked. In the same spirit, much research has primarily focused on the average global evacuation time, whereas the whole distribution of evacuation times over some set of realizations should matter. In the present paper we propose and discuss the validity of a simple relation between this distribution and the microscopic statistics, which is theoretically valid in the absence of correlations. To this purpose, we develop a minimal cellular automaton, with features that afford a semiquantitative reproduction of the experimental microscopic statistics. We then introduce a process of social contagion of impatient behavior in the model and show that the simple relation under test may dramatically fail at high contagion strengths, the latter being responsible for the emergence of strong correlations in the system. We conclude with comments on the potential practical relevance for safety science of calculations based on microscopic statistics.
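The "no correlations" relation under test has a simple consequence that can be checked in a few lines (a hedged toy with i.i.d. exponential escape intervals, not the paper's cellular automaton): the variance of the total evacuation time is N times the single-gap variance, and contagion-induced correlations would inflate it.

```python
# Toy check: with i.i.d. escape intervals, the total evacuation time of N
# pedestrians is a sum of N independent gaps, so Var(T) = N * Var(gap).
# Gap distribution and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(6)
N, trials = 50, 4000
gaps = rng.exponential(1.0, size=(trials, N))   # i.i.d. escape intervals
total = gaps.sum(axis=1)                        # evacuation time per run
# Under independence: E[T] = N, Var(T) = N (unit-mean exponential gaps).
```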

  20. Revisiting Temporal Markov Chains for Continuum modeling of Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Delgoshaie, A. H.; Jenny, P.; Tchelepi, H.

    2017-12-01

    The transport of fluids in porous media is dominated by flow-field heterogeneity resulting from the underlying permeability field. Due to the high uncertainty in the permeability field, many realizations of the reference geological model are used to describe the statistics of the transport phenomena in a Monte Carlo (MC) framework. There has been strong interest in working with stochastic formulations of the transport that are different from the standard MC approach. Several stochastic models based on a velocity process for tracer particle trajectories have been proposed. Previous studies have shown that for high variances of the log-conductivity, the stochastic models need to account for correlations between consecutive velocity transitions to predict dispersion accurately. The correlated velocity models proposed in the literature can be divided into two general classes: temporal and spatial Markov models. Temporal Markov models have been applied successfully to tracer transport in both the longitudinal and transverse directions. These temporal models are Stochastic Differential Equations (SDEs) with very specific drift and diffusion terms tailored for a specific permeability correlation structure. The drift and diffusion functions devised for a certain setup would not necessarily be suitable for a different scenario (e.g., a different permeability correlation structure). The spatial Markov models are simple discrete Markov chains that do not require case-specific assumptions. However, transverse spreading of contaminant plumes has not been successfully modeled with the available correlated spatial models. Here, we propose a temporal discrete Markov chain to model both the longitudinal and transverse dispersion in a two-dimensional domain. We demonstrate that these temporal Markov models are valid for different correlation structures without modification. Similar to the temporal SDEs, the proposed model respects the limited asymptotic transverse spreading of the plume in two-dimensional problems.
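A minimal version of such a temporal discrete Markov chain can be sketched as follows; the velocity classes and persistence probability are invented for illustration, not the authors' calibration:

```python
# Toy temporal Markov chain over discretized velocity classes: a particle
# keeps its class with probability p_keep (correlation between consecutive
# transitions) or redraws it uniformly. All values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
v_classes = np.array([0.5, 1.0, 2.0])   # hypothetical velocity magnitudes
p_keep = 0.8

def trajectory(n_steps, dt=1.0):
    """Longitudinal position after n_steps of the velocity Markov chain."""
    x, state = 0.0, rng.integers(len(v_classes))
    for _ in range(n_steps):
        if rng.random() > p_keep:        # transition: redraw the class
            state = rng.integers(len(v_classes))
        x += v_classes[state] * dt
    return x

positions = np.array([trajectory(500) for _ in range(200)])
mean_x, spread = positions.mean(), positions.std()
```

The persistence parameter p_keep plays the role of the correlation between consecutive velocity transitions; p_keep = 0 recovers an uncorrelated velocity process, while larger values enhance dispersion.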

  1. Lateral sesamoid position in hallux valgus: correlation with the conventional radiological assessment.

    PubMed

    Agrawal, Yuvraj; Desai, Aravind; Mehta, Jaysheel

    2011-12-01

    We aimed to quantify the severity of hallux valgus based on the lateral sesamoid position and to establish a correlation of our simple assessment method with the conventional radiological assessments. We reviewed one hundred and twenty-two dorso-plantar weight-bearing radiographs of feet. The intermetatarsal and hallux valgus angles were measured by the conventional methods, and the position of the lateral sesamoid in relation to the first metatarsal neck was assessed by our new and simple method. Significant correlation was noted between the intermetatarsal angle and lateral sesamoid position (Rho 0.74, p < 0.0001), and between lateral sesamoid position and hallux valgus angle (Rho 0.56, p < 0.0001). Similar trends were noted in different grades of severity of hallux valgus in all three methods of assessment. Our method of assessing hallux valgus deformity based on the lateral sesamoid position is simple, less time-consuming, and has a statistically significant correlation with the established conventional radiological measurements. Copyright © 2011 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.

  2. Statistical validity of using ratio variables in human kinetics research.

    PubMed

    Liu, Yuanlong; Schutz, Robert W

    2003-09-01

    The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
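The reliability point in the last sentence is easy to demonstrate numerically; the means and error levels below are made up for the demonstration:

```python
# Toy demonstration: the ratio of two reliably measured quantities can be
# noisier than either component. For independent errors the ratio's CV is
# roughly sqrt(cv_num^2 + cv_den^2). Values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
num = 100.0 + rng.normal(0, 5, n)   # numerator, ~5% CV measurement error
den = 50.0 + rng.normal(0, 5, n)    # denominator, ~10% CV measurement error
ratio = num / den

def cv(x):
    """Coefficient of variation: std / mean."""
    return x.std() / x.mean()
```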

  3. Exciton transport in the PE545 complex: insight from atomistic QM/MM-based quantum master equations and elastic network models

    NASA Astrophysics Data System (ADS)

    Pouyandeh, Sima; Iubini, Stefano; Jurinovich, Sandro; Omar, Yasser; Mennucci, Benedetta; Piazza, Francesco

    2017-12-01

    In this paper, we work out a parameterization of environmental noise within the Haken-Strobl-Reinenker (HSR) model for the PE545 light-harvesting complex, based on atomic-level quantum mechanics/molecular mechanics (QM/MM) simulations. We use this approach to investigate the role of various auto- and cross-correlations in the HSR noise tensor, confirming that site-energy autocorrelation (pure dephasing) terms dominate the noise-induced exciton mobility enhancement, followed by site energy-coupling cross-correlations for specific triplets of pigments. Interestingly, several cross-correlations of the latter kind, together with coupling-coupling cross-correlations, display clear low-frequency signatures in their spectral densities in the 30-70 cm-1 region. These slow components lie at the limits of validity of the HSR approach, which requires that environmental fluctuations be faster than typical exciton transfer time scales. We show that a simple coarse-grained elastic-network-model (ENM) analysis of the PE545 protein naturally spotlights collective normal modes in this frequency range that represent specific concerted motions of the subnetwork of cysteines covalently linked to the pigments. This analysis strongly suggests that protein scaffolds in light-harvesting complexes are able to express specific collective, low-frequency normal modes providing a fold-rooted blueprint of exciton transport pathways. We speculate that ENM-based mixed quantum-classical methods, such as Ehrenfest dynamics, might be promising tools to disentangle the fundamental design principles of these dynamical processes in natural and artificial light-harvesting structures.

  4. The effect of clouds on the earth's radiation budget

    NASA Technical Reports Server (NTRS)

    Ziskin, Daniel; Strobel, Darrell F.

    1991-01-01

    The radiative fluxes from the Earth Radiation Budget Experiment (ERBE) and the cloud properties from the International Satellite Cloud Climatology Project (ISCCP) over Indonesia for the months of June and July of 1985 and 1986 were analyzed to determine the cloud sensitivity coefficients. The method involved a linear least squares regression between coincident flux and cloud coverage measurements. The calculated slope is identified as the cloud sensitivity. It was found that the correlations between the total cloud fraction and radiation parameters were modest. However, correlations between cloud fraction and IR flux were improved by separating clouds by height. Likewise, correlations between the visible flux and cloud fractions were improved by distinguishing clouds based on optical depth. Correlations between the net fluxes and cloud fractions segregated by either height or optical depth were somewhat improved. When clouds were classified in terms of their height and optical depth, correlations among all the radiation components were improved. Mean cloud sensitivities based on the regression of radiative fluxes against height- and optical-depth-separated cloud types are presented. Results are compared to a one-dimensional radiation model with a simple cloud parameterization scheme.

  5. Under-reported data analysis with INAR-hidden Markov chains.

    PubMed

    Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David

    2016-11-20

    In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
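The data-generating mechanism can be sketched as follows, with invented parameters: an INAR(1) count chain built via binomial thinning, observed through a hidden reporting state that thins counts with probability q:

```python
# Toy sketch of correlated, under-reported counts (parameters invented):
# INAR(1): X_t = alpha o X_{t-1} + Poisson(lam), where "o" is binomial
# thinning; observations are thinned with probability q in hidden
# under-reporting periods.
import numpy as np

rng = np.random.default_rng(4)
alpha, lam, q = 0.5, 3.0, 0.6
T = 2000

x = np.empty(T, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))        # start near stationarity
for t in range(1, T):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

under = rng.random(T) < 0.5                  # hidden reporting state
y = np.where(under, rng.binomial(x, q), x)   # observed, possibly thinned
# Stationary mean of x is lam / (1 - alpha); under-reporting lowers y's mean.
```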

  6. Spatial correlations in driven-dissipative photonic lattices

    NASA Astrophysics Data System (ADS)

    Biondi, Matteo; Lienhard, Saskia; Blatter, Gianni; Türeci, Hakan E.; Schmidt, Sebastian

    2017-12-01

    We study the nonequilibrium steady-state of interacting photons in cavity arrays as described by the driven-dissipative Bose–Hubbard and spin-1/2 XY model. For this purpose, we develop a self-consistent expansion in the inverse coordination number of the array (∼ 1/z) to solve the Lindblad master equation of these systems beyond the mean-field approximation. Our formalism is compared and benchmarked with exact numerical methods for small systems based on an exact diagonalization of the Liouvillian and a recently developed corner-space renormalization technique. We then apply this method to obtain insights beyond mean-field in two particular settings: (i) we show that the gas–liquid transition in the driven-dissipative Bose–Hubbard model is characterized by large density fluctuations and bunched photon statistics. (ii) We study the antibunching–bunching transition of the nearest-neighbor correlator in the driven-dissipative spin-1/2 XY model and provide a simple explanation of this phenomenon.

  7. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.

  8. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings, and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: the negative logarithm of the adaptive correlation integral and the power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. The negative logarithm of the adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
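The thresholding idea can be illustrated with a toy two-feature detector; the features and thresholds below are simplified stand-ins invented for the sketch, not the four features of the paper:

```python
# Toy multi-feature thresholding detector: flag a window as seizure-like
# when every simple feature exceeds its threshold. Features and thresholds
# are illustrative stand-ins, not the paper's.
import numpy as np

def features(window):
    """Two toy features: signal power and spectral concentration."""
    power = np.mean(window ** 2)
    spec = np.abs(np.fft.rfft(window)) ** 2
    concentration = spec.max() / spec.sum()   # rhythmic activity indicator
    return power, concentration

def is_seizure(window, power_thr=0.5, conc_thr=0.2):
    """Simple AND-thresholding over all features."""
    power, conc = features(window)
    return power > power_thr and conc > conc_thr

t = np.linspace(0, 1, 256, endpoint=False)
quiet = 0.1 * np.random.default_rng(5).normal(size=256)   # background EEG
spike_train = 2.0 * np.sin(2 * np.pi * 8 * t)             # strong 8 Hz rhythm
```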

  9. Interlocking Mechanism between Molecular Gears Attached to Surfaces.

    PubMed

    Zhao, Rundong; Zhao, Yan-Ling; Qi, Fei; Hermann, Klaus E; Zhang, Rui-Qin; Van Hove, Michel A

    2018-03-27

    While molecular machines play an increasingly significant role in nanoscience research and applications, there remains a shortage of investigations and understanding of the molecular gear (cogwheel), which is an indispensable and fundamental component to drive a larger correlated molecular machine system. Employing ab initio calculations, we investigate model systems consisting of molecules adsorbed on metal or graphene surfaces, ranging from very simple triple-arm gears such as PF3 and NH3 to larger multiarm gears based on carbon rings. We explore in detail the transmission of slow rotational motion from one gear to the next by these relatively simple molecules, so as to isolate and reveal the mechanisms of the relevant intermolecular interactions. Several characteristics of molecular gears are discussed, in particular the flexibility of the arms and the slipping and skipping between interlocking arms of adjacent gears, which differ from familiar macroscopic rigid gears. The underlying theoretical concepts suggest strongly that other analogous structures may also exhibit similar behavior, which may inspire future exploration in designing large correlated molecular machines.

  10. Redundant correlation effect on personalized recommendation

    NASA Astrophysics Data System (ADS)

    Qiu, Tian; Han, Teng-Yue; Zhong, Li-Xin; Zhang, Zi-Ke; Chen, Guang

    2014-02-01

    The high-order redundant correlation effect is investigated for a hybrid algorithm of heat conduction and mass diffusion (HHM), through both heat conduction biased (HCB) and mass diffusion biased (MDB) correlation redundancy elimination processes. The HCB and MDB algorithms do not introduce any additional tunable parameters, but keep the simple character of the original HHM. Based on two empirical datasets, the Netflix and MovieLens, the HCB and MDB are found to show better recommendation accuracy for both the overall objects and the cold objects than the HHM algorithm. Our work suggests that properly eliminating the high-order redundant correlations can provide a simple and effective approach to accurate recommendation.

  11. Microstructure representations for sound absorbing fibrous media: 3D and 2D multiscale modelling and experiments

    NASA Astrophysics Data System (ADS)

    Zieliński, Tomasz G.

    2017-11-01

    The paper proposes and investigates computationally-efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined, as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide predictions similar to those of a volume element, which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling allowed us to determine the effective speeds and damping of acoustic waves propagating in such media, which brings up a discussion on the correlation between the speed, penetration range and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. In fact, the comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.

  12. Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.

    PubMed

    Singh, Anurag; Dandapat, Samarendra

    2017-04-01

    In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet-based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploitation of both types of correlations is very important in CS-based ECG telemonitoring systems for better performance. However, most of the existing CS-based works exploit either of the correlations, which results in a suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of the existing CS-based WBAN systems.
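    As a generic illustration of CS recovery (not the authors' spatiotemporal sparse Bayesian learning solver), a minimal iterative soft-thresholding (ISTA) reconstruction of a sparse signal from random linear measurements might look like:

```python
import numpy as np

def ista(Phi, y, lam=0.05, n_iter=500):
    """Recover a sparse x from y = Phi @ x by iterative soft-thresholding
    (a basic lasso solver). Generic CS sketch, not the paper's method."""
    L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = x + Phi.T @ (y - Phi @ x) / L     # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x_true                               # compressed measurements
x_hat = ista(Phi, y)                           # approximate sparse recovery
```

Block-sparse and Bayesian solvers replace the element-wise threshold with structured or learned priors, but the measurement/reconstruction pipeline has this shape.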

  13. Hidden Order and Symmetry Protected Topological States in Quantum Link Ladders

    NASA Astrophysics Data System (ADS)

    Cardarelli, L.; Greschner, S.; Santos, L.

    2017-11-01

    We show that, whereas spin-1/2 one-dimensional U(1) quantum-link models (QLMs) are topologically trivial, when implemented in ladderlike lattices these models may present an intriguing ground-state phase diagram, which includes a symmetry protected topological (SPT) phase that may be readily revealed by analyzing long-range string spin correlations along the ladder legs. We propose a simple scheme for the realization of spin-1/2 U(1) QLMs based on single-component fermions loaded in an optical lattice with s and p bands, showing that the SPT phase may be experimentally realized by adiabatic preparation.

  14. Evaluation of Phytoavailability of Heavy Metals to Chinese Cabbage (Brassica chinensis L.) in Rural Soils

    PubMed Central

    Hseu, Zeng-Yei; Zehetner, Franz

    2014-01-01

    This study compared the extractability of Cd, Cu, Ni, Pb, and Zn by 8 extraction protocols for 22 representative rural soils in Taiwan and correlated the extractable amounts of the metals with their uptake by Chinese cabbage for developing an empirical model to predict metal phytoavailability based on soil properties. Chemical agents in these protocols included dilute acids, neutral salts, and chelating agents, in addition to water and the Rhizon soil solution sampler. The highest concentrations of extractable metals were observed in the HCl extraction and the lowest in the Rhizon sampling method. The linear correlation coefficients between extractable metals in soil pools and metals in shoots were higher than those in roots. Correlations between extractable metal concentrations and soil properties were variable; soil pH, clay content, total metal content, and extractable metal concentration were considered together to simulate their combined effects on crop uptake by an empirical model. This combination improved the correlations to different extents for different extraction methods, particularly for Pb, for which the extractable amounts with any extraction protocol did not correlate with crop uptake by simple correlation analysis. PMID:25295297
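    A minimal sketch of the combined empirical model described above, regressing crop uptake on soil pH, clay content, total metal and extractable metal by least squares. All data and coefficients below are synthetic placeholders, not the study's values:

```python
import numpy as np

# hypothetical soil data: pH, clay (%), total metal, extractable metal
rng = np.random.default_rng(1)
n = 22                                         # number of soils, as in the study
X = np.column_stack([rng.uniform(4.5, 8, n),   # soil pH
                     rng.uniform(5, 50, n),    # clay content
                     rng.uniform(1, 100, n),   # total metal content
                     rng.uniform(0.1, 20, n)]) # extractable metal concentration
beta_true = np.array([-0.5, -0.02, 0.01, 0.3]) # placeholder coefficients
uptake = X @ beta_true + 2.0 + 0.1 * rng.standard_normal(n)

# least-squares fit of the combined empirical model (with intercept)
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, uptake, rcond=None)
pred = A @ coef
r = np.corrcoef(pred, uptake)[0, 1]            # multiple correlation
```

Combining the four predictors in one model is what improved the correlations for metals such as Pb, for which no single extractable pool correlated with uptake.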

  15. Covariate-adjusted Spearman's rank correlation with probability-scale residuals.

    PubMed

    Liu, Qi; Li, Chun; Wanga, Valentine; Shepherd, Bryan E

    2018-06-01

    It is desirable to adjust Spearman's rank correlation for covariates, yet existing approaches have limitations. For example, the traditionally defined partial Spearman's correlation does not have a sensible population parameter, and the conditional Spearman's correlation defined with copulas cannot be easily generalized to discrete variables. We define population parameters for both partial and conditional Spearman's correlation through concordance-discordance probabilities. The definitions are natural extensions of Spearman's rank correlation in the presence of covariates and are general for any orderable random variables. We show that they can be neatly expressed using probability-scale residuals (PSRs). This connection allows us to derive simple estimators. Our partial estimator for Spearman's correlation between X and Y adjusted for Z is the correlation of PSRs from models of X on Z and of Y on Z, which is analogous to the partial Pearson's correlation derived as the correlation of observed-minus-expected residuals. Our conditional estimator is the conditional correlation of PSRs. We describe estimation and inference, and highlight the use of semiparametric cumulative probability models, which allow preservation of the rank-based nature of Spearman's correlation. We conduct simulations to evaluate the performance of our estimators and compare them with other popular measures of association, demonstrating their robustness and efficiency. We illustrate our method in two applications, a biomarker study and a large survey. © 2017, The International Biometric Society.
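    The analogy the authors draw can be illustrated with the classical partial Pearson's correlation, computed as the correlation of observed-minus-expected residuals from models of X on Z and of Y on Z; in the paper these residuals are replaced by probability-scale residuals from cumulative probability models. The sketch below shows only the classical analogue:

```python
import numpy as np

def partial_corr_resid(x, y, z):
    """Partial Pearson correlation of x and y adjusting for z: the
    correlation of observed-minus-expected residuals from linear models
    of x on z and of y on z (the analogue the paper generalizes to
    probability-scale residuals)."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
z = rng.standard_normal(1000)
x = 2 * z + rng.standard_normal(1000)          # x depends on z
y = -z + rng.standard_normal(1000)             # y depends on z only
r_partial = partial_corr_resid(x, y, z)        # near 0: x, y related only via z
r_marginal = np.corrcoef(x, y)[0, 1]           # strongly negative without adjustment
```

Replacing the linear-model residuals with probability-scale residuals preserves the rank-based character of Spearman's correlation and extends the idea to discrete and ordinal variables.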

  16. Modelling nematode movement using time-fractional dynamics.

    PubMed

    Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M

    2007-09-07

    We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
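    A minimal 2D correlated random walk in the spirit described above can be simulated as follows. The Gaussian turning-angle and lognormal speed distributions here are illustrative assumptions, not the observed nematode distributions, and this sketch does not include the fractional sub-diffusion model:

```python
import numpy as np

def crw(n_steps, kappa=0.9, mean_speed=1.0, rng=None):
    """Correlated random walk in 2D: each heading is the previous heading
    plus a random turning angle, so successive steps are not independent;
    kappa controls directional persistence. Turning-angle and speed
    distributions are illustrative stand-ins."""
    rng = rng or np.random.default_rng(0)
    turn = rng.normal(0.0, (1 - kappa) * np.pi, n_steps)   # turning angles
    heading = np.cumsum(turn)                              # correlated headings
    speed = rng.lognormal(np.log(mean_speed), 0.3, n_steps)
    xy = np.cumsum(np.column_stack([speed * np.cos(heading),
                                    speed * np.sin(heading)]), axis=0)
    return xy

path = crw(1000)
net_disp = np.linalg.norm(path[-1] - path[0])   # net displacement of the trail
```

In the paper, additional temporal correlations between turning angle and speed rule out this simple independent-increment picture and motivate the anomalous-diffusion description.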

  17. Comparison of 3D Joint Angles Measured With the Kinect 2.0 Skeletal Tracker Versus a Marker-Based Motion Capture System.

    PubMed

    Guess, Trent M; Razu, Swithin; Jahandar, Amirhossein; Skubic, Marjorie; Huo, Zhiyu

    2017-04-01

    The Microsoft Kinect is becoming a widely used tool for inexpensive, portable measurement of human motion, with the potential to support clinical assessments of performance and function. In this study, the relative osteokinematic Cardan joint angles of the hip and knee were calculated using the Kinect 2.0 skeletal tracker. The pelvis segments of the default skeletal model were reoriented and 3-dimensional joint angles were compared with a marker-based system during a drop vertical jump and a hip abduction motion. Good agreement between the Kinect and marker-based system was found for knee (correlation coefficient = 0.96, cycle RMS error = 11°, peak flexion difference = 3°) and hip (correlation coefficient = 0.97, cycle RMS = 12°, peak flexion difference = 12°) flexion during the landing phase of the drop vertical jump and for hip abduction/adduction (correlation coefficient = 0.99, cycle RMS error = 7°, peak flexion difference = 8°) during isolated hip motion. Nonsagittal hip and knee angles did not correlate well for the drop vertical jump. When limited to activities in the optimal capture volume and with simple modifications to the skeletal model, the Kinect 2.0 skeletal tracker can provide limited 3-dimensional kinematic information of the lower limbs that may be useful for functional movement assessment.
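    The reported agreement measures (correlation coefficient, cycle RMS error, peak-flexion difference) between two joint-angle traces can be computed as in this sketch; the toy traces below are synthetic, not Kinect data:

```python
import numpy as np

def agreement(kinect_angle, marker_angle):
    """Agreement metrics between two joint-angle time series sampled over
    the same movement cycle: Pearson correlation, RMS error over the
    cycle, and difference of peak flexion."""
    r = np.corrcoef(kinect_angle, marker_angle)[0, 1]
    rms = np.sqrt(np.mean((kinect_angle - marker_angle) ** 2))
    peak_diff = np.max(kinect_angle) - np.max(marker_angle)
    return r, rms, peak_diff

# toy traces: a marker-based flexion curve and a noisy, offset copy
t = np.linspace(0, 1, 100)
marker = 60 * np.sin(np.pi * t)                 # flexion angle in degrees
kinect = marker + 5 + np.random.default_rng(3).normal(0, 2, 100)
r, rms, peak_diff = agreement(kinect, marker)
```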

  18. Analysis of changes in tornadogenesis conditions over Northern Eurasia based on a simple index of atmospheric convective instability

    NASA Astrophysics Data System (ADS)

    Chernokulsky, A. V.; Kurgansky, M. V.; Mokhov, I. I.

    2017-12-01

    A simple index of convective instability (3D-index) is used for the analysis of weather and climate processes that favor the occurrence of severe convective events, including tornadoes. The index is based on information on the surface air temperature and humidity. The prognostic ability of the index to reproduce severe convective events (thunderstorms, showers, tornadoes) is analyzed. It is shown that most tornadoes in Northern Eurasia are characterized by high values of the 3D-index; furthermore, the 3D-index is significantly correlated with the convective available potential energy. Reanalysis data (for recent decades) and global climate model simulations (for the 21st century) show an increase in the frequency of meteorological conditions favorable for tornado formation in the regions of Northern Eurasia. The most significant increase is found on the Black Sea coast and in the south of the Far East.

  19. The pressure recovery ratio: The invasive index of LV relaxation during filling. Model-based prediction with in-vivo validation.

    PubMed

    Zhang, Wei; Shmuylovich, Leonid; Kovacs, Sandor J

    2009-01-01

    Using a simple harmonic oscillator model (PDF formalism), every early filling E-wave can be uniquely described by a set of parameters, (x(0), c, and k). Parameter c in the PDF formalism is a damping or relaxation parameter that measures the energy loss during the filling process. Based on Bernoulli's equation and kinematic modeling, we derived a causal correlation between the relaxation parameter c in the PDF formalism and a feature of the pressure contour during filling - the pressure recovery ratio defined by the left ventricular pressure difference between diastasis and minimum pressure, normalized to the pressure difference between a fiducial pressure and minimum pressure [PRR = (P(Diastasis)-P(Min))/(P(Fiducial)-P(Min))]. We analyzed multiple heart beats from one human subject to validate the correlation. Further validation among more patients is warranted. PRR is the invasive causal analogue of the noninvasive E-wave relaxation parameter c. PRR has the potential to be calculated using automated methodology in the catheterization lab in real time.
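    The PRR definition quoted above is directly computable from three pressure values; the numbers in this sketch are hypothetical, not patient data:

```python
def pressure_recovery_ratio(p_diastasis, p_min, p_fiducial):
    """PRR = (P_diastasis - P_min) / (P_fiducial - P_min): the LV pressure
    recovery from minimum pressure to diastasis, normalized to the
    difference between a fiducial pressure and the minimum pressure."""
    return (p_diastasis - p_min) / (p_fiducial - p_min)

# illustrative pressures in mmHg (hypothetical values)
prr = pressure_recovery_ratio(p_diastasis=8.0, p_min=2.0, p_fiducial=14.0)  # 0.5
```

Because all three quantities are features of the routinely measured LV pressure contour, the ratio lends itself to automated beat-by-beat computation in the catheterization lab.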

  20. Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos

    NASA Astrophysics Data System (ADS)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in X-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image, where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.

  1. Trunk-acceleration based assessment of gait parameters in older persons: a comparison of reliability and validity of four inverted pendulum based estimations.

    PubMed

    Zijlstra, Agnes; Zijlstra, Wiebren

    2013-09-01

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, the ease of a generic adjustment can be favored when merely evaluating intra-individual differences. Further studies should determine the validity of these IP based estimations for assessing gait in daily life. Copyright © 2013 Elsevier B.V. All rights reserved.
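    A sketch of an IP-based step-length estimate using the generic correction factor of 1.25 mentioned above. The chord formula s = 2·sqrt(2·l·h − h²) is the standard inverted-pendulum geometry and is an assumption here, since the paper compares four estimator variants rather than prescribing one:

```python
import math

def step_length_ip(leg_length, com_height_change, correction=1.25):
    """Inverted-pendulum step-length estimate: the chord of a circular arc
    of radius leg_length (m) traversed with vertical center-of-mass
    excursion h (m), multiplied by an empirical correction factor (the
    generic value 1.25 from the abstract). Geometry: s = 2*sqrt(2*l*h - h**2)."""
    l, h = leg_length, com_height_change
    return correction * 2.0 * math.sqrt(2.0 * l * h - h * h)

# e.g. leg length 0.9 m and a 2.5 cm CoM excursion per step
s = step_length_ip(0.9, 0.025)   # mean step length estimate in metres
```

The individually adjusted variants in the study replace the generic factor 1.25 (or the generic 75%-of-foot-length constant) with per-subject calibration.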

  2. Transverse momentum correlations of quarks in recursive jet models

    NASA Astrophysics Data System (ADS)

    Artru, X.; Belghobsi, Z.; Redouane-Salah, E.

    2016-08-01

    In the symmetric string fragmentation recipe adopted by PYTHIA for jet simulations, the transverse momenta of successive quarks are uncorrelated. This is a simplification that has no theoretical basis. Transverse momentum correlations are naturally expected, for instance, in a covariant multiperipheral model of quark hadronization. We propose a simple recipe of string fragmentation which leads to such correlations. The definition of the jet axis and its relation to the primordial transverse momentum of the quark is also discussed.

  3. A Simple and Robust Statistical Test for Detecting the Presence of Recombination

    PubMed Central

    Bruen, Trevor C.; Philippe, Hervé; Bryant, David

    2006-01-01

    Recombination is a powerful evolutionary force that merges historically distinct genotypes. But the extent of recombination within many organisms is unknown, and even determining its presence within a set of homologous sequences is a difficult question. Here we develop a new statistic, Φw, that can be used to test for recombination. We show through simulation that our test can discriminate effectively between the presence and absence of recombination, even in diverse situations such as exponential growth (star-like topologies) and patterns of substitution rate correlation. A number of other tests, Max χ2, NSS, a coalescent-based likelihood permutation test (from LDHat), and correlation of linkage disequilibrium (both r2 and |D′|) with distance, all tend to underestimate the presence of recombination under strong population growth. Moreover, both Max χ2 and NSS falsely infer the presence of recombination under a simple model of mutation rate correlation. Results on empirical data show that our test can be used to detect recombination between closely as well as distantly related samples, regardless of the suspected rate of recombination. The results suggest that Φw is one of the best approaches to distinguish recurrent mutation from recombination in a wide variety of circumstances. PMID:16489234
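    Φw is built on pairwise site compatibility. Its elementary building block for binary sites, the classical four-gamete check, can be sketched as follows; this illustrates the underlying idea only and is not the Φw statistic itself:

```python
def four_gamete_incompatible(site_a, site_b):
    """Return True if two binary sites exhibit all four gametes
    (00, 01, 10, 11), i.e. the pair cannot be explained by a single
    mutation at each site on one tree without invoking recombination
    or recurrent mutation. This is the building block of
    pairwise-compatibility statistics such as Phi_w."""
    gametes = set(zip(site_a, site_b))
    return len(gametes & {(0, 0), (0, 1), (1, 0), (1, 1)}) == 4

# four sequences, two polymorphic sites
site1 = [0, 0, 1, 1]
site2 = [0, 1, 0, 1]
incompat = four_gamete_incompatible(site1, site2)   # True: all four gametes occur
```

Φw refines this idea by weighting incompatibilities of nearby site pairs, which is what lets it separate recombination from recurrent mutation.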

  4. Development of feedforward receptive field structure of a simple cell and its contribution to the orientation selectivity: a modeling study.

    PubMed

    Garg, Akhil R; Obermayer, Klaus; Bhaumik, Basabi

    2005-01-01

    Recent experimental studies of hetero-synaptic interactions in various systems have shown the role of signaling in plasticity, challenging the conventional understanding of Hebb's rule. It has also been found that activity plays a major role in plasticity, with neurotrophins acting as molecular signals translating activity into structural changes. Furthermore, the role of synaptic efficacy in biasing the outcome of competition has also been revealed recently. Motivated by these experimental findings, we present a model for the development of simple cell receptive field structure based on competitive hetero-synaptic interactions for neurotrophins combined with cooperative hetero-synaptic interactions in the spatial domain. We find that with a proper balance of competition and cooperation, the inputs from the two populations (ON/OFF) of LGN cells segregate starting from the homogeneous state. We obtain segregated ON and OFF regions in the simple cell receptive field. Our modeling study supports the experimental findings, suggesting the role of synaptic efficacy and of spatial signaling. We find that using this model we obtain simple cell RFs even for positively correlated activity of ON/OFF cells. We also compare different mechanisms for computing the response of a cortical cell and study their possible role in the sharpening of orientation selectivity. We find that the degree of selectivity improvement in individual cells varies from case to case, depending upon the structure of the RF and the type of sharpening mechanism.

  5. Correlation of spacecraft thermal mathematical models to reference data

    NASA Astrophysics Data System (ADS)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

    Model-to-test correlation is a frequent problem in spacecraft-thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit between the TMM results and test data to be reached, in order to reduce the uncertainty of the mathematical model. Quite often, this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method is presented suitable to solve the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. Moreover, in simple cases this method also allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered: one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
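    The core update of such a correlation step, a least-squares parameter change obtained from the Moore-Penrose pseudo-inverse of the Jacobian of temperatures with respect to the model parameters, can be sketched on a toy linear model; dimensions and names below are illustrative:

```python
import numpy as np

def correlate_step(jacobian, t_model, t_test):
    """One parameter-update step for model-to-test correlation: the
    least-squares change of the TMM parameters is the Moore-Penrose
    pseudo-inverse of the temperature/parameter Jacobian applied to the
    test-minus-model residual. Several load cases are handled by
    stacking their Jacobians and residuals."""
    residual = t_test - t_model
    return np.linalg.pinv(jacobian) @ residual   # parameter increment

# toy linear model T = J @ p: 8 stacked node temperatures, 3 parameters
rng = np.random.default_rng(4)
J = rng.standard_normal((8, 3))
p_true = np.array([1.0, -2.0, 0.5])
p_guess = np.zeros(3)
dp = correlate_step(J, J @ p_guess, J @ p_true)
p_new = p_guess + dp                  # exact for a linear, full-rank model
```

For a nonlinear TMM the same step is iterated (a Gauss-Newton scheme), and a rank-deficient Jacobian, analyzed in the paper via its singular value decomposition, leaves some parameter combinations undetermined.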

  6. Imaging, microscopic analysis, and modeling of a CdTe module degraded by heat and light

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Steve; Albin, David; Hacke, Peter

    Photoluminescence (PL), electroluminescence (EL), and dark lock-in thermography are collected during stressing of a CdTe module under one-Sun light at an elevated temperature of 100 degrees C. The PL imaging system is simple and economical. The PL images show differing degrees of degradation across the module and are less sensitive to effects of shunting and resistance that appear on the EL images. Regions of varying degradation are chosen based on avoiding pre-existing shunt defects. These regions are evaluated using time-of-flight secondary ion-mass spectrometry and Kelvin probe force microscopy. Reduced PL intensity correlates to increased Cu concentration at the front interface. Numerical modeling and measurements agree that the increased Cu concentration at the junction also correlates to a reduced space charge region.

  7. Imaging, microscopic analysis, and modeling of a CdTe module degraded by heat and light

    DOE PAGES

    Johnston, Steve; Albin, David; Hacke, Peter; ...

    2018-01-12

    Photoluminescence (PL), electroluminescence (EL), and dark lock-in thermography are collected during stressing of a CdTe module under one-Sun light at an elevated temperature of 100 degrees C. The PL imaging system is simple and economical. The PL images show differing degrees of degradation across the module and are less sensitive to effects of shunting and resistance that appear on the EL images. Regions of varying degradation are chosen based on avoiding pre-existing shunt defects. These regions are evaluated using time-of-flight secondary ion-mass spectrometry and Kelvin probe force microscopy. Reduced PL intensity correlates to increased Cu concentration at the front interface. Numerical modeling and measurements agree that the increased Cu concentration at the junction also correlates to a reduced space charge region.

  8. Application of Artificial Boundary Conditions in Sensitivity-Based Updating of Finite Element Models

    DTIC Science & Technology

    2007-06-01

    is known as the impedance matrix [Z(Ω)]. [Z(Ω)] = [H(Ω)]^(-1) (12), where [Z(Ω)] = [K - Ω²M + jΩC] (13). A. REDUCED ORDER... D.L. A correlation coefficient for modal vector analysis. Proceedings of 1st International Modal Analysis Conference, 1982, 110-116. Anton, H., Rorres, C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January) Experimental Modal Analysis, A Simple

  9. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple objective decision criteria to find the best sampling strategies. The approach is not limited by geometric nature of the sampling, covers a wide range in spatial continuity and leads to a step-by-step procedure. ?? 1988.

  10. Azimuthal anisotropy and correlations at large transverse momenta in p + p and Au + Au collisions at √sNN = 200 GeV.

    PubMed

    Adams, J; Aggarwal, M M; Ahammed, Z; Amonett, J; Anderson, B D; Arkhipkin, D; Averichev, G S; Badyal, S K; Bai, Y; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellwied, R; Berger, J; Bezverkhny, B I; Bharadwaj, S; Bhasin, A; Bhati, A K; Bhatia, V S; Bichsel, H; Billmeier, A; Bland, L C; Blyth, C O; Bonner, B E; Botje, M; Boucham, A; Brandin, A V; Bravar, A; Bystersky, M; Cadman, R V; Cai, X Z; Caines, H; Calderón de la Barca Sánchez, M; Carroll, J; Castillo, J; Cebra, D; Chajecki, Z; Chaloupka, P; Chattopdhyay, S; Chen, H F; Chen, Y; Cheng, J; Cherney, M; Chikanian, A; Christie, W; Coffin, J P; Cormier, T M; Cramer, J G; Crawford, H J; Das, D; Das, S; de Moura, M M; Derevschikov, A A; Didenko, L; Dietel, T; Dogra, S M; Dong, W J; Dong, X; Draper, J E; Du, F; Dubey, A K; Dunin, V B; Dunlop, J C; Dutta Mazumdar, M R; Eckardt, V; Edwards, W R; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Estienne, M; Fachini, P; Faivre, J; Fatemi, R; Fedorisin, J; Filimonov, K; Filip, P; Finch, E; Fine, V; Fisyak, Y; Foley, K J; Fomenko, K; Fu, J; Gagliardi, C A; Gans, J; Ganti, M S; Gaudichet, L; Geurts, F; Ghazikhanian, V; Ghosh, P; Gonzalez, J E; Grachov, O; Grebenyuk, O; Grosnick, D; Guertin, S M; Guo, Y; Gupta, A; Gutierrez, T D; Hallman, T J; Hamed, A; Hardtke, D; Harris, J W; Heinz, M; Henry, T W; Hepplemann, S; Hippolyte, B; Hirsch, A; Hjort, E; Hoffmann, G W; Huang, H Z; Huang, S L; Hughes, E W; Humanic, T J; Igo, G; Ishihara, A; Jacobs, P; Jacobs, W W; Janik, M; Jiang, H; Jones, P G; Judd, E G; Kabana, S; Kang, K; Kaplan, M; Keane, D; Khodyrev, V Yu; Kiryluk, J; Kisiel, A; Kislov, E M; Klay, J; Klein, S R; Klyachko, A; Koetke, D D; Kollegger, T; Kopytine, M; Kotchenda, L; Kramer, M; Kravtsov, P; Kravtsov, V I; Krueger, K; Kuhn, C; Kulikov, A I; Kumar, A; Kunz, C L; Kutuev, R Kh; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; Laue, F; Lauret, J; Lebedev, A; Lednicky, R; Lehocka, S; LeVine, M J; Li, C; Li, Q; Li, 
Y; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, L; Liu, Q J; Liu, Z; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Lopez-Noriega, M; Love, W A; Lu, Y; Ludlam, T; Lynn, D; Ma, G L; Ma, J G; Ma, Y G; Magestro, D; Mahajan, S; Mahapatra, D P; Majka, R; Mangotra, L K; Manweiler, R; Margetis, S; Markert, C; Martin, L; Marx, J N; Matis, H S; Matulenko, Yu A; McClain, C J; McShane, T S; Meissner, F; Melnick, Yu; Meschanin, A; Miller, M L; Milosevich, Z; Minaev, N G; Mironov, C; Mischke, A; Mishra, D K; Mitchell, J; Mohanty, B; Molnar, L; Moore, C F; Morozov, D A; Munhoz, M G; Nandi, B K; Nayak, S K; Nayak, T K; Nelson, J M; Netrakanti, P K; Nikitin, V A; Nogach, L V; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Pal, S K; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Peitzmann, T; Perevoztchikov, V; Perkins, C; Peryt, W; Petrov, V A; Phatak, S C; Picha, R; Planinic, M; Pluta, J; Porile, N; Porter, J; Poskanzer, A M; Potekhin, M; Potrebenikova, E; Potukuchi, B V K S; Prindle, D; Pruneau, C; Putschke, J; Rai, G; Rakness, G; Raniwala, R; Raniwala, S; Ravel, O; Ray, R L; Razin, S V; Reichhold, D; Reid, J G; Renault, G; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevskiy, O V; Romero, J L; Rose, A; Roy, C; Ruan, L; Sahoo, R; Sakrejda, I; Salur, S; Sandweiss, J; Savin, I; Sazhin, P S; Schambach, J; Scharenberg, R P; Schmitz, N; Schroeder, L S; Schweda, K; Seger, J; Seyboth, P; Shahaliev, E; Shao, M; Shao, W; Sharma, M; Shen, W Q; Shestermanov, K E; Shimanskiy, S S; Sichtermann, E; Simon, F; Singaraju, R N; Skoro, G; Smirnov, N; Snellings, R; Sood, G; Sorensen, P; Sowinski, J; Speltz, J; Spinka, H M; Srivastava, B; Stadnik, A; Stanislaus, T D S; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Suaide, A A P; Sugarbaker, E; Suire, C; Sumbera, M; Surrow, B; Symons, T J M; Szanto de Toledo, A; Szarwas, P; Tai, A; Takahashi, J; Tang, A H; Tarnowsky, T; Thein, D; Thomas, J H; Timoshenko, S; Tokarev, M; Trentalange, S; Tribble, 
R E; Tsai, O D; Ulery, J; Ullrich, T; Underwood, D G; Urkinbaev, A; Van Buren, G; van Leeuwen, M; Vander Molen, A M; Varma, R; Vasilevski, I M; Vasiliev, A N; Vernet, R; Vigdor, S E; Viyogi, Y P; Vokal, S; Voloshin, S A; Vznuzdaev, M; Waggoner, W T; Wang, F; Wang, G; Wang, G; Wang, X L; Wang, Y; Wang, Y; Wang, Z M; Ward, H; Watson, J W; Webb, J C; Wells, R; Westfall, G D; Wetzler, A; Whitten, C; Wieman, H; Wissink, S W; Witt, R; Wood, J; Wu, J; Xu, N; Xu, Z; Xu, Z Z; Yamamoto, E; Yepes, P; Yurevich, V I; Zanevsky, Y V; Zhang, H; Zhang, W M; Zhang, Z P; Zolnierczuk, P A; Zoulkarneev, R; Zoulkarneeva, Y; Zubarev, A N

    2004-12-17

    Results on high transverse momentum charged particle emission with respect to the reaction plane are presented for Au + Au collisions at √sNN = 200 GeV. Two- and four-particle correlation results are presented, as well as a comparison of azimuthal correlations in Au + Au collisions to those in p + p at the same energy. The elliptic anisotropy v2 is found to reach its maximum at pt ≈ 3 GeV/c, then decrease slowly and remain significant up to pt ≈ 7-10 GeV/c. Stronger suppression is found in the back-to-back high-pt particle correlations for particles emitted out of plane compared to those emitted in plane. The centrality dependence of v2 at intermediate pt is compared to simple models based on jet quenching.
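    The two-particle estimate of the elliptic anisotropy can be sketched as follows; the event sample below is a toy acceptance-rejection simulation with a true v2 of 0.1, not STAR data:

```python
import numpy as np

def v2_two_particle(phis):
    """Estimate the elliptic anisotropy from azimuthal angles via
    two-particle correlations: v2{2}^2 = <cos 2(phi_i - phi_j)> over
    distinct pairs, computed with the flow vector Q2 = sum exp(i*2*phi)."""
    c = np.cos(2 * phis)
    s = np.sin(2 * phis)
    n = len(phis)
    q2_sq = c.sum() ** 2 + s.sum() ** 2      # |Q2|^2
    # sum over distinct pairs of cos 2(phi_i - phi_j) is |Q2|^2 - n
    return np.sqrt((q2_sq - n) / (n * (n - 1)))

# toy sample: azimuthal distribution ~ 1 + 2*v2*cos(2*phi) with v2 = 0.1
rng = np.random.default_rng(5)
phis = rng.uniform(-np.pi, np.pi, 200000)
keep = rng.uniform(0, 1.3, 200000) < 1 + 0.2 * np.cos(2 * phis)
v2 = v2_two_particle(phis[keep])             # close to the input 0.1
```

Four-particle cumulants suppress the non-flow (e.g. jet) correlations that bias this two-particle estimate, which is why the paper reports both.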

  11. Decomposing decision components in the Stop-signal task: A model-based approach to individual differences in inhibitory control

    PubMed Central

    White, Corey N.; Congdon, Eliza; Mumford, Jeanette A.; Karlsgodt, Katherine H.; Sabb, Fred W.; Freimer, Nelson B.; London, Edythe D.; Cannon, Tyrone D.; Bilder, Robert M.; Poldrack, Russell A.

    2014-01-01

    The Stop-signal task (SST), in which participants must inhibit prepotent responses, has been used to identify neural systems that vary with individual differences in inhibitory control. To explore how these differences relate to other aspects of decision-making, a drift diffusion model of simple decisions was fitted to SST data from Go trials to extract measures of caution, motor execution time, and stimulus processing speed for each of 123 participants. These values were used to probe fMRI data to explore individual differences in neural activation. Faster processing of the Go stimulus correlated with greater activation in the right frontal pole for both Go and Stop trials. On Stop trials stimulus processing speed also correlated with regions implicated in inhibitory control, including the right inferior frontal gyrus, medial frontal gyrus, and basal ganglia. Individual differences in motor execution time correlated with activation of the right parietal cortex. These findings suggest a robust relationship between the speed of stimulus processing and inhibitory processing at the neural level. This model-based approach provides novel insight into the interrelationships among decision components involved in inhibitory control, and raises interesting questions about strategic adjustments in performance and inhibitory deficits associated with psychopathology. PMID:24405185
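    A minimal simulation of a drift diffusion decision process of the kind fitted to the Go trials; parameter names and values below are generic illustrations, not the study's fitted values:

```python
import numpy as np

def ddm_rt(drift, threshold, ndt, n_trials, dt=0.001, noise=1.0, rng=None):
    """Simulate first-passage times of a drift diffusion process:
    evidence accumulates from 0 toward +/-threshold with the given drift
    rate (stimulus processing speed) plus Gaussian noise; the
    non-decision time ndt stands in for motor execution time."""
    rng = rng or np.random.default_rng(7)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)
        choices.append(x > 0)                # True = correct boundary
    return np.array(rts), np.array(choices)

rts, choices = ddm_rt(drift=1.5, threshold=1.0, ndt=0.3, n_trials=200)
```

Fitting such a model to each participant's Go-trial response times is what yields the per-subject caution (threshold), motor execution time (ndt), and processing speed (drift) measures correlated with the fMRI data.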

  12. A study of helicopter stability and control including blade dynamics

    NASA Technical Reports Server (NTRS)

    Zhao, Xin; Curtiss, H. C., Jr.

    1988-01-01

    A linearized model of rotorcraft dynamics has been developed through the use of symbolic automatic equation generating techniques. The dynamic model has been formulated in a unique way such that it can be used to analyze a variety of rotor/body coupling problems, including a rotor mounted on a flexible shaft with a number of modes as well as free-flight stability and control characteristics. Direct comparison of the time response to longitudinal, lateral and directional control inputs at various trim conditions shows that the linear model yields good to very good correlation with flight test. In particular it is shown that a dynamic inflow model is essential to obtain good time response correlation, especially for the hover trim condition. It is also shown that the main rotor wake interaction with the tail rotor and fixed tail surfaces is a significant contributor to the response at translational flight trim conditions. A relatively simple model for the downwash and sidewash at the tail surfaces based on flat vortex wake theory is shown to produce good agreement. Then, the influence of rotor flap and lag dynamics on automatic control system feedback gain limitations is investigated with the model. It is shown that the blade dynamics, especially lagging dynamics, can severely limit the usable values of the feedback gain for simple feedback control, and that multivariable optimal control theory is a powerful tool for designing high-gain augmentation control systems. The frequency-shaped optimal control design can offer much better flight dynamic characteristics and a stability margin for the feedback system without the need to model the lagging dynamics.

  13. Quantum Impurity Models as Reference Systems for Strongly Correlated Materials: The Road from the Kondo Impurity Model to First Principles Electronic Structure Calculations with Dynamical Mean-Field Theory

    NASA Astrophysics Data System (ADS)

    Kotliar, Gabriel

    2005-01-01

Dynamical mean field theory (DMFT) relates extended systems (bulk solids, surfaces and interfaces) to quantum impurity models (QIM) satisfying a self-consistency condition. This mapping provides an economical description of correlated electron materials and is currently used in practical computations of physical properties of real materials. It also has great conceptual value, providing a simple picture of correlated electron phenomena on the lattice using concepts derived from quantum impurity models, such as the Kondo effect. DMFT can also be formulated as a first principles electronic structure method and is applicable to correlated materials.

  14. Density correlators in a self-similar cascade

    NASA Astrophysics Data System (ADS)

Bialas, A.; Czyżewski, J.

    1999-09-01

    Multivariate density moments (correlators) of arbitrary order are obtained for the multiplicative self-similar cascade. This result is based on the calculation by Greiner, Eggers and Lipa where the correlators of the logarithms of the particle densities have been obtained. The density correlators, more suitable for comparison with multiparticle data, appear to have a simple factorizable form.

  15. A method for estimating the incident PAR on inclined surfaces

    NASA Astrophysics Data System (ADS)

    Xie, Xiaoping; Gao, Wei; Gao, Zhiqiang

    2008-08-01

A new simple model has been developed that incorporates Digital Elevation Model (DEM) and Moderate Resolution Imaging Spectroradiometer (MODIS) products to produce incident photosynthetically active radiation (PAR) for tilted surfaces. The method is based on a simplification of the general radiative transfer equation, which considers five major processes of attenuation of solar radiation: 1) Rayleigh scattering, 2) absorption by ozone and water vapor, 3) aerosol scattering, 4) multiple reflectance between surface and atmosphere, and 5) three terrain factors: slope and aspect, isotropic sky view factor, and additional radiation by neighbor reflectance. A comparison of the model results with 2006 field-measured PAR at the Yucheng and Changbai Mountain sites of the Chinese Ecosystem Research Network (CERN) shows correlation coefficients of 0.929 and 0.904, respectively, with average percent errors of 10% and 15%.

  16. Chromatin conformation in living cells: support for a zig-zag model of the 30 nm chromatin fiber

    NASA Technical Reports Server (NTRS)

    Rydberg, B.; Holley, W. R.; Mian, I. S.; Chatterjee, A.

    1998-01-01

A new method was used to probe the conformation of chromatin in living mammalian cells. The method employs ionizing radiation and is based on the concept that such radiation induces correlated breaks in DNA strands that are in spatial proximity. Human dermal fibroblasts in G0 phase of the cell cycle and Chinese hamster ovary cells in mitosis were irradiated by X-rays or accelerated ions. Following lysis of the cells, DNA fragments induced by correlated breaks were end-labeled and separated according to size on denaturing polyacrylamide gels. A characteristic peak was obtained for a fragment size of 78 bases, which is the size that corresponds to one turn of DNA around the nucleosome. Additional peaks between 175 and 450 bases reflect the relative positions of nearest-neighbor nucleosomes. Theoretical calculations that simulate the indirect and direct effects of radiation on DNA demonstrate that the fragment size distributions are closely related to the chromatin structure model used. Comparison of the experimental data with theoretical results supports a zig-zag model of the chromatin fiber rather than a simple helical model. Thus, radiation-induced damage analysis can provide information on chromatin structure in the living cell. Copyright 1998 Academic Press.

  17. Demarcation of continental-oceanic transition zone using angular differences between gradients of geophysical fields

    NASA Astrophysics Data System (ADS)

    Jilinski, Pavel; Meju, Max A.; Fontes, Sergio L.

    2013-10-01

The commonest technique for determination of the continental-oceanic crustal boundary or transition (COB) zone is based on locating and visually correlating bathymetric and potential field anomalies and constructing crustal models constrained by seismic data. In this paper, we present a simple method for spatial correlation of bathymetric and potential field geophysical anomalies. Angular differences between gradient directions are used to determine different types of correlation between gravity and bathymetric or magnetic data. It is found that the relationship between bathymetry and gravity anomalies can be correctly identified using this method. It is demonstrated, by comparison with previously published models for the southwest African margin, that this method enables the demarcation of the zone of transition from oceanic to continental crust, assuming that it is associated with geophysical anomalies which can be correlated using gradient directions rather than magnitudes. We also applied this method, supported by 2-D gravity modelling, to the more complex Liberia and Cote d'Ivoire-Ghana sectors of the West African transform margin and obtained results that are in remarkable agreement with past predictions of the COB in that region. We suggest the use of this method for a first-pass interpretation as a prelude to rigorous modelling of the COB in frontier areas.
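The angular-difference criterion lends itself to a compact numerical sketch. The fields below are synthetic planes, not the authors' data; parallel gradients (angle near 0°) mark positive correlation and antiparallel gradients (near 180°) mark anticorrelation:

```python
import numpy as np

# Synthetic illustration: two gridded "geophysical fields" with the same
# trend, and one with the opposite trend.
y, x = np.mgrid[0:50, 0:50].astype(float)
bathymetry = x            # deepens eastward
gravity    = 2.0 * x + 5  # same trend, different magnitude
magnetic   = -x           # opposite trend

def gradient_angle_deg(f, g):
    """Angle (degrees) between the gradient directions of two fields."""
    fy, fx = np.gradient(f)
    gy, gx = np.gradient(g)
    dot = fx * gx + fy * gy
    norm = np.hypot(fx, fy) * np.hypot(gx, gy)
    return np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))

a1 = gradient_angle_deg(bathymetry, gravity)   # ~0 deg everywhere
a2 = gradient_angle_deg(bathymetry, magnetic)  # ~180 deg everywhere
print(a1.mean(), a2.mean())
```

Note the comparison is insensitive to gradient magnitude, which is the point made above: correlation is judged by direction alone.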

  18. Stylized facts in social networks: Community-based static modeling

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo

    2018-06-01

    The past analyses of datasets of social networks have enabled us to make empirical findings of a number of aspects of human society, which are commonly featured as stylized facts of social networks, such as broad distributions of network quantities, existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, for deeper insight into human society more comprehensive datasets and modeling of the stylized facts are needed. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes and larger communities having smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.

  19. Surface Transient Binding-Based Fluorescence Correlation Spectroscopy (STB-FCS), a Simple and Easy-to-Implement Method to Extend the Upper Limit of the Time Window to Seconds.

    PubMed

    Peng, Sijia; Wang, Wenjuan; Chen, Chunlai

    2018-05-10

    Fluorescence correlation spectroscopy is a powerful single-molecule tool that is able to capture kinetic processes occurring at the nanosecond time scale. However, the upper limit of its time window is restricted by the dwell time of the molecule of interest in the confocal detection volume, which is usually around submilliseconds for a freely diffusing biomolecule. Here, we present a simple and easy-to-implement method, named surface transient binding-based fluorescence correlation spectroscopy (STB-FCS), which extends the upper limit of the time window to seconds. We further demonstrated that STB-FCS enables capture of both intramolecular and intermolecular kinetic processes whose time scales cross several orders of magnitude.

  20. Detection of greenhouse-gas-induced climatic change. Progress report, 1 December 1991--30 June 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wigley, T.M.L.; Jones, P.D.

    1994-07-01

In addition to changes due to variations in greenhouse gas concentrations, the global climate system exhibits a high degree of internally-generated and externally-forced natural variability. To detect the enhanced greenhouse effect, its signal must be isolated from the "noise" of this natural climatic variability. A high quality, spatially extensive data base is required to define the noise and its spatial characteristics. To facilitate this, available land and marine data bases will be updated and expanded. The data will be analyzed to determine the potential effects on climate of greenhouse gas concentration changes and other factors. Analyses will be guided by a variety of models, from simple energy balance climate models to ocean General Circulation Models. Appendices A--G contain the following seven papers: (A) Recent global warmth moderated by the effects of the Mount Pinatubo eruption; (B) Recent warming in global temperature series; (C) Correlation methods in fingerprint detection studies; (D) Balancing the carbon budget. Implications for projections of future carbon dioxide concentration changes; (E) A simple model for estimating methane concentration and lifetime variations; (F) Implications for climate and sea level of revised IPCC emissions scenarios; and (G) Sulfate aerosol and climatic change.

  1. Theory and simulations of covariance mapping in multiple dimensions for data analysis in high-event-rate experiments

    NASA Astrophysics Data System (ADS)

    Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.

    2014-05-01

    Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
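The two-variable partial-covariance correction that the threefold analysis generalizes can be sketched on toy shot-to-shot data (simulated Poisson yields driven by a fluctuating pulse intensity; the numbers are illustrative, not experimental):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated shot-to-shot data: a fluctuating "pulse intensity" I drives two
# product yields X and Y that are otherwise independent of each other.
n = 20000
I = rng.poisson(50, n).astype(float)  # unstable experimental parameter
X = rng.poisson(0.2 * I)              # yield scales linearly with I
Y = rng.poisson(0.1 * I)              # independent of X given I

def cov(a, b):
    """Sample covariance over shots."""
    return np.mean(a * b) - np.mean(a) * np.mean(b)

# Plain covariance picks up a false correlation induced by I.
plain = cov(X, Y)

# Partial covariance removes the linear effect of the fluctuating parameter:
# pcov(X, Y; I) = cov(X, Y) - cov(X, I) * cov(I, Y) / var(I)
partial = cov(X, Y) - cov(X, I) * cov(I, Y) / cov(I, I)

print(plain, partial)
```

With these parameters `plain` sits near 1 (a purely intensity-induced correlation) while `partial` sits near 0, which is the false-correlation suppression the abstract describes for signals that scale linearly with the unstable parameter.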

  2. Glassy behaviour in simple kinetically constrained models: topological networks, lattice analogues and annihilation-diffusion

    NASA Astrophysics Data System (ADS)

    Sherrington, David; Davison, Lexie; Buhot, Arnaud; Garrahan, Juan P.

    2002-02-01

    We report a study of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained dynamics of a type initially suggested by foams and idealized covalent glasses. We demonstrate that macroscopic dynamical features characteristic of real and more complex model glasses, such as two-time decays in energy and auto-correlation functions, arise from the dynamics and we explain them qualitatively and quantitatively in terms of annihilation-diffusion concepts and theory. The comparison is with strong glasses. We also consider fluctuation-dissipation relations and demonstrate subtleties of interpretation. We find no FDT breakdown when the correct normalization is chosen.

  3. Free-energy functional of the Debye-Hückel model of simple fluids

    NASA Astrophysics Data System (ADS)

    Piron, R.; Blenski, T.

    2016-12-01

    The Debye-Hückel approximation to the free energy of a simple fluid is written as a functional of the pair correlation function. This functional can be seen as the Debye-Hückel equivalent to the functional derived in the hypernetted chain framework by Morita and Hiroike, as well as by Lado. It allows one to obtain the Debye-Hückel integral equation through a minimization with respect to the pair correlation function, leads to the correct form of the internal energy, and fulfills the virial theorem.

  4. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    PubMed

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-12-01

    The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.

  5. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

    PubMed Central

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-01-01

    The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806

  6. A Multivariate Analysis of Galaxy Cluster Properties

    NASA Astrophysics Data System (ADS)

    Ogle, P. M.; Djorgovski, S.

    1993-05-01

We have assembled from the literature a data base of 394 clusters of galaxies, with up to 16 parameters per cluster. They include optical and x-ray luminosities, x-ray temperatures, galaxy velocity dispersions, central galaxy and particle densities, optical and x-ray core radii and ellipticities, etc. In addition, derived quantities, such as the mass-to-light ratios and x-ray gas masses, are included. Doubtful measurements have been identified and deleted from the data base. Our goal is to explore the correlations between these parameters, and interpret them in the framework of our understanding of the evolution of clusters and large-scale structure, such as the Gott-Rees scaling hierarchy. Among the simple, monovariate correlations we found, the most significant include those between the optical and x-ray luminosities, x-ray temperatures, cluster velocity dispersions, and central galaxy densities, in various mutual combinations. While some of these correlations have been discussed previously in the literature, generally smaller samples of objects have been used. We will also present the results of a multivariate statistical analysis of the data, including a principal component analysis (PCA). Such an approach has not been used previously for studies of cluster properties, even though it is much more powerful and complete than the simple monovariate techniques which are commonly employed. The observed correlations may lead to powerful constraints for theoretical models of formation and evolution of galaxy clusters. P.M.O. was supported by a Caltech graduate fellowship. S.D. acknowledges a partial support from the NASA contract NAS5-31348 and the NSF PYI award AST-9157412.
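A principal component analysis of the kind described can be sketched on synthetic stand-ins for correlated cluster observables (the scaling relations below are invented for illustration, not taken from the catalogue):

```python
import numpy as np

# Synthetic "cluster catalogue": three observables that all trace one
# underlying quantity, so a single principal component dominates.
rng = np.random.default_rng(1)
n = 394
lum  = rng.lognormal(0.0, 0.5, n)               # stand-in optical luminosity
temp = lum ** 0.6 * rng.lognormal(0.0, 0.1, n)  # correlated x-ray temperature
disp = lum ** 0.3 * rng.lognormal(0.0, 0.1, n)  # correlated velocity dispersion

# PCA via eigendecomposition of the covariance of the (log, centered) data.
X = np.log(np.column_stack([lum, temp, disp]))
X -= X.mean(axis=0)
cov = X.T @ X / (n - 1)
evals, evecs = np.linalg.eigh(cov)        # ascending eigenvalues
explained = evals[::-1] / evals.sum()     # variance fraction per component
print(explained)
```

Because the mock observables share one latent driver, the first component carries most of the variance, mirroring how PCA compresses correlated monovariate relations into a few components.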

  7. A simple geometrical model describing shapes of soap films suspended on two rings

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical frusta-based model, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced the well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match surprisingly well both the experimental data and the known exact analytical solutions. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.

  8. On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit

    2012-12-01

    This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA) for power allocation. Finally, we derive a necessary and sufficient condition for which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound of ergodic sum capacity is tight and the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
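The water-filling principle underlying the proposed IWFA can be illustrated in its simplest single-power-constraint form (a generic sketch of classic water-filling, not the per-BS iterative algorithm of the article):

```python
import numpy as np

def water_filling(gains, total_power):
    """Classic water-filling: allocate p_i = max(mu - 1/g_i, 0) over channel
    gains g_i, choosing the water level mu so that sum(p_i) == total_power."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + (1.0 / g).max()  # bracket the water level
    for _ in range(100):                          # bisection on mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / g, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - 1.0 / g, 0.0)

# Hypothetical eigenchannel gains: stronger directions get more power.
p = water_filling([2.0, 1.0, 0.5], total_power=3.0)
print(p, p.sum())
```

Stronger eigendirections receive more power, consistent with the result above that each BS's optimum transmit directions align with the eigenvectors of its own transmit correlation matrix.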

  9. Colloid Transport in Saturated Porous Media: Elimination of Attachment Efficiency in a New Colloid Transport Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.

A new colloid transport model is introduced that is conceptually simple but captures the essential features of complicated attachment and detachment behavior of colloids when conditions of secondary minimum attachment exist. This model eliminates the empirical concept of collision efficiency; the attachment rate is computed directly from colloid filtration theory. Also, a new paradigm for colloid detachment based on colloid population heterogeneity is introduced. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of colloids that attach irreversibly and (2) the rate at which reversibly attached colloids leave the surface. These two parameters were correlated to physical parameters that control colloid transport such as the depth of the secondary minimum and pore water velocity. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport. This model can be extended to heterogeneous systems characterized by both primary and secondary minimum deposition by simply increasing the fraction of colloids that attach irreversibly.

  10. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  11. Major Fault Patterns in Zanjan State of Iran Based of GECO Global Geoid Model

    NASA Astrophysics Data System (ADS)

    Beheshty, Sayyed Amir Hossein; Abrari Vajari, Mohammad; Raoufikelachayeh, SeyedehSusan

    2016-04-01

A new Earth Gravitational Model (GECO) to degree 2190 has been developed that incorporates EGM2008 and the latest GOCE-based satellite solutions. Satellite gradiometry data are more sensitive to the long- and medium-wavelength components of the gravity field than conventional satellite tracking data. Hence, by utilizing this new technique, more accurate, reliable and higher degree/order spherical harmonic expansions of the gravity field can be achieved. Gravity gradients can also be useful in geophysical interpretation and prospecting. We have presented the concept of gravity gradients with some simple interpretations. MATLAB-based computer programs were developed and utilized for determining the gravity and gradient components of the gravity field using the GGMs, followed by a case study in Zanjan State of Iran. Our numerical studies show strong (more than 72%) correlations between gravity anomalies and the diagonal elements of the gradient tensor. Strong correlations were also revealed between the components of the deflection of the vertical and the off-diagonal elements, as well as between the horizontal gradient and the magnitude of the deflection of the vertical. We clearly distinguished two big faults north and south of Zanjan city based on the current information, and several minor faults were detected in the study area. Therefore, the same geophysical interpretation can be stated for gravity gradient components too. Our mathematical derivations support some of these correlations.

  12. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
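The factorized Bayesian decoding step can be sketched with hypothetical Gaussian per-filter likelihoods (the likelihood shapes and all numbers below are assumptions for illustration, not the parametric model fitted in the paper):

```python
import math

def edge_probability(responses, on_edge, off_edge, prior=0.1):
    """Bayes' rule with conditionally independent filters: the joint
    likelihood factorizes, so per-filter likelihood ratios add in log-odds."""
    log_odds = math.log(prior / (1.0 - prior))
    for r, p_on, p_off in zip(responses, on_edge, off_edge):
        log_odds += math.log(p_on(r) / p_off(r))
    return 1.0 / (1.0 + math.exp(-log_odds))

def gauss(mu, sigma):
    """Gaussian density, used as a stand-in per-filter likelihood."""
    return lambda r: math.exp(-0.5 * ((r - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical population of 3 filters: edges drive responses toward 1.
on  = [gauss(1.0, 0.3)] * 3   # response distribution given an edge
off = [gauss(0.0, 0.3)] * 3   # response distribution given no edge

p_hi = edge_probability([0.9, 1.1, 1.0], on, off)    # edge-like responses
p_lo = edge_probability([0.1, 0.0, -0.1], on, off)   # background responses
print(p_hi, p_lo)
```

The summed log-odds show why the decoded probability is much more sharply tuned than any single linear filter: each filter contributes evidence, and the filter-independence emphasized above is what licenses the factorization.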

  13. Effects of host social hierarchy on disease persistence.

    PubMed

    Davidson, Ross S; Marion, Glenn; Hutchings, Michael R

    2008-08-07

The effects of social hierarchy on population dynamics and epidemiology are examined through a model which contains a number of fundamental features of hierarchical systems, but is simple enough to allow analytical insight. In order to allow for differences in birth rates, contact rates and movement rates among different sets of individuals the population is first divided into subgroups representing levels in the hierarchy. Movement, representing dominance challenges, is allowed between any two levels, giving a completely connected network. The model includes hierarchical effects by introducing a set of dominance parameters which affect birth rates in each social level and movement rates between social levels, dependent upon their rank. Although natural hierarchies vary greatly in form, the skewing of contact patterns, introduced here through non-uniform dominance parameters, has marked effects on the spread of disease. A simple homogeneous mixing differential equation model of a disease with SI dynamics in a population subject to a simple birth and death process is presented and it is shown that the hierarchical model tends to this as certain parameter regions are approached. Outside of these parameter regions correlations within the system give rise to deviations from the simple theory. A Gaussian moment closure scheme is developed which extends the homogeneous model in order to take account of correlations arising from the hierarchical structure, and it is shown that the results are in reasonable agreement with simulations across a range of parameters. This approach helps to elucidate the origin of hierarchical effects and shows that it may be straightforward to relate the correlations in the model to measurable quantities that could be used to determine the importance of hierarchical corrections.
Overall, hierarchical effects decrease the levels of disease present in a given population compared to a homogeneous unstructured model, but show higher levels of disease than structured models with no hierarchy. The separation between these three models is greatest when the rate of dominance challenges is low, reducing mixing, and when the disease prevalence is low. This suggests that these effects will often need to be considered in models being used to examine the impact of control strategies where the low disease prevalence behaviour of a model is critical.
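The homogeneous-mixing SI baseline that the hierarchical model reduces to can be sketched with forward-Euler integration (the rates below are hypothetical; at the endemic equilibrium the susceptible fraction settles at d/beta):

```python
# Homogeneous-mixing SI model with balanced per-capita birth (b) and
# death (d) rates and mass-action transmission at rate beta.
b, d = 0.02, 0.02
beta = 0.5

S, I = 999.0, 1.0   # susceptible / infected counts
dt = 0.01
for _ in range(200000):             # integrate to t = 2000, past transients
    N = S + I
    new_inf = beta * S * I / N      # mass-action incidence
    S += dt * (b * N - new_inf - d * S)   # births replenish susceptibles
    I += dt * (new_inf - d * I)           # infecteds removed only by death

prevalence = I / (S + I)
print(prevalence)   # analytic endemic prevalence is 1 - d/beta = 0.96
```

This is the high-prevalence, well-mixed limit; the hierarchical corrections discussed above lower prevalence relative to this baseline, most strongly when dominance challenges (mixing) are rare.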

  14. The galaxy clustering crisis in abundance matching

    NASA Astrophysics Data System (ADS)

    Campbell, Duncan; van den Bosch, Frank C.; Padmanabhan, Nikhil; Mao, Yao-Yuan; Zentner, Andrew R.; Lange, Johannes U.; Jiang, Fangzhou; Villarreal, Antonio

    2018-06-01

Galaxy clustering on small scales is significantly underpredicted by sub-halo abundance matching (SHAM) models that populate (sub-)haloes with galaxies based on peak halo mass, Mpeak. SHAM models based on the peak maximum circular velocity, Vpeak, have had much better success. The primary reason Mpeak-based models fail is the relatively low abundance of satellite galaxies produced in these models compared to those based on Vpeak. Despite success in predicting clustering, a simple Vpeak-based SHAM model results in predictions for galaxy growth that are at odds with observations. We evaluate three possible remedies that could `save' mass-based SHAM: (1) SHAM models require a significant population of `orphan' galaxies as a result of artificial disruption/merging of sub-haloes in modern high-resolution dark matter simulations; (2) satellites must grow significantly after their accretion; and (3) stellar mass is significantly affected by halo assembly history. No solution is entirely satisfactory. However, regardless of the particulars, we show that popular SHAM models based on Mpeak cannot be complete physical models as presented. Either Vpeak truly is a better predictor of stellar mass at z ˜ 0 and it remains to be seen how the correlation between stellar mass and Vpeak comes about, or SHAM models are missing vital component(s) that significantly affect galaxy clustering.

  15. Theoretical study of reactive and nonreactive turbulent coaxial jets

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Wakelyn, N. T.

    1976-01-01

    The hydrodynamic properties and the reaction kinetics of axisymmetric coaxial turbulent jets having steady mean quantities are investigated. From the analysis, limited to free turbulent boundary layer mixing of such jets, it is found that the two-equation model of turbulence is adequate for most nonreactive flows. For the reactive flows, where an allowance must be made for second order correlations of concentration fluctuations in the finite rate chemistry for initially inhomogeneous mixture, an equation similar to the concentration fluctuation equation of a related model is suggested. For diffusion limited reactions, the eddy breakup model based on concentration fluctuations is found satisfactory and simple to use. The theoretical results obtained from these various models are compared with some of the available experimental data.

  16. Correlators in tensor models from character calculus

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.

    2017-11-01

    We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.

  17. Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.; Esmaeili, S.

    2015-12-01

    We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix of derivatives of the observations with respect to the model parameters is computed using a finite-difference method. The iterative process of updating the initial parameter values to minimize the objective function then begins. Another measure of the goodness of the final accepted model is the correlation coefficient, calculated following the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
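    The Gauss-Marquardt-Levenberg scheme described above can be sketched in a few lines. This toy implementation fits a two-parameter exponential model to synthetic "observations" using a finite-difference Jacobian and damped normal equations; the model, parameter names, and damping value are illustrative assumptions, not taken from PEST or GPRMax.

```python
# Minimal Gauss-Marquardt-Levenberg sketch: iterate
# (J^T J + lam*I) dp = J^T r, with J built by finite differences.
import math

def model(p, ts):
    a, b = p
    return [a * math.exp(-b * t) for t in ts]

def jacobian(p, ts, h=1e-6):
    """Finite-difference derivatives of the model w.r.t. each parameter."""
    base = model(p, ts)
    cols = []
    for i in range(len(p)):
        q = list(p)
        q[i] += h
        pert = model(q, ts)
        cols.append([(pv - bv) / h for pv, bv in zip(pert, base)])
    # J[k][i] = d model(t_k) / d p_i
    return [[cols[i][k] for i in range(len(p))] for k in range(len(ts))]

def lm_fit(p, ts, obs, lam=1e-3, iters=50):
    for _ in range(iters):
        r = [o - m for o, m in zip(obs, model(p, ts))]   # residuals
        J = jacobian(p, ts)
        # normal equations for the 2-parameter case, with damping lam
        a11 = sum(Jk[0] * Jk[0] for Jk in J) + lam
        a12 = sum(Jk[0] * Jk[1] for Jk in J)
        a22 = sum(Jk[1] * Jk[1] for Jk in J) + lam
        g1 = sum(Jk[0] * rk for Jk, rk in zip(J, r))
        g2 = sum(Jk[1] * rk for Jk, rk in zip(J, r))
        det = a11 * a22 - a12 * a12
        dp = [(a22 * g1 - a12 * g2) / det, (a11 * g2 - a12 * g1) / det]
        p = [pi + di for pi, di in zip(p, dp)]
    return p

ts = [0.1 * k for k in range(20)]
obs = model([2.0, 1.5], ts)          # noise-free synthetic "truth"
fit = lm_fit([1.0, 1.0], ts, obs)    # recovers roughly [2.0, 1.5]
```

    Real codes such as PEST additionally adapt the damping factor between iterations and weight the residuals; this sketch keeps both fixed for clarity.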

  18. Joint Blind Source Separation by Multi-set Canonical Correlation Analysis

    PubMed Central

    Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D

    2009-01-01

    In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319
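    The correlation-maximization principle behind (M-)CCA can be illustrated with a brute-force toy: for two 2-channel datasets sharing one latent source, search over unit projection vectors (parameterized by angles) for the pair whose projections have maximal Pearson correlation. A real M-CCA solves this jointly for many datasets via eigen-decompositions; the synthetic mixing model and grid search here are purely illustrative.

```python
# Toy CCA: maximize correlation between 1-D projections of two datasets.
import math, random

random.seed(0)
n = 500
s = [random.gauss(0, 1) for _ in range(n)]              # shared latent source

def mix(w1, w2):
    """Two observed channels = weighted source + independent noise."""
    return [[w1 * si + random.gauss(0, 0.3),
             w2 * si + random.gauss(0, 0.3)] for si in s]

X, Y = mix(1.0, 0.2), mix(0.3, 0.9)                     # two "datasets"

def pearson(u, v):
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def project(D, theta):
    c, sn = math.cos(theta), math.sin(theta)
    return [c * d[0] + sn * d[1] for d in D]

# grid search over projection angles; best[0] is the canonical correlation
best = max((pearson(project(X, tx), project(Y, ty)), tx, ty)
           for tx in [k * 0.1 for k in range(32)]
           for ty in [k * 0.1 for k in range(32)])
```

    Because both datasets contain the same latent source, the maximal correlation found is close to 1; with independent sources it would hover near zero, which is the separability idea the paper formalizes.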

  19. Opportunities for Undergraduates to Engage in Research Using Seismic Data and Data Products

    NASA Astrophysics Data System (ADS)

    Taber, J. J.; Hubenthal, M.; Benoit, M. H.

    2014-12-01

    Introductory Earth science classes can become more interactive through the use of a range of seismic data and models that are available online, which students can use to conduct simple research regarding earthquakes and earth structure. One way to introduce students to these data sets is via a new set of six intro-level classroom activities designed to introduce undergraduates to some of the grand challenges in seismology research. The activities all use real data sets and some require students to collect their own data, either using physical models or via Web sites and Web applications. While the activities are designed to step students through a learning sequence, several of the activities are open-ended and can be expanded to research topics. For example, collecting and analyzing data from a deceptively simple physical model of earthquake behavior can lead students to query a map-based seismicity catalog via the IRIS Earthquake Browser to study seismicity rates and the distribution of earthquake magnitudes, and make predictions about the earthquake hazards in regions of their choosing. In another activity, students can pose their own questions and reach conclusions regarding the correlation between hydraulic fracturing, waste water disposal, and earthquakes. Other data sources are available for students to engage in self-directed research projects. Students with an interest in instrumentation can conduct research on instrument calibration and sensitivity using a simple educational seismometer. More advanced students can explore tomographic models of seismic velocity structure, and examine research questions related to earth structure, such as the correlation of topography to crustal thickness, and the fate of subducted slabs. The type of faulting in a region can be explored using a map-based catalog of focal mechanisms, allowing students to analyze the spatial distribution of normal, thrust and strike-slip events in a subduction zone region. For all of these topics and data sets, the societal impact of earthquakes can provide an additional motivation for students to engage in their research. www.iris.edu

  20. Improving Lidar-based Aboveground Biomass Estimation with Site Productivity for Central Hardwood Forests, USA

    NASA Astrophysics Data System (ADS)

    Shao, G.; Gallion, J.; Fei, S.

    2016-12-01

    Sound forest aboveground biomass estimation is required to monitor diverse forest ecosystems and their impacts on the changing climate. Lidar-based regression models have provided promising biomass estimates in most forest ecosystems. However, considerable uncertainties in biomass estimates have been reported for temperate hardwood and hardwood-dominated mixed forests. Variation in site productivity in temperate hardwood forests diversifies height and diameter growth rates, which significantly reduces the correlation between tree height and diameter at breast height (DBH) in mature and complex forests. It is, therefore, difficult to use height-based lidar metrics to predict DBH-based field-measured biomass through a simple regression model that ignores the variation in site productivity. In this study, we established a multidimensional nonlinear regression model incorporating lidar metrics and site productivity classes derived from soil features. In the regression model, lidar metrics provided horizontal and vertical structural information, and productivity classes differentiated good and poor forest sites. The selection and combination of lidar metrics are discussed. Multiple regression models were employed and compared. Uncertainty analysis was applied to the best-fit model. The effects of site productivity on the lidar-based biomass model were addressed.

  1. Correlation between He-Ne scatter and 2.7-microm pulsed laser damage at coating defects.

    PubMed

    Porteus, J O; Spiker, C J; Franck, J B

    1986-11-01

    A reported correlation between defect-initiated pulsed laser damage and local predamage scatter in multilayer infrared mirror coatings has been analyzed in detail. Examination of a much larger database confirms the previous result on dielectric-enhanced reflectors with polished substrates over a wide range of energy densities above the damage onset. Scatter signals from individual undamaged defects were detected using a He-Ne scatter probe with a focal spot that nearly coincides with the 150-microm-diam (D1/e(2)) focal spot of the damage-probe beam. Subsequent damage frequency measurements (1-on-1) were made near normal or at 45 degrees incidence with 100-ns pulses at 2.7-microm wavelength. The correlation is characterized by an increase in damage frequency with increasing predamage scatter signal and by equivalence of the defect densities indicated by the two probes. Characteristics of the correlation are compared with a simple model based on focal spot intensity profiles. Conditions that limit correlation are discussed, including variable scatter from defects and background scatter from diamond-turned substrates. Results have implications for nondestructive defect detection and coating quality control.

  2. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    PubMed

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have up-to-date data on the health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and regularly updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separate validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.
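    The validation strategy described above, namely fitting on a subset of regions and checking the correlation between predictions and observations in held-out regions, can be sketched with synthetic data. The two-predictor linear model and all numbers below are illustrative assumptions; the real study used American Community Survey features and machine-learning models.

```python
# Fit OLS on "training" regions, then correlate predictions with
# observations in held-out regions (external validation by correlation).
import math, random

random.seed(1)
# synthetic "regions": (median_age, pct_low_income) -> outcome prevalence
regions = [(random.uniform(30, 45), random.uniform(5, 25)) for _ in range(60)]
prev = [0.2 * age + 0.5 * inc + random.gauss(0, 1.0) for age, inc in regions]
train, test = list(range(40)), list(range(40, 60))

def ols(idx):
    """Least squares with intercept + 2 predictors via normal equations."""
    X = [[1.0, regions[i][0], regions[i][1]] for i in idx]
    y = [prev[i] for i in idx]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    for i in range(3):                       # Gaussian elimination
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * 3
    for i in (2, 1, 0):                      # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]
    return beta

beta = ols(train)
pred = [beta[0] + beta[1] * regions[i][0] + beta[2] * regions[i][1] for i in test]
obs = [prev[i] for i in test]
m_p, m_o = sum(pred) / len(pred), sum(obs) / len(obs)
r = (sum((p - m_p) * (o - m_o) for p, o in zip(pred, obs))
     / math.sqrt(sum((p - m_p) ** 2 for p in pred)
                 * sum((o - m_o) ** 2 for o in obs)))
```

    The held-out correlation `r` plays the role of the paper's validation-sample correlations (reported medians 0.88 and 0.85 for derivation and validation states, respectively).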

  3. Worldwide impact of economic cycles on suicide trends over 3 decades: differences according to level of development. A mixed effect model study

    PubMed Central

    Perez-Rodriguez, M Mercedes; Garcia-Nieto, Rebeca; Fernandez-Navarro, Pablo; Galfalvy, Hanga; de Leon, Jose; Baca-Garcia, Enrique

    2012-01-01

    Objectives To investigate the trends and correlations of gross domestic product (GDP) adjusted for purchasing power parity (PPP) per capita on suicide rates in 10 WHO regions during the past 30 years. Design Analyses of databases of PPP-adjusted GDP per capita and suicide rates. Countries were grouped according to the Global Burden of Disease regional classification system. Data sources World Bank's official website and WHO's mortality database. Statistical analyses After graphically displaying PPP-adjusted GDP per capita and suicide rates, mixed effect models were used for representing and analysing clustered data. Results Three different groups of countries, based on the correlation between the PPP-adjusted GDP per capita and suicide rates, are reported: (1) positive correlation: developing (lower middle and upper middle income) Latin-American and Caribbean countries, developing countries in the South East Asian Region including India, some countries in the Western Pacific Region (such as China and South Korea) and high-income Asian countries, including Japan; (2) negative correlation: high-income and developing European countries, Canada, Australia and New Zealand and (3) no correlation was found in an African country. Conclusions PPP-adjusted GDP per capita may offer a simple measure for designing the type of preventive interventions aimed at lowering suicide rates that can be used across countries. Public health interventions might be more suitable for developing countries. In high-income countries, however, preventive measures based on the medical model might prove more useful. PMID:22586285

  4. A VLSI implementation for synthetic aperture radar image processing

    NASA Technical Reports Server (NTRS)

    Premkumar, A.; Purviance, J.

    1990-01-01

    A simple physical model for the Synthetic Aperture Radar (SAR) is presented. This model explains the one dimensional and two dimensional nature of the received SAR signal in the range and azimuth directions. A time domain correlator, its algorithm, and features are explained. The correlator is ideally suited for VLSI implementation. A real time SAR architecture using these correlators is proposed. In the proposed architecture, the received SAR data is processed using one dimensional correlators for determining the range while two dimensional correlators are used to determine the azimuth of a target. The architecture uses only three different types of custom VLSI chips and a small amount of memory.

  5. Prediction of the effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes with flaps retracted

    NASA Technical Reports Server (NTRS)

    Weil, Joseph; Sleeman, William C., Jr.

    1949-01-01

    The effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes are analyzed, and a simple method is presented for computing power-on pitching-moment curves for flap-retracted flight conditions. The methods evolved are based on the results of powered-model wind-tunnel investigations of 28 model configurations. Correlation curves are presented from which the effects of power on the downwash over the tail and the stabilizer effectiveness can be rapidly predicted. The procedures developed enable prediction of power-on longitudinal stability characteristics that are generally in very good agreement with experiment.

  6. Evaluation of the ORCHIDEE ecosystem model over Africa against 25 years of satellite-based water and carbon measurements

    NASA Astrophysics Data System (ADS)

    Traore, Abdoul Khadre; Ciais, Philippe; Vuichard, Nicolas; Poulter, Benjamin; Viovy, Nicolas; Guimberteau, Matthieu; Jung, Martin; Myneni, Ranga; Fisher, Joshua B.

    2014-08-01

    Few studies have evaluated land surface models for African ecosystems. Here we evaluate the Organizing Carbon and Hydrology in Dynamic Ecosystems (ORCHIDEE) process-based model for the interannual variability (IAV) of the fraction of absorbed active radiation, the gross primary productivity (GPP), soil moisture, and evapotranspiration (ET). Two ORCHIDEE versions are tested, which differ by their soil hydrology parameterization, one with a two-layer simple bucket and the other a more complex 11-layer soil-water diffusion. In addition, we evaluate the sensitivity to climate forcing data, atmospheric CO2, and soil depth. Despite a very generic vegetation parameterization, ORCHIDEE simulates the IAV of GPP and ET rather well (0.5 < r < 0.9 interannual correlation) over Africa except in forestlands. The ORCHIDEE 11-layer version outperforms the two-layer version for simulating IAV of soil moisture, whereas both versions perform similarly for GPP and ET. Effects of CO2 trends and of variable soil depth on the IAV of GPP, ET, and soil moisture are small, although these drivers influence the trends of these variables. The meteorological forcing data appear to be quite important for faithfully reproducing the IAV of simulated variables, suggesting that in regions with sparse weather station data, the model uncertainty is strongly related to uncertain meteorological forcing. Simulated variables are positively and strongly correlated with precipitation but negatively and weakly correlated with temperature and solar radiation. Model-derived and observation-based sensitivities are in agreement for the driving role of precipitation. However, the modeled GPP is too sensitive to precipitation, suggesting that processes such as increased water use efficiency during drought need to be incorporated in ORCHIDEE.

  7. Statistical and linguistic features of DNA sequences

    NASA Technical Reports Server (NTRS)

    Havlin, S.; Buldyrev, S. V.; Goldberger, A. L.; Mantegna, R. N.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1995-01-01

    We present evidence supporting the idea that the DNA sequence in genes containing noncoding regions is correlated, and that the correlation is remarkably long range--indeed, base pairs thousands of base pairs distant are correlated. We do not find such a long-range correlation in the coding regions of the gene. We resolve the problem of the "non-stationary" feature of the sequence of base pairs by applying a new algorithm called Detrended Fluctuation Analysis (DFA). We address the claim of Voss that there is no difference in the statistical properties of coding and noncoding regions of DNA by systematically applying the DFA algorithm, as well as standard FFT analysis, to all eukaryotic DNA sequences (33 301 coding and 29 453 noncoding) in the entire GenBank database. We describe a simple model to account for the presence of long-range power-law correlations which is based upon a generalization of the classic Levy walk. Finally, we describe briefly some recent work showing that the noncoding sequences have certain statistical features in common with natural languages. Specifically, we adapt to DNA the Zipf approach to analyzing linguistic texts, and the Shannon approach to quantifying the "redundancy" of a linguistic text in terms of a measurable entropy function. We suggest that noncoding regions in plants and invertebrates may display a smaller entropy and larger redundancy than coding regions, further supporting the possibility that noncoding regions of DNA may carry biological information.
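    The detrended fluctuation analysis (DFA) algorithm referenced above has a compact form: integrate the (mean-subtracted) sequence, fit and remove a linear trend in boxes of size n, and measure the root-mean-square fluctuation F(n); long-range correlated sequences give F(n) ~ n^alpha with alpha > 0.5. This sketch applies it to an uncorrelated random walk of +/-1 steps (alpha near 0.5), standing in for a mapped DNA sequence; it is illustrative, not the authors' code.

```python
# Minimal DFA: F(n) from linearly detrended boxes of the integrated series.
import math, random

random.seed(2)
x = [random.choice((-1, 1)) for _ in range(4096)]   # uncorrelated +/-1 "sequence"
mean = sum(x) / len(x)
profile, acc = [], 0.0
for v in x:                                         # integrated profile
    acc += v - mean
    profile.append(acc)

def linfit_rms(seg):
    """RMS residual of a least-squares line through one box."""
    m = len(seg)
    tm, sm = (m - 1) / 2.0, sum(seg) / m
    stt = sum((t - tm) ** 2 for t in range(m))
    slope = sum((t - tm) * (v - sm) for t, v in enumerate(seg)) / stt
    return math.sqrt(sum((v - (sm + slope * (t - tm))) ** 2
                         for t, v in enumerate(seg)) / m)

def dfa(n):
    boxes = [profile[i:i + n] for i in range(0, len(profile) - n + 1, n)]
    return math.sqrt(sum(linfit_rms(b) ** 2 for b in boxes) / len(boxes))

F16, F256 = dfa(16), dfa(256)
alpha = math.log(F256 / F16) / math.log(256 / 16)   # scaling exponent estimate
```

    For the noncoding DNA sequences discussed above, the fitted exponent exceeds 0.5, which is the signature of long-range power-law correlations.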

  8. Understanding Zeeman EIT Noise Correlation Spectra in Buffered Rb Vapor

    NASA Astrophysics Data System (ADS)

    O'Leary, Shannon; Zheng, Aojie; Crescimanno, Michael

    2014-05-01

    Noise correlation spectroscopy on systems manifesting Electromagnetically Induced Transparency (EIT) holds promise as a simple, robust method for performing high-resolution spectroscopy used in applications such as EIT-based atomic magnetometry and clocks. During laser light's propagation through a resonant medium, interaction with the medium converts laser phase noise into intensity noise. While this noise conversion can diminish the precision of EIT applications, noise correlation techniques transform the noise into a useful spectroscopic tool that can improve the application's precision. Using a single diode laser with large phase noise, we examine laser intensity noise and noise correlations from Zeeman EIT in a buffered Rb vapor. Of particular interest is a narrow noise correlation feature, resonant with EIT, that has been shown in earlier work to be power-broadening resistant at low powers. We report here on our recent experimental work and complementary theoretical modeling on EIT noise spectra, including a study of power broadening of the narrow noise correlation feature. Understanding the nature of the noise correlation spectrum is essential for optimizing EIT-noise applications.

  9. A simple microstructure return model explaining microstructure noise and Epps effects

    NASA Astrophysics Data System (ADS)

    Saichev, A.; Sornette, D.

    2014-01-01

    We present a novel simple microstructure model of financial returns that combines (i) the well-known ARFIMA process applied to tick-by-tick returns, (ii) the bid-ask bounce effect, (iii) the fat tail structure of the distribution of returns and (iv) the non-Poissonian statistics of inter-trade intervals. This model allows us to explain both qualitatively and quantitatively important stylized facts observed in the statistics of both microstructure and macrostructure returns, including the short-ranged correlation of returns, the long-ranged correlations of absolute returns, the microstructure noise and Epps effects. According to the microstructure noise effect, volatility is a decreasing function of the time-scale used to estimate it. The Epps effect states that cross correlations between asset returns are increasing functions of the time-scale at which the returns are estimated. The microstructure noise is explained as the result of the negative return correlations inherent in the definition of the bid-ask bounce component (ii). In the presence of a genuine correlation between the returns of two assets, the Epps effect is due to an average statistical overlap of the momentum of the returns of the two assets defined over a finite time-scale in the presence of the long memory process (i).
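    The microstructure noise effect described above can be reproduced with a far simpler model than the authors' (this is not their ARFIMA construction): a bid-ask bounce superimposed on an efficient random walk makes the per-tick realized variance shrink as the sampling scale grows, because the bounce contributes a variance term that does not accumulate with the horizon. All parameter values are illustrative.

```python
# Bid-ask bounce demo: realized variance per tick decreases with the
# sampling scale k, since var(k-tick return) = k*sigma^2 + O(spread^2).
import random
from statistics import pvariance

random.seed(3)
n, sigma, half_spread = 20000, 0.01, 0.02
mid, obs = 0.0, []
for _ in range(n):
    mid += random.gauss(0, sigma)                         # efficient price step
    obs.append(mid + random.choice((-1, 1)) * half_spread)  # bounce to bid/ask

def per_tick_var(k):
    """Variance of non-overlapping k-tick returns, divided by k."""
    rets = [obs[i + k] - obs[i] for i in range(0, n - k, k)]
    return pvariance(rets) / k

v_fine, v_coarse = per_tick_var(1), per_tick_var(50)      # v_fine >> v_coarse
```

    The Epps effect requires a second asset and a lagged cross-correlation structure, so it is not captured by this one-asset sketch.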

  10. The use of dwell time cross-correlation functions to study single-ion channel gating kinetics.

    PubMed Central

    Ball, F G; Kerry, C J; Ramsey, R L; Sansom, M S; Usherwood, P N

    1988-01-01

    The derivation of cross-correlation functions from single-channel dwell (open and closed) times is described. Simulation of single-channel data for simple gating models, alongside theoretical treatment, is used to demonstrate the relationship of cross-correlation functions to underlying gating mechanisms. It is shown that time irreversibility of gating kinetics may be revealed in cross-correlation functions. Application of cross-correlation function analysis to data derived from the locust muscle glutamate receptor-channel provides evidence for multiple gateway states and time reversibility of gating. A model for the gating of this channel is used to show the effect of omission of brief channel events on cross-correlation functions. PMID:2462924
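    A dwell-time cross-correlation function of the kind described above can be computed by correlating the duration of each open interval with the duration of the closed interval k events later. The synthetic two-state channel below has independent exponential dwell times, so the function stays near zero; gating schemes with multiple gateway states would produce nonzero structure. The rate constants are illustrative assumptions.

```python
# Cross-correlation of open vs. closed dwell times at event lag k.
import math, random

random.seed(4)
n = 5000
opens = [random.expovariate(1.0) for _ in range(n)]     # open dwell times
closeds = [random.expovariate(0.5) for _ in range(n)]   # closed dwell times

def cross_corr(k):
    """Pearson correlation between open_i and closed_(i+k)."""
    u, v = opens[: n - k], closeds[k:]
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / m
    su = math.sqrt(sum((a - mu) ** 2 for a in u) / m)
    sv = math.sqrt(sum((b - mv) ** 2 for b in v) / m)
    return cov / (su * sv)

ccf = [cross_corr(k) for k in range(5)]   # near zero for memoryless gating
```

    In the paper's analysis, asymmetry of such functions under exchanging open and closed sequences is what reveals time irreversibility of the gating kinetics.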

  11. Local Difference Measures between Complex Networks for Dynamical System Model Evaluation

    PubMed Central

    Lange, Stefan; Donges, Jonathan F.; Volkholz, Jan; Kurths, Jürgen

    2015-01-01

    A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [1] we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, and others based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system, and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required. Generalizations to directed as well as edge- and node-weighted graphs are discussed. PMID:25856374
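    One concrete form a "local network difference measure" can take, offered here as an illustrative sketch rather than the paper's actual definitions, is to score each node by the fraction of its possible links that disagree between two graphs on the same node set (e.g. a modeled versus an observed climate network).

```python
# Per-node link-disagreement score between two undirected simple graphs.
nodes = range(5)
# adjacency as sets of frozenset edges (hypothetical model vs. observation)
g_model = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4)]}
g_obs = {frozenset(e) for e in [(0, 1), (1, 3), (2, 3), (3, 4)]}

def local_difference(a, b, nodes):
    """Fraction of a node's possible links present in exactly one graph."""
    diff = a ^ b                       # symmetric difference of edge sets
    n = len(list(nodes))
    return {v: sum(1 for e in diff if v in e) / (n - 1) for v in nodes}

ld = local_difference(g_model, g_obs, nodes)   # node 1 disagrees most here
```

    Mapping such per-node scores back onto geographical locations is what makes the evaluation spatially explicit.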

  12. Local difference measures between complex networks for dynamical system model evaluation.

    PubMed

    Lange, Stefan; Donges, Jonathan F; Volkholz, Jan; Kurths, Jürgen

    2015-01-01

    A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [8] we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system [corrected]. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, and others based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system, and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required. Generalizations to directed as well as edge- and node-weighted graphs are discussed.

  13. Patterns, transitions and the role of leaders in the collective dynamics of a simple robotic flock

    NASA Astrophysics Data System (ADS)

    Tarcai, Norbert; Virágh, Csaba; Ábel, Dániel; Nagy, Máté; Várkonyi, Péter L.; Vásárhelyi, Gábor; Vicsek, Tamás

    2011-04-01

    We have developed an experimental setup of very simple self-propelled robots to observe collective motion emerging as a result of inelastic collisions only. A circular pool and commercial RC boats were the basis of our first setup, where we demonstrated that jamming, clustering, disordered and ordered motion are all present in such a simple experiment, and showed that the noise level has a fundamental role in the generation of collective dynamics. Critical noise ranges and the transition characteristics between the different collective patterns were also examined. In our second experiment we used a real-time tracking system and a few steerable model boats to introduce intelligent leaders into the flock. We demonstrated that even a very small portion of guiding members can determine group direction and enhance ordering through inelastic collisions. We also showed that noise can facilitate and speed up ordering with leaders. We further extended this work with an agent-based simulation model and observed close similarity between experimental and simulated results. The simulation results show clear statistical evidence of three states and a negative correlation between density and ordered motion due to the onset of jamming. Our experiments confirm various theoretical studies and simulation results in the literature on collision-based, noise-dependent and leader-driven self-propelled particle systems.

  14. On two-point boundary correlations in the six-vertex model with domain wall boundary conditions

    NASA Astrophysics Data System (ADS)

    Colomo, F.; Pronko, A. G.

    2005-05-01

    The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.

  15. Ion transport in complex layered graphene-based membranes with tuneable interlayer spacing.

    PubMed

    Cheng, Chi; Jiang, Gengping; Garvey, Christopher J; Wang, Yuanyuan; Simon, George P; Liu, Jefferson Z; Li, Dan

    2016-02-01

    Investigation of the transport properties of ions confined in nanoporous carbon is generally difficult because of the stochastic nature and distribution of multiscale complex and imperfect pore structures within the bulk material. We demonstrate a combined approach of experiment and simulation to describe the structure of complex layered graphene-based membranes, which allows their use as a unique porous platform to gain unprecedented insights into nanoconfined transport phenomena across the entire sub-10-nm scale. By correlation of experimental results with simulation of concentration-driven ion diffusion through the cascading layered graphene structure with sub-10-nm tuneable interlayer spacing, we are able to construct a robust, representative structural model that allows the establishment of a quantitative relationship among the nanoconfined ion transport properties in relation to the complex nanoporous structure of the layered membrane. This correlation reveals the remarkable effect of the structural imperfections of the membranes on ion transport and particularly the scaling behaviors of both diffusive and electrokinetic ion transport in graphene-based cascading nanochannels as a function of channel size from 10 nm down to subnanometer. Our analysis shows that the range of ion transport effects previously observed in simple one-dimensional nanofluidic systems will translate themselves into bulk, complex nanoslit porous systems in a very different manner, and the complex cascading porous circuitries can enable new transport phenomena that are unattainable in simple fluidic systems.

  16. Ion transport in complex layered graphene-based membranes with tuneable interlayer spacing

    PubMed Central

    Cheng, Chi; Jiang, Gengping; Garvey, Christopher J.; Wang, Yuanyuan; Simon, George P.; Liu, Jefferson Z.; Li, Dan

    2016-01-01

    Investigation of the transport properties of ions confined in nanoporous carbon is generally difficult because of the stochastic nature and distribution of multiscale complex and imperfect pore structures within the bulk material. We demonstrate a combined approach of experiment and simulation to describe the structure of complex layered graphene-based membranes, which allows their use as a unique porous platform to gain unprecedented insights into nanoconfined transport phenomena across the entire sub–10-nm scales. By correlation of experimental results with simulation of concentration-driven ion diffusion through the cascading layered graphene structure with sub–10-nm tuneable interlayer spacing, we are able to construct a robust, representative structural model that allows the establishment of a quantitative relationship among the nanoconfined ion transport properties in relation to the complex nanoporous structure of the layered membrane. This correlation reveals the remarkable effect of the structural imperfections of the membranes on ion transport and particularly the scaling behaviors of both diffusive and electrokinetic ion transport in graphene-based cascading nanochannels as a function of channel size from 10 nm down to subnanometer. Our analysis shows that the range of ion transport effects previously observed in simple one-dimensional nanofluidic systems will translate themselves into bulk, complex nanoslit porous systems in a very different manner, and the complex cascading porous circuities can enable new transport phenomena that are unattainable in simple fluidic systems. PMID:26933689

  17. Research of diagnosis sensors fault based on correlation analysis of the bridge structural health monitoring system

    NASA Astrophysics Data System (ADS)

    Hu, Shunren; Chen, Weimin; Liu, Lin; Gao, Xiaoxia

    2010-03-01

    A bridge structural health monitoring system is a typical multi-sensor measurement system, since multiple bridge structure parameters are collected from monitoring sites on river-spanning bridges. The bridge structure monitored by the sensors is a single entity; when it is subjected to external action, its different structural parameters respond differently. The data acquired by the sensors should therefore exhibit numerous correlation relations, whose complexity is determined by the complexity of the bridge structure. Traditionally, correlation analysis among monitoring sites has considered mainly their physical locations; unfortunately, this approach is too simple to describe the correlation in detail. This paper analyzes the correlation among bridge monitoring sites on the basis of bridge structural data, defines the correlation of bridge monitoring sites and describes its several forms, and then integrates correlation theory from data mining and signal systems to establish a correlation model that describes the correlation among the bridge monitoring sites quantitatively. Finally, the Chongqing Mashangxi Yangtze River bridge health monitoring system is taken as the research object for diagnosing sensor faults, and simulation results verify the effectiveness of the designed method and the theoretical discussion.
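
    The correlation-based fault-diagnosis idea above can be sketched briefly: healthy sensors on the same structure respond to the same loads, so a faulty sensor reveals itself through an abnormally low correlation with the rest. A minimal NumPy sketch (the synthetic data and the 0.5 threshold are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def flag_suspect_sensors(readings, threshold=0.5):
    """Flag sensors whose mean absolute correlation with the other
    sensors falls below `threshold`.

    readings: array of shape (n_samples, n_sensors).
    Returns a list of suspect sensor indices.
    """
    corr = np.corrcoef(readings, rowvar=False)  # n_sensors x n_sensors
    suspects = []
    for i in range(corr.shape[0]):
        others = np.delete(np.abs(corr[i]), i)  # drop self-correlation
        if others.mean() < threshold:
            suspects.append(i)
    return suspects

# Three healthy sensors tracking the same structural response,
# plus one faulty sensor emitting uncorrelated noise.
rng = np.random.default_rng(0)
load = np.sin(np.linspace(0, 20, 500))
healthy = [load + 0.1 * rng.standard_normal(500) for _ in range(3)]
faulty = rng.standard_normal(500)
data = np.column_stack(healthy + [faulty])
print(flag_suspect_sensors(data))  # → [3]
```

    The mean-correlation criterion is only one possible decision rule; the paper's model is richer, but the mechanism, pairwise correlation among monitoring sites, is the same.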

  18. Helicopter rotor and engine sizing for preliminary performance estimation

    NASA Technical Reports Server (NTRS)

    Talbot, P. D.; Bowles, J. V.; Lee, H. C.

    1986-01-01

    Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.

  19. Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?

    NASA Astrophysics Data System (ADS)

    Halide, Halmar

    2017-01-01

    We apply a simple linear multiple regression model called IndOzy to predict ENSO at up to 7 seasonal lead times. The model uses five predictors of past seasonal Niño 3.4 ENSO indices derived from chaos theory and is rolling-validated to give a one-step-ahead forecast. The model skill was evaluated against data from the May-June-July (MJJ) 2003 season to the November-December-January (NDJ) 2015/2016 season. Three skill measures, Pearson correlation, RMSE, and Euclidean distance, were used for forecast verification. The skill of this simple model was then compared to those of the combined statistical and dynamical models compiled at the IRI (International Research Institute) website. It was found that the simple model produced a useful ENSO prediction only up to 3 seasonal leads, while the IRI statistical and dynamical model skills remained useful up to 4 and 6 seasonal leads, respectively. Even with its short-range seasonal prediction skill, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature, meteorological conditions that affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. To improve its long-range skill, the simple IndOzy model needs to incorporate a nonlinear model such as an artificial neural network technique.
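
    The three verification measures named above are standard and easy to compute side by side; note that RMSE is just the Euclidean distance divided by √n. A minimal sketch (the anomaly values are invented for illustration, not the paper's data):

```python
import numpy as np

def forecast_skill(forecast, observed):
    """Return Pearson correlation, RMSE, and Euclidean distance
    between a forecast series and the observations."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    pearson = np.corrcoef(f, o)[0, 1]
    rmse = np.sqrt(np.mean((f - o) ** 2))
    euclid = np.linalg.norm(f - o)  # = rmse * sqrt(len(f))
    return pearson, rmse, euclid

# Hypothetical seasonal Niño 3.4 anomalies (°C): observed vs. forecast.
obs = [0.5, 1.2, 2.1, 1.8, 0.9, -0.3]
fc = [0.4, 1.0, 2.3, 1.6, 1.1, -0.1]
r, rmse, d = forecast_skill(fc, obs)
print(round(r, 3), round(rmse, 3), round(d, 3))  # → 0.972 0.187 0.458
```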

  20. Structure of amplitude correlations in open chaotic systems

    NASA Astrophysics Data System (ADS)

    Ericson, Torleif E. O.

    2013-02-01

    The Verbaarschot-Weidenmüller-Zirnbauer (VWZ) model is believed to correctly represent the correlations of two S-matrix elements for an open quantum chaotic system, but the solution has considerable complexity and is presently accessible only numerically. Here a procedure is developed to deduce its features over the full range of the parameter space in a transparent and simple analytical form, preserving accuracy to a considerable degree. The bulk of the VWZ correlations are described by the Gorin-Seligman expression for the two-amplitude correlations of the Ericson-Gorin-Seligman model. The structure of the remaining correction factors for correlation functions is discussed with special emphasis on the role of the level correlation hole for both inelastic and elastic correlations.

  1. SimpleBox 4.0: Improving the model while keeping it simple….

    PubMed

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

    Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is an often-used multimedia fate model, first developed in 1986. Since then, two updated versions have been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior for organic acids and bases as well as of the value for enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. Undesirably high model complexity caused by vegetation compartments and a local scale was removed to improve the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. 'Unconventional' experiments in biology and medicine with optimized design based on quantum-like correlations.

    PubMed

    Beauvais, Francis

    2017-02-01

    In previous articles, a description of 'unconventional' experiments (e.g. in vitro or clinical studies based on high dilutions, 'memory of water' or homeopathy) using quantum-like probability was proposed. Because the mathematical formulations of quantum logic are frequently an obstacle for physicians and biologists, a modified modeling that rests on classical probability is described in the present article. This modeling is inspired by a relational interpretation of quantum physics that applies not only to microscopic objects, but also to macroscopic structures, including experimental devices and observers. In this framework, any outcome of an experiment is not an absolute property of the observed system as usually considered but is expressed relative to an observer. A team of interacting observers is thus described from an external viewpoint based on two principles: the outcomes of experiments are expressed relative to each observer, and the observers agree on outcomes when they interact with each other. If probability fluctuations are also taken into account, correlations between 'expected' and observed outcomes emerge. Moreover, quantum-like correlations are predicted in experiments with local blind design but not with centralized blind design. No assumption on 'memory' or other physical modification of water is necessary in the present description, although such hypotheses cannot be formally discarded. In conclusion, a simple modeling of 'unconventional' experiments based on classical probability is now available and its predictions can be tested. The underlying concepts are sufficiently intuitive to be spread into the homeopathy community and beyond. It is hoped that this modeling will encourage new studies with optimized designs for in vitro experiments and clinical trials. Copyright © 2017 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  3. Simultaneous acquisition of 3D shape and deformation by combination of interferometric and correlation-based laser speckle metrology.

    PubMed

    Dekiff, Markus; Berssenbrügge, Philipp; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter

    2015-12-01

    A metrology system combining three laser speckle measurement techniques for simultaneous determination of 3D shape and micro- and macroscopic deformations is presented. While microscopic deformations are determined by a combination of Digital Holographic Interferometry (DHI) and Digital Speckle Photography (DSP), macroscopic 3D shape, position and deformation are retrieved by photogrammetry based on digital image correlation of a projected laser speckle pattern. The photogrammetrically obtained data extend the measurement range of the DHI-DSP system and also increase the accuracy of the calculation of the sensitivity vector. Furthermore, a precise assignment of microscopic displacements to the object's macroscopic shape for enhanced visualization is achieved. The approach allows for fast measurements with a simple setup. Key parameters of the system are optimized, and its precision and measurement range are demonstrated. As application examples, the deformation of a mandible model and the shrinkage of dental impression material are measured.

  4. Tenax extraction as a simple approach to improve environmental risk assessments.

    PubMed

    Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J

    2015-07-01

    It is well documented that using exhaustive chemical extractions is not an effective means of assessing exposure of hydrophobic organic compounds in sediments and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise as a method for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates to both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds relating 24-h Tenax-extractable concentrations to oligochaete tissue concentrations exposed in both the laboratory and field. This model has demonstrated predictive capacity for additional compounds and species. Use of Tenax-extractable concentrations to estimate exposure is rapid, simple, straightforward, and relatively inexpensive, as well as accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.

  5. Simple anthropometric measures correlate with metabolic risk indicators as strongly as magnetic resonance imaging–measured adipose tissue depots in both HIV-infected and control subjects

    PubMed Central

    Scherzer, Rebecca; Shen, Wei; Bacchetti, Peter; Kotler, Donald; Lewis, Cora E; Shlipak, Michael G; Heymsfield, Steven B

    2008-01-01

    Background Studies in persons without HIV infection have compared percentage body fat (%BF) and waist circumference as markers of risk for the complications of excess adiposity, but only limited study has been conducted in HIV-infected subjects. Objective We compared anthropometric and magnetic resonance imaging (MRI)–based adiposity measures as correlates of metabolic complications of adiposity in HIV-infected and control subjects. Design The study was a cross-sectional analysis of 666 HIV-positive and 242 control subjects in the Fat Redistribution and Metabolic Change in HIV Infection (FRAM) study assessing body mass index (BMI), waist (WC) and hip (HC) circumferences, waist-to-hip ratio (WHR), %BF, and MRI-measured regional adipose tissue. Study outcomes were 3 metabolic risk variables [homeostatic model assessment (HOMA), triglycerides, and HDL cholesterol]. Analyses were stratified by sex and HIV status and adjusted for demographic, lifestyle, and HIV-related factors. Results In HIV-infected and control subjects, univariate associations with HOMA, triglycerides, and HDL were strongest for WC, MRI-measured visceral adipose tissue, and WHR; in all cases, differences in correlation between the strongest measures for each outcome were small (r ≤ 0.07). Multivariate adjustment found no significant difference for optimally fitting models between the use of anthropometric and MRI measures, and the magnitudes of differences were small (adjusted R2 ≤ 0.06). For HOMA and HDL, WC appeared to be the best anthropometric correlate of metabolic complications, whereas, for triglycerides, the best was WHR. Conclusion Relations of simple anthropometric measures with HOMA, triglycerides, and HDL cholesterol are approximately as strong as MRI-measured whole-body adipose tissue depots in both HIV-infected and control subjects. PMID:18541572

  6. Simple anthropometric measures correlate with metabolic risk indicators as strongly as magnetic resonance imaging-measured adipose tissue depots in both HIV-infected and control subjects.

    PubMed

    Scherzer, Rebecca; Shen, Wei; Bacchetti, Peter; Kotler, Donald; Lewis, Cora E; Shlipak, Michael G; Heymsfield, Steven B; Grunfeld, Carl

    2008-06-01

    Studies in persons without HIV infection have compared percentage body fat (%BF) and waist circumference as markers of risk for the complications of excess adiposity, but only limited study has been conducted in HIV-infected subjects. We compared anthropometric and magnetic resonance imaging (MRI)-based adiposity measures as correlates of metabolic complications of adiposity in HIV-infected and control subjects. The study was a cross-sectional analysis of 666 HIV-positive and 242 control subjects in the Fat Redistribution and Metabolic Change in HIV Infection (FRAM) study assessing body mass index (BMI), waist (WC) and hip (HC) circumferences, waist-to-hip ratio (WHR), %BF, and MRI-measured regional adipose tissue. Study outcomes were 3 metabolic risk variables [homeostatic model assessment (HOMA), triglycerides, and HDL cholesterol]. Analyses were stratified by sex and HIV status and adjusted for demographic, lifestyle, and HIV-related factors. In HIV-infected and control subjects, univariate associations with HOMA, triglycerides, and HDL were strongest for WC, MRI-measured visceral adipose tissue, and WHR; in all cases, differences in correlation between the strongest measures for each outcome were small (r ≤ 0.07).

  7. Methodology and consistency of slant and vertical assessments for ionospheric electron content models

    NASA Astrophysics Data System (ADS)

    Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül

    2017-12-01

    A summary of the main concepts on global ionospheric map(s) [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter and difference of slant TEC based on independent global positioning system (GPS) data, dSTEC-GPS) is performed. It is based on 26 GPS receivers distributed worldwide and mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM `UQRG' computed by UPC) is shown. Typical error RMS values of 2 TECU for VTEC-altimeter and 0.5 TECU for dSTEC-GPS assessments are found. As expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, mainly evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new contributing analysis centres and in general for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.

  8. Performance of friction dampers in geometric mistuned bladed disk assembly subjected to random excitations

    NASA Astrophysics Data System (ADS)

    Cha, Douksoon

    2018-07-01

    In this study, the performance of friction dampers of a geometric mistuned bladed disk assembly is examined under random excitations. The results are represented by non-dimensional variables. It is shown that the performance of the blade-to-blade damper can deteriorate when the correlated narrow band excitations have a dominant frequency near the 1st natural frequency of the bladed disk assembly. Based on a simple model of a geometric mistuned bladed disk assembly, the analytical technique shows an efficient way to design friction dampers.

  9. A simple solution for model comparison in bold imaging: the special case of reward prediction error and reward outcomes.

    PubMed

    Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D

    2013-01-01

    Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.

  10. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
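
    The comparison above, simple (single-input) linear regression against multiple linear regression on correlated sensor inputs, can be illustrated with synthetic data; because the simple model is nested in the multiple one, the multiple fit can never be worse in-sample. A sketch with invented temperature/light/humidity relationships (an assumption for illustration, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Correlated sensor inputs: temperature drives both light and humidity.
temp = 20 + 5 * rng.standard_normal(n)
light = 300 + 10 * temp + 20 * rng.standard_normal(n)
humid = 80 - 1.5 * temp + 0.02 * light + rng.standard_normal(n)

def fit_rmse(X, y):
    """Least-squares linear fit with intercept; returns in-sample RMSE."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return np.sqrt(np.mean(resid ** 2))

rmse_simple = fit_rmse(temp.reshape(-1, 1), humid)            # one predictor
rmse_multi = fit_rmse(np.column_stack([temp, light]), humid)  # two predictors
assert rmse_multi <= rmse_simple  # nested model: extra input cannot fit worse
```

    Predicting from correlated co-located measurements rather than from time alone is the core of the proposal; the nesting argument explains why the multivariate variant was the most accurate in the paper's simulations.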

  11. Extension of Hopfield’s Electron Transfer Model To Accommodate Site–Site Correlation

    DOE PAGES

    Newton, Marshall D.

    2015-10-26

    Extension of the Förster analogue for the ET rate constant (based on virtual intermediate electron detachment or attachment states), with inclusion of site–site correlation due to coulomb terms associated with the solvent reorganization energy and the driving force, has been developed and illustrated for a simple three-state, two-mode model. Furthermore, the model is applicable to charge separation (CS), recombination (CR), and shift (CSh) ET processes, with or without an intervening bridge. The model provides a unified perspective on the role of virtual intermediate states in accounting for the thermal Franck–Condon weighted density of states (FCWD), the gaps controlling superexchange coupling, and mean absolute redox potentials, with full accommodation of site–site coulomb interactions. We analyzed two types of correlation: aside from the site–site correlation due to coulomb interactions, we have emphasized the intrinsic “nonorthogonality” which generally pertains to reaction coordinates (RCs) for different ET processes involving multiple electronic states, as may be expressed by suitably defined direction cosines (cos(θ)). A pair of RCs may be nonorthogonal even when the site–site coulomb correlations are absent. While different RCs are linearly independent in the mathematical sense for all θ ≠ 0°, they are independent in the sense of being “uncorrelated” only in the limit of orthogonality (θ = 90°). The application to more than two coordinates is straightforward and may include both discrete and continuum contributions.

  12. A simple electrical-mechanical model of the heart applied to the study of electrical-mechanical alternans

    NASA Technical Reports Server (NTRS)

    Clancy, Edward A.; Smith, Joseph M.; Cohen, Richard J.

    1991-01-01

    Recent evidence has shown that a subtle alternation in the surface ECG (electrical alternans) may be correlated with the susceptibility to ventricular fibrillation. In the present work, the authors present evidence that a mechanical alternation in the heartbeat (mechanical alternans) generally accompanies electrical alternans. A simple finite-element computer model which emulates both the electrical and the mechanical activity of the heart is presented. A pilot animal study is also reported. The computer model and the animal study both found that (1) there exists a regime of combined electrical-mechanical alternans during the transition from a normal rhythm towards a fibrillatory rhythm, (2) the detected degree of alternation is correlated with the relative instability of the rhythm, and (3) the electrical and mechanical alternans may result from a dispersion in local electrical properties leading to a spatial-temporal alternation in the electrical conduction process.

  13. Understanding the determinants of volatility clustering in terms of stationary Markovian processes

    NASA Astrophysics Data System (ADS)

    Miccichè, S.

    2016-11-01

    Volatility is a key variable in the modeling of financial markets. The most striking feature of volatility is that it is a long-range correlated stochastic variable, i.e. its autocorrelation function decays like a power law τ^(-β) for large time lags. In the present work we investigate the determinants of this feature, starting from the empirical observation that the exponent β of a certain stock's volatility is a linear function of the average correlation of that stock's volatility with all other volatilities. We propose a simple approach consisting in diagonalizing the cross-correlation matrix of volatilities and investigating whether or not the diagonalized volatilities still keep some of the original volatility stylized facts. As a result, the diagonalized volatilities turn out to share with the original volatilities both the power-law decay of the probability density function and the power-law decay of the autocorrelation function. This would indicate that volatility clustering is already present in the diagonalized, uncorrelated volatilities. We therefore present a parsimonious univariate model based on a non-linear Langevin equation that reproduces these two stylized facts of volatility well. The model helps us understand that the main source of volatility clustering, once volatilities have been diagonalized, is that the economic forces driving volatility can be modeled in terms of a Smoluchowski potential with logarithmic tails.
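
    The diagonalization step described above amounts to standardizing the volatility series, eigendecomposing their cross-correlation matrix, and projecting onto the eigenvectors; the projected series are uncorrelated by construction. A minimal NumPy sketch (the one-factor toy model generating the series is an assumption for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)
T, N = 5000, 4
# Toy volatility proxies: a common market factor plus idiosyncratic noise,
# so the N series are mutually cross-correlated.
market = rng.standard_normal(T)
vols = np.column_stack(
    [0.6 * market + 0.8 * rng.standard_normal(T) for _ in range(N)]
)

z = (vols - vols.mean(axis=0)) / vols.std(axis=0)  # standardize each series
C = np.corrcoef(z, rowvar=False)                   # cross-correlation matrix
eigvals, eigvecs = np.linalg.eigh(C)               # symmetric -> orthogonal V
diag_vols = z @ eigvecs                            # "diagonalized" volatilities

# The projected series have zero sample cross-correlation:
C_diag = np.corrcoef(diag_vols, rowvar=False)
off = np.abs(C_diag - np.eye(N)).max()
print(off < 1e-8)  # → True
```

    The paper's point is that even these decorrelated series retain the power-law tails and the clustering, so the stylized facts are not an artifact of the cross-correlations themselves.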

  14. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  15. Learning Pitch with STDP: A Computational Model of Place and Temporal Pitch Perception Using Spiking Neural Networks.

    PubMed

    Erfanian Saeedi, Nafise; Blamey, Peter J; Burkitt, Anthony N; Grayden, David B

    2016-04-01

    Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
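
    The correlation-based, spike-based Hebbian learning referenced above is, in its simplest pair-based STDP form, an exponential window over the pre/post spike-time difference: causally correlated spikes potentiate a synapse, anti-causal ones depress it. A sketch with illustrative parameter values (an assumption, not those of the paper's model):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (ms). Pre-before-post (dt > 0) potentiates;
    post-before-pre (dt < 0) depresses. Parameters are illustrative."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

# Synapses carrying causally correlated (synchronous) input are
# strengthened; uncorrelated timing averages out to weak depression.
print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)  # → True True
```

    Repeated application of such a rule to spike trains phase-locked to a tone's fine structure is what lets the network in the paper develop pitch-selective structure without supervision.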

  16. Learning Pitch with STDP: A Computational Model of Place and Temporal Pitch Perception Using Spiking Neural Networks

    PubMed Central

    Erfanian Saeedi, Nafise; Blamey, Peter J.; Burkitt, Anthony N.; Grayden, David B.

    2016-01-01

    Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons’ action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy. PMID:27049657

  17. Second-order closure models for supersonic turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Sarkar, Sutanu

    1991-01-01

    Recent work by the authors on the development of a second-order closure model for high-speed compressible flows is reviewed. This turbulence closure is based on the solution of modeled transport equations for the Favre-averaged Reynolds stress tensor and the solenoidal part of the turbulent dissipation rate. A new model for the compressible dissipation is used along with traditional gradient transport models for the Reynolds heat flux and mass flux terms. Consistent with simple asymptotic analyses, the deviatoric part of the remaining higher-order correlations in the Reynolds stress transport equation are modeled by a variable density extension of the newest incompressible models. The resulting second-order closure model is tested in a variety of compressible turbulent flows which include the decay of isotropic turbulence, homogeneous shear flow, the supersonic mixing layer, and the supersonic flat-plate turbulent boundary layer. Comparisons between the model predictions and the results of physical and numerical experiments are quite encouraging.

  19. A reevaluation of the infrared-radio correlation for spiral galaxies

    NASA Technical Reports Server (NTRS)

    Devereux, Nicholas A.; Eales, Stephen A.

    1989-01-01

    The infrared-radio correlation has been reexamined for a sample of 237 optically bright spiral galaxies which range from 10^8 to 10^11 solar luminosities in far-infrared luminosity. The slope of the correlation is not unity. A simple model in which dust heating by both star formation and the interstellar radiation field contributes to the far-infrared luminosity can account for the nonunity slope. The model differs from previous two-component models, however, in that the relative contribution of the two components is independent of far-infrared color temperature, but is dependent on the far-infrared luminosity.
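
The nonunity slope can be reproduced with a toy version of the two-component picture: radio emission tracks star formation alone, while the far-infrared adds a fixed interstellar-radiation-field ("cirrus") component, so the log-log slope drops below one where the cirrus term matters. All numbers below are illustrative, not the paper's fit.

```python
import math

def loglog_slope(xs, ys):
    # Least-squares slope of log10(y) against log10(x).
    lx = [math.log10(x) for x in xs]
    ly = [math.log10(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

l_sf = [10 ** (k / 4) for k in range(4, 25)]  # star-formation luminosity (arbitrary units)
l_cirrus = 50.0                               # fixed cirrus contribution (hypothetical)
l_radio = l_sf                                # radio traces star formation only
l_fir = [s + l_cirrus for s in l_sf]          # FIR sums both heating sources
slope = loglog_slope(l_radio, l_fir)
```

The fitted slope comes out below unity even though radio and star formation are perfectly proportional, mirroring the paper's point that a luminosity-dependent mix of two heating components bends the correlation.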

  20. Value of the distant future: Model-independent results

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.

    2017-01-01

    This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive an analytical expression for an apt value of the long-run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive a non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
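
The qualitative result, that long-run discount rates decline when rate fluctuations are persistent, can be checked with a small Monte Carlo under an AR(1) short-rate process, whose autocorrelation decays exponentially as the paper assumes. All parameter values here are illustrative, not calibrated.

```python
import math
import random

def effective_rates(mu=0.03, sigma=0.01, phi=0.9, horizon=100, n_paths=4000, seed=1):
    # Average the stochastic discount factor exp(-sum of rates) over paths,
    # then convert to a per-period effective discount rate at two horizons.
    random.seed(seed)
    d_short = d_long = 0.0
    for _ in range(n_paths):
        r, cum = mu, 0.0
        for t in range(1, horizon + 1):
            r = mu + phi * (r - mu) + random.gauss(0.0, sigma)  # AR(1) short rate
            cum += r
            if t == 1:
                d_short += math.exp(-cum)
        d_long += math.exp(-cum)
    d_short /= n_paths
    d_long /= n_paths
    return -math.log(d_short), -math.log(d_long) / horizon

r_short, r_long = effective_rates()
```

Jensen's inequality makes E[exp(-Σr)] exceed exp(-E[Σr]); because persistence (`phi`) inflates the variance of the cumulative rate, the effective long-horizon rate `r_long` falls below the short-horizon rate `r_short`, which is the declining tail of the discount curve.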

  1. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local-global reference frames.

    PubMed

    Knierim, James J; Neunuebel, Joshua P; Deshmukh, Sachin S

    2014-02-05

    The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between 'where' versus 'what' needs revision. We propose a refinement of this model, which is more complex than the simple spatial-non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience.

  3. Simple, validated vaginal birth after cesarean delivery prediction model for use at the time of admission.

    PubMed

    Metz, Torri D; Stoddard, Gregory J; Henry, Erick; Jackson, Marc; Holmgren, Calla; Esplin, Sean

    2013-09-01

    To create a simple tool for predicting the likelihood of successful trial of labor after cesarean delivery (TOLAC) during the pregnancy after a primary cesarean delivery using variables available at the time of admission. Data for all deliveries at 14 regional hospitals over an 8-year period were reviewed. Women with one cesarean delivery and one subsequent delivery were included. Variables associated with successful VBAC were identified using multivariable logistic regression. Points were assigned to these characteristics, with weighting based on the coefficients in the regression model, to calculate an integer VBAC score. The VBAC score was correlated with TOLAC success rate and was externally validated in an independent cohort using a logistic regression model. A total of 5,445 women met inclusion criteria. Of those women, 1,170 (21.5%) underwent TOLAC. Of the women who underwent trial of labor, 938 (80%) had a successful VBAC. A VBAC score was generated based on the Bishop score (cervical examination) at the time of admission, with points added for history of vaginal birth, age younger than 35 years, absence of recurrent indication, and body mass index less than 30. Women with a VBAC score less than 10 had a likelihood of TOLAC success less than 50%. Women with a VBAC score more than 16 had a TOLAC success rate more than 85%. The model performed well in an independent cohort with an area under the curve of 0.80 (95% confidence interval 0.76-0.84). Prediction of TOLAC success at the time of admission is highly dependent on the initial cervical examination. This simple VBAC score can be utilized when counseling women considering TOLAC. Level of evidence: II.
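
The scoring construction, weighting admission-time predictors by their regression coefficients and rounding to integer points, is easy to sketch. The predictor names and coefficient values below are hypothetical placeholders, not the published model.

```python
def integer_points(coefficients):
    # Scale each logistic-regression coefficient by the smallest one and
    # round, turning model weights into whole-number score points.
    base = min(abs(c) for c in coefficients.values())
    return {name: round(c / base) for name, c in coefficients.items()}

# Hypothetical coefficients for admission-time predictors.
coefs = {
    "prior_vaginal_birth": 1.2,
    "age_under_35": 0.4,
    "bmi_under_30": 0.5,
    "no_recurrent_indication": 0.8,
}
points = integer_points(coefs)
score = sum(points[p] for p in ("prior_vaginal_birth", "age_under_35"))
```

In the published model the points are added to the admission Bishop score; a higher total maps to a higher predicted chance of successful TOLAC.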

  4. Energetics and dynamics of simple impulsive solar flares

    NASA Technical Reports Server (NTRS)

    Starr, R.; Heindl, W. A.; Crannell, C. J.; Thomas, R. J.; Batchelor, D. A.; Magun, A.

    1987-01-01

    Flare energetics and dynamics were studied using observations of simple impulsive spike bursts. A large, homogeneous set of events was selected to enable the most definite tests possible of competing flare models, in the absence of spatially resolved observations. The emission mechanisms and specific flare models that were considered in this investigation are described, and the derivations of the parameters that were tested are presented. Results of the correlation analysis between soft and hard X-ray energetics are also presented. The ion conduction front model and tests of that model with the well-observed spike bursts are described. Finally, conclusions drawn from this investigation and suggestions for future studies are discussed.

  5. Neural correlates of the difference between working memory speed and simple sensorimotor speed: an fMRI study.

    PubMed

    Takeuchi, Hikaru; Sugiura, Motoaki; Sassa, Yuko; Sekiguchi, Atsushi; Yomogida, Yukihito; Taki, Yasuyuki; Kawashima, Ryuta

    2012-01-01

    The difference between the speed of simple cognitive processes and the speed of complex cognitive processes has various psychological correlates. However, the neural correlates of this difference have not yet been investigated. In this study, we focused on working memory (WM) for typical complex cognitive processes. Functional magnetic resonance imaging data were acquired during the performance of an N-back task, which is a measure of WM for typical complex cognitive processes. In our N-back task, task speed and memory load were varied to identify the neural correlates responsible for the difference between the speed of simple cognitive processes (estimated from the 0-back task) and the speed of WM. Our findings showed that this difference was characterized by the increased activation in the right dorsolateral prefrontal cortex (DLPFC) and the increased functional interaction between the right DLPFC and right superior parietal lobe. Furthermore, the local gray matter volume of the right DLPFC was correlated with participants' accuracy during fast WM tasks, which in turn correlated with a psychometric measure of participants' intelligence. Our findings indicate that the right DLPFC and its related network are responsible for the execution of the fast cognitive processes involved in WM. Identified neural bases may underlie the psychometric differences between the speed with which subjects perform simple cognitive tasks and the speed with which subjects perform more complex cognitive tasks, and explain the previous traditional psychological findings.

  6. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
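
The paper's warning, that steep-spectrum light curves routinely produce large spurious cross-correlations, can be demonstrated with random walks (a red-noise process with power-law PSD of index 2) as a stand-in for the simulated light curves; the full method additionally handles uneven sampling and general power-law indices.

```python
import random

def pearson(a, b):
    # Zero-lag Pearson correlation coefficient.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def random_walk(n):
    # Integrated white noise: a red-noise series with a steep (1/f^2) PSD.
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0.0, 1.0)
        out.append(x)
    return out

random.seed(2)
n_sim, n = 500, 200
r_null = sorted(abs(pearson(random_walk(n), random_walk(n))) for _ in range(n_sim))
frac_high = sum(r > 0.5 for r in r_null) / n_sim
threshold_99 = r_null[int(0.99 * n_sim)]  # empirical 99%-significance level for |r|
```

A zero-lag |r| above 0.5 between completely unrelated red-noise series is common here, so an observed cross-correlation must be judged against a simulated null distribution such as `r_null` rather than against white-noise significance levels.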

  7. Polarization and amplitude probes in Hanle effect EIT noise spectroscopy of a buffer gas cell

    NASA Astrophysics Data System (ADS)

    O'Leary, Shannon; Zheng, Aojie; Crescimanno, Michael

    2015-05-01

    Noise correlation spectroscopy on systems manifesting Electromagnetically Induced Transparency (EIT) holds promise as a simple, robust method for performing high-resolution spectroscopy used in applications such as EIT-based atomic magnetometry and clocks. While phase-to-intensity noise conversion can diminish the precision of EIT applications, noise correlation techniques transform the noise into a useful spectroscopic tool that can improve the application's precision. We study intensity noise, originating from the large phase noise of a semiconductor diode laser's light, in Rb vapor EIT in the Hanle configuration. We report here on our recent experimental work on and complementary theoretical modeling of the effects of light polarization preparation and post-selection on the correlation spectrum and on the independent noise channel traces. We also explain methodology and recent results for delineating the effects of residual laser amplitude fluctuations on the correlation noise resonance as compared to other contributing processes. Understanding these subtleties is essential for optimizing EIT-noise applications.

  8. Generation of twin beams using four-wave mixing: theory and experiments

    NASA Astrophysics Data System (ADS)

    Glorieux, Quentin; Dubessy, Romain; Guibal, Samuel; Guidoni, Luca; Likforman, Jean Pierre; Coudreau, Thomas; Arimondo, Ennio

    2010-03-01

    Recently, four-wave mixing has drawn large interest as a simple and efficient source of nonclassical light [1]. Using a strong pump (400 mW) propagating in a heated rubidium cell, it is possible to generate quantum correlated beams. The set-up has the advantage of both simplicity (no resonant cavity) and efficiency (we measure up to 9.5 dB of noise reduction below the standard quantum limit). However, up to now, no microscopic model has been proposed for this phenomenon. Here we present for the first time such a model [2], based on the Heisenberg-Langevin input-output formalism [3], and we verify that the classical gain and the quantum correlations are in very good agreement with our experimental data. A new regime of correlation generation in the absence of gain is also proposed. References: [1] C.F. McCormick et al., Opt. Lett. 32, 178 (2007). [2] Q. Glorieux et al., in preparation (2010). [3] P. Kolchin, Phys. Rev. A 75, 33814 (2007).

  9. Algorithm refinement for stochastic partial differential equations: II. Correlated systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.

    2005-08-10

    We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long-ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.

  10. A null model for Pearson coexpression networks.

    PubMed

    Gobbi, Andrea; Jurman, Giuseppe

    2015-01-01

    Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent simple but effective structures for discovering and interpreting linear gene relationships. In recent years, several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred from randomly generated data is expected to be empty. The threshold is theoretically derived by means of an analytic approach and, as a deterministic independent null model, it depends only on the dimensions of the starting data matrix, with assumptions on the skewness of the data distribution compatible with the structure of gene expression levels data. We show, on synthetic and array datasets, that the proposed threshold is effective in eliminating all false positive links, at the offsetting cost of some false-negative edges.
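
An empirical stand-in for the proposed hard threshold is the largest |r| that random data of the same dimensions produce: a coexpression network built from pure noise should come out empty above it. The paper derives this bound analytically; the Monte Carlo below only illustrates its dependence on the data-matrix dimensions.

```python
import random

def pearson(a, b):
    # Pearson correlation coefficient between two profiles.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def empirical_threshold(n_genes, n_samples, trials=30, seed=3):
    # Largest absolute correlation observed among random "expression"
    # profiles of the given shape, over several random matrices.
    random.seed(seed)
    best = 0.0
    for _ in range(trials):
        data = [[random.gauss(0.0, 1.0) for _ in range(n_samples)]
                for _ in range(n_genes)]
        for i in range(n_genes):
            for j in range(i + 1, n_genes):
                best = max(best, abs(pearson(data[i], data[j])))
    return best

few_samples = empirical_threshold(n_genes=20, n_samples=6)
many_samples = empirical_threshold(n_genes=20, n_samples=60)
```

With only six samples, noise alone generates near-perfect correlations; this is exactly the small-sample regime where such a dimension-dependent threshold matters.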

  11. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Hyekyun

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-squares estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model).
In the on-the-fly simulations, prior projections over more than 60° appear to be necessary for reliable estimations. The mean 3D RMSE during beam delivery, after the simple linear model had been established with 90° of prior projection data, was 0.42 mm for VMAT and 0.45 mm for IMRT. Conclusions: The proposed method does not require any internal/external correlation or statistical modeling to estimate the target trajectory and can be used both for retrospective image-guided radiotherapy with CBCT projection images and for real-time target position monitoring during respiratory gating or tracking.
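
The "simple linear model" amounts to regressing the hard-to-see motion component on the easily measured SI motion. A minimal noiseless sketch follows (the actual method fits against 2D projected positions while the gantry rotates; the trace below is synthetic):

```python
import math

def fit_line(x, y):
    # Ordinary least squares for y ≈ a*x + b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(x, y))
         / sum((u - mx) ** 2 for u in x))
    return a, my - a * mx

# Synthetic breathing trace: SI motion and a correlated AP component (mm).
si = [10.0 * math.sin(2 * math.pi * t / 40.0) for t in range(200)]
ap = [0.3 * s + 1.0 for s in si]

a, b = fit_line(si, ap)
ap_estimated = [a * s + b for s in si]  # AP inferred from SI alone
```

Once `a` and `b` have been established from a prior gantry arc, each new projection's SI reading yields an AP estimate, giving a full 3D position per projection.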

  12. Combustion of Nitramine Propellants

    DTIC Science & Technology

    1983-03-01

    through development of a comprehensive analytical model. The ultimate goals are to enable prediction of deflagration rate over a wide pressure range...superior in burn rate prediction, both simple models fail in correlating existing temperature-sensitivity data. (2) In the second part, a...auxiliary condition to enable independent burn rate prediction; improved melt phase model including decomposition-gas bubbles; model for far-field

  13. Stringy horizons and generalized FZZ duality in perturbation theory

    NASA Astrophysics Data System (ADS)

    Giribet, Gaston

    2017-02-01

    We study scattering amplitudes in two-dimensional string theory on a black hole background. We start with a simple derivation of the Fateev-Zamolodchikov-Zamolodchikov (FZZ) duality, which associates correlation functions of the sine-Liouville integrable model on the Riemann sphere to tree-level string amplitudes on the Euclidean two-dimensional black hole. This derivation of FZZ duality is based on perturbation theory, and it relies on a trick originally due to Fateev, which involves duality relations between different Selberg type integrals. This enables us to rewrite the correlation functions of sine-Liouville theory in terms of a special set of correlators in the gauged Wess-Zumino-Witten (WZW) theory, and use this to perform further consistency checks of the recently conjectured Generalized FZZ (GFZZ) duality. In particular, we prove that n-point correlation functions in sine-Liouville theory involving n - 2 winding modes actually coincide with the correlation functions in the SL(2,R)/U(1) gauged WZW model that include n - 2 oscillator operators of the type described by Giveon, Itzhaki and Kutasov in reference [1]. This proves the GFZZ duality for the case of tree level maximally winding violating n-point amplitudes with arbitrary n. We also comment on the connection between GFZZ and other marginal deformations previously considered in the literature.

  14. Design and analysis of simple choice surveys for natural resource management

    USGS Publications Warehouse

    Fieberg, John; Cornicelli, Louis; Fulton, David C.; Grund, Marrett D.

    2010-01-01

    We used a simple yet powerful method for judging public support for management actions from randomized surveys. We asked respondents to rank choices (representing management regulations under consideration) according to their preference, and we then used discrete choice models to estimate probability of choosing among options (conditional on the set of options presented to respondents). Because choices may share similar unmodeled characteristics, the multinomial logit model, commonly applied to discrete choice data, may not be appropriate. We introduced the nested logit model, which offers a simple approach for incorporating correlation among choices. This forced choice survey approach provides a useful method of gathering public input; it is relatively easy to apply in practice, and the data are likely to be more informative than asking constituents to rate attractiveness of each option separately.
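
The contrast the authors draw is between the multinomial logit, which ignores correlation among similar options, and the nested logit, which adds a correlation parameter within each nest. A minimal multinomial logit sketch (the option utilities are hypothetical):

```python
import math

def mnl_probabilities(utilities):
    # Multinomial logit: choice probabilities are a softmax of utilities.
    m = max(utilities)                       # subtract max for numerical stability
    expu = [math.exp(u - m) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

# Three hypothetical regulation options; the last two are similar alternatives.
probs = mnl_probabilities([1.0, 0.5, 0.5])
```

Because the multinomial logit gives every option an independent error term, it splits support evenly between the two similar options no matter how alike they are (the "red bus/blue bus" problem); the nested logit corrects this by correlating the errors of options within a nest.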

  15. Percolation on fitness landscapes: effects of correlation, phenotype, and incompatibilities

    PubMed Central

    Gravner, Janko; Pitman, Damien; Gavrilets, Sergey

    2009-01-01

    We study how correlations in the random fitness assignment may affect the structure of fitness landscapes, in three classes of fitness models. The first is a phenotype space in which individuals are characterized by a large number n of continuously varying traits. In a simple model of random fitness assignment, viable phenotypes are likely to form a giant connected cluster percolating throughout the phenotype space provided the viability probability is larger than 1/2^n. The second model explicitly describes genotype-to-phenotype and phenotype-to-fitness maps, allows for neutrality at both phenotype and fitness levels, and results in a fitness landscape with tunable correlation length. Here, phenotypic neutrality and correlation between fitnesses can reduce the percolation threshold, and correlations at the point of phase transition between local and global are most conducive to the formation of the giant cluster. In the third class of models, particular combinations of alleles or values of phenotypic characters are “incompatible” in the sense that the resulting genotypes or phenotypes have zero fitness. This setting can be viewed as a generalization of the canonical Bateson-Dobzhansky-Muller model of speciation and is related to K-SAT problems, prominent in computer science. We analyze the conditions for the existence of viable genotypes, their number, as well as the structure and the number of connected clusters of viable genotypes. We show that analysis based on expected values can easily lead to wrong conclusions, especially when fitness correlations are strong. We focus on pairwise incompatibilities between diallelic loci, but we also address multiple alleles, complex incompatibilities, and continuous phenotype spaces. In the case of diallelic loci, the number of clusters is stochastically bounded and each cluster contains a very large sub-cube.
Finally, we demonstrate that the discrete NK model shares some signature properties of models with high correlations. PMID:17692873
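
The cluster structure of viable genotypes can be explored directly on a small hypercube: assign viability at random and count connected components among viable genotypes under single-locus mutations. This is a brute-force sketch for diallelic loci with illustrative parameters; the paper's analysis is asymptotic in the number of loci.

```python
import random
from collections import deque

def viable_clusters(n_loci, p_viable, seed=4):
    # Random fitness assignment on the genotype hypercube {0,1}^n: each
    # genotype is viable with probability p_viable; clusters are connected
    # components under single-locus flips (breadth-first search).
    random.seed(seed)
    viable = {g for g in range(2 ** n_loci) if random.random() < p_viable}
    clusters, unseen = [], set(viable)
    while unseen:
        queue, size = deque([unseen.pop()]), 1
        while queue:
            g = queue.popleft()
            for locus in range(n_loci):
                nb = g ^ (1 << locus)   # flip one locus
                if nb in unseen:
                    unseen.remove(nb)
                    queue.append(nb)
                    size += 1
        clusters.append(size)
    return sorted(clusters, reverse=True)

dense = viable_clusters(n_loci=10, p_viable=0.9)
sparse = viable_clusters(n_loci=10, p_viable=0.05)
```

At high viability the viable genotypes form one giant cluster spanning essentially the whole hypercube; at low viability they fragment into many small clusters, which is the transition the paper characterizes analytically.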

  16. Importance of the correlation contribution for local hybrid functionals: range separation and self-interaction corrections.

    PubMed

    Arbuznikov, Alexei V; Kaupp, Martin

    2012-01-07

    Local hybrid functionals with their position-dependent exact-exchange admixture are a conceptually simple and promising extension of the concept of a hybrid functional. Local hybrids based on a simple mixing of the local spin density approximation (LSDA) with exact exchange have been shown to be successful for thermochemistry, reaction barriers, and a range of other properties. So far, the combination of this generation of local hybrids with an LSDA correlation functional has been found to give the most favorable results for atomization energies, for a range of local mixing functions (LMFs) governing the exact-exchange admixture. Here, we show that the choice of correlation functional to be used with local hybrid exchange crucially influences the parameterization also of the exchange part as well as the overall performance. A novel ansatz for the correlation part of local hybrids is suggested based on (i) range-separation of LSDA correlation into short-range (SR) and long-range (LR) parts, and (ii) partial or full elimination of the one-electron self-correlation from the SR part. It is shown that such modified correlation functionals allow overall larger exact exchange admixture in thermochemically competitive local hybrids than before. This results in improvements for reaction barriers and for other properties crucially influenced by self-interaction errors, as demonstrated by a number of examples. Based on the range-separation approach, a fresh view on the breakdown of the correlation energy into dynamical and non-dynamical parts is suggested.

  17. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography).

    PubMed

    Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary

    2015-02-07

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images, or confocal and confocal holographic images, for ready comparison of each method on the exact same field of view. Additional GPU-based complex deconvolution processing of the images further enhances resolution.

  18. A Cellular Automata-based Model for Simulating Restitution Property in a Single Heart Cell.

    PubMed

    Sabzpoushan, Seyed Hojjat; Pourhasanzade, Fateme

    2011-01-01

    Ventricular fibrillation is the cause of most sudden cardiac deaths. Restitution is one of the specific properties of the ventricular cell, and recent findings have clearly demonstrated the correlation between the slope of the restitution curve and ventricular fibrillation; modeling cellular restitution is therefore highly important. A cellular automaton is a powerful tool for simulating complex phenomena in a simple language: a lattice of cells in which the behavior of each cell is determined by the behavior of its neighboring cells together with the automaton rule. In this paper, a simple model is presented for simulating the restitution property in a single cardiac cell using cellular automata. First, two state variables, action potential and recovery, are introduced into the automaton model. Second, the automaton rule is determined, and the recovery variable is defined in such a way that restitution develops. To evaluate the proposed model, the restitution curve generated in our study is compared with restitution curves from published experimental findings. Our findings indicate that the presented model not only simulates restitution in the cardiac cell but can also regulate the restitution curve.
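
Restitution itself is compactly captured by a one-dimensional map: the next action potential duration (APD) is a saturating function of the preceding diastolic interval (DI). The exponential form and all constants below are generic textbook choices, not the automaton's actual rule.

```python
import math

def apd_restitution(di, apd_min=50.0, apd_max=300.0, tau=100.0):
    # Saturating restitution curve: short diastolic intervals give short
    # action potentials; long DIs let APD recover toward apd_max (all in ms).
    return apd_min + (apd_max - apd_min) * (1.0 - math.exp(-max(di, 0.0) / tau))

def steady_state_apd(cycle_length, beats=60, apd=100.0):
    # Pace at a fixed cycle length: DI = CL - APD, iterated beat to beat.
    for _ in range(beats):
        apd = apd_restitution(cycle_length - apd)
    return apd

apd_slow = steady_state_apd(cycle_length=600.0)
apd_fast = steady_state_apd(cycle_length=400.0)
```

The slope of this map at its fixed point is the restitution slope the abstract refers to: when it exceeds one, the beat-to-beat iteration no longer settles and alternans appear, which is the dynamical link to fibrillation.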

  19. Statistical analysis of co-occurrence patterns in microbial presence-absence datasets.

    PubMed

    Mainali, Kumar P; Bewick, Sharon; Thielen, Peter; Mehoke, Thomas; Breitwieser, Florian P; Paudel, Shishir; Adhikari, Arjun; Wolfe, Joshua; Slud, Eric V; Karig, David; Fagan, William F

    2017-01-01

    Drawing on a long history in macroecology, correlation analysis of microbiome datasets is becoming a common practice for identifying relationships or shared ecological niches among bacterial taxa. However, many of the statistical issues that plague such analyses in macroscale communities remain unresolved for microbial communities. Here, we discuss problems in the analysis of microbial species correlations based on presence-absence data. We focus on presence-absence data because this information is more readily obtainable from sequencing studies, especially for whole-genome sequencing, where abundance estimation is still in its infancy. First, we show how Pearson's correlation coefficient (r) and Jaccard's index (J)-two of the most common metrics for correlation analysis of presence-absence data-can contradict each other when applied to a typical microbiome dataset. In our dataset, for example, 14% of species-pairs predicted to be significantly correlated by r were not predicted to be significantly correlated using J, while 37.4% of species-pairs predicted to be significantly correlated by J were not predicted to be significantly correlated using r. Mismatch was particularly common among species-pairs with at least one rare species (<10% prevalence), explaining why r and J might differ more strongly in microbiome datasets, where there are large numbers of rare taxa. Indeed 74% of all species-pairs in our study had at least one rare species. Next, we show how Pearson's correlation coefficient can result in artificial inflation of positive taxon relationships and how this is a particular problem for microbiome studies. We then illustrate how Jaccard's index of similarity (J) can yield improvements over Pearson's correlation coefficient. However, the standard null model for Jaccard's index is flawed, and thus introduces its own set of spurious conclusions. 
We thus identify a better null model based on a hypergeometric distribution, which appropriately corrects for species prevalence. This model is available from recent statistics literature, and can be used for evaluating the significance of any value of an empirically observed Jaccard's index. The resulting simple, yet effective method for handling correlation analysis of microbial presence-absence datasets provides a robust means of testing and finding relationships and/or shared environmental responses among microbial taxa.
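
The three quantities compared can all be computed from a 2x2 co-occurrence summary. The toy vectors below place two rare taxa in 30 samples; the hypergeometric tail probability is the null model the authors recommend. The example data are invented for illustration.

```python
import math

def phi_coefficient(a, b):
    # Pearson's r for two 0/1 vectors reduces to the phi coefficient.
    n, sa, sb = len(a), sum(a), sum(b)
    both = sum(x & y for x, y in zip(a, b))
    return (n * both - sa * sb) / math.sqrt(sa * (n - sa) * sb * (n - sb))

def jaccard(a, b):
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either

def cooccurrence_pvalue(n, ka, kb, both):
    # P(overlap >= both) when ka and kb presences fall independently at
    # random among n samples: the hypergeometric upper tail.
    tail = sum(math.comb(ka, m) * math.comb(n - ka, kb - m)
               for m in range(both, min(ka, kb) + 1))
    return tail / math.comb(n, kb)

a = [1, 1, 1] + [0] * 27     # rare taxon, present in 3 of 30 samples
b = [1, 1, 0, 1] + [0] * 26  # rare taxon, overlap of 2 samples with a
r = phi_coefficient(a, b)
j = jaccard(a, b)
p = cooccurrence_pvalue(30, sum(a), sum(b), 2)
```

Here the overlap of two rare taxa is significant under the hypergeometric null (p ≈ 0.02) while r and J report different-looking effect sizes; this is the kind of disagreement among metrics that the authors quantify.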

  20. Arm Nerve Conduction Velocity (NCV), Brain NCV, Reaction Time, and Intelligence.

    ERIC Educational Resources Information Center

    Reed, T. Edward; Jensen, Arthur R.

    1991-01-01

    Correlations among peripheral nerve conduction velocity (NCV), brain NCV, simple and choice reaction times, and a standard measure of intelligence were investigated for 200 male college students. No correlation was found between any arm NCV and the intelligence score. Neurophysiological bases of human information processing and intelligence are…

  1. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    PubMed

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data, with r(2) of 0.871 and 0.788 for the organophosphate and carbamate groups, respectively, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also validated successfully against external validation criteria. The QSAR models developed in this study should aid the further design of novel, potent insecticides.
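
    The descriptor pre-screening described above (missing-value, zero-value, simple-correlation and multicollinearity tests) can be sketched as follows; the thresholds, column names and data are illustrative assumptions, and the genetic-algorithm step is omitted.

```python
import numpy as np
import pandas as pd

def filter_descriptors(X, y, min_abs_r=0.1, max_pair_r=0.9):
    """Pre-screen a descriptor table: drop columns with missing values,
    all-zero columns, columns weakly correlated with activity, and
    columns collinear with an already-kept descriptor."""
    X = X.dropna(axis=1)                        # missing-value test
    X = X.loc[:, (X != 0).any()]                # zero-value test
    r = X.apply(lambda col: np.corrcoef(col, y)[0, 1])
    X = X.loc[:, r.abs() >= min_abs_r]          # simple-correlation test
    chosen = []
    for col in X.columns:                       # multicollinearity test
        if all(abs(np.corrcoef(X[col], X[k])[0, 1]) < max_pair_r
               for k in chosen):
            chosen.append(col)
    return X[chosen]

X = pd.DataFrame({
    "a": [1, 2, 3, 4, 5],             # informative descriptor
    "b": [2, 4, 6, 8, 10],            # collinear with "a"
    "c": [0, 0, 0, 0, 0],             # all zeros
    "d": [1.0, None, 3.0, 4.0, 5.0],  # missing value
    "e": [1, -1, 1, -1, 1],           # uncorrelated with activity
})
y = np.array([1.1, 2.0, 2.9, 4.2, 5.0])  # activities
kept = filter_descriptors(X, y)           # only "a" survives
```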

  2. Predicting translational deformity following opening-wedge osteotomy for lower limb realignment.

    PubMed

    Barksfield, Richard C; Monsell, Fergal P

    2015-11-01

    An opening-wedge osteotomy is well recognised for the management of limb deformity and requires an understanding of the principles of geometry. Translation at the osteotomy is needed when the osteotomy is performed away from the centre of rotation of angulation (CORA), and the amount of translation varies with the distance from the CORA. This translation enables the proximal and distal axes on either side of the proposed osteotomy to realign. We developed two experimental models to establish whether the amount of translation required (based on the translational deformity created) can be predicted using simple trigonometry. A predictive algorithm was derived in which translational deformity was predicted as 2(tan α × d), where α represents 50% of the desired angular correction and d is the distance of the desired osteotomy site from the CORA. A simulated model was developed using the TraumaCad online digital software suite (Brainlab AG, Germany). Osteotomies were simulated in the distal femur, proximal tibia and distal tibia for nine sets of lower limb scanograms at incremental distances from the CORA, and the resulting translational deformity was recorded. There was strong correlation between the distance of the osteotomy from the CORA and simulated translational deformity for distal femoral deformities (correlation coefficient 0.99, p < 0.0001), proximal tibial deformities (correlation coefficient 0.93-0.99, p < 0.0001) and distal tibial deformities (correlation coefficient 0.99, p < 0.0001). There was excellent agreement between the predictive algorithm and simulated translational deformity for all nine simulations (correlation coefficient 0.93-0.99, p < 0.0001). Translational deformity following corrective osteotomy for lower limb deformity can be anticipated and predicted from the angular correction and the distance between the planned osteotomy site and the CORA.
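
    The predictive algorithm reduces to one line of trigonometry. A minimal sketch with hypothetical inputs:

```python
import math

def predicted_translation(correction_deg, d):
    """Translational deformity 2*(tan(alpha) * d), where alpha is half
    the desired angular correction and d is the distance of the
    osteotomy from the CORA (any length unit)."""
    alpha = math.radians(correction_deg / 2.0)
    return 2.0 * math.tan(alpha) * d

# hypothetical case: 20 degree correction performed 30 mm from the CORA
t_mm = predicted_translation(20.0, 30.0)   # roughly 10.6 mm
```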

  3. The standard mean-field treatment of inter-particle attraction in classical DFT is better than one might expect

    NASA Astrophysics Data System (ADS)

    Archer, Andrew J.; Chacko, Blesson; Evans, Robert

    2017-07-01

    In classical density functional theory (DFT), the part of the Helmholtz free energy functional arising from attractive inter-particle interactions is often treated in a mean-field or van der Waals approximation. On the face of it, this is a somewhat crude treatment as the resulting functional generates the simple random phase approximation (RPA) for the bulk fluid pair direct correlation function. We explain why using standard mean-field DFT to describe inhomogeneous fluid structure and thermodynamics is more accurate than one might expect based on this observation. By considering the pair correlation function g(x) and structure factor S(k) of a one-dimensional model fluid, for which exact results are available, we show that the mean-field DFT, employed within the test-particle procedure, yields results much superior to those from the RPA closure of the bulk Ornstein-Zernike equation. We argue that one should not judge the quality of a DFT based solely on the approximation it generates for the bulk pair direct correlation function.

  4. A Novel Optical/digital Processing System for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Boone, Bradley G.; Shukla, Oodaye B.

    1993-01-01

    This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results for angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation test and evaluation using simple synthetic object data will be described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.
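
    The Radon transform step can be sketched numerically by rotating the image and summing along one axis. This is a digital analogue for illustration only, not the paper's optical implementation, and the example object is invented:

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    """Discrete Radon transform: one projection (column sums of the
    rotated image) per requested angle."""
    return np.stack([rotate(image, angle, reshape=False, order=1).sum(axis=0)
                     for angle in angles_deg])

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0                    # a centred square "object"
sino = radon(img, np.arange(0, 180, 15))   # projections at 12 angles
```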

  5. Development of a rapid, simple assay of plasma total carotenoids

    PubMed Central

    2012-01-01

    Background Plasma total carotenoids can be used as an indicator of risk of chronic disease. Laboratory analysis of individual carotenoids by high performance liquid chromatography (HPLC) is time consuming, expensive, and not amenable to use beyond a research laboratory. The aim of this research is to establish a rapid, simple, and inexpensive spectrophotometric assay of plasma total carotenoids that has a very strong correlation with HPLC carotenoid profile analysis. Results Plasma total carotenoids from 29 volunteers ranged in concentration from 1.2 to 7.4 μM, as analyzed by HPLC. A linear correlation was found between the absorbance at 448 nm of an alcohol / heptane extract of the plasma and plasma total carotenoids analyzed by HPLC, with a Pearson correlation coefficient of 0.989. The average coefficient of variation for the spectrophotometric assay was 6.5% for the plasma samples. The limit of detection was about 0.3 μM, and the assay was linear up to about 34 μM without dilution. Correlations between the integrals of the absorption spectra in the range of carotenoid absorption and total plasma carotenoid concentration gave similar results to the absorbance correlation. Spectrophotometric assay results also agreed with the calculated expected absorbance based on published extinction coefficients for the individual carotenoids, with a Pearson correlation coefficient of 0.988. Conclusion The spectrophotometric assay of total carotenoids strongly correlated with HPLC analysis of carotenoids of the same plasma samples and expected absorbance values based on extinction coefficients. This rapid, simple, inexpensive assay, when coupled with the carotenoid health index, may be useful for nutrition intervention studies, population cohort studies, and public health interventions. PMID:23006902
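
    The calibration underlying such an assay is a simple linear fit of HPLC concentration against absorbance. A sketch with made-up values (not the study's data):

```python
import numpy as np

# Made-up calibration data: plasma total carotenoids by HPLC (uM)
# against absorbance of the extract at 448 nm.
hplc_um = np.array([1.2, 2.0, 3.1, 4.4, 5.6, 7.4])
a448 = np.array([0.045, 0.078, 0.118, 0.171, 0.214, 0.287])

r = np.corrcoef(a448, hplc_um)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(a448, hplc_um, 1)  # calibration line

def carotenoids_from_absorbance(a):
    """Estimate total carotenoids (uM) from a 448 nm absorbance reading."""
    return slope * a + intercept
```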

  6. FAST Model Calibration and Validation of the OC5- DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  7. Computational modeling approaches to quantitative structure-binding kinetics relationships in drug discovery.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2018-03-21

    Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
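
    The interplay of kinetic rates and binding affinity rests on the standard relations K_d = k_off/k_on and ΔG = RT ln K_d, which can be sketched as follows (the example rate constants are hypothetical):

```python
import math

R_KJ = 8.314462618e-3   # gas constant, kJ/(mol*K)

def affinity_from_kinetics(k_on, k_off, temp_k=298.15):
    """Dissociation constant K_d = k_off / k_on (M) and the binding
    free energy dG = RT ln K_d (kJ/mol, negative for tight binders)."""
    kd = k_off / k_on
    dg = R_KJ * temp_k * math.log(kd)
    return kd, dg

# hypothetical ligand: k_on = 1e6 /M/s, k_off = 1e-2 /s
kd, dg = affinity_from_kinetics(1e6, 1e-2)   # kd = 1e-8 M (10 nM)
```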

  8. Volatility of linear and nonlinear time series

    NASA Astrophysics Data System (ADS)

    Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo

    2005-07-01

    Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui , can be detected and quantified by studying the correlations in the magnitude series ∣ui∣ , the “volatility.” However, the origin of this empirical observation remains unclear and the exact relation between the correlations in ui and the correlations in ∣ui∣ is still unknown. Here we develop analytical relations between the scaling exponent of linear series ui and its magnitude series ∣ui∣ . Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (that represents the magnitude series) with uncorrelated time series [that represents the sign series sgn(ui) ]. We apply our techniques on daily deep ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) broad multifractal spectrum.
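
    The proposed construction, a long-range correlated magnitude series multiplied by an uncorrelated sign series, can be sketched as follows; the Fourier-filtering generator and its exponent are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def long_range_correlated(n, beta=0.8):
    """Gaussian series with a power-law (1/f^beta) spectrum, built by
    Fourier filtering random phases (a standard construction)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    return (x - x.mean()) / x.std()

n = 4096
magnitude = np.abs(long_range_correlated(n))  # long-range correlated magnitudes
sign = rng.choice([-1.0, 1.0], n)             # uncorrelated sign series
series = magnitude * sign                     # nonlinear multifractal series
```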

  9. Hospital-based nurses' perceptions of the adoption of Web 2.0 tools for knowledge sharing, learning, social interaction and the production of collective intelligence.

    PubMed

    Lau, Adela S M

    2011-11-11

    Web 2.0 provides a platform or a set of tools such as blogs, wikis, really simple syndication (RSS), podcasts, tags, social bookmarks, and social networking software for knowledge sharing, learning, social interaction, and the production of collective intelligence in a virtual environment. Web 2.0 is also becoming increasingly popular in e-learning and e-social communities. The objectives were to investigate how Web 2.0 tools can be applied for knowledge sharing, learning, social interaction, and the production of collective intelligence in the nursing domain and to investigate what behavioral perceptions are involved in the adoption of Web 2.0 tools by nurses. The decomposed technology acceptance model was applied to construct the research model on which the hypotheses were based. A questionnaire was developed based on the model and data from nurses (n = 388) were collected from late January 2009 until April 30, 2009. Pearson's correlation analysis and t tests were used for data analysis. Intention toward using Web 2.0 tools was positively correlated with usage behavior (r = .60, P < .05). Behavioral intention was positively correlated with attitude (r = .72, P < .05), perceived behavioral control (r = .58, P < .05), and subjective norm (r = .45, P < .05). In their decomposed constructs, perceived usefulness (r = .7, P < .05), relative advantage (r = .64, P < .05), and compatibility (r = .60, P < .05) were positively correlated with attitude, but perceived ease of use was not significantly correlated (r = .004, P < .05) with it. Peer (r = .47, P < .05), senior management (r = .24, P < .05), and hospital (r = .45, P < .05) influences had positive correlations with subjective norm. Resource (r = .41, P < .05) and technological (r = .69, P < .05) conditions were positively correlated with perceived behavioral control.
The identified behavioral perceptions may further health policy makers' understanding of nurses' concerns regarding and barriers to the adoption of Web 2.0 tools and enable them to better plan the strategy of implementation of Web 2.0 tools for knowledge sharing, learning, social interaction, and the production of collective intelligence.

  10. Hospital-Based Nurses’ Perceptions of the Adoption of Web 2.0 Tools for Knowledge Sharing, Learning, Social Interaction and the Production of Collective Intelligence

    PubMed Central

    2011-01-01

    Background Web 2.0 provides a platform or a set of tools such as blogs, wikis, really simple syndication (RSS), podcasts, tags, social bookmarks, and social networking software for knowledge sharing, learning, social interaction, and the production of collective intelligence in a virtual environment. Web 2.0 is also becoming increasingly popular in e-learning and e-social communities. Objectives The objectives were to investigate how Web 2.0 tools can be applied for knowledge sharing, learning, social interaction, and the production of collective intelligence in the nursing domain and to investigate what behavioral perceptions are involved in the adoption of Web 2.0 tools by nurses. Methods The decomposed technology acceptance model was applied to construct the research model on which the hypotheses were based. A questionnaire was developed based on the model and data from nurses (n = 388) were collected from late January 2009 until April 30, 2009. Pearson’s correlation analysis and t tests were used for data analysis. Results Intention toward using Web 2.0 tools was positively correlated with usage behavior (r = .60, P < .05). Behavioral intention was positively correlated with attitude (r = .72, P < .05), perceived behavioral control (r = .58, P < .05), and subjective norm (r = .45, P < .05). In their decomposed constructs, perceived usefulness (r = .7, P < .05), relative advantage (r = .64, P < .05), and compatibility (r = .60, P < .05) were positively correlated with attitude, but perceived ease of use was not significantly correlated (r = .004, P < .05) with it. Peer (r = .47, P < .05), senior management (r = .24, P < .05), and hospital (r = .45, P < .05) influences had positive correlations with subjective norm. Resource (r = .41, P < .05) and technological (r = .69, P < .05) conditions were positively correlated with perceived behavioral control. 
Conclusions The identified behavioral perceptions may further health policy makers’ understanding of nurses’ concerns regarding and barriers to the adoption of Web 2.0 tools and enable them to better plan the strategy of implementation of Web 2.0 tools for knowledge sharing, learning, social interaction, and the production of collective intelligence. PMID:22079851

  11. Description of a New Predictive Modeling Approach That Correlates the Risk and Associated Cost of Well-Defined Diabetes-Related Complications With Changes in Glycated Hemoglobin (HbA1c)

    PubMed Central

    Fortwaengler, Kurt; Parkin, Christopher G.; Neeser, Kurt; Neumann, Monika; Mast, Oliver

    2017-01-01

    The modeling approach described here is designed to support the development of simple, spreadsheet-based predictive models. It is based on 3 pillars: association of the complications with HbA1c changes, incidence of the complications, and average cost per event of the complication. For each pillar, the goal of the analysis was (1) to find results for a large diversity of populations with a focus on countries/regions, diabetes type, age, diabetes duration, baseline HbA1c value, and gender; (2) to assess the range of incidences and associations previously reported. Unlike simple predictive models, which are mostly based on only one source of information for each of the pillars, we conducted a comprehensive, systematic literature review. Each source found was thoroughly reviewed and only sources meeting quality expectations were considered. This approach avoids the unintended use of extreme data. The user can use (1) one of the identified sources, (2) the identified range as validation for chosen figures, or (3) the average of all identified publications for an expedited estimate. The modeling approach is intended for use in average insulin-treated diabetes populations in which the baseline HbA1c values are within an average range (6.5% to 11.5%); it is not intended for use in individuals or unique diabetes populations (eg, gestational diabetes). Because the modeling approach only considers diabetes-related complications that are positively associated with HbA1c decreases, the costs of negatively associated complications (eg, severe hypoglycemic events) must be calculated separately. PMID:27510441
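
    The three-pillar structure (association with HbA1c, incidence, cost per event) implies a simple expected-cost calculation. A toy sketch, with entirely hypothetical numbers rather than values from the review:

```python
def expected_cost_change(incidence, rel_change_per_pct, delta_hba1c,
                         cost_per_event, n_patients):
    """Toy version of the three-pillar calculation: absolute change in
    complication incidence times cost per event, scaled to a population.
    All inputs here are hypothetical, not figures from the study."""
    abs_risk_change = incidence * rel_change_per_pct * delta_hba1c
    return abs_risk_change * cost_per_event * n_patients

# e.g. 5% annual incidence, 15% relative reduction per 1% HbA1c drop,
# a 1.0% HbA1c reduction, $12,000 per event, 1,000 patients
saving = expected_cost_change(0.05, 0.15, 1.0, 12000.0, 1000)
```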

  12. Tracing the origin of azimuthal gluon correlations in the color glass condensate

    NASA Astrophysics Data System (ADS)

    Lappi, T.; Schenke, B.; Schlichting, S.; Venugopalan, R.

    2016-01-01

    We examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. We show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.

  13. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  14. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
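
    One member of the model class described, with transformed time scale Λ(t) = t^q, can be simulated as follows; the parameter values are illustrative and the MLE step is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)

def degradation_path(t, mu, sigma_b, q, sigma_eps):
    """One observed path of a generalized Wiener degradation process
    X(t) = mu*L(t) + sigma_b*B(L(t)) + eps, with transformed time
    scale L(t) = t**q and i.i.d. measurement error eps."""
    lam = t ** q
    dlam = np.diff(lam, prepend=0.0)
    brownian = np.cumsum(rng.normal(0.0, np.sqrt(dlam)))  # B(L(t))
    return mu * lam + sigma_b * brownian + rng.normal(0.0, sigma_eps, t.size)

t = np.linspace(0.0, 10.0, 101)
path = degradation_path(t, mu=0.5, sigma_b=0.2, q=0.8, sigma_eps=0.05)
```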

  15. Iterative method for in situ measurement of lens aberrations in lithographic tools using CTC-based quadratic aberration model.

    PubMed

    Liu, Shiyuan; Xu, Shuang; Wu, Xiaofei; Liu, Wei

    2012-06-18

    This paper proposes an iterative method for in situ lens aberration measurement in lithographic tools based on a quadratic aberration model (QAM) that is a natural extension of the linear model formed by taking into account interactions among individual Zernike coefficients. By introducing a generalized operator named cross triple correlation (CTC), the quadratic model can be calculated very quickly and accurately with the help of fast Fourier transform (FFT). The Zernike coefficients up to the 37th order or even higher are determined by solving an inverse problem through an iterative procedure from several through-focus aerial images of a specially designed mask pattern. The simulation work has validated the theoretical derivation and confirms that such a method is simple to implement and yields a superior quality of wavefront estimate, particularly for the case when the aberrations are relatively large. It is fully expected that this method will provide a useful practical means for the in-line monitoring of the imaging quality of lithographic tools.

  16. Network modelling methods for FMRI.

    PubMed

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
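
    The advantage of partial correlation over simple correlation can be illustrated on a toy chain network A → B → C: A and C correlate marginally even though they share no direct connection, while their partial correlation (conditioning on B) vanishes. All values below are synthetic, not FMRI data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
a = rng.normal(size=n)
b = a + 0.5 * rng.normal(size=n)   # B is driven by A
c = b + 0.5 * rng.normal(size=n)   # C is driven by B only

ts = np.vstack([a, b, c])
full_r = np.corrcoef(ts)             # marginal correlations
prec = np.linalg.inv(np.cov(ts))     # precision matrix
d = np.sqrt(np.diag(prec))
partial_r = -prec / np.outer(d, d)   # off-diagonal entries = partial correlations
```

The marginal correlation between A and C is large (they share B's signal), while the A–C partial correlation is near zero, correctly revealing the absent direct edge.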

  17. Particle-based simulations of self-motile suspensions

    NASA Astrophysics Data System (ADS)

    Hinz, Denis F.; Panchenko, Alexander; Kim, Tae-Yeon; Fried, Eliot

    2015-11-01

    A simple model for simulating flows of active suspensions is investigated. The approach is based on dissipative particle dynamics. While the model is potentially applicable to a wide range of self-propelled particle systems, the specific class of self-motile bacterial suspensions is considered as a modeling scenario. To mimic the rod-like geometry of a bacterium, two dissipative particle dynamics particles are connected by a stiff harmonic spring to form an aggregate dissipative particle dynamics molecule. Bacterial motility is modeled through a constant self-propulsion force applied along the axis of each such aggregate molecule. The model accounts for hydrodynamic interactions between self-propelled agents through the pairwise dissipative interactions conventional to dissipative particle dynamics. Numerical simulations are performed using a customized version of the open-source LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) software package. Detailed studies of the influence of agent concentration, pairwise dissipative interactions, and Stokes friction on the statistics of the system are provided. The simulations are used to explore the influence of hydrodynamic interactions in active suspensions. For high agent concentrations in combination with dominating pairwise dissipative forces, strongly correlated motion patterns and fluid-like spectral distributions of kinetic energy are found. In contrast, systems dominated by Stokes friction exhibit weaker spatial correlations of the velocity field. These results indicate that hydrodynamic interactions may play an important role in the formation of spatially extended structures in active suspensions.

  18. Rich structure in the correlation matrix spectra in non-equilibrium steady states

    NASA Astrophysics Data System (ADS)

    Biswas, Soham; Leyvraz, Francois; Monroy Castillero, Paulino; Seligman, Thomas H.

    2017-01-01

    It has been shown that, if a model displays long-range (power-law) spatial correlations, its equal-time correlation matrix will also have a power law tail in the distribution of its high-lying eigenvalues. The purpose of this paper is to show that the converse is generally incorrect: a power-law tail in the high-lying eigenvalues of the correlation matrix may exist even in the absence of equal-time power law correlations in the initial model. We may therefore view the study of the eigenvalue distribution of the correlation matrix as a more powerful tool than the study of spatial correlations, one which may in fact uncover structure that would otherwise not be apparent. Specifically, we show that in the Totally Asymmetric Simple Exclusion Process, whereas there are no clearly visible correlations in the steady state, the eigenvalues of its correlation matrix exhibit a rich structure which we describe in detail.
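
    A numerical sketch of this kind of analysis: simulate a small TASEP ring, build the equal-time correlation matrix of site occupations, and inspect its eigenvalue spectrum. Parameters and update scheme are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

def tasep_samples(n_sites=30, density=0.5, n_sweeps=3000, burn_in=500):
    """Random-sequential TASEP on a ring: particles hop one site to the
    right when the target is empty. Returns steady-state occupation
    snapshots, one per post-burn-in sweep."""
    occ = np.zeros(n_sites, dtype=int)
    occ[rng.choice(n_sites, int(density * n_sites), replace=False)] = 1
    snaps = []
    for sweep in range(n_sweeps):
        for _ in range(n_sites):            # one sweep = n_sites attempts
            i = rng.integers(n_sites)
            j = (i + 1) % n_sites
            if occ[i] == 1 and occ[j] == 0:
                occ[i], occ[j] = 0, 1
        if sweep >= burn_in:
            snaps.append(occ.copy())
    return np.array(snaps)

snaps = tasep_samples()
corr = np.corrcoef(snaps.T)                     # equal-time correlation matrix
eig = np.sort(np.linalg.eigvalsh(corr))[::-1]   # spectrum, descending
```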

  19. Rich structure in the correlation matrix spectra in non-equilibrium steady states.

    PubMed

    Biswas, Soham; Leyvraz, Francois; Monroy Castillero, Paulino; Seligman, Thomas H

    2017-01-17

    It has been shown that, if a model displays long-range (power-law) spatial correlations, its equal-time correlation matrix will also have a power law tail in the distribution of its high-lying eigenvalues. The purpose of this paper is to show that the converse is generally incorrect: a power-law tail in the high-lying eigenvalues of the correlation matrix may exist even in the absence of equal-time power law correlations in the initial model. We may therefore view the study of the eigenvalue distribution of the correlation matrix as a more powerful tool than the study of spatial correlations, one which may in fact uncover structure that would otherwise not be apparent. Specifically, we show that in the Totally Asymmetric Simple Exclusion Process, whereas there are no clearly visible correlations in the steady state, the eigenvalues of its correlation matrix exhibit a rich structure which we describe in detail.

  20. Noise correlations in cosmic microwave background experiments

    NASA Technical Reports Server (NTRS)

    Dodelson, Scott; Kosowsky, Arthur; Myers, Steven T.

    1995-01-01

    Many analyses of microwave background experiments neglect the correlation of noise in different frequency or polarization channels. We show that these correlations, should they be present, can lead to severe misinterpretation of an experiment. In particular, correlated noise arising from either electronics or atmosphere may mimic a cosmic signal. We quantify how the likelihood function for a given experiment varies with noise correlation, using both simple analytic models and actual data. For a typical microwave background anisotropy experiment, noise correlations at the level of 1% of the overall noise can seriously reduce the significance of a given detection.

  1. Refinement of the probability density function model for preferential concentration of aerosol particles in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Zaichik, Leonid I.; Alipchenkov, Vladimir M.

    2007-11-01

    The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained due to refining the description of the turbulent velocity field of the carrier fluid by including a difference between the time scales of the strain and rotation rate correlations. The refined model results in a better agreement with direct numerical simulations for aerosol particles.

  2. Two-qubit correlations revisited: average mutual information, relevant (and useful) observables and an application to remote state preparation

    NASA Astrophysics Data System (ADS)

    Giorda, Paolo; Allegra, Michele

    2017-07-01

    Understanding how correlations can be used for quantum communication protocols is a central goal of quantum information science. While many authors have linked the global measures of correlations such as entanglement or discord to the performance of specific protocols, in general the latter may require only correlations between specific observables. In this work, we first introduce a general measure of correlations for two-qubit states, based on the classical mutual information between local observables. Our measure depends on the state’s purity and the symmetry in the correlation distribution, according to which we provide a classification of maximally mixed marginal states (MMMS). We discuss the complementarity relation between correlations and coherence. By focusing on a simple yet paradigmatic example, i.e. the remote state preparation protocol, we introduce a method to systematically define the proper protocol-tailored measures of the correlations. The method is based on the identification of those correlations that are relevant (useful) for the protocol. On the one hand, the approach allows the role of the symmetry of the correlation distribution to be discussed in determining the efficiency of the protocol, both for MMMS and general two-qubit quantum states, and on the other hand, it allows an optimized protocol for non-MMMS to be devised, which is more efficient with respect to the standard one. Overall, our findings clarify how the key resources in simple communication protocols are the purity of the state used and the symmetry of the correlation distribution.
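
    The observable-level measure of correlations has a simple limiting case that is easy to check numerically: the classical mutual information between local σz measurements on a Bell state is exactly 1 bit. The sketch below is an illustration of that textbook fact, not the authors' code:

```python
import numpy as np

def mutual_information(p):
    """Classical mutual information (in bits) of a joint distribution p[a, b]."""
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    mi = 0.0
    for a in range(p.shape[0]):
        for b in range(p.shape[1]):
            if p[a, b] > 0:
                mi += p[a, b] * np.log2(p[a, b] / (pa[a] * pb[b]))
    return mi

# Two-qubit Bell state |Phi+> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# Joint outcome probabilities for local sigma_z measurements: the diagonal
# of rho in the computational basis, indexed as (a, b) -> 2*a + b.
p = np.array([[rho[2 * a + b, 2 * a + b].real for b in range(2)] for a in range(2)])

print(mutual_information(p))  # perfectly correlated outcomes: 1 bit
```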

  3. Estimating Infiltration Rates for a Loessal Silt Loam Using Soil Properties

    Treesearch

    M. Dean Knighton

    1978-01-01

    Soil properties were related to infiltration rates as measured by single-ring steady-head infiltrometers. The properties showing strong simple correlations were identified. Regression models were developed to estimate infiltration rate from several soil properties. The best model gave fair agreement to measured rates at another location.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lappi, T.; Schenke, B.; Schlichting, S.

    Here we examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. Lastly, we show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.

  5. Extremism without extremists: Deffuant model with emotions

    NASA Astrophysics Data System (ADS)

    Sobkowicz, Pawel

    2015-03-01

    The frequent occurrence of extremist views in many social contexts, often growing from small minorities to almost total majority, poses a significant challenge for democratic societies. The phenomenon can be described within the sociophysical paradigm. We present a modified version of the continuous bounded confidence opinion model, including a simple description of the influence of emotions on tolerances, and eventually on the evolution of opinions. Allowing for psychologically based correlation between the extreme opinions, high emotions and low tolerance for other people's views leads to quick dominance of the extreme views within the studied model, without introducing a special class of agents, as has been done in previous works. This dominance occurs even if the initial number of people with extreme opinions is very small. Possible suggestions related to mitigation of the process are briefly discussed.

  6. System analysis of a piston steam engine employing the uniflow principle, a study in optimized performance

    NASA Technical Reports Server (NTRS)

    Peoples, J. A.

    1975-01-01

    Results are reported which were obtained from a mathematical model of a generalized piston steam engine configuration employing the uniflow principle. The model accounted for the effects of clearance volume, compression work, and release volume. A simple solution is presented which characterizes optimum performance of the steam engine, based on miles per gallon. Development of the mathematical model is presented. The relationship between efficiency and miles per gallon is developed. An approach to steam car analysis and design is presented which proceeds with purpose rather than lucky hopefulness. A practical engine design is proposed which correlates to the definition of the type engine used. This engine integrates several system components into the engine structure. All conclusions relate to the classical Rankine Cycle.

  7. A model of interval timing by neural integration.

    PubMed

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
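
    The core mechanism described here, noisy linear accumulation to a response threshold, can be sketched in a few lines. The parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_passage_times(drift, noise, threshold=1.0, dt=0.001, n_trials=500):
    """First-passage times of a noisy accumulator: x += drift*dt + noise*dW."""
    times = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        times[i] = t
    return times

# Timing a longer interval corresponds to a lower drift; the mean
# first-passage time is roughly threshold/drift.
t_short = first_passage_times(drift=2.0, noise=0.2)
t_long = first_passage_times(drift=1.0, noise=0.2)
print(t_short.mean(), t_long.mean())  # roughly 0.5 and 1.0
```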

  8. Discrimination of Native-like States of Membrane Proteins with Implicit Membrane-based Scoring Functions.

    PubMed

    Dutagaci, Bercem; Wittayanarakul, Kitiyaporn; Mori, Takaharu; Feig, Michael

    2017-06-13

    A scoring protocol based on implicit membrane-based scoring functions and a new protocol for optimizing the positioning of proteins inside the membrane was evaluated for its capacity to discriminate native-like states from misfolded decoys. A decoy set previously established by the Baker lab (Proteins: Struct., Funct., Genet. 2006, 62, 1010-1025) was used along with a second set that was generated to cover higher resolution models. The Implicit Membrane Model 1 (IMM1), IMM1 model with CHARMM 36 parameters (IMM1-p36), generalized Born with simple switching (GBSW), and heterogeneous dielectric generalized Born versions 2 (HDGBv2) and 3 (HDGBv3) were tested along with the new HDGB van der Waals (HDGBvdW) model that adds implicit van der Waals contributions to the solvation free energy. For comparison, scores were also calculated with the distance-scaled finite ideal-gas reference (DFIRE) scoring function. Z-scores for native state discrimination, energy vs root-mean-square deviation (RMSD) correlations, and the ability to select the most native-like structures as top-scoring decoys were evaluated to assess the performance of the scoring functions. Ranking of the decoys in the Baker set that were relatively far from the native state was challenging and dominated largely by packing interactions that were captured best by DFIRE with less benefit of the implicit membrane-based models. Accounting for the membrane environment was much more important in the second decoy set where especially the HDGB-based scoring functions performed very well in ranking decoys and providing significant correlations between scores and RMSD, which shows promise for improving membrane protein structure prediction and refinement applications. The new membrane structure scoring protocol was implemented in the MEMScore web server ( http://feiglab.org/memscore ).

  9. Tetrahedrality and structural order for hydrophobic interactions in a coarse-grained water model

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2014-02-01

    The hydrophobic interaction manifests two separate regimes in terms of size: Small nonpolar bodies exhibit a weak oscillatory force (versus distance) while large nonpolar surfaces exhibit a strong monotonic one. This crossover in hydrophobic behavior is typically explained in terms of water's tetrahedral structure: Its tetrahedrality is enhanced near small solutes and diminished near large planar ones. Here, we demonstrate that water's tetrahedral correlations signal this switch even in a highly simplified, isotropic, "core-softened" water model. For this task, we introduce measures of tetrahedrality based on the angular distribution of water's nearest neighbors. On a quantitative basis, the coarse-grained model of course is only approximate: (1) While greater than simple Lennard-Jones liquids, its bulk tetrahedrality remains lower than that of fully atomic models; and (2) the decay length of the large-scale hydrophobic interaction is less than has been found in experiments. Even so, the qualitative behavior of the model is surprisingly rich and exhibits numerous waterlike hydrophobic behaviors, despite its simplicity. We offer several arguments for the manner in which it should be able to (at least partially) reproduce tetrahedral correlations underlying these effects.

  10. Tetrahedrality and structural order for hydrophobic interactions in a coarse-grained water model.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2014-02-01

    The hydrophobic interaction manifests two separate regimes in terms of size: Small nonpolar bodies exhibit a weak oscillatory force (versus distance) while large nonpolar surfaces exhibit a strong monotonic one. This crossover in hydrophobic behavior is typically explained in terms of water's tetrahedral structure: Its tetrahedrality is enhanced near small solutes and diminished near large planar ones. Here, we demonstrate that water's tetrahedral correlations signal this switch even in a highly simplified, isotropic, "core-softened" water model. For this task, we introduce measures of tetrahedrality based on the angular distribution of water's nearest neighbors. On a quantitative basis, the coarse-grained model of course is only approximate: (1) While greater than simple Lennard-Jones liquids, its bulk tetrahedrality remains lower than that of fully atomic models; and (2) the decay length of the large-scale hydrophobic interaction is less than has been found in experiments. Even so, the qualitative behavior of the model is surprisingly rich and exhibits numerous waterlike hydrophobic behaviors, despite its simplicity. We offer several arguments for the manner in which it should be able to (at least partially) reproduce tetrahedral correlations underlying these effects.

  11. The relationship between the spatial scaling of biodiversity and ecosystem stability

    PubMed Central

    Delsol, Robin; Loreau, Michel; Haegeman, Bart

    2018-01-01

    Aim Ecosystem stability and its link with biodiversity have mainly been studied at the local scale. Here we present a simple theoretical model to address the joint dependence of diversity and stability on spatial scale, from local to continental. Methods The notion of stability we use is based on the temporal variability of an ecosystem-level property, such as primary productivity. In this way, our model integrates the well-known species–area relationship (SAR) with a recent proposal to quantify the spatial scaling of stability, called the invariability–area relationship (IAR). Results We show that the link between the two relationships strongly depends on whether the temporal fluctuations of the ecosystem property of interest are more correlated within than between species. If fluctuations are correlated within species but not between them, then the IAR is strongly constrained by the SAR. If instead individual fluctuations are only correlated by spatial proximity, then the IAR is unrelated to the SAR. We apply these two correlation assumptions to explore the effects of species loss and habitat destruction on stability, and find a rich variety of multi-scale spatial dependencies, with marked differences between the two assumptions. Main conclusions The dependence of ecosystem stability on biodiversity across spatial scales is governed by the spatial decay of correlations within and between species. Our work provides a point of reference for mechanistic models and data analyses. More generally, it illustrates the relevance of macroecology for ecosystem functioning and stability. PMID:29651225

  12. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
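
    One standard way to combine a process-based prediction with a statistical one, in the spirit of the post-model calibration mentioned above, is to regress observed outcomes on both predictions. The sketch below uses simulated yields and stand-in predictors, not the authors' data or exact procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy yields: linear response to rainfall plus an extreme-heat penalty.
n = 300
rain = rng.uniform(0, 1, n)
heat = rng.uniform(0, 1, n)
yields = 5 + 3 * rain - 4 * np.maximum(heat - 0.7, 0) + rng.normal(0, 0.3, n)

# Stand-ins for the two imperfect predictors.
process_pred = 5 + 3 * rain              # misses the heat effect entirely
stat_pred = 5 + 3 * rain - 1.5 * heat    # crude linear heat term

# Combination by calibration: regress observed yields on both predictions
# (plus an intercept) and use the fitted weights.
X = np.column_stack([np.ones(n), process_pred, stat_pred])
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
combined = X @ beta

rmse = lambda p: np.sqrt(np.mean((yields - p) ** 2))
print(rmse(process_pred), rmse(stat_pred), rmse(combined))
```

On the training sample the combined predictor can do no worse than either input, since each input is itself a feasible fit of the regression.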

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newton, Marshall D.

    Extension of the Förster analogue for the ET rate constant (based on virtual intermediate electron detachment or attachment states), with inclusion of site–site correlation due to coulomb terms associated with solvent reorganization energy and the driving force, has been developed and illustrated for a simple three-state, two-mode model. Furthermore, the model is applicable to charge separation (CS), recombination (CR), and shift (CSh) ET processes, with or without an intervening bridge. The model provides a unified perspective on the role of virtual intermediate states in accounting for the thermal Franck–Condon weighted density of states (FCWD), the gaps controlling superexchange coupling, and mean absolute redox potentials, with full accommodation of site–site coulomb interactions. We analyzed two types of correlation: aside from the site–site correlation due to coulomb interactions, we have emphasized the intrinsic “nonorthogonality” which generally pertains to reaction coordinates (RCs) for different ET processes involving multiple electronic states, as may be expressed by suitably defined direction cosines (cos(θ)). A pair of RCs may be nonorthogonal even when the site–site coulomb correlations are absent. While different RCs are linearly independent in the mathematical sense for all θ ≠ 0°, they are independent in the sense of being “uncorrelated” only in the limit of orthogonality (θ = 90°). The application to more than two coordinates is straightforward and may include both discrete and continuum contributions.

  14. Chaotic oscillations and noise transformations in a simple dissipative system with delayed feedback

    NASA Astrophysics Data System (ADS)

    Zverev, V. V.; Rubinstein, B. Ya.

    1991-04-01

    We analyze the statistical behavior of signals in nonlinear circuits with delayed feedback in the presence of external Markovian noise. For the special class of circuits with intense phase mixing we develop an approach for the computation of the probability distributions and multitime correlation functions based on the random phase approximation. Both Gaussian and Kubo-Andersen models of external noise statistics are analyzed and the existence of the stationary (asymptotic) random process in the long-time limit is shown. We demonstrate that a nonlinear system with chaotic behavior becomes a noise amplifier with specific statistical transformation properties.

  15. Numerical and analytical investigation towards performance enhancement of a newly developed rockfall protective cable-net structure

    NASA Astrophysics Data System (ADS)

    Dhakal, S.; Bhandary, N. P.; Yatabe, R.; Kinoshita, N.

    2012-04-01

    In a previous companion paper, we presented a three-tier modelling of a particular type of rockfall protective cable-net structure (barrier), developed newly in Japan. Therein, we developed a three-dimensional, Finite Element based, nonlinear numerical model having been calibrated/back-calculated and verified with the element- and structure-level physical tests. Moreover, using a very simple, lumped-mass, single-degree-of-freedom, equivalently linear analytical model, a global-displacement-predictive correlation was devised by modifying the basic equation - obtained by combining the principles of conservation of linear momentum and energy - based on the back-analysis of the tests on the numerical model. In this paper, we use the developed models to explore the performance enhancement potential of the structure in terms of (a) the control of global displacement - possibly the major performance criterion for the proposed structure owing to a narrow space available in the targeted site, and (b) the increase in energy dissipation by the existing U-bolt-type Friction-brake Devices - which are identified to have performed weakly when integrated into the structure. A set of parametric investigations have revealed correlations to achieve the first objective in terms of the structure's mass, particularly by manipulating the wire-net's characteristics, and has additionally disclosed the effects of the impacting-block's parameters. Towards achieving the second objective, another set of parametric investigations have led to a proposal of a few innovative improvements in the constitutive behaviour (model) of the studied brake device (dissipator), in addition to an important recommendation of careful handling of the device based on the identified potential flaw.

  16. Learning in Structured Connectionist Networks

    DTIC Science & Technology

    1988-04-01

    ...the structure is too rigid and learning too difficult for cognitive modeling. Two algorithms for learning simple, feature-based concept descriptions were also implemented. ... Recent progress in connectionist research has been encouraging; networks have successfully modeled human performance for various cognitive ...

  17. Evaluating uses of data mining techniques in propensity score estimation: a simulation study.

    PubMed

    Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis

    2008-06-01

    In propensity score modeling, it is a standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in what situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RP1, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but was correlated with SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was associated with increased SE.
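
    The baseline strategy in this comparison, a logistic-regression propensity score followed by 1:1 nearest-neighbour matching, can be sketched with numpy alone. The one-covariate cohort simulated below is illustrative and far simpler than the study's seven scenarios:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated cohort: one confounder x drives both exposure and outcome.
n = 2000
x = rng.normal(size=n)
exposure = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)
outcome = 1.0 * exposure + 2.0 * x + rng.normal(size=n)   # true effect = 1.0

# Fit a logistic propensity model P(exposure | x) by gradient ascent.
w, b = 0.0, 0.0
for _ in range(500):
    ps = 1 / (1 + np.exp(-(w * x + b)))
    w += 0.5 * np.mean((exposure - ps) * x)
    b += 0.5 * np.mean(exposure - ps)
ps = 1 / (1 + np.exp(-(w * x + b)))

# 1:1 nearest-neighbour matching (with replacement) on the propensity score.
treated = np.where(exposure == 1)[0]
control = np.where(exposure == 0)[0]
matches = control[np.argmin(np.abs(ps[treated][:, None] - ps[control][None, :]), axis=1)]
effect = np.mean(outcome[treated] - outcome[matches])
print(effect)  # near the true effect of 1.0
```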

  18. Developmental dissociation in the neural responses to simple multiplication and subtraction problems

    PubMed Central

    Prado, Jérôme; Mutreja, Rachna; Booth, James R.

    2014-01-01

    Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a cross-sectional design to measure the neural activity associated with single-digit subtraction and multiplication in 34 children from 2nd to 7th grade. The neural correlates of language and numerical processing were also identified in each child via localizer scans. Although multiplication and subtraction were indistinguishable in terms of behavior, we found a striking developmental dissociation in their neural correlates. First, we observed grade-related increases of activity for multiplication, but not for subtraction, in a language-related region of the left temporal cortex. Second, we found grade-related increases of activity for subtraction, but not for multiplication, in a region of the right parietal cortex involved in the procedural manipulation of numerical quantities. The present results suggest that fluency in simple arithmetic in children may be achieved by both increasing reliance on verbal retrieval and by greater use of efficient quantity-based procedures, depending on the operation. PMID:25089323

  19. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models.

    PubMed

    Hanuschkin, A; Ganguli, S; Hahnloser, R H R

    2013-01-01

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to bird's own song (BOS) stimuli.

  20. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models

    PubMed Central

    Hanuschkin, A.; Ganguli, S.; Hahnloser, R. H. R.

    2013-01-01

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to bird's own song (BOS) stimuli. PMID:23801941

  1. A flexible cure rate model for spatially correlated survival data based on generalized extreme value distribution and Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Dey, Dipak K

    2016-09-01

    Our present work proposes a new survival model in a Bayesian context to analyze right-censored survival data for populations with a surviving fraction, assuming that the log failure time follows a generalized extreme value distribution. Many applications require a more flexible modeling of covariate information than a simple linear or parametric form for all covariate effects. It is also necessary to include the spatial variation in the model, since it is sometimes unexplained by the covariates considered in the analysis. Therefore, the nonlinear covariate effects and the spatial effects are incorporated into the systematic component of our model. Gaussian processes (GPs) provide a natural framework for modeling potentially nonlinear relationship and have recently become extremely powerful in nonlinear regression. Our proposed model adopts a semiparametric Bayesian approach by imposing a GP prior on the nonlinear structure of continuous covariate. With the consideration of data availability and computational complexity, the conditionally autoregressive distribution is placed on the region-specific frailties to handle spatial correlation. The flexibility and gains of our proposed model are illustrated through analyses of simulated data examples as well as a dataset involving a colon cancer clinical trial from the state of Iowa. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Linking brain-wide multivoxel activation patterns to behaviour: Examples from language and math.

    PubMed

    Raizada, Rajeev D S; Tsao, Feng-Ming; Liu, Huei-Mei; Holloway, Ian D; Ansari, Daniel; Kuhl, Patricia K

    2010-05-15

    A key goal of cognitive neuroscience is to find simple and direct connections between brain and behaviour. However, fMRI analysis typically involves choices between many possible options, with each choice potentially biasing any brain-behaviour correlations that emerge. Standard methods of fMRI analysis assess each voxel individually, but then face the problem of selection bias when combining those voxels into a region-of-interest, or ROI. Multivariate pattern-based fMRI analysis methods use classifiers to analyse multiple voxels together, but can also introduce selection bias via data-reduction steps such as feature selection of voxels, pre-selecting activated regions, or principal components analysis. We show here that strong brain-behaviour links can be revealed without any voxel selection or data reduction, using just plain linear regression as a classifier applied to the whole brain at once, i.e. treating each entire brain volume as a single multi-voxel pattern. The brain-behaviour correlations emerged despite the fact that the classifier was not provided with any information at all about subjects' behaviour, but instead was given only the neural data and its condition-labels. Surprisingly, more powerful classifiers such as a linear SVM and regularised logistic regression produce very similar results. We discuss some possible reasons why the very simple brain-wide linear regression model is able to find correlations with behaviour that are as strong as those obtained on the one hand from a specific ROI and on the other hand from more complex classifiers. In a manner which is unencumbered by arbitrary choices, our approach offers a method for investigating connections between brain and behaviour which is simple, rigorous and direct. Copyright (c) 2010 Elsevier Inc. All rights reserved.
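
    The whole-brain strategy reduces to ordinary least squares on flattened volumes. A toy sketch with simulated data follows (dimensions and variable names are illustrative); note that with more voxels than volumes, lstsq returns the minimum-norm solution, so the training labels are fit exactly and real use would require held-out evaluation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: 40 "brain volumes", each flattened to 500 voxels, two conditions.
n_vols, n_voxels = 40, 500
labels = np.repeat([0.0, 1.0], n_vols // 2)
signal = rng.normal(size=n_voxels)  # spatial pattern added in condition 1
volumes = rng.normal(size=(n_vols, n_voxels)) + labels[:, None] * signal

# Plain linear regression as a classifier: no voxel selection, no ROI.
# lstsq fits weights over every voxel at once.
w, *_ = np.linalg.lstsq(volumes, labels, rcond=None)
pred = volumes @ w

# The per-volume decoder output could then be correlated with behaviour;
# here we just check that it separates the two conditions (training data).
accuracy = np.mean((pred > 0.5) == labels)
print(accuracy)
```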

  3. Linking brain-wide multivoxel activation patterns to behaviour: Examples from language and math

    PubMed Central

    Raizada, Rajeev D.S.; Tsao, Feng-Ming; Liu, Huei-Mei; Holloway, Ian D.; Ansari, Daniel; Kuhl, Patricia K.

    2010-01-01

A key goal of cognitive neuroscience is to find simple and direct connections between brain and behaviour. However, fMRI analysis typically involves choices between many possible options, with each choice potentially biasing any brain–behaviour correlations that emerge. Standard methods of fMRI analysis assess each voxel individually, but then face the problem of selection bias when combining those voxels into a region-of-interest, or ROI. Multivariate pattern-based fMRI analysis methods use classifiers to analyse multiple voxels together, but can also introduce selection bias via data-reduction steps such as feature selection of voxels, pre-selecting activated regions, or principal components analysis. We show here that strong brain–behaviour links can be revealed without any voxel selection or data reduction, using just plain linear regression as a classifier applied to the whole brain at once, i.e. treating each entire brain volume as a single multi-voxel pattern. The brain–behaviour correlations emerged despite the fact that the classifier was not provided with any information at all about subjects' behaviour, but instead was given only the neural data and its condition-labels. Surprisingly, more powerful classifiers such as a linear SVM and regularised logistic regression produce very similar results. We discuss some possible reasons why the very simple brain-wide linear regression model is able to find correlations with behaviour that are as strong as those obtained on the one hand from a specific ROI and on the other hand from more complex classifiers. In a manner which is unencumbered by arbitrary choices, our approach offers a method for investigating connections between brain and behaviour which is simple, rigorous and direct. PMID:20132896
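
    The whole-brain approach described above can be sketched in a few lines. The following is a toy illustration (synthetic data, not the authors' code): each simulated brain volume is treated as a single multi-voxel pattern, and plain least-squares regression fits the condition labels from every voxel at once, with no voxel selection or data reduction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_voxels = 40, 20, 500   # toy "whole-brain" patterns

# Two conditions differing by a weak mean pattern spread over all voxels
signal = 0.5 * rng.normal(size=n_voxels)
y_train = np.repeat([1.0, -1.0], n_train // 2)
y_test = np.repeat([1.0, -1.0], n_test // 2)
X_train = rng.normal(size=(n_train, n_voxels)) + np.outer(y_train, signal)
X_test = rng.normal(size=(n_test, n_voxels)) + np.outer(y_test, signal)

# Plain linear regression as a classifier: since n_voxels > n_trials,
# lstsq returns the minimum-norm solution over the whole "brain"
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
accuracy = np.mean(np.sign(X_test @ w) == y_test)
```

    Even in this heavily underdetermined regime the minimum-norm fit recovers the condition labels on held-out patterns, which is the regime the abstract describes.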

  4. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  5. Short-time microscopic dynamics of aqueous methanol solutions

    NASA Astrophysics Data System (ADS)

    Kalampounias, A. G.; Tsilomelekis, G.; Boghosian, S.

    2012-12-01

In this paper we present the picosecond vibrational dynamics of a series of methanol aqueous solutions over a wide concentration range from dense to dilute solutions. We studied the vibrational dephasing and vibrational frequency modulation by calculating the time correlation functions of vibrational relaxation by fits in the frequency domain. This method is applied to aqueous methanol solutions xMeOH-(1 - x)H2O, where x = 0, 0.2, 0.4, 0.6, 0.8 and 1. The important finding is that the vibrational dynamics of the system become slower with increasing methanol concentration. The removal of many-body effects by having the molecules in less-crowded environments seems to be the key factor. The interpretation of the vibrational correlation function in the context of Kubo theory, which is based on the assumption that the environmental modulation arises from a single relaxation process and applied to simple liquids, is inadequate for all solutions studied. We found that the vibrational correlation functions of the solutions over the whole concentration range comply with the Rothschild approach, assuming that the environmental modulation is described by a stretched exponential decay. The evolution of the dispersion parameter α with dilution indicates the deviation of the solutions from the simple-liquid model, and the results are discussed in the framework of the current phenomenological status of the field.
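
    The Rothschild-type analysis can be illustrated with a minimal fit of a stretched-exponential (Kohlrausch) decay, C(t) = exp[-(t/τ)^α], where α < 1 signals deviation from the single-relaxation (Kubo, simple-liquid) limit. The data below are synthetic, not the paper's; the fit uses the log-log linearization ln(-ln C) = α ln t - α ln τ.

```python
import numpy as np

# Synthetic stretched-exponential correlation decay with a little noise
t = np.linspace(0.05, 10.0, 400)                       # time (ps)
true_tau, true_alpha = 1.5, 0.6
rng = np.random.default_rng(1)
c_obs = np.exp(-(t / true_tau) ** true_alpha) + 0.002 * rng.normal(size=t.size)

# Linearize: ln(-ln C) = alpha*ln(t) - alpha*ln(tau), then fit a line.
# Mask out the flat head and the noisy tail where the transform is unstable.
mask = (c_obs > 0.05) & (c_obs < 0.9)
slope, intercept = np.polyfit(np.log(t[mask]), np.log(-np.log(c_obs[mask])), 1)
alpha_fit = slope                       # dispersion parameter alpha
tau_fit = np.exp(-intercept / slope)    # relaxation time tau
```

    A recovered α well below 1 would correspond to the non-Kubo behaviour the abstract reports for the mixed solutions.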

  6. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography)

    PubMed Central

    Siegel, Nisan; Storrie, Brian; Bruce, Marc

    2016-01-01

FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called “CINCH”. An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU-based complex deconvolution processing of the images further enhances resolution. PMID:26839443

  7. Numerical simulation of a compressible homogeneous, turbulent shear flow. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Feiereisen, W. J.; Reynolds, W. C.; Ferziger, J. H.

    1981-01-01

A direct, low Reynolds number, numerical simulation was performed on a homogeneous turbulent shear flow. The full compressible Navier-Stokes equations were used in a simulation on the ILLIAC IV computer with a 64,000 mesh. The flow fields generated by the code are used as an experimental data base to examine the behavior of the Reynolds stresses in this simple, compressible flow. The variation of the structure of the stresses and their dynamic equations as the character of the flow changed is emphasized. The structure of the stress tensor is more heavily dependent on the shear number and less on the fluctuating Mach number. The pressure-strain correlation tensor in the dynamic equations is directly calculated in this simulation. These correlations are decomposed into several parts, as contrasted with the traditional incompressible decomposition into two parts. The performance of existing models for the conventional terms is examined, and a model is proposed for the 'mean fluctuating' part.

  8. Testing a single regression coefficient in high dimensional linear models

    PubMed Central

    Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2017-01-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668

  9. Testing a single regression coefficient in high dimensional linear models.

    PubMed

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2016-11-01

In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.
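
    A rough numpy sketch of the screening idea (an illustration of the general recipe, not the authors' CPS implementation or its theoretical guarantees): pick the few predictors most correlated with the target covariate, include them as controls, and apply the classical z-test to the target coefficient from the resulting low-dimensional OLS fit.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 300                     # more covariates than samples
X = rng.normal(size=(n, p))
X[:, 1] += 0.5 * X[:, 0]            # predictor 1 is correlated with target covariate 0
y = 1.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n)

target, k = 0, 5                    # target covariate; number of screened controls
corr = np.abs(np.corrcoef(X, rowvar=False)[target])
corr[target] = -np.inf
controls = np.argsort(corr)[-k:]    # predictors most correlated with the target

# OLS on intercept + target covariate + its screened controls only
Z = np.column_stack([np.ones(n), X[:, target], X[:, controls]])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ coef
sigma2 = resid @ resid / (n - Z.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[1, 1])
z_stat = coef[1] / se               # classical z-test, now applicable again
```

    The point of the screening step is visible in the toy data: without controlling for the correlated predictor 1, the target coefficient would be biased upward.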

  10. Arthroscopic skills assessment and use of box model for training in arthroscopic surgery using Sawbones – “FAST” workstation

    PubMed Central

    Goyal, Saumitra; Radi, Mohamed Abdel; Ramadan, Islam Karam-allah; Said, Hatem Galal

    2016-01-01

Purpose: Arthroscopic skills training outside the operative room may decrease risks and errors by trainee surgeons. There is a need for a simple, objective method for evaluating the proficiency and skill of arthroscopy trainees using a simple bench model of an arthroscopic simulator. The aim of this study is to correlate motor task performance to level of prior arthroscopic experience and establish benchmarks for training modules. Methods: Twenty orthopaedic surgeons performed a set of tasks to assess a) arthroscopic triangulation, b) navigation, c) object handling and d) meniscus trimming using the SAWBONES “FAST” arthroscopy skills workstation. Time to completion and the errors were computed. The subjects were divided into four levels; “Novice”, “Beginner”, “Intermediate” and “Advanced” based on previous arthroscopy experience, for analyses of performance. Results: Task performance under the transparent dome was not related to the experience of the surgeon, unlike the opaque dome, highlighting the importance of the hand-eye co-ordination required in arthroscopy. Median time to completion for each task improved as the level of experience increased and this was found to be statistically significant (p < .05), e.g. time for maze navigation (Novice – 166 s, Beginner – 135.5 s, Intermediate – 100 s, Advanced – 97.5 s), with similar results for all tasks. The majority (>85%) of subjects across all the levels reported improvement in performance with sequential tasks. Conclusion: Use of the arthroscope requires visuo-spatial coordination, which is a skill that develops with practice. This simple box model can reliably differentiate arthroscopic skills based on experience and can be used to monitor the progression of trainees' skills in institutions. PMID:27801643

  11. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.

  12. Intonation in unaccompanied singing: accuracy, drift, and a model of reference pitch memory.

    PubMed

    Mauch, Matthias; Frieler, Klaus; Dixon, Simon

    2014-07-01

This paper presents a study on intonation and intonation drift in unaccompanied singing, and proposes a simple model of reference pitch memory that accounts for many of the effects observed. Singing experiments were conducted with 24 singers of varying ability under three conditions (Normal, Masked, Imagined). Over the duration of a recording, ∼50 s, a median absolute intonation drift of 11 cents was observed. While smaller than the median note error (19 cents), drift was significant in 22% of recordings. Drift magnitude did not correlate with other measures of singing accuracy, singing experience, or the experimental conditions tested. Furthermore, it is shown that neither a static intonation memory model nor a memoryless interval-based intonation model can account for the accuracy and drift behavior observed. The proposed causal model provides a better explanation as it treats the reference pitch as a changing latent variable.

  13. Non-invasive assessment of carotid PWV via accelerometric sensors: validation of a new device and comparison with established techniques.

    PubMed

    Di Lascio, Nicole; Bruno, Rosa Maria; Stea, Francesco; Bianchini, Elisabetta; Gemignani, Vincenzo; Ghiadoni, Lorenzo; Faita, Francesco

    2014-01-01

    Carotid pulse wave velocity (PWV) is considered as a surrogate marker for carotid stiffness and its assessment is increasingly being used in clinical practice. However, at the moment, its estimation needs specific equipment and a moderate level of technical expertise; moreover, it is based on a mathematical model. The aim of this study was to validate a new system for non-invasive and model-free carotid PWV assessment based on accelerometric sensors by comparison with currently used techniques. Accelerometric PWV (accPWV) values were obtained in 97 volunteers free of cardiovascular disease (age 24-85 years) and compared with standard ultrasound-based carotid stiffness parameters, such as carotid PWV (cPWV), relative distension (relD) and distensibility coefficient (DC). Moreover, the comparison between accPWV measurements and carotid-femoral PWV (cfPWV) was performed. Accelerometric PWV evaluations showed a significant correlation with cPWV measurements (R = 0.67), relD values (R = 0.66) and DC assessments (R = 0.64). These values were also significantly correlated with cfPWV evaluations (R = 0.46). In addition, the first attempt success rate was equal to 76.8 %. The accelerometric system allows a simple and quick local carotid stiffness evaluation and the values obtained with this system are significantly correlated with known carotid stiffness biomarkers. Therefore, the presented device could provide a concrete opportunity for an easy carotid stiffness evaluation even in clinical practice.

  14. Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe

    2013-05-01

Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision-making. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called "detailed-balance" violating decision rules is needed to decide whether conclusions based on current models (that all assume detailed-balance) are indeed robust and generic.
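
    The RFIM mechanism for sudden collective ruptures can be illustrated with a minimal mean-field simulation (a generic toy sketch, not any specific model from the review): each agent chooses ±1 from a private preference (the random field), an imitation term proportional to the average choice, and a slowly swept common incentive. Above a critical imitation strength, the aggregate opinion jumps discontinuously.

```python
import numpy as np

rng = np.random.default_rng(3)
n, J = 2000, 1.5                    # agents; imitation strength above the critical value
h = rng.normal(size=n)              # heterogeneous private preferences (random fields)
s = -np.ones(n)                     # everyone starts in the -1 state

trajectory = []
for F in np.linspace(-3, 3, 121):   # slowly increasing common incentive
    for _ in range(100):            # relax to a stable configuration at this F
        s_new = np.sign(F + J * s.mean() + h)
        if np.array_equal(s_new, s):
            break
        s = s_new
    trajectory.append(s.mean())

m = np.array(trajectory)
jump = np.max(np.diff(m))           # a large jump = discontinuous collective shift
```

    Sweeping F back down would trace a different branch (hysteresis), which is the RFIM signature of crises the abstract alludes to.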

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.

Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.
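
    The "remarkably simple Monte Carlo realization" of a fission chain can be illustrated with a toy branching process (illustrative parameters, not the report's model): each neutron either induces fission, producing fresh neutrons, or leaks and is counted, and the chain's leakage moments can be checked against the subcritical analytic mean.

```python
import numpy as np

rng = np.random.default_rng(4)
p_fission, nu = 0.3, 2              # fission probability and neutrons per fission (toy values)

def chain_leakage(rng):
    """Follow one chain started by a single neutron; return the leaked-neutron count."""
    alive, leaked = 1, 0
    while alive:
        if rng.random() < p_fission:
            alive += nu - 1         # neutron induces fission: removed, nu new neutrons born
        else:
            alive -= 1              # neutron leaks out of the system
            leaked += 1
    return leaked

leaks = np.array([chain_leakage(rng) for _ in range(20000)])

# Subcritical chain (p_fission * nu < 1): mean total neutrons per chain is
# 1/(1 - p*nu), of which a fraction (1 - p_fission) leak.
expected = (1 - p_fission) / (1 - p_fission * nu)   # = 0.7 / 0.4 = 1.75
```

    Tallying leak times instead of counts would give the time-tagged data from which the counting distributions are built.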

  16. Synaptic Basis for Differential Orientation Selectivity between Complex and Simple Cells in Mouse Visual Cortex

    PubMed Central

    Li, Ya-tang; Liu, Bao-hua; Chou, Xiao-lin; Zhang, Li I.

    2015-01-01

In the primary visual cortex (V1), orientation-selective neurons can be categorized into simple and complex cells primarily based on their receptive field (RF) structures. In mouse V1, although previous studies have examined the excitatory/inhibitory interplay underlying orientation selectivity (OS) of simple cells, the synaptic bases for that of complex cells have remained obscure. Here, by combining in vivo loose-patch and whole-cell recordings, we found that complex cells, identified by their overlapping on/off subfields, had significantly weaker OS than simple cells at both spiking and subthreshold membrane potential response levels. Voltage-clamp recordings further revealed that although excitatory inputs to complex and simple cells exhibited a similar degree of OS, inhibition in complex cells was more narrowly tuned than excitation, whereas in simple cells inhibition was more broadly tuned than excitation. The differential inhibitory tuning can primarily account for the difference in OS between complex and simple cells. Interestingly, the differential synaptic tuning correlated well with the spatial organization of synaptic input: the inhibitory visual RF in complex cells was more elongated in shape than its excitatory counterpart and also was more elongated than that in simple cells. Together, our results demonstrate that OS of complex and simple cells is differentially shaped by cortical inhibition based on its orientation tuning profile relative to excitation, which is contributed at least partially by the spatial organization of RFs of presynaptic inhibitory neurons. SIGNIFICANCE STATEMENT Simple and complex cells, two classes of principal neurons in the primary visual cortex (V1), are generally thought to be equally selective for orientation. In mouse V1, we report that complex cells, identified by their overlapping on/off subfields, have significantly weaker orientation selectivity (OS) than simple cells.
This can be primarily attributed to the differential tuning selectivity of inhibitory synaptic input: inhibition in complex cells is more narrowly tuned than excitation, whereas in simple cells inhibition is more broadly tuned than excitation. In addition, there is a good correlation between inhibitory tuning selectivity and the spatial organization of inhibitory inputs. These complex and simple cells with differing degrees of OS may provide functionally distinct signals to different downstream targets. PMID:26245969

  17. Synaptic Basis for Differential Orientation Selectivity between Complex and Simple Cells in Mouse Visual Cortex.

    PubMed

    Li, Ya-tang; Liu, Bao-hua; Chou, Xiao-lin; Zhang, Li I; Tao, Huizhong W

    2015-08-05

In the primary visual cortex (V1), orientation-selective neurons can be categorized into simple and complex cells primarily based on their receptive field (RF) structures. In mouse V1, although previous studies have examined the excitatory/inhibitory interplay underlying orientation selectivity (OS) of simple cells, the synaptic bases for that of complex cells have remained obscure. Here, by combining in vivo loose-patch and whole-cell recordings, we found that complex cells, identified by their overlapping on/off subfields, had significantly weaker OS than simple cells at both spiking and subthreshold membrane potential response levels. Voltage-clamp recordings further revealed that although excitatory inputs to complex and simple cells exhibited a similar degree of OS, inhibition in complex cells was more narrowly tuned than excitation, whereas in simple cells inhibition was more broadly tuned than excitation. The differential inhibitory tuning can primarily account for the difference in OS between complex and simple cells. Interestingly, the differential synaptic tuning correlated well with the spatial organization of synaptic input: the inhibitory visual RF in complex cells was more elongated in shape than its excitatory counterpart and also was more elongated than that in simple cells. Together, our results demonstrate that OS of complex and simple cells is differentially shaped by cortical inhibition based on its orientation tuning profile relative to excitation, which is contributed at least partially by the spatial organization of RFs of presynaptic inhibitory neurons. Simple and complex cells, two classes of principal neurons in the primary visual cortex (V1), are generally thought to be equally selective for orientation. In mouse V1, we report that complex cells, identified by their overlapping on/off subfields, have significantly weaker orientation selectivity (OS) than simple cells.
This can be primarily attributed to the differential tuning selectivity of inhibitory synaptic input: inhibition in complex cells is more narrowly tuned than excitation, whereas in simple cells inhibition is more broadly tuned than excitation. In addition, there is a good correlation between inhibitory tuning selectivity and the spatial organization of inhibitory inputs. These complex and simple cells with differing degrees of OS may provide functionally distinct signals to different downstream targets. Copyright © 2015 the authors 0270-6474/15/3511081-13$15.00/0.

  18. Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, Surendra N.

    1994-01-01

The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional one, which provides only non-correlated results; the implications of this for nongray methods are discussed. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations.
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.

  19. A virtual model of the bench press exercise.

    PubMed

    Rahmani, Abderrahmane; Rambaud, Olivier; Bourdin, Muriel; Mariot, Jean-Pierre

    2009-08-07

The objective of this study was to design and validate a three-degrees-of-freedom model in the sagittal plane for the bench press exercise. The mechanical model was based on rigid segments connected by revolute and prismatic pairs, which enabled a kinematic approach and global force estimation. The method requires only three simple measurements: (i) horizontal position of the hand (x(0)); (ii) vertical displacement of the barbell (Z) and (iii) elbow angle (θ). Eight adult male throwers performed maximal concentric bench press exercises against different masses. The kinematic results showed that the vertical displacement of each segment and the global centre of mass followed the vertical displacement of the lifted mass. Consequently, the vertical velocity and acceleration of the combined centre of mass and the lifted mass were identical. Finally, for each lifted mass, there were no practical differences between forces calculated from the bench press model and those simultaneously measured with a force platform. The error was lower than 2.5%. The validity of the mechanical method was also highlighted by a standard error of the estimate (SEE) ranging from 2.0 to 6.6 N in absolute terms, a coefficient of variation (CV) ≤ 0.8%, and a correlation between the two scores ≥ 0.99 for all the lifts (p < 0.001). The method described here, which is based on three simple parameters, allows accurate evaluation of the force developed by the upper limb muscles during bench press exercises in both field and laboratory conditions.
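
    The core of such a kinematic force estimate is Newton's second law applied to the moving mass: F(t) = m(g + z̈(t)), with the acceleration obtained by twice differentiating the measured vertical displacement. A minimal numpy sketch (illustrative, ignoring the paper's segment kinematics, with a synthetic barbell trajectory and a toy moving mass):

```python
import numpy as np

g = 9.81
m = 80.0                             # lifted mass + moving segment mass (kg, toy value)
t = np.linspace(0.0, 1.0, 1000)      # time (s)
Z = 0.2 * (1 - np.cos(np.pi * t))    # synthetic 0.4 m concentric lift

# Acceleration from twice-differentiated displacement, then F = m(g + a)
v = np.gradient(Z, t)
a = np.gradient(v, t)
F = m * (g + a)                      # vertical force applied to the moving mass

# Analytic check for this trajectory: Z'' = 0.2*pi^2*cos(pi*t)
F_exact = m * (g + 0.2 * np.pi**2 * np.cos(np.pi * t))
```

    In practice the displacement signal would be filtered before differentiation, since double numerical differentiation amplifies measurement noise.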

  20. Saturated fat consumption and the Theory of Planned Behaviour: exploring additive and interactive effects of habit strength.

    PubMed

    de Bruijn, Gert-Jan; Kroeze, Willemieke; Oenema, Anke; Brug, Johannes

    2008-09-01

The additive and interactive effects of habit strength in the explanation of saturated fat intake were explored within the framework of the Theory of Planned Behaviour (TPB). Cross-sectional data were gathered in a Dutch adult sample (n = 764) using self-administered questionnaires and analyzed using hierarchical regression analyses and simple slope analyses. Results showed that habit strength was a significant correlate of fat intake (β = -0.11) and significantly increased the amount of explained variance in fat intake (R² change = 0.01). Furthermore, based on a significant interaction effect (β = 0.11), simple slope analyses revealed that intention was a significant correlate of fat intake for low levels (β = -0.29) and medium levels (β = -0.19) of habit strength, but a weaker and non-significant correlate for high levels (β = -0.07) of habit strength. Higher habit strength may thus make limiting fat intake a non-intentional behaviour. Implications for information and motivation-based interventions are discussed.
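
    The moderated-regression analysis can be sketched on a toy dataset (synthetic numbers, not the study's data): regress intake on intention, habit strength, and their product, then evaluate the intention slope at low, medium, and high habit strength (±1 SD and the mean).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 764                                   # same sample size as the study, synthetic data
intention = rng.normal(size=n)
habit = rng.normal(size=n)
# Built-in interaction: intention matters less as habit strength rises
fat = (-0.30 * intention - 0.10 * habit + 0.12 * intention * habit
       + 0.5 * rng.normal(size=n))

# Hierarchical step with the interaction term included
X = np.column_stack([np.ones(n), intention, habit, intention * habit])
b = np.linalg.lstsq(X, fat, rcond=None)[0]

# Simple slopes of intention at -1 SD, mean, and +1 SD of habit strength
sd = habit.std()
simple_slopes = {level: b[1] + b[3] * m
                 for level, m in [("low", -sd), ("medium", 0.0), ("high", sd)]}
```

    The slope weakens (moves toward zero) as the moderator increases, mirroring the pattern of the reported β values.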

  1. Performance of HADDOCK and a simple contact-based protein-ligand binding affinity predictor in the D3R Grand Challenge 2

    NASA Astrophysics Data System (ADS)

    Kurkcuoglu, Zeynep; Koukos, Panagiotis I.; Citro, Nevia; Trellet, Mikael E.; Rodrigues, J. P. G. L. M.; Moreira, Irina S.; Roel-Touris, Jorge; Melquiond, Adrien S. J.; Geng, Cunliang; Schaarschmidt, Jörg; Xue, Li C.; Vangone, Anna; Bonvin, A. M. J. J.

    2018-01-01

We present the performance of HADDOCK, our information-driven docking software, in the second edition of the D3R Grand Challenge. In this blind experiment, participants were requested to predict the structures and binding affinities of complexes between the Farnesoid X nuclear receptor and 102 different ligands. The models obtained in Stage 1 with HADDOCK and a ligand-specific protocol show an average ligand RMSD of 5.1 Å from the crystal structure. Only 6/35 targets were within 2.5 Å RMSD from the reference, which prompted us to investigate the limiting factors and revise our protocol for Stage 2. The choice of the receptor conformation appeared to have the strongest influence on the results. Our Stage 2 models were of higher quality (13 out of 35 were within 2.5 Å), with an average RMSD of 4.1 Å. The docking protocol was applied to all 102 ligands to generate poses for binding affinity prediction. We developed a modified version of our contact-based binding affinity predictor PRODIGY, using the number of interatomic contacts classified by their type and the intermolecular electrostatic energy. This simple structure-based binding affinity predictor shows a Kendall's Tau correlation of 0.37 in ranking the ligands (7th best out of 77 methods, 5th out of 25 groups). These results were obtained from the average prediction over the top 10 poses, irrespective of their similarity/correctness, underscoring the robustness of our simple predictor. This results in an enrichment factor of 2.5 compared to a random predictor for ranking ligands within the top 25%, making it a promising approach to identify lead compounds in virtual screening.
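
    Kendall's Tau, the rank-correlation score quoted above, counts concordant minus discordant pairs over all pairs. A small self-contained example (toy numbers, unrelated to the challenge data; Tau-a without tie handling):

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's Tau-a: (concordant - discordant) / total pairs (no tie handling)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        # +1 for each concordant pair (i, j>i), -1 for each discordant pair
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return s / (n * (n - 1) / 2)

predicted = [1.2, 2.8, 2.1, 4.4]     # e.g. predicted affinities (toy)
measured = [1.0, 2.0, 3.0, 4.0]      # e.g. experimental affinities (toy)
tau = kendall_tau(predicted, measured)   # one discordant pair out of six
```

    Here five of the six pairs agree in ordering and one disagrees, giving tau = (5 - 1)/6 = 2/3.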

  2. Signatures of criticality arise from random subsampling in simple population models.

    PubMed

    Nonnenmacher, Marcel; Behrens, Christian; Berens, Philipp; Bethge, Matthias; Macke, Jakob H

    2017-10-01

    The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat, a measure of population statistics derived from thermodynamics, has been used to suggest that neural populations are optimized to operate at a "critical point". However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect "signatures of criticality", and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when correlated data are randomly subsampled during the analysis process, irrespective of the detailed structure or origin of the correlations. We also show how the characteristic shape of specific heat curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlations directly lead to more pronounced signatures. Thus, previous reports of thermodynamic criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy. We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.
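The subsampling argument above rests on a standard identity: for n identically correlated units with single-unit variance v and pairwise covariance c, the variance of the summed activity is n*v + n*(n-1)*c. If c does not shrink with n, the per-unit variance grows linearly with population size, which is the mechanism behind the diverging signature. A minimal numerical illustration (not the paper's maximum-entropy analysis):

```python
# Variance of the sum of n identically correlated units:
#   Var(sum) = n * v + n * (n - 1) * c
# With fixed pairwise covariance c, Var(sum)/n grows with n.
def population_variance(n, single_var, pair_cov):
    """Variance of the summed activity of n identically correlated units."""
    return n * single_var + n * (n - 1) * pair_cov

for n in [10, 100, 1000]:
    # Per-unit variance keeps growing because pair_cov is size-independent.
    print(n, population_variance(n, single_var=0.2, pair_cov=0.01) / n)
```

The toy values v = 0.2 and c = 0.01 are arbitrary; the point is only the scaling with n.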

  3. Five-Factor Model personality disorder prototypes: a review of their development, validity, and comparison to alternative approaches.

    PubMed

    Miller, Joshua D

    2012-12-01

    In this article, the development of Five-Factor Model (FFM) personality disorder (PD) prototypes for the assessment of DSM-IV PDs is reviewed, as well as subsequent procedures for scoring individuals' FFM data against these PD prototypes, including similarity scores and simple additive counts based on a quantitative prototype matching methodology. Both techniques, which yield very strongly correlated scores, demonstrate convergent and discriminant validity, and provide clinically useful information with regard to various forms of functioning. The techniques described here for use with FFM data are quite different from the prototype matching methods used elsewhere. © 2012 The Author. Journal of Personality © 2012, Wiley Periodicals, Inc.

  4. Enthalpy measurement of coal-derived liquids. Technical progress report, August-October 1982

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidnay, A.J.; Yesavage, V.F.

    The correlational effort on the coal syncrudes and model compounds has been proceeding along two fronts. The first involves experimental work on a correlating factor for association in the liquids, and the second involves an investigation of the modeling capabilities of cubic equations of state. The first area of investigation is the experimental measurement of a correlating factor for association in coal liquids. The procedure involves molecular weight measurement by freezing point depression. To facilitate these measurements, a simple Beckman freezing point depression apparatus is currently being modified to increase the accuracy, speed, and ease of measurement. The second area of effort has involved establishing a set of cubic equations of state which can adequately model the enthalpy departures of quinoline and m-cresol. To this end, a number of standard and association-specific equations of state have been tested against a database of previously measured enthalpy departures of m-cresol and quinoline. It has been found that these equations perform quantitatively poorly for m-cresol and quinoline. These problems are probably due to the highly polar nature of m-cresol and, to a lesser extent, quinoline, and to the poor quality of the critical parameters for quinoline.

  5. Revisiting node-based SIR models in complex networks with degree correlations

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Cao, Jinde; Alofi, Abdulaziz; AL-Mazrooei, Abdullah; Elaiw, Ahmed

    2015-11-01

    In this paper, we consider two growing networks which lead to degree-degree correlations between nearest neighbors in the network. When the network grows to a certain size, we introduce an SIR-like disease, such as pandemic influenza H1N1/09, to the population. Due to its rapid spread, the population size changes slowly, and thus the disease spreads on correlated networks of approximately fixed size. To predict the disease evolution on correlated networks, we first review two node-based SIR models incorporating degree correlations and an edge-based SIR model without degree correlations, and then compare the predictions of these models with stochastic SIR simulations. We find that the edge-based model, even without considering degree correlations, agrees much better with the stochastic SIR simulations in many respects than the node-based models that incorporate degree correlations. Moreover, simulation results show that for networks with positive correlation, the edge-based model provides a better upper bound on the cumulative incidence than the node-based SIR models, whereas for networks with negative correlation, it provides a lower bound on the cumulative incidence.
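As a point of reference for the comparison above, the kind of stochastic SIR simulation such deterministic models are validated against can be sketched in a few lines. This toy version uses a small Erdős-Rényi-style random graph and discrete time steps; the graph model, parameters, and population size are all illustrative, not the paper's growing correlated networks.

```python
# Toy discrete-time stochastic SIR outbreak on a random graph.
# beta: per-contact, per-step infection probability; gamma: recovery probability.
import random

def random_graph(n, p, rng):
    """Undirected Erdos-Renyi graph as an adjacency list."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def stochastic_sir(adj, beta, gamma, seed_node, rng):
    """Run one outbreak; return cumulative incidence (total ever infected)."""
    state = {i: "S" for i in adj}
    state[seed_node] = "I"
    infected = {seed_node}
    ever_infected = 1
    while infected:
        new_inf, recovered = set(), set()
        for i in infected:
            for j in adj[i]:
                if state[j] == "S" and rng.random() < beta:
                    new_inf.add(j)
            if rng.random() < gamma:
                recovered.add(i)
        for j in new_inf:
            state[j] = "I"
        for i in recovered:
            state[i] = "R"
        ever_infected += len(new_inf)
        infected = (infected - recovered) | new_inf
    return ever_infected

rng = random.Random(1)
g = random_graph(200, 0.03, rng)
sizes = [stochastic_sir(g, beta=0.2, gamma=0.5, seed_node=0, rng=rng) for _ in range(20)]
print(sum(sizes) / len(sizes))  # mean cumulative incidence over runs
```

Node- or edge-based deterministic models are then judged by how well they bound or track the distribution of such simulated cumulative incidences.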

  6. Investigation of the sound generation mechanisms for in-duct orifice plates.

    PubMed

    Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning

    2017-08-01

    Sound generation due to an orifice plate in a hard-walled flow duct, commonly used in air distribution systems (ADS) and flow meters, is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate, from which the surface pressure distribution and correlation length are deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plates are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law, and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.
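A velocity scaling exponent such as the U^3.9 dependence reported above is typically extracted by a least-squares fit in log-log space. A hedged sketch, using synthetic pressure levels generated to follow a known exponent rather than the paper's measurements:

```python
# Fit a power law p ~ U^n by ordinary least squares on log-transformed data.
import math

def scaling_exponent(velocities, levels):
    """Least-squares slope of log(level) vs log(velocity)."""
    xs = [math.log(u) for u in velocities]
    ys = [math.log(p) for p in levels]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

U = [10.0, 15.0, 20.0, 30.0]            # flow velocities (toy values)
p = [u ** 3.9 * 1e-4 for u in U]        # synthetic data following U^3.9
print(round(scaling_exponent(U, p), 2))
```

With noisy measured levels the recovered exponent would of course carry an uncertainty; the fit itself is unchanged.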

  7. Tracing the origin of azimuthal gluon correlations in the color glass condensate

    DOE PAGES

    Lappi, T.; Schenke, B.; Schlichting, S.; ...

    2016-01-11

    Here we examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v_n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the schemes. Lastly, we show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.

  8. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  9. Image analysis of pubic bone for age estimation in a computed tomography sample.

    PubMed

    López-Alcaraz, Manuel; González, Pedro Manuel Garamendi; Aguilera, Inmaculada Alemán; López, Miguel Botella

    2015-03-01

    Radiology has demonstrated great utility for age estimation, but most studies are based on metrical and morphological methods in order to build an identification profile. A simple image analysis-based method is presented, aimed at correlating the bony tissue ultrastructure with several variables obtained from the grey-level histogram (GLH) of computed tomography (CT) sagittal sections of the pubic symphysis surface and the pubic body, and relating them with age. The CT sample consisted of 169 hospital Digital Imaging and Communications in Medicine (DICOM) archives of known sex and age. The calculated multiple regression models showed a maximum R² of 0.533 for females and 0.726 for males, with high intra- and inter-observer agreement. The suggested method is considered useful not only for building an identification profile during virtopsy, but also for further studies aiming to quantify tissue ultrastructure characteristics without complex and expensive methods beyond image analysis.
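The general workflow described above can be sketched in two steps: extract grey-level-histogram (GLH) features from an image region, then regress a feature against age. The pixel values, the feature set, and the univariate fit below are toy stand-ins for the study's CT sections and multiple regression models.

```python
# GLH feature extraction plus a simple least-squares fit (toy data).
import math

def glh_features(pixels, n_levels=256):
    """Mean, variance, and Shannon entropy of the grey-level histogram."""
    hist = [0] * n_levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    probs = [h / total for h in hist]
    mean = sum(i * pr for i, pr in enumerate(probs))
    var = sum((i - mean) ** 2 * pr for i, pr in enumerate(probs))
    entropy = -sum(pr * math.log2(pr) for pr in probs if pr > 0)
    return mean, var, entropy

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

mean, var, entropy = glh_features([10, 10, 20, 30, 30, 30])  # toy grey levels
a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])          # toy feature vs age
print(round(mean, 1), round(a, 2))
```

The actual study combined several GLH variables in multiple regression; the single-feature fit here only illustrates the shape of the pipeline.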

  10. Ontology-aided feature correlation for multi-modal urban sensing

    NASA Astrophysics Data System (ADS)

    Misra, Archan; Lantra, Zaman; Jayarajah, Kasthuri

    2016-05-01

    The paper explores the use of correlation across features extracted from different sensing channels to help in urban situational understanding. We use real-world datasets to show how such correlation can improve the accuracy of detection of city-wide events by combining metadata analysis with image analysis of Instagram content. We demonstrate this through a case study on the Singapore Haze. We show that simple ontological relationships and reasoning can significantly help in automating such correlation-based understanding of transient urban events.

  11. A classical density functional theory of ionic liquids.

    PubMed

    Forsman, Jan; Woodward, Clifford E; Trulsson, Martin

    2011-04-28

    We present a simple, classical density functional approach to the study of simple models of room temperature ionic liquids. Dispersion attractions as well as ion correlation effects and excluded volume packing are taken into account. The oligomeric structure, common to many ionic liquid molecules, is handled by a polymer density functional treatment. The theory is evaluated by comparisons with simulations, with an emphasis on the differential capacitance, an experimentally measurable quantity of significant practical interest.

  12. Multitrait, random regression, or simple repeatability model in high-throughput phenotyping data improve genomic prediction for wheat grain yield

    USDA-ARS?s Scientific Manuscript database

    High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect s...

  13. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. 
We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.

  14. Sub-surface structure of La Soufrière of Guadeloupe lava dome deduced from a ground-based magnetic survey

    NASA Astrophysics Data System (ADS)

    Bouligand, Claire; Coutant, Olivier; Glen, Jonathan M. G.

    2016-07-01

    In this study, we present the analysis and interpretation of a new ground magnetic survey acquired at the Soufrière volcano on Guadeloupe Island. Observed short-wavelength magnetic anomalies are compared to those predicted assuming a constant magnetization within the sub-surface. The good correlation between modeled and observed data over the summit of the dome indicates that the shallow sub-surface displays relatively constant and high magnetization intensity. In contrast, the poor correlation at the base of the dome suggests that the underlying material is non- to weakly-magnetic, consistent with what is expected for a talus comprised of randomly oriented and highly altered and weathered boulders. The new survey also reveals a dipole anomaly that is not accounted for by a constant magnetization in the sub-surface and suggests the existence of material with decreased magnetization beneath the Soufrière lava dome. We construct simple models to constrain its dimensions and propose that this body corresponds to hydrothermally altered material within and below the dome. The very large inferred volume of such material may have implications for the stability of the dome.

  15. Complexity, Training Paradigm Design, and the Contribution of Memory Subsystems to Grammar Learning

    PubMed Central

    Ettlinger, Marc; Wong, Patrick C. M.

    2016-01-01

    Although there is variability in nonnative grammar learning outcomes, the contributions of training paradigm design and memory subsystems are not well understood. To examine this, we presented learners with an artificial grammar that formed words via simple and complex morphophonological rules. Across three experiments, we manipulated training paradigm design and measured subjects' declarative, procedural, and working memory subsystems. Experiment 1 demonstrated that passive, exposure-based training boosted learning of both simple and complex grammatical rules, relative to no training. Additionally, procedural memory correlated with simple rule learning, whereas declarative memory correlated with complex rule learning. Experiment 2 showed that presenting corrective feedback during the test phase did not improve learning. Experiment 3 revealed that structuring the order of training so that subjects are first exposed to the simple rule and then the complex improved learning. The cumulative findings shed light on the contributions of grammatical complexity, training paradigm design, and domain-general memory subsystems in determining grammar learning success. PMID:27391085

  16. Electrostatic potential jump across fast-mode collisionless shocks

    NASA Technical Reports Server (NTRS)

    Mandt, M. E.; Kan, J. R.

    1991-01-01

    The electrostatic potential jump across fast-mode collisionless shocks is examined by comparing published observations, hybrid simulations, and a simple model, in order to better characterize its dependence on the various shock parameters. In all three, the electrons are assumed to obey an isotropic power-law equation of state. The observations show that the cross-shock potential jump correlates well with the shock strength but shows very little correlation with other shock parameters. Under this equation of state, the correlation of the potential jump with the shock strength follows naturally from the increased shock compression and from an apparent dependence of the power-law exponent on the Mach number that the observations indicate. It is found that including a Mach-number dependence for the power-law exponent in the electron equation of state in the simple model produces a potential jump which better fits the observations. On the basis of the simulation results and theoretical estimates of the cross-shock potential, it is discussed how the cross-shock potential might be expected to depend on the other shock parameters.

  17. Predicting the ethanol potential of wheat straw using near-infrared spectroscopy and chemometrics: The challenge of inherently intercorrelated response functions

    DOE PAGES

    Rinnan, Asmund; Bruun, Sander; Lindedam, Jane; ...

    2017-02-07

    Here, the combination of NIR spectroscopy and chemometrics is a powerful correlation method for predicting the chemical constituents in biological matrices, such as the glucose and xylose content of straw. However, difficulties arise when it comes to predicting enzymatic glucose and xylose release potential, which is matrix dependent. Further complications are caused by xylose and glucose release potential being highly intercorrelated. This study emphasizes the importance of understanding the causal relationship between the model and the constituent of interest. It investigates the possibility of using near-infrared spectroscopy to evaluate the ethanol potential of wheat straw by analyzing more than 1000 samples from different wheat varieties and growth conditions. During the calibration model development, the prime emphasis was to investigate the correlation structure between the two major quality traits for saccharification of wheat straw: glucose and xylose release. The large sample set enabled a versatile and robust calibration model to be developed, showing that the prediction model for xylose release is based on a causal relationship with the NIR spectral data. In contrast, the prediction of glucose release was found to be highly dependent on the intercorrelation with xylose release. If this correlation is broken, the model performance breaks down. A simple method was devised for avoiding this breakdown and can be applied to any large dataset for investigating the causality or lack of causality of a prediction model.
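The intercorrelation problem described above can be made concrete: measure the correlation between the two response variables, then break it by shuffling one of them and observe that the correlation collapses (which is when a non-causal model's performance would break down). The glucose/xylose values here are synthetic, generated only to mimic a strong intercorrelation.

```python
# Pearson correlation between two responses, intact vs. deliberately broken.
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(0)
xylose = [rng.gauss(20, 2) for _ in range(200)]
glucose = [0.8 * x + rng.gauss(0, 0.5) for x in xylose]  # strongly intercorrelated

r_intact = pearson(glucose, xylose)
shuffled = glucose[:]
rng.shuffle(shuffled)                  # break the pairing, keep the marginals
r_broken = pearson(shuffled, xylose)
print(round(r_intact, 2), round(r_broken, 2))
```

A model that predicts glucose release only through its correlation with xylose release would track r_intact but fail once the pairing is broken, which is the diagnostic idea behind the simple method mentioned in the abstract.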

  18. Predicting the ethanol potential of wheat straw using near-infrared spectroscopy and chemometrics: The challenge of inherently intercorrelated response functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinnan, Asmund; Bruun, Sander; Lindedam, Jane

    Here, the combination of NIR spectroscopy and chemometrics is a powerful correlation method for predicting the chemical constituents in biological matrices, such as the glucose and xylose content of straw. However, difficulties arise when it comes to predicting enzymatic glucose and xylose release potential, which is matrix dependent. Further complications are caused by xylose and glucose release potential being highly intercorrelated. This study emphasizes the importance of understanding the causal relationship between the model and the constituent of interest. It investigates the possibility of using near-infrared spectroscopy to evaluate the ethanol potential of wheat straw by analyzing more than 1000 samples from different wheat varieties and growth conditions. During the calibration model development, the prime emphasis was to investigate the correlation structure between the two major quality traits for saccharification of wheat straw: glucose and xylose release. The large sample set enabled a versatile and robust calibration model to be developed, showing that the prediction model for xylose release is based on a causal relationship with the NIR spectral data. In contrast, the prediction of glucose release was found to be highly dependent on the intercorrelation with xylose release. If this correlation is broken, the model performance breaks down. A simple method was devised for avoiding this breakdown and can be applied to any large dataset for investigating the causality or lack of causality of a prediction model.

  19. The use of simple inflow- and storage-based heuristics equations to represent reservoir behavior in California for investigating human impacts on the water cycle

    NASA Astrophysics Data System (ADS)

    Solander, K.; David, C. H.; Reager, J. T.; Famiglietti, J. S.

    2013-12-01

    The ability to reasonably replicate reservoir behavior in terms of storage and outflow is important for studying the potential human impacts on the terrestrial water cycle. Developing a simple method for this purpose could facilitate subsequent integration into a land surface or global climate model. This study attempts to simulate monthly reservoir outflow and storage using a simple, temporally varying set of heuristic equations with input consisting of in situ records of reservoir inflow and storage. Equations of increasing complexity, in terms of the number of parameters involved, were tested. Only two parameters were employed in the final equations used to predict outflow and storage, in an attempt to best mimic seasonal reservoir behavior while still preserving model parsimony. California reservoirs were selected for model development due to the high level of data availability and the intensity of water resource management in this region relative to other areas. Calibration was achieved using observations from eight major reservoirs representing approximately 41% of the 107 largest reservoirs in the state. Parameter optimization was accomplished using the minimum RMSE between observed and modeled storage and outflow as the main objective function. The initial multi-reservoir average of the correlation coefficient between observed and modeled storage (resp. outflow) is 0.78 (resp. 0.75). These results, combined with the simplicity of the equations being used, show promise for integration into a land surface or a global climate model. This would be invaluable for evaluating the impacts of reservoir management on the flow regime and associated ecosystems, as well as on the climate at both regional and global scales.
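A two-parameter heuristic of the kind described above can be sketched as a release rule plus a mass balance. The abstract does not give the paper's actual equations, so the rule below (outflow = a * inflow + b * storage, capped by availability and capacity) and all numbers are hypothetical illustrations.

```python
# Toy two-parameter monthly reservoir heuristic: release rule + mass balance.
def simulate_reservoir(inflows, s0, capacity, a=0.6, b=0.1):
    """Return (storage, outflow) series from a sequence of monthly inflows."""
    storage, outflows = [s0], []
    for q_in in inflows:
        q_out = a * q_in + b * storage[-1]                   # heuristic release rule
        q_out = min(q_out, storage[-1] + q_in)               # cannot release more than available
        s_next = min(storage[-1] + q_in - q_out, capacity)   # mass balance, spill at capacity
        outflows.append(q_out)
        storage.append(s_next)
    return storage, outflows

storage, outflows = simulate_reservoir([50, 80, 30, 10], s0=100, capacity=200)
print([round(s) for s in storage])
```

Calibration would then adjust a and b to minimize the RMSE between these modeled series and the observed storage and outflow records, as described in the abstract.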

  20. Cosmological velocity correlations - Observations and model predictions

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Davis, Marc; Strauss, Michael A.; White, Simon D. M.; Yahil, Amos

    1989-01-01

    By applying simple statistics for two-point cosmological peculiar velocity-correlation measurements to the actual data sets of the Local Supercluster spiral galaxy sample of Aaronson et al. (1982) and the elliptical galaxy sample of Burstein et al. (1987), as well as to the velocity field predicted by the distribution of IRAS galaxies, a coherence length of 1100-1600 km/sec is obtained. Coherence length is defined as the separation at which the correlations drop to half their zero-lag value. These results are compared with predictions from two models of large-scale structure formation: the cold dark matter model and the baryon isocurvature model proposed by Peebles (1980). N-body simulations of these models are performed to check the linear theory predictions and to measure sampling fluctuations.
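The coherence-length definition used above (the separation at which correlations drop to half their zero-lag value) is straightforward to operationalize. A small sketch with synthetic correlation values, not the paper's measurements:

```python
# Find the separation where a correlation function first drops to half
# its zero-lag value, with linear interpolation between sampled points.
def coherence_length(separations, correlations):
    """First separation (interpolated) where the correlation reaches half its zero-lag value."""
    half = correlations[0] / 2.0
    for i in range(1, len(correlations)):
        if correlations[i] <= half:
            s0, s1 = separations[i - 1], separations[i]
            c0, c1 = correlations[i - 1], correlations[i]
            return s0 + (half - c0) * (s1 - s0) / (c1 - c0)
    return None  # never drops to half within the sampled range

seps = [0, 500, 1000, 1500, 2000]   # separations in km/s (toy grid)
xi = [1.0, 0.9, 0.7, 0.4, 0.2]      # toy normalized correlation values
print(coherence_length(seps, xi))
```

With these toy values the half point falls between 1000 and 1500 km/s, in the same spirit as the 1100-1600 km/s range quoted in the abstract.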

  1. Evaluation of left-right difference of impulse in impact forces at stance phase: comparison of measurements on flat land and stairs

    PubMed Central

    Tasato, Hiroshi; Kida, Noriyuki

    2018-01-01

    [Purpose] The purpose of this study was to investigate a measurement method and parameters for simply evaluating the condition of the knee, which is necessary for preventing locomotive syndrome as advocated by the Japan Orthopedic Association. [Subjects and Methods] Acceleration sensors were attached at the lateral condyles of the tibia, and acceleration and load were measured while subjects walked on flat ground and on stairs; the difference between the impulses of impact forces (acceleration × load) of the two knees was defined as a simple evaluation parameter. [Results] The simple evaluation parameter was not correlated with age during walking on flat ground. During stair walking, however, it remained almost flat up to the ages of 20–40 years, and after the age of 49 years a correlation of the simple evaluation parameter with age could be confirmed based on a quadratic curve approximation (R²=0.99). [Conclusion] The simple evaluation parameter during stair walking was highly correlated with age, suggesting a contribution to preventing locomotive syndrome. In the future, we plan to improve reliability by collecting more data, and to establish a simple evaluation parameter that can be used for preventing locomotive syndrome in elderly people and those with KL classification grades 0–1. PMID:29706699
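The simple evaluation parameter described above (the left-right difference of the impulse of impact forces, acceleration × load) can be sketched directly. How the original study integrates over the stance phase is not specified in the abstract; this toy version sums per-sample products.

```python
# Toy left-right impulse-difference parameter from per-sample sensor data.
def impulse(accelerations, loads):
    """Sum of per-sample acceleration x load products for one knee."""
    return sum(a * f for a, f in zip(accelerations, loads))

def left_right_difference(left_acc, left_load, right_acc, right_load):
    """Absolute left-right difference of the impulse of impact forces."""
    return abs(impulse(left_acc, left_load) - impulse(right_acc, right_load))

# Invented sensor samples: acceleration in g, load in N, three samples per knee.
diff = left_right_difference([1.2, 1.5, 0.9], [600, 650, 580],
                             [1.1, 1.4, 1.0], [590, 640, 585])
print(round(diff, 1))
```

In the study, this scalar per subject is what was plotted against age for the flat-ground and stair conditions.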

  2. Automated map sharpening by maximization of detail and connectivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the `adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.

  3. Automated map sharpening by maximization of detail and connectivity

    DOE PAGES

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...

    2018-05-18

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the `adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
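A simplified 2-D analogue can illustrate the two ingredients of the adjusted-surface-area idea: detail (the boundary size of an iso-contour set enclosing a fixed volume fraction) and connectivity (the number of connected regions of that set). This is a toy on a tiny grid, not the published 3-D algorithm.

```python
# Toy 2-D analogue: threshold a grid at a level enclosing a fixed fraction
# of points, then count boundary points (detail) and connected regions
# (connectivity) of the super-level set.
def threshold_for_fraction(grid, fraction):
    """Contour level such that `fraction` of points lie at or above it."""
    flat = sorted((v for row in grid for v in row), reverse=True)
    k = max(1, int(round(fraction * len(flat))))
    return flat[k - 1]

def boundary_and_regions(grid, level):
    """(#boundary points, #connected regions) of the set {value >= level}."""
    n, m = len(grid), len(grid[0])
    inside = [[grid[i][j] >= level for j in range(m)] for i in range(n)]
    nbrs = lambda i, j: [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= i + di < n and 0 <= j + dj < m]
    # Boundary: inside points with at least one outside 4-neighbor.
    boundary = sum(1 for i in range(n) for j in range(m) if inside[i][j]
                   and any(not inside[a][b] for a, b in nbrs(i, j)))
    # Connected regions via flood fill.
    seen, regions = set(), 0
    for i in range(n):
        for j in range(m):
            if inside[i][j] and (i, j) not in seen:
                regions += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if (a, b) in seen or not inside[a][b]:
                        continue
                    seen.add((a, b))
                    stack.extend(nbrs(a, b))
    return boundary, regions

grid = [[0, 0, 5, 0],
        [0, 0, 5, 0],
        [7, 0, 0, 0],
        [7, 7, 0, 0]]
level = threshold_for_fraction(grid, 0.3)
print(level, boundary_and_regions(grid, level))
```

Sharpening parameters would then be chosen to increase the boundary measure while keeping the region count low, the combination the paper wraps into its adjusted surface area.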

  4. A simple method to determine evaporation and compensate for liquid losses in small-scale cell culture systems.

    PubMed

    Wiegmann, Vincent; Martinez, Cristina Bernal; Baganz, Frank

    2018-04-24

Establish a method to indirectly measure evaporation in microwell-based cell culture systems and show that the proposed method allows compensating for liquid losses in fed-batch processes. A correlation between evaporation and the concentration of Na+ was found (R² = 0.95) when using the 24-well-based miniature bioreactor system (micro-Matrix) for a batch culture with GS-CHO. Based on these results, a method was developed to counteract evaporation with periodic water additions based on measurements of the Na+ concentration. Implementation of this method resulted in a reduction of the relative liquid loss after 15 days of a fed-batch cultivation from 36.7 ± 6.7% without volume corrections to 6.9 ± 6.5% with volume corrections. A procedure was established to indirectly measure evaporation through a correlation with the level of Na+ ions in solution and to derive a simple formula to account for liquid losses.
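The compensation rule follows from sodium conservation: if no Na+ is added or consumed, then Na0·V0 = Na_now·V_now, so the current volume can be inferred from the measured up-concentration and the water top-up computed directly. A minimal sketch (variable names are ours, not the authors'):

```python
def estimate_volume(v_initial_ml, na_initial_mM, na_now_mM):
    """Infer current liquid volume from Na+ up-concentration.

    Assumes Na+ is conserved (neither consumed by cells nor added by
    feeds), so na_initial * v_initial == na_now * v_now.
    """
    return v_initial_ml * na_initial_mM / na_now_mM

def water_to_add(v_initial_ml, na_initial_mM, na_now_mM):
    """Volume of water needed to restore the initial working volume."""
    return v_initial_ml - estimate_volume(v_initial_ml, na_initial_mM, na_now_mM)
```

For example, a well started at 2 mL whose Na+ reading rose from 100 mM to 125 mM has lost a fifth of its volume and needs 0.4 mL of water back.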

  5. Fragmentation modeling of a resin bonded sand

    NASA Astrophysics Data System (ADS)

    Hilth, William; Ryckelynck, David

    2017-06-01

Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, a rather simple generalized critical-state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model formulation considers a non-associated elasto-plastic formulation within the critical state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated considering a non-homogeneous 3D medium. The tomography of the compression sample gives access to 3D displacement fields by using image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of the correlations at low displacement magnitudes. We propose a recovery method that reconstructs 3D full displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.

  6. Investigation on the correlation between energy deposition and clustered DNA damage induced by low-energy electrons.

    PubMed

    Liu, Wei; Tan, Zhenyu; Zhang, Liming; Champion, Christophe

    2018-05-01

This study presents the correlation between energy deposition and clustered DNA damage, based on a Monte Carlo simulation of the spectrum of direct DNA damage induced by low-energy electrons including the dissociative electron attachment. Clustered DNA damage is classified as simple or complex in terms of the combination of single-strand breaks (SSBs) or double-strand breaks (DSBs) and adjacent base damage (BD). The results show that the energy depositions associated with about 90% of total clustered DNA damage are below 150 eV. The simple clustered DNA damage, which is constituted of the combination of SSBs and adjacent BD, is dominant, accounting for 90% of all clustered DNA damage, and the spectra of the energy depositions correlating with them are similar for different primary energies. One type of simple clustered DNA damage is the combination of an SSB and 1-5 BD, which is denoted as SSB + BD. The average contribution of SSB + BD to total simple clustered DNA damage reaches up to about 84% for the considered primary energies. In all forms of SSB + BD, the SSB + BD including only one base damage is dominant (above 80%). In addition, for the considered primary energies, there is no obvious difference between the average energy depositions for a fixed complexity of SSB + BD, determined by the number of base damages, but average energy depositions increase with the complexity of SSB + BD. In the complex clustered DNA damage constituted by the combination of DSBs and BD around them, a relatively simple type is a DSB combining adjacent BD, marked as DSB + BD, and it contributes substantially (on average about 82%). The spectrum of DSB + BD is given mainly by the DSB in combination with different numbers of base damages, from 1 to 5. For the considered primary energies, the DSB combined with only one base damage contributes about 83% of total DSB + BD, and the average energy deposition is about 106 eV. However, the energy deposition increases with the complexity of clustered DNA damage; therefore, clustered DNA damage of high complexity still needs to be considered in the study of radiation biological effects, in spite of its small contribution to all clustered DNA damage.

  7. The modelling of the flow-induced vibrations of periodic flat and axial-symmetric structures with a wave-based method

    NASA Astrophysics Data System (ADS)

    Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.

    2018-06-01

The stochastic response of periodic flat and axial-symmetric structures, subjected to random and spatially correlated loads, is analysed here through an approach based on the combination of a wave finite element method and a transfer matrix method. Despite its lower computational cost, the present approach retains the accuracy of classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies without increasing the computation time. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.

  8. A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS

    EPA Science Inventory

    We have produced a simple two-dimensional (ground-plan) cellular automata model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...

  9. Simple and Hierarchical Models for Stochastic Test Misgrading.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    1993-01-01

    Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)
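The simple Poisson model mentioned above has closed forms for each quantity discussed: the number of misgradings in time t is Poisson with mean λt, inter-occurrence times are exponential with mean 1/λ, and the waiting time for the n-th misgrading is Gamma-distributed with mean n/λ. A minimal sketch (the hierarchical Beta-Poisson extension is not shown):

```python
import math

def poisson_pmf(k, rate, t):
    """P(N = k) misgradings in time t under a homogeneous Poisson model."""
    mu = rate * t
    return math.exp(-mu) * mu ** k / math.factorial(k)

def expected_misgradings(rate, t):
    """Expected number of misgradings in time t: simply rate * t."""
    return rate * t

def mean_waiting_time(n, rate):
    """Mean waiting time until the n-th misgrading (Gamma(n, rate) mean)."""
    return n / rate
```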

  10. Length-scale crossover of the hydrophobic interaction in a coarse-grained water model

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2013-11-01

    It has been difficult to establish a clear connection between the hydrophobic interaction among small molecules typically studied in molecular simulations (a weak, oscillatory force) and that found between large, macroscopic surfaces in experiments (a strong, monotonic force). Here, we show that both types of interaction can emerge with a simple, core-softened water model that captures water's unique pairwise structure. As in hydrophobic hydration, we find that the hydrophobic interaction manifests a length-scale dependence, exhibiting distinct driving forces in the molecular and macroscopic regimes. Moreover, the ability of this simple model to capture both regimes suggests that several features of the hydrophobic force can be understood merely through water's pair correlations.

  11. Length-scale crossover of the hydrophobic interaction in a coarse-grained water model.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2013-11-01

    It has been difficult to establish a clear connection between the hydrophobic interaction among small molecules typically studied in molecular simulations (a weak, oscillatory force) and that found between large, macroscopic surfaces in experiments (a strong, monotonic force). Here, we show that both types of interaction can emerge with a simple, core-softened water model that captures water's unique pairwise structure. As in hydrophobic hydration, we find that the hydrophobic interaction manifests a length-scale dependence, exhibiting distinct driving forces in the molecular and macroscopic regimes. Moreover, the ability of this simple model to capture both regimes suggests that several features of the hydrophobic force can be understood merely through water's pair correlations.

  12. Real external predictivity of QSAR models: how to evaluate it? Comparison of different validation criteria and proposal of using the concordance correlation coefficient.

    PubMed

    Chirico, Nicola; Gramatica, Paola

    2011-09-26

    The main utility of QSAR models is their ability to predict activities/properties for new chemicals, and this external prediction ability is evaluated by means of various validation criteria. As a measure for such evaluation the OECD guidelines have proposed the predictive squared correlation coefficient Q(2)(F1) (Shi et al.). However, other validation criteria have been proposed by other authors: the Golbraikh-Tropsha method, r(2)(m) (Roy), Q(2)(F2) (Schüürmann et al.), Q(2)(F3) (Consonni et al.). In QSAR studies these measures are usually in accordance, though this is not always the case, thus doubts can arise when contradictory results are obtained. It is likely that none of the aforementioned criteria is the best in every situation, so a comparative study using simulated data sets is proposed here, using threshold values suggested by the proponents or those widely used in QSAR modeling. In addition, a different and simple external validation measure, the concordance correlation coefficient (CCC), is proposed and compared with other criteria. Huge data sets were used to study the general behavior of validation measures, and the concordance correlation coefficient was shown to be the most restrictive. On using simulated data sets of a more realistic size, it was found that CCC was broadly in agreement, about 96% of the time, with other validation measures in accepting models as predictive, and in almost all the examples it was the most precautionary. The proposed concordance correlation coefficient also works well on real data sets, where it seems to be more stable, and helps in making decisions when the validation measures are in conflict. Since it is conceptually simple, and given its stability and restrictiveness, we propose the concordance correlation coefficient as a complementary, or alternative, more prudent measure of a QSAR model to be externally predictive.
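The concordance correlation coefficient has a simple closed form: CCC = 2·cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))²), which penalizes both scatter and systematic shifts between observed and predicted values. The sketch below uses the population (1/n) variance convention, which is an assumption on our part:

```python
def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between observed x and
    predicted y. Equals 1 only for perfect agreement (y == x); unlike
    Pearson's r, it is reduced by location or scale shifts."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

A constant offset between predictions and observations leaves Pearson's r at 1 but lowers the CCC, which is why the CCC is the more restrictive validation measure.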

  13. Simulation of green roof runoff under different substrate depths and vegetation covers by coupling a simple conceptual and a physically based hydrological model.

    PubMed

    Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A

    2017-09-15

In spite of the well-known benefits of green roofs, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing the HYDRUS-1D software. This approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and the ability to be easily integrated in decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by the Nash-Sutcliffe efficiency index, which was generally greater than 0.70. Finally, it was showcased how a physically based and a simple conceptual model can be used jointly, allowing the simple conceptual model to be applied to a wider set of conditions than the available experimental data in order to support green roof design. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Pattern Adaptation and Normalization Reweighting.

    PubMed

    Westrick, Zachary M; Heeger, David J; Landy, Michael S

    2016-09-21

Adaptation to an oriented stimulus changes both the gain and preferred orientation of neural responses in V1. Neurons tuned near the adapted orientation are suppressed, and their preferred orientations shift away from the adapter. We propose a model in which weights of divisive normalization are dynamically adjusted to homeostatically maintain response products between pairs of neurons. We demonstrate that this adjustment can be performed by a very simple learning rule. Simulations of this model closely match existing data from visual adaptation experiments. We consider several alternative models, including variants based on homeostatic maintenance of response correlations or covariance, as well as feedforward gain-control models with multiple layers, and we demonstrate that homeostatic maintenance of response products provides the best account of the physiological data. Adaptation is a phenomenon throughout the nervous system in which neural tuning properties change in response to changes in environmental statistics. We developed a model of adaptation that combines normalization (in which a neuron's gain is reduced by the summed responses of its neighbors) and Hebbian learning (in which synaptic strength, in this case divisive normalization, is increased by correlated firing). The model is shown to account for several properties of adaptation in primary visual cortex in response to changes in the statistics of contour orientation. Copyright © 2016 the authors.

  15. Health belief model and reasoned action theory in predicting water saving behaviors in Yazd, Iran.

    PubMed

    Morowatisharifabad, Mohammad Ali; Momayyezi, Mahdieh; Ghaneian, Mohammad Taghi

    2012-01-01

People's behaviors and intentions about healthy behaviors depend on their beliefs, values, and knowledge about the issue. Various models of health education are used in determining predictors of different healthy behaviors, but their efficacy for cultural behaviors, such as water saving behaviors, has not been studied. The study was conducted to explain water saving behaviors in Yazd, Iran on the basis of the Health Belief Model and Reasoned Action Theory. The cross-sectional study used random cluster sampling to recruit 200 heads of households to collect the data. The survey questionnaire was tested for its content validity and reliability. Analysis of data included descriptive statistics, simple correlation, and hierarchical multiple regression. Simple correlations between water saving behaviors and Reasoned Action Theory and Health Belief Model constructs were statistically significant. Health Belief Model and Reasoned Action Theory constructs explained 20.80% and 8.40% of the variance in water saving behaviors, respectively. Perceived barriers were the strongest predictor. Additionally, there was a statistically positive correlation between water saving behaviors and intention. In designing interventions aimed at water waste prevention, barriers to water saving behaviors should be addressed first, followed by people's attitude towards water saving. Health Belief Model constructs, with the exception of perceived severity and benefits, are more powerful than Reasoned Action Theory in predicting water saving behavior and may be used as a framework for educational interventions aimed at improving water saving behaviors.

  16. Health Belief Model and Reasoned Action Theory in Predicting Water Saving Behaviors in Yazd, Iran

    PubMed Central

    Morowatisharifabad, Mohammad Ali; Momayyezi, Mahdieh; Ghaneian, Mohammad Taghi

    2012-01-01

Background: People's behaviors and intentions about healthy behaviors depend on their beliefs, values, and knowledge about the issue. Various models of health education are used in determining predictors of different healthy behaviors, but their efficacy for cultural behaviors, such as water saving behaviors, has not been studied. The study was conducted to explain water saving behaviors in Yazd, Iran on the basis of the Health Belief Model and Reasoned Action Theory. Methods: The cross-sectional study used random cluster sampling to recruit 200 heads of households to collect the data. The survey questionnaire was tested for its content validity and reliability. Analysis of data included descriptive statistics, simple correlation, and hierarchical multiple regression. Results: Simple correlations between water saving behaviors and Reasoned Action Theory and Health Belief Model constructs were statistically significant. Health Belief Model and Reasoned Action Theory constructs explained 20.80% and 8.40% of the variance in water saving behaviors, respectively. Perceived barriers were the strongest predictor. Additionally, there was a statistically positive correlation between water saving behaviors and intention. Conclusion: In designing interventions aimed at water waste prevention, barriers to water saving behaviors should be addressed first, followed by people's attitude towards water saving. Health Belief Model constructs, with the exception of perceived severity and benefits, are more powerful than Reasoned Action Theory in predicting water saving behavior and may be used as a framework for educational interventions aimed at improving water saving behaviors. PMID:24688927

  17. Critical space-time networks and geometric phase transitions from frustrated edge antiferromagnetism

    NASA Astrophysics Data System (ADS)

    Trugenberger, Carlo A.

    2015-12-01

Recently I proposed a simple dynamical network model for discrete space-time that self-organizes as a graph with Hausdorff dimension dH = 4. The model has a geometric quantum phase transition with disorder parameter (dH - ds), where ds is the spectral dimension of the dynamical graph. Self-organization in this network model is based on a competition between a ferromagnetic Ising model for vertices and an antiferromagnetic Ising model for edges. In this paper I solve a toy version of this model defined on a bipartite graph in the mean-field approximation. I show that the geometric phase transition corresponds exactly to the antiferromagnetic transition for edges, the dimensional disorder parameter of the former being mapped to the staggered magnetization order parameter of the latter. The model has a critical point with long-range correlations between edges, where a continuum random geometry can be defined, exactly as in Kazakov's famed 2D random lattice Ising model but now in any number of dimensions.
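In a mean-field treatment on a bipartite graph, the staggered magnetization obeys the same self-consistency equation as a ferromagnetic mean-field order parameter, m = tanh(βJz·m), vanishing below the critical point βJz = 1 and growing continuously above it. The fixed-point sketch below is a generic mean-field illustration, not the paper's specific Hamiltonian:

```python
import math

def staggered_magnetization(beta_Jz, m0=0.5, tol=1e-12, max_iter=10000):
    """Solve m = tanh(beta_Jz * m) by fixed-point iteration.

    beta_Jz bundles inverse temperature, coupling, and coordination number.
    The solution is m = 0 for beta_Jz <= 1 (disordered phase) and nonzero
    above the critical point beta_Jz = 1 (antiferromagnetically ordered).
    """
    m = m0
    for _ in range(max_iter):
        m_new = math.tanh(beta_Jz * m)
        if abs(m_new - m) < tol:
            m = m_new
            break
        m = m_new
    return m
```

In the paper's mapping, this order parameter plays the role of the dimensional disorder parameter of the geometric transition.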

  18. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
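The derivation argued for here is easy to make concrete. Given a linear error model y = a + b·x + ε, with ε zero-mean noise of variance σ² independent of the truth x, the three common metrics follow from (a, b, σ) plus the first two moments of x. The algebra below is standard; the variable names are ours:

```python
def metrics_from_error_model(a, b, sigma, mean_x, var_x):
    """Derive bias, MSE, and linear correlation from a linear error model.

    Model: measurement y = a + b*x + eps, eps ~ (0, sigma**2), independent
    of the truth x with moments (mean_x, var_x).
      bias = E[y - x] = a + (b - 1)*mean_x
      MSE  = bias**2 + (b - 1)**2 * var_x + sigma**2
      corr = b*sqrt(var_x) / sqrt(b**2 * var_x + sigma**2)
    """
    bias = a + (b - 1.0) * mean_x
    mse = bias ** 2 + (b - 1.0) ** 2 * var_x + sigma ** 2
    corr = b * var_x ** 0.5 / (b ** 2 * var_x + sigma ** 2) ** 0.5
    return bias, mse, corr
```

This direction only works for linear errors, which is the paper's point: the three metrics compress (a, b, σ) with the moments of x, whereas the error model itself keeps the full information.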

  19. Rapid determination of Swiss cheese composition by Fourier transform infrared/attenuated total reflectance spectroscopy.

    PubMed

    Rodriguez-Saona, L E; Koca, N; Harper, W J; Alvarez, V B

    2006-05-01

There is a need for rapid and simple techniques that can be used to predict the quality of cheese. The aim of this research was to develop a simple and rapid screening tool for monitoring Swiss cheese composition by using Fourier transform infrared spectroscopy. Twenty Swiss cheese samples from different manufacturers and of different degrees of maturity were evaluated. Direct measurements of Swiss cheese slices (approximately 0.5 g) were made using a MIRacle 3-reflection diamond attenuated total reflectance (ATR) accessory. Reference methods for moisture (vacuum oven), protein content (Kjeldahl), and fat (Babcock) were used. Calibration models were developed based on a cross-validated (leave-one-out approach) partial least squares regression. The information-rich infrared spectral range for Swiss cheese samples was from 3,000 to 2,800 cm(-1) and 1,800 to 900 cm(-1). The performance statistics for cross-validated models gave estimates for the standard error of cross-validation of 0.45, 0.25, and 0.21% for moisture, protein, and fat, respectively, and correlation coefficients r > 0.96. Furthermore, the ATR infrared protocol allowed for the classification of cheeses according to manufacturer and aging based on unique spectral information, especially of carbonyl groups, probably due to their distinctive lipid composition. Attenuated total reflectance infrared spectroscopy allowed for the rapid (approximately 3-min analysis time) and accurate analysis of the composition of Swiss cheese. This technique could contribute to the development of simple and rapid protocols for monitoring complex biochemical changes, and predicting the final quality of the cheese.
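The calibration quality reported here is summarized by the standard error of cross-validation (SECV), the root-mean-square residual over leave-one-out predictions. A minimal sketch, assuming the common 1/n convention (some texts divide by n - 1):

```python
def secv(reference, predicted):
    """Standard error of cross-validation.

    `predicted[i]` is the model's prediction for sample i made while that
    sample was held out of the calibration (leave-one-out). Division by n
    is an assumed convention.
    """
    n = len(reference)
    return (sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n) ** 0.5
```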

  20. Temporal Precision of Neuronal Information in a Rapid Perceptual Judgment

    PubMed Central

    Ghose, Geoffrey M.; Harrison, Ian T.

    2009-01-01

    In many situations, such as pedestrians crossing a busy street or prey evading predators, rapid decisions based on limited perceptual information are critical for survival. The brevity of these perceptual judgments constrains how neuronal signals are integrated or pooled over time because the underlying sequence of processes, from sensation to perceptual evaluation to motor planning and execution, all occur within several hundred milliseconds. Because most previous physiological studies of these processes have relied on tasks requiring considerably longer temporal integration, the neuronal basis of such rapid decisions remains largely unexplored. In this study, we examine the temporal precision of neuronal activity associated with a rapid perceptual judgment. We find that the activity of individual neurons over tens of milliseconds can reliably convey information about sensory events and was well correlated with the animals' judgments. There was a strong correlation between sensory reliability and the correlation with behavioral choice, suggesting that rapid decisions were preferentially based on the most reliable sensory signals. We also find that a simple model in which the responses of a small number of individual neurons (<5) are summed can completely explain behavioral performance. These results suggest that neuronal circuits are sufficiently precise to allow for cognitive decisions to be based on small numbers of action potentials from highly reliable neurons. PMID:19109454

  1. Temporal precision of neuronal information in a rapid perceptual judgment.

    PubMed

    Ghose, Geoffrey M; Harrison, Ian T

    2009-03-01

    In many situations, such as pedestrians crossing a busy street or prey evading predators, rapid decisions based on limited perceptual information are critical for survival. The brevity of these perceptual judgments constrains how neuronal signals are integrated or pooled over time because the underlying sequence of processes, from sensation to perceptual evaluation to motor planning and execution, all occur within several hundred milliseconds. Because most previous physiological studies of these processes have relied on tasks requiring considerably longer temporal integration, the neuronal basis of such rapid decisions remains largely unexplored. In this study, we examine the temporal precision of neuronal activity associated with a rapid perceptual judgment. We find that the activity of individual neurons over tens of milliseconds can reliably convey information about sensory events and was well correlated with the animals' judgments. There was a strong correlation between sensory reliability and the correlation with behavioral choice, suggesting that rapid decisions were preferentially based on the most reliable sensory signals. We also find that a simple model in which the responses of a small number of individual neurons (<5) are summed can completely explain behavioral performance. These results suggest that neuronal circuits are sufficiently precise to allow for cognitive decisions to be based on small numbers of action potentials from highly reliable neurons.

  2. Performance of wind turbines in a turbulent atmosphere

    NASA Technical Reports Server (NTRS)

    Sundar, R. M.; Sullivan, J. P.

    1981-01-01

The effect of atmospheric turbulence on the power fluctuations of large wind turbines was studied. The significance of spatial non-uniformities of the wind is emphasized. The turbulent wind, with correlation in time and space, is simulated on the computer by Shinozuka's method. The wind turbulence is modelled according to the Davenport spectrum with an exponential spatial correlation function. The rotor aerodynamics is modelled by simple blade element theory. Comparison of the power output spectrum between 1-D and 3-D turbulence shows significant power fluctuations centered around the blade passage frequency.

  3. Computational modeling of in vitro biological responses on polymethacrylate surfaces

    PubMed Central

    Ghosh, Jayeeta; Lewitus, Dan Y; Chandra, Prafulla; Joy, Abraham; Bushman, Jared; Knight, Doyle; Kohn, Joachim

    2011-01-01

The objective of this research was to examine the capabilities of QSPR (Quantitative Structure Property Relationship) modeling to predict specific biological responses (fibrinogen adsorption, cell attachment and cell proliferation index) on thin films of different polymethacrylates. Using 33 commercially available monomers it is theoretically possible to construct a library of over 40,000 distinct polymer compositions. A subset of these polymers were synthesized, and solvent-cast surfaces were prepared in 96-well plates for the measurement of fibrinogen adsorption. NIH 3T3 cell attachment and proliferation index were measured on spin-coated thin films of these polymers. Based on the experimental results for these polymers, separate models were built for homo-, co-, and terpolymers in the library, with good correlation between experimental and predicted values. The ability to predict biological responses with simple QSPR models for large numbers of polymers has important implications for designing biomaterials for specific biological or medical applications. PMID:21779132

  4. Measurement error: Implications for diagnosis and discrepancy models of developmental dyslexia.

    PubMed

    Cotton, Sue M; Crewther, David P; Crewther, Sheila G

    2005-08-01

    The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and standard error of measurement (SEM) in classic test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid with the interpretation of a simple discrepancy-based formula of DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects reproducibility and generalizability of findings. This in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia.

  5. Microeconomics of yield learning and process control in semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Monahan, Kevin M.

    2003-06-01

Simple microeconomic models that directly link yield learning to profitability in semiconductor manufacturing have been rare or non-existent. In this work, we review such a model and provide links to inspection capability and cost. Using a small number of input parameters, we explain current yield management practices in 200mm factories. The model is then used to extrapolate requirements for 300mm factories, including the impact of technology transitions to 130nm design rules and below. We show that the dramatic increase in value per wafer at the 300mm transition becomes a driver for increasing metrology and inspection capability and sampling. These analyses correlate well with actual factory data and often identify millions of dollars in potential cost savings. We demonstrate this using the example of grating-based overlay metrology for the 65nm node.

  6. Hard X-ray emission from accretion shocks around galaxy clusters

    NASA Astrophysics Data System (ADS)

    Kushnir, Doron; Waxman, Eli

    2010-02-01

We show that the hard X-ray (HXR) emission observed from several galaxy clusters is consistent with a simple model, in which the nonthermal emission is produced by inverse Compton scattering of cosmic microwave background photons by electrons accelerated in cluster accretion shocks: The dependence of HXR surface brightness on cluster temperature is consistent with that predicted by the model, and the observed HXR luminosity is consistent with the fraction of shock thermal energy deposited in relativistic electrons being ≲ 0.1. Alternative models, where the HXR emission is predicted to be correlated with the cluster thermal emission, are disfavored by the data. The implications of our predictions to future HXR observations (e.g. by NuStar, Simbol-X) and to (space/ground based) γ-ray observations (e.g. by Fermi, HESS, MAGIC, VERITAS) are discussed.

  7. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception

    PubMed Central

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term “cross-matching,” represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. 
We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth perception. PMID:25360107
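
    The cross-matching operation, half-wave rectification placed inside the binocular cross-correlation, is compact enough to sketch directly. The following is a minimal 1-D illustration (the stimulus, window size, and shift handling are our assumptions, not the authors' experimental stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)

def disparity_tuning(left, right, disparities, matching=False):
    """Tuning curve from the mean binocular product at each candidate disparity.
    matching=False gives plain cross-correlation; matching=True half-wave
    rectifies the product first ("cross-matching")."""
    curve = []
    for d in disparities:
        prod = left * np.roll(right, -d)      # align the right-eye image at disparity d
        if matching:
            prod = np.maximum(prod, 0.0)      # half-wave rectification
        curve.append(prod.mean())
    return np.array(curve)

# random-dot pattern with contrast +/-1; the right eye sees it shifted by 5 samples
dots = rng.choice([-1.0, 1.0], size=2000)
true_disp = 5
left = dots
right = np.roll(left, true_disp)
disparities = np.arange(-10, 11)

cc_corr = disparity_tuning(left, right, disparities)                   # correlated RDS
cc_anti = disparity_tuning(left, -right, disparities)                  # anticorrelated RDS
cm_corr = disparity_tuning(left, right, disparities, matching=True)
cm_anti = disparity_tuning(left, -right, disparities, matching=True)
```

    With correlated dots, both read-outs peak at the true disparity; with anticorrelated dots, the cross-matching tuning is attenuated relative to plain cross-correlation, as described above.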

  8. Validation of PC-based Sound Card with Biopac for Digitalization of ECG Recording in Short-term HRV Analysis.

    PubMed

    Maheshkumar, K; Dilara, K; Maruthy, K N; Sundareswaren, L

    2016-07-01

    Heart rate variability (HRV) analysis is a simple and noninvasive technique capable of assessing autonomic nervous system modulation of heart rate (HR) in healthy as well as disease conditions. The aim of the present study was to compare (validate) the HRV using a temporal series of electrocardiograms (ECG) obtained by a simple analog amplifier with PC-based sound card (Audacity) and by the Biopac MP36 module. Based on the inclusion criteria, 120 healthy participants, including 72 males and 48 females, participated in the present study. Following the standard protocol, 5-min ECG was recorded after 10 min of supine rest, simultaneously by the portable simple analog amplifier with PC-based sound card and by the Biopac module, with surface electrodes in the Lead II position. All ECG data were visually screened and found to be free of ectopic beats and noise. RR intervals from both ECG recordings were analyzed separately in the Kubios software. Short-term HRV indices in both the time and frequency domains were used. The unpaired Student's t-test and Pearson correlation coefficient test were used for the analysis, using the R statistical software. No statistically significant differences were observed when comparing the values analyzed by means of the two devices for HRV. Correlation analysis revealed a near-perfect positive correlation (r = 0.99, P < 0.001) between the values in the time and frequency domains obtained by the two devices. On the basis of the results of the present study, we suggest that the calculation of HRV values in the time and frequency domains from the RR series obtained with the PC-based sound card is probably as reliable as that obtained with the gold-standard Biopac MP36.
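
    As a sketch of the comparison pipeline, the time-domain side (SDNN, RMSSD, and a Pearson correlation between the two devices' RR series) can be reproduced on synthetic RR intervals; the amplitudes and jitter below are illustrative choices, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

def time_domain_hrv(rr_ms):
    """Standard short-term time-domain indices from an RR-interval series (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat (vagally mediated) variability
    return sdnn, rmssd

# hypothetical 5-min recording: ~300 beats around 800 ms with respiratory modulation
n = 300
rr_ref = 800 + 40 * np.sin(2 * np.pi * np.arange(n) / 15) + rng.normal(0, 10, n)
# a second device digitizing the same ECG differs only by small timing jitter
rr_soundcard = rr_ref + rng.normal(0, 1, n)

sdnn_ref, rmssd_ref = time_domain_hrv(rr_ref)
sdnn_sc, rmssd_sc = time_domain_hrv(rr_soundcard)
r = np.corrcoef(rr_ref, rr_soundcard)[0, 1]
```

    When the two RR series differ only by millisecond-scale jitter, the derived indices agree closely and the correlation between series approaches 1, which is the pattern the study reports.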

  9. Supply based on demand dynamical model

    NASA Astrophysics Data System (ADS)

    Levi, Asaf; Sabuco, Juan; Sanjuán, Miguel A. F.

    2018-04-01

    We propose and numerically analyze a simple dynamical model that describes the firm behaviors under uncertainty of demand. Iterating this simple model and varying some parameter values, we observe a wide variety of market dynamics such as equilibria, periodic, and chaotic behaviors. Interestingly, the model is also able to reproduce market collapses.
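
    The abstract does not reproduce the model equations, so the qualitative point, that iterating a simple one-parameter map can produce equilibria, periodic orbits, and chaos, is illustrated below with the logistic map as a stand-in (our choice, not the authors' model):

```python
import numpy as np

def iterate_map(f, x0, r, n_transient=500, n_keep=100):
    """Iterate x_{t+1} = f(x_t; r), discard transients, return samples on the attractor."""
    x = x0
    for _ in range(n_transient):
        x = f(x, r)
    out = []
    for _ in range(n_keep):
        x = f(x, r)
        out.append(x)
    return np.array(out)

logistic = lambda x, r: r * x * (1 - x)   # stand-in one-parameter map

fixed = iterate_map(logistic, 0.2, 2.8)   # single equilibrium
cycle = iterate_map(logistic, 0.2, 3.2)   # period-2 oscillation
chaos = iterate_map(logistic, 0.2, 3.9)   # chaotic regime
```

    Counting the distinct values visited after transients distinguishes the three regimes: one value at equilibrium, two on the period-2 orbit, and many in the chaotic regime.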

  10. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    PubMed Central

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948

  11. Statistical analysis of co-occurrence patterns in microbial presence-absence datasets

    PubMed Central

    Bewick, Sharon; Thielen, Peter; Mehoke, Thomas; Breitwieser, Florian P.; Paudel, Shishir; Adhikari, Arjun; Wolfe, Joshua; Slud, Eric V.; Karig, David; Fagan, William F.

    2017-01-01

    Drawing on a long history in macroecology, correlation analysis of microbiome datasets is becoming a common practice for identifying relationships or shared ecological niches among bacterial taxa. However, many of the statistical issues that plague such analyses in macroscale communities remain unresolved for microbial communities. Here, we discuss problems in the analysis of microbial species correlations based on presence-absence data. We focus on presence-absence data because this information is more readily obtainable from sequencing studies, especially for whole-genome sequencing, where abundance estimation is still in its infancy. First, we show how Pearson's correlation coefficient (r) and Jaccard's index (J), two of the most common metrics for correlation analysis of presence-absence data, can contradict each other when applied to a typical microbiome dataset. In our dataset, for example, 14% of species-pairs predicted to be significantly correlated by r were not predicted to be significantly correlated using J, while 37.4% of species-pairs predicted to be significantly correlated by J were not predicted to be significantly correlated using r. Mismatch was particularly common among species-pairs with at least one rare species (<10% prevalence), explaining why r and J might differ more strongly in microbiome datasets, where there are large numbers of rare taxa. Indeed, 74% of all species-pairs in our study had at least one rare species. Next, we show how Pearson's correlation coefficient can result in artificial inflation of positive taxon relationships and how this is a particular problem for microbiome studies. We then illustrate how Jaccard's index of similarity (J) can yield improvements over Pearson's correlation coefficient. However, the standard null model for Jaccard's index is flawed, and thus introduces its own set of spurious conclusions. 
We thus identify a better null model based on a hypergeometric distribution, which appropriately corrects for species prevalence. This model is available from recent statistics literature, and can be used for evaluating the significance of any value of an empirically observed Jaccard’s index. The resulting simple, yet effective method for handling correlation analysis of microbial presence-absence datasets provides a robust means of testing and finding relationships and/or shared environmental responses among microbial taxa. PMID:29145425
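
    The hypergeometric null model is simple to apply in practice: conditioning on each species' prevalence, the number of co-occurrences across n sites follows a hypergeometric distribution under independence. A minimal sketch on a toy presence-absence pair (our own variable names and data):

```python
import numpy as np
from scipy.stats import hypergeom

def jaccard(a, b):
    """Jaccard index of two presence-absence vectors (1 = present)."""
    both = int(np.sum((a == 1) & (b == 1)))
    either = int(np.sum((a == 1) | (b == 1)))
    return both / either if either else 0.0

def cooccurrence_pvalue(a, b):
    """P(co-occurrence count >= observed) under the hypergeometric null,
    which conditions on each species' prevalence (sites occupied)."""
    n_sites = len(a)
    k1, k2 = int(np.sum(a)), int(np.sum(b))
    obs = int(np.sum((a == 1) & (b == 1)))
    # co-occurrences ~ Hypergeom(population n_sites, successes k1, draws k2)
    return hypergeom.sf(obs - 1, n_sites, k1, k2)

# toy data: 10 sites; species A occupies 3, species B occupies 2, overlapping in 2
a = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
b = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])

j = jaccard(a, b)              # 2 shared sites / 3 occupied by either = 2/3
p = cooccurrence_pvalue(a, b)  # P(X >= 2) = C(3,2)C(7,0)/C(10,2) = 3/45
```

    The survival function gives a one-sided significance for positive association; negative association can be tested analogously with the cdf.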

  12. Heterogeneity of Purkinje cell simple spike-complex spike interactions: zebrin- and non-zebrin-related variations.

    PubMed

    Tang, Tianyu; Xiao, Jianqiang; Suh, Colleen Y; Burroughs, Amelia; Cerminara, Nadia L; Jia, Linjia; Marshall, Sarah P; Wise, Andrew K; Apps, Richard; Sugihara, Izumi; Lang, Eric J

    2017-08-01

    Cerebellar Purkinje cells (PCs) generate two types of action potentials, simple and complex spikes. Although they are generated by distinct mechanisms, interactions between the two spike types exist. Zebrin staining produces alternating positive and negative stripes of PCs across most of the cerebellar cortex. Thus, here we compared simple spike-complex spike interactions both within and across zebrin populations. Simple spike activity undergoes a complex modulation preceding and following a complex spike. The amplitudes of the pre- and post-complex spike modulation phases were correlated across PCs. On average, the modulation was larger for PCs in zebrin positive regions. Correlations between aspects of the complex spike waveform and simple spike activity were found, some of which varied between zebrin positive and negative PCs. The implications of the results are discussed with regard to hypotheses that complex spikes are triggered by rises in simple spike activity for either motor learning or homeostatic functions. Purkinje cells (PCs) generate two types of action potentials, called simple and complex spikes (SSs and CSs). We first investigated the CS-associated modulation of SS activity and its relationship to the zebrin status of the PC. The modulation pattern consisted of a pre-CS rise in SS activity, and then, following the CS, a pause, a rebound, and finally a late inhibition of SS activity for both zebrin positive (Z+) and negative (Z-) cells, though the amplitudes of the phases were larger in Z+ cells. Moreover, the amplitudes of the pre-CS rise and the late inhibitory phase of the modulation were correlated across PCs. In contrast, correlations between modulation phases across CSs of individual PCs were generally weak. Next, the relationship between CS spikelets and SS activity was investigated. The number of spikelets/CS correlated with the average SS firing rate only for Z+ cells. 
In contrast, correlations across CSs between spikelet numbers and the amplitudes of the SS modulation phases were generally weak. Division of spikelets into likely axonally propagated and non-propagated groups (based on their interspikelet interval) showed that the correlation of spikelet number with SS firing rate primarily reflected a relationship with non-propagated spikelets. In sum, the results show both zebrin-related and non-zebrin-related physiological heterogeneity in SS-CS interactions among PCs, which suggests that the cerebellar cortex is more functionally diverse than is assumed by standard theories of cerebellar function. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.

  13. Applications of statistical and atomic physics to the spectral line broadening and stock markets

    NASA Astrophysics Data System (ADS)

    Volodko, Dmitriy

    The purpose of this investigation is twofold: to apply time-correlation-function methodology to the theoretical study of the shift of hydrogen and hydrogen-like spectral lines caused by the interaction of electrons and ions with the spectral line emitters (the dipole ionic-electronic shift, DIES), and to describe the behavior of the stock market in terms of a simple physical model simulation that obeys a Lévy statistical distribution, the same distribution followed by the real stock-market index. Using the Generalized Theory of Stark broadening of electrons in plasma, we discovered a new source of the shift of hydrogen and hydrogen-like spectral lines that we called the dipole ionic-electronic shift (DIES). This shift results from the indirect coupling of electron and ion microfields in plasmas, which is facilitated by the radiating atom/ion. We have shown that the DIES, unlike all previously known shifts, is highly nonlinear and has a different sign for different ranges of plasma parameters. The most favorable conditions for observing the DIES correspond to plasmas of high density but relatively low temperature. For the Balmer-alpha line of hydrogen under the most favorable observational conditions, Ne > 10¹⁸ cm⁻³ and T < 2 eV, the DIES has already been confirmed experimentally. Based on the study of the time correlations and of the probability distribution of fluctuations in the stock market, we developed a relatively simple physical model, which simulates the Dow Jones Industrials index and makes short-term (a couple of days) predictions of its trend.

  14. A Simple Demonstration of Concrete Structural Health Monitoring Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahadevan, Sankaran; Agarwal, Vivek; Cai, Guowei

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This ongoing research project is seeking to develop a probabilistic framework for health diagnosis and prognosis of aging concrete structures in a nuclear power plant subjected to physical, chemical, environmental, and mechanical degradation. The proposed framework consists of four elements: damage modeling, monitoring, data analytics, and uncertainty quantification. This report describes a proof-of-concept example on a small concrete slab subjected to a freeze-thaw experiment that explores techniques in each of the four elements of the framework and their integration. An experimental set-up at Vanderbilt University’s Laboratory for Systems Integrity and Reliability is used to research an effective combination of full-field techniques that include infrared thermography, digital image correlation, and ultrasonic measurement. The measured data are linked to the probabilistic framework: the thermography, digital image correlation data, and ultrasonic measurement data are used for Bayesian calibration of model parameters, for diagnosis of damage, and for prognosis of future damage. The proof-of-concept demonstration presented in this report highlights the significance of each element of the framework and their integration.

  15. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  16. The structure of molten CuCl: Reverse Monte Carlo modeling with high-energy X-ray diffraction data and molecular dynamics of a polarizable ion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alcaraz, Olga; Trullàs, Joaquim, E-mail: quim.trullas@upc.edu; Tahara, Shuta

    2016-09-07

    The results of the structural properties of molten copper chloride are reported from high-energy X-ray diffraction measurements, reverse Monte Carlo modeling method, and molecular dynamics simulations using a polarizable ion model. The simulated X-ray structure factor reproduces all trends observed experimentally, in particular the shoulder at around 1 Å⁻¹ related to intermediate range ordering, as well as the partial copper-copper correlations from the reverse Monte Carlo modeling, which cannot be reproduced by using a simple rigid ion model. It is shown that the shoulder comes from intermediate range copper-copper correlations caused by the polarized chlorides.

  17. On the interpretation of domain averaged Fermi hole analyses of correlated wavefunctions.

    PubMed

    Francisco, E; Martín Pendás, A; Costales, Aurora

    2014-03-14

    Few methods allow for a physically sound analysis of chemical bonds in cases where electron correlation may be a relevant factor. The domain averaged Fermi hole (DAFH) analysis, a tool first proposed by Robert Ponec in the 1990s to provide interpretations of the chemical bonding existing between two fragments Ω and Ω' that divide the real space exhaustively, is one of them. This method allows for a partition of the delocalization index or bond order between Ω and Ω' into one electron contributions, but the chemical interpretation of its parameters has been firmly established only for single determinant wavefunctions. In this paper we report a general interpretation based on the concept of excluded density that is also valid for correlated descriptions. Both analytical models and actual computations on a set of simple molecules (H2, N2, LiH, and CO) are discussed, and a classification of the possible DAFH situations is presented. Our results show that this kind of analysis may reveal several correlation-assisted bonding patterns that might be difficult to detect using other methods. In agreement with previous knowledge, we find that the effective bond order in covalent links decreases due to localization of electrons driven by Coulomb correlation.

  18. Multivariate Generalizations of Student's t-Distribution. ONR Technical Report. [Biometric Lab Report No. 90-3.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…

  19. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.

  20. Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions

    NASA Astrophysics Data System (ADS)

    Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.

    2001-12-01

    The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the trace-species in situ composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation System (STS) rocket plume as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species depositions calculated using an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetic scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the plume species evolution, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging, capturing the observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.

  1. A two-dimensional model of water: Theory and computer simulations

    NASA Astrophysics Data System (ADS)

    Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Southall, N. T.; Dill, K. A.

    2000-02-01

    We develop an analytical theory for a simple model of liquid water. We apply Wertheim's thermodynamic perturbation theory (TPT) and integral equation theory (IET) for associative liquids to the MB model, which is among the simplest models of water. Water molecules are modeled as 2-dimensional Lennard-Jones disks with three hydrogen bonding arms arranged symmetrically, resembling the Mercedes-Benz (MB) logo. The MB model qualitatively predicts both the anomalous properties of pure water and the anomalous solvation thermodynamics of nonpolar molecules. IET is based on the orientationally averaged version of the Ornstein-Zernike equation. This is one of the main approximations in the present work. IET correctly predicts the pair correlation function of the model water at high temperatures. Both TPT and IET are in semi-quantitative agreement with the Monte Carlo values of the molar volume, isothermal compressibility, thermal expansion coefficient, and heat capacity. A major advantage of these theories is that they require orders of magnitude less computer time than the Monte Carlo simulations.

  2. A Mathematical Model of a Simple Amplifier Using a Ferroelectric Transistor

    NASA Technical Reports Server (NTRS)

    Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.

    2009-01-01

    This paper presents a mathematical model characterizing the behavior of a simple amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the amplifier is the basis of many circuit configurations, a mathematical model that describes the behavior of a FeFET-based amplifier will help in the integration of FeFETs into many other circuits.

  3. Applying 3-PG, a simple process-based model designed to produce practical results, to data from loblolly pine experiments

    Treesearch

    Joe J. Landsberg; Kurt H. Johnsen; Timothy J. Albaugh; H. Lee Allen; Steven E. McKeand

    2001-01-01

    3-PG is a simple process-based model that requires few parameter values and only readily available input data. We tested the structure of the model by calibrating it against loblolly pine data from the control treatment of the SETRES experiment in Scotland County, NC, then altered the fertility rating to simulate the effects of fertilization. There was excellent...

  4. Complex versus Simple Modeling for DIF Detection: When the Intraclass Correlation Coefficient (ρ) of the Studied Item Is Less Than the ρ of the Total Score

    ERIC Educational Resources Information Center

    Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon

    2014-01-01

    Previous research has demonstrated that differential item functioning (DIF) methods that do not account for multilevel data structure could result in too frequent rejection of the null hypothesis (i.e., no DIF) when the intraclass correlation coefficient (ρ) of the studied item was the same as the ρ of the total score. The current study extended…

  5. Drug Prices and Emergency Department Mentions for Cocaine and Heroin

    PubMed Central

    Caulkins, Jonathan P.

    2001-01-01

    Objectives. In this report, the author illustrates the historic relation between retail drug prices and emergency department mentions for cocaine and heroin. Methods. Price series based on the Drug Enforcement Administration's System to Retrieve Information From Drug Evidence database were correlated with data on emergency department mentions from the Drug Abuse Warning Network for cocaine (1978–1996) and heroin (1981–1996). Results. A simple model in which emergency department mentions are driven only by prices explains more than 95% of the variation in emergency department mentions. Conclusions. Fluctuations in prices are an important determinant of adverse health outcomes associated with drugs. PMID:11527779
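
    The report's one-variable model, emergency department mentions explained by price alone, amounts to a least-squares fit with an R² close to 1. A minimal sketch on synthetic data (the series below are illustrative, not DAWN or STRIDE values):

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical annual series: as the retail price falls, ED mentions rise
price = np.linspace(400, 100, 19)                         # $/pure gram (illustrative)
mentions = 60000 - 120 * price + rng.normal(0, 1500, 19)  # mentions/year (illustrative)

# one-variable least-squares fit of mentions on price
slope, intercept = np.polyfit(price, mentions, 1)
pred = slope * price + intercept
r2 = 1 - np.sum((mentions - pred) ** 2) / np.sum((mentions - mentions.mean()) ** 2)
```

    With a strong underlying price-mentions relationship, the single-regressor fit recovers a negative slope and explains most of the variance, mirroring the report's result.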

  6. Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.

    2000-01-01

    PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
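
    The rate-equation structure described above (free DSBs and free repair enzyme forming a complex whose resolution completes rejoining) can be sketched as a small ODE system. The rate constants and pool sizes below are illustrative only, not the paper's fitted values; note how a limited enzyme pool produces the apparent fast and slow rejoining components:

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative rate constants: complex formation, dissociation, repair completion
k_on, k_off, k_fix = 1.0, 0.1, 0.5

def rhs(t, y):
    """y = [free DSBs, free repair enzyme, DSB-enzyme complex]."""
    S, E, C = y
    form = k_on * S * E - k_off * C       # net complex formation
    return [-form, -form + k_fix * C, form - k_fix * C]

# 10 DSBs, an enzyme pool of 2 (enzyme is recycled after each repair)
sol = solve_ivp(rhs, (0.0, 20.0), [10.0, 2.0, 0.0], dense_output=True, rtol=1e-8)
remaining = lambda t: sol.sol(t)[0] + sol.sol(t)[2]   # unrepaired = free + in complex
```

    Early on, the saturated enzyme pool removes DSBs at a nearly constant rate (the apparent fast phase); once DSBs become scarce, rejoining slows to a first-order tail, giving the biphasic appearance discussed in the abstract.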

  7. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting and, from these data, the counting distributions.
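
    The "remarkably simple Monte Carlo realization" can be illustrated with a toy subcritical branching process; the induced-fission probability and the two-point multiplicity distribution below are our own choices, not the paper's parameters:

```python
import random

random.seed(4)

p_fission, nu = 0.3, 2.5   # induced-fission probability and mean multiplicity (toy values)

def chain_leakage():
    """Follow one fission chain neutron by neutron; return the number that leak."""
    neutrons, leaked = 1, 0
    while neutrons:
        neutrons -= 1
        if random.random() < p_fission:
            # toy multiplicity: 2 or 3 neutrons with mean nu
            neutrons += 3 if random.random() < (nu - 2) else 2
        else:
            leaked += 1
    return leaked

leaks = [chain_leakage() for _ in range(20000)]
mean_leaked = sum(leaks) / len(leaks)

# analytic check from branching-process theory: E[neutrons per chain] = 1/(1 - p*nu),
# and each neutron leaks with probability (1 - p)
expected = (1 - p_fission) / (1 - p_fission * nu)
```

    Averaging over many simulated chains reproduces the analytic first moment; higher correlated moments can be estimated from the same samples, which is the sense in which the Monte Carlo realizes the theory's distributions.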

  8. A model of interval timing by neural integration

    PubMed Central

    Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
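
    The model's central mechanism, a noisy firing-rate ramp to threshold whose drift is set by the target interval and whose noise variance scales with the drift (the balanced, Poisson-like assumption), yields scale-invariant, constant-CV response-time distributions. A minimal simulation under those assumptions (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def timed_responses(target, n_trials=500, dt=0.005, theta=1.0, c=0.2):
    """First-passage times of a noisy accumulator ramping to threshold theta.
    Drift theta/target makes the mean response time track the target interval;
    noise variance proportional to the drift produces scale invariance."""
    drift = theta / target
    sigma = c * np.sqrt(drift)                  # Poisson-like noise scaling
    max_steps = int(4 * target / dt)            # generous horizon; crossings cluster near target
    incr = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_trials, max_steps))
    path = np.cumsum(incr, axis=1)
    first = np.argmax(path >= theta, axis=1)    # index of first threshold crossing
    return (first + 1) * dt

rt_short = timed_responses(1.0)   # 1 s target interval
rt_long = timed_responses(2.0)    # 2 s target interval

cv_short = rt_short.std() / rt_short.mean()
cv_long = rt_long.std() / rt_long.mean()
```

    The mean response time tracks each target interval while the coefficient of variation stays fixed at roughly c/sqrt(theta), which is the scale-invariance property the model is built to explain.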

  9. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulation of antioxidants. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  10. Transitions between strongly correlated and random steady-states for catalytic CO-oxidation on surfaces at high-pressure

    DOE PAGES

    Liu, Da -Jiang; Evans, James W.

    2015-04-02

    We explore simple lattice-gas reaction models for CO-oxidation on 1D and 2D periodic arrays of surface adsorption sites. The models are motivated by studies of CO-oxidation on RuO2(110) at high pressures. Although adspecies interactions are neglected, the effective absence of adspecies diffusion results in kinetically induced spatial correlations. A transition occurs from a random, mainly CO-populated steady-state at high CO partial pressure (pCO) to a strongly correlated near-O-covered steady-state at low pCO. In addition, we identify a second transition to a random near-O-covered steady-state at very low pCO.

  11. Long-Range Correlations in Stride Intervals May Emerge from Non-Chaotic Walking Dynamics

    PubMed Central

    Ahn, Jooeun; Hogan, Neville

    2013-01-01

    Stride intervals of normal human walking exhibit long-range temporal correlations. Similar to the fractal-like behaviors observed in brain and heart activity, long-range correlations in walking have commonly been interpreted to result from chaotic dynamics and be a signature of health. Several mathematical models have reproduced this behavior by assuming a dominant role of neural central pattern generators (CPGs) and/or nonlinear biomechanics to evoke chaos. In this study, we show that a simple walking model without a CPG or biomechanics capable of chaos can reproduce long-range correlations. Stride intervals of the model revealed long-range correlations observed in human walking when the model had moderate orbital stability, which enabled the current stride to affect a future stride even after many steps. This provides a clear counterexample to the common hypothesis that a CPG and/or chaotic dynamics is required to explain the long-range correlations in healthy human walking. Instead, our results suggest that the long-range correlation may result from a combination of noise that is ubiquitous in biological systems and orbital stability that is essential in general rhythmic movements. PMID:24086274

  12. A Kinematically Consistent Two-Point Correlation Function

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.

    1998-01-01

    A simple kinematically consistent expression for the longitudinal two-point correlation function related to both the integral length scale and the Taylor microscale is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for the two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allow an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the devised expression, an example is given of the asymptotic method by which functionals of the two-point correlation can be evaluated.

  13. Developing a Model for Forecasting Road Traffic Accident (RTA) Fatalities in Yemen

    NASA Astrophysics Data System (ADS)

    Karim, Fareed M. A.; Abdo Saleh, Ali; Taijoobux, Aref; Ševrović, Marko

    2017-12-01

    The aim of this paper is to develop a model for forecasting RTA fatalities in Yemen. Yearly fatalities were modeled as the dependent variable, while the candidate independent variables included the population, number of vehicles, GNP, GDP and real GDP per capita. It was determined that all these variables are highly correlated with fatalities (correlation coefficient r ≈ 0.9); in order to avoid multicollinearity in the model, the single variable with the highest r value was selected (real GDP per capita). A simple regression model was developed; the fit was very good (R² = 0.916); however, the residuals were serially correlated. The Prais-Winsten procedure was used to overcome this violation of the regression assumptions. Data for the 20-year period 1991-2010 were analyzed to build the model, and the model was validated using data for the years 2011-2013; the historical fit for the period 1991-2011 was very good, and the validation for 2011-2013 proved accurate.
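
    The Prais-Winsten correction for serially correlated residuals can be sketched in a few lines. This is a generic single-predictor implementation run on synthetic AR(1)-error data, not the Yemen dataset:

```python
import numpy as np

def prais_winsten(y, x, n_iter=10):
    """Single-predictor Prais-Winsten estimation: fit OLS, estimate the
    AR(1) correlation rho of the residuals, quasi-difference, and refit."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rho = 0.0
    for _ in range(n_iter):
        resid = y - X @ beta
        rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
        # Prais-Winsten keeps the first observation, scaled by sqrt(1 - rho^2),
        # instead of dropping it as Cochrane-Orcutt does
        ys = np.concatenate([[np.sqrt(1.0 - rho**2) * y[0]], y[1:] - rho * y[:-1]])
        Xs = np.vstack([np.sqrt(1.0 - rho**2) * X[0], X[1:] - rho * X[:-1]])
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    return beta, rho

# Synthetic series with AR(1) errors (illustrative only)
rng = np.random.default_rng(0)
x = np.arange(100.0)
e = np.zeros(100)
for t in range(1, 100):
    e[t] = 0.7 * e[t - 1] + rng.standard_normal()
y = 2.0 + 0.5 * x + e

beta, rho = prais_winsten(y, x)
print(beta, rho)   # slope near 0.5, rho near 0.7
```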

  14. Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis

    NASA Astrophysics Data System (ADS)

    Springer, Everett P.; Cundy, Terrance W.

    1987-02-01

    Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications in the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and using texture-based predictors for two agricultural fields, which were mapped as single soil units. Results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different than those from field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.

  15. Synthesis of dynamic phase profile by the correlation technique for spatial control of optical beams in multiplexing and switching

    NASA Astrophysics Data System (ADS)

    Bugaychuk, Svitlana A.; Gnatovskyy, Vladimir O.; Sidorenko, Andrey V.; Pryadko, Igor I.; Negriyko, Anatoliy M.

    2015-11-01

    A new approach to the correlation technique, based on multiple periodic structures that create a controllable angular spectrum, is proposed and investigated both theoretically and experimentally. The transformation of an initial laser beam occurs due to the action of consecutive phase periodic structures, which may differ in their parameters. After the Fourier transformation of the complex diffraction field, the output diffraction orders are changed both in their intensities and in their spatial positions. The controllable change of the output angular spectrum is achieved by simple control of the parameters of the periodic structures. We investigate several simple examples of such control.

  16. Support vector regression methodology for estimating global solar radiation in Algeria

    NASA Astrophysics Data System (ADS)

    Guermoui, Mawloud; Rabehi, Abdelaziz; Gairaa, Kacem; Benkaciali, Said

    2018-01-01

    Accurate estimation of Daily Global Solar Radiation (DGSR) has been a major goal for solar energy applications. In this paper we show the possibility of developing a simple model based on Support Vector Regression (SVM-R), which could be used to estimate DGSR on the horizontal surface in Algeria based only on the sunshine ratio as input. The SVM model has been developed and tested using a data set recorded over three years (2005-2007). The data was collected at the Applied Research Unit for Renewable Energies (URAER) in Ghardaïa city. The data collected in 2005-2006 are used to train the model, while the 2007 data are used to test its performance. The measured and estimated values of DGSR were compared statistically during the testing phase using the Root Mean Square Error (RMSE), the relative Root Mean Square Error (rRMSE), and the correlation coefficient (r²), which amount to 1.59 MJ/m², 8.46 and 97.4%, respectively. The obtained results show that the SVM-R is highly qualified for DGSR estimation using only the sunshine ratio.
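
    A sketch of the same modeling pattern with scikit-learn's SVR, using synthetic sunshine-ratio data as a stand-in for the Ghardaïa measurements (the linear relation and noise level are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Synthetic stand-in: DGSR (MJ/m^2) rising roughly linearly with sunshine
# ratio, plus noise -- NOT the Ghardaia data
rng = np.random.default_rng(0)
sunshine_ratio = rng.uniform(0.1, 1.0, 600)
dgsr = 5.0 + 22.0 * sunshine_ratio + rng.normal(0.0, 1.5, 600)

# Train/test split mimics the paper's train-on-early, test-on-late design
X_train, X_test = sunshine_ratio[:450, None], sunshine_ratio[450:, None]
y_train, y_test = dgsr[:450], dgsr[450:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X_train, y_train)
pred = model.predict(X_test)

rmse = mean_squared_error(y_test, pred) ** 0.5
r2 = np.corrcoef(y_test, pred)[0, 1] ** 2
print(rmse, r2)
```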

  17. Interface tension in the improved Blume-Capel model

    NASA Astrophysics Data System (ADS)

    Hasenbusch, Martin

    2017-09-01

    We study interfaces with periodic boundary conditions in the low-temperature phase of the improved Blume-Capel model on the simple cubic lattice. The interface free energy is defined by the difference of the free energy of a system with antiperiodic boundary conditions in one of the directions and that of a system with periodic boundary conditions in all directions. It is obtained by integration of differences of the corresponding internal energies over the inverse temperature. These differences can be computed efficiently by using a variance reduced estimator that is based on the exchange cluster algorithm. The interface tension is obtained from the interface free energy by using predictions based on effective interface models. By using our numerical results for the interface tension σ and the correlation length ξ obtained in previous work, we determine the universal amplitude ratios R_{2nd,+} = σ_0 f_{2nd,+}^2 = 0.3863(6), R_{2nd,-} = σ_0 f_{2nd,-}^2 = 0.1028(1), and R_{exp,-} = σ_0 f_{exp,-}^2 = 0.1077(3). Our results are consistent with those obtained previously for the three-dimensional Ising model, confirming the universality hypothesis.

  18. Transcription, intercellular variability and correlated random walk.

    PubMed

    Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar

    2008-11-01

    We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
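
    A hedged simulation of the setting described above, mRNA driven by a promoter that switches on and off at random, illustrates the scaled-Beta stationary behavior (the rates are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

# Illustrative rates: promoter on/off switching, transcription while on, decay
k_on, k_off, lam, delta = 1.0, 2.0, 10.0, 1.0
dt, steps = 0.001, 1_000_000
rng = np.random.default_rng(0)

state, m, samples = 0, 0.0, []
for i in range(steps):
    # two-state telegraph process for the promoter
    if state == 0 and rng.random() < k_on * dt:
        state = 1
    elif state == 1 and rng.random() < k_off * dt:
        state = 0
    # mRNA relaxes toward lam/delta while on and toward 0 while off
    m += (lam * state - delta * m) * dt
    if i > steps // 2 and i % 200 == 0:
        samples.append(m * delta / lam)   # rescale to [0, 1]

samples = np.array(samples)
# Stationary law is Beta(k_on/delta, k_off/delta), mean k_on/(k_on+k_off) = 1/3
print(samples.mean())
```

    Making the switching rates state-dependent (a positive feedback loop) is what produces the bimodal distributions mentioned in the abstract.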

  19. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model correctly predicts the timing of 96% of the spikes (±2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
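
    A forward-Euler sketch of the adaptive exponential integrate-and-fire dynamics (two variables: membrane potential plus an adaptation current). The parameter values are the regular-spiking set commonly quoted for this model; treat them as illustrative rather than fit to any particular cell:

```python
from math import exp

def adex_spikes(i_inj_na, t_ms=500.0, dt=0.01):
    """Forward-Euler adaptive exponential integrate-and-fire neuron;
    returns spike times (ms) for a constant injected current (nA)."""
    C, gL, EL, VT, DT = 281.0, 30.0, -70.6, -50.4, 2.0      # pF, nS, mV
    tau_w, a, b, Vr, Vpeak = 144.0, 4.0, 80.5, -70.6, 0.0   # ms, nS, pA, mV
    I = i_inj_na * 1000.0                                   # nA -> pA
    V, w, spikes = EL, 0.0, []
    for step in range(int(t_ms / dt)):
        dV = (-gL * (V - EL) + gL * DT * exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:              # spike: log time, reset V, bump adaptation
            spikes.append(step * dt)
            V, w = Vr, w + b
    return spikes

n_low, n_high = len(adex_spikes(0.8)), len(adex_spikes(1.2))
print(n_low, n_high)   # stronger drive gives more spikes, tempered by adaptation
```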

  20. Optical identification of subjects at high risk for developing breast cancer

    NASA Astrophysics Data System (ADS)

    Taroni, Paola; Quarto, Giovanna; Pifferi, Antonio; Ieva, Francesca; Paganoni, Anna Maria; Abbate, Francesca; Balestreri, Nicola; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

    2013-06-01

    Time-domain multiwavelength (635 to 1060 nm) optical mammography was performed on 147 subjects with recent x-ray mammograms available, and average breast tissue composition (water, lipid, collagen, oxy- and deoxyhemoglobin) and scattering parameters (amplitude a and slope b) were estimated. Correlation was observed between optically derived parameters and mammographic density [Breast Imaging Reporting and Data System (BI-RADS) categories], which is a strong risk factor for breast cancer. A logistic regression model was obtained to best identify high-risk (BI-RADS 4) subjects, based on collagen content and scattering parameters. The model presents a total misclassification error of 12.3%, sensitivity of 69%, specificity of 94%, and simple kappa of 0.84, which compares favorably even with intraradiologist assignments of BI-RADS categories.

  1. Adaptive correlation filter-based video stabilization without accumulative global motion estimation

    NASA Astrophysics Data System (ADS)

    Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil

    2014-12-01

    We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this stabilization model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because the method is simple and well suited to parallelization, we readily implemented it on a commercial field-programmable gate array and on a graphics processing unit board using CUDA. Experimental results show that the proposed approach is both fast and robust.

  2. SU-G-201-15: Nomogram as an Efficient Dosimetric Verification Tool in HDR Prostate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, J; Todor, D

    Purpose: A nomogram as a simple QA tool for HDR prostate brachytherapy treatment planning has been developed and validated clinically. Reproducibility, including patient-to-patient and physician-to-physician variability, was assessed. Methods: The study was performed on HDR prostate implants from physician A (n=34) and B (n=15) using different implant techniques and planning methodologies. A nomogram was implemented as an independent QA of computer-based treatment planning before plan execution. Normalized implant strength (total air kerma strength Sk·t in cGy·cm² divided by prescribed dose in cGy) was plotted as a function of PTV volume and total V100. A quadratic equation was used to fit the data, with R² denoting the model predictive power. Results: All plans showed good target coverage while OARs met the dose constraint guidelines. Vastly different implant and planning styles were reflected in the conformity index (entire dose matrix V100/PTV volume; physician A implants: 1.27±0.14, physician B: 1.47±0.17) and the PTV V150/PTV volume ratio (physician A: 0.34±0.09, physician B: 0.24±0.07). The quadratic model provided a better fit for the curved relationship between normalized implant strength and total V100 (or PTV volume) than a simple linear function. Unlike the normalized implant strength versus PTV volume nomogram, which differed between physicians, a unique quadratic-model-based nomogram (Sk·t)/D = −0.0008V² + 0.0542V + 1.1185 (R² = 0.9977) described the dependence of normalized implant strength on total V100 over all the patients from both physicians, despite two different implant and planning philosophies. The normalized implant strength versus total V100 model also generated fewer deviant points distorting the smoothed ones, with a significantly higher correlation. Conclusion: A simple and universal, Excel-based nomogram was created as an independent calculation tool for HDR prostate brachytherapy. Unlike similar attempts, our nomogram is insensitive to implant style and does not rely on reproducing dose calculations using the TG-43 formalism, thus making it a truly independent check.

  3. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  4. Experimental recovery of quantum correlations in absence of system-environment back-action

    PubMed Central

    Xu, Jin-Shi; Sun, Kai; Li, Chuan-Feng; Xu, Xiao-Ye; Guo, Guang-Can; Andersson, Erika; Lo Franco, Rosario; Compagno, Giuseppe

    2013-01-01

    Revivals of quantum correlations in composite open quantum systems are a useful dynamical feature against detrimental effects of the environment. Their occurrence is attributed to flows of quantum information back and forth from systems to quantum environments. However, revivals also show up in models where the environment is classical, and thus unable to store quantum correlations, and where system-environment back-action is forbidden. This phenomenon opens basic issues about its interpretation involving the role of classical environments, memory effects, collective effects and system-environment correlations. Moreover, an experimental realization of back-action-free quantum revivals has applicative relevance, as it allows quantum resources to be recovered without resorting to more demanding structured environments and correction procedures. Here we introduce a simple two-qubit model suitable to address these issues. We then report an all-optical experiment which simulates the model and permits us to recover and control, against decoherence, quantum correlations without back-action. We finally give an interpretation of the phenomenon by establishing the roles of the involved parties. PMID:24287554

  6. The Monash University Interactive Simple Climate Model

    NASA Astrophysics Data System (ADS)

    Dommenget, D.

    2013-12-01

    The Monash University interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the international peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of CO2 concentrations, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows you to study the results of more than 2000 different model experiments in an interactive way, to work through a number of tutorials on the interactions of physical processes in the climate system, and to solve some puzzles. By switching physical processes ON or OFF you can deconstruct the climate and learn how the different processes interact to generate the observed climate and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with this tool are.
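
    GREB itself resolves many coupled processes, but the switch-a-process-off idea can be illustrated with a toy zero-dimensional energy balance (a one-layer greenhouse; all numbers are standard textbook values, not GREB output):

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W m^-2
ALBEDO = 0.30        # planetary albedo

def surface_temp(emissivity):
    """Equilibrium surface temperature of a one-layer greenhouse model:
    absorbed solar flux balances outgoing longwave radiation, which the
    atmospheric layer reduces by a factor (1 - emissivity/2)."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    return (absorbed / (SIGMA * (1.0 - emissivity / 2.0))) ** 0.25

print(surface_temp(0.0))    # greenhouse OFF: ~255 K (about -18 C)
print(surface_temp(0.78))   # greenhouse ON: near the observed ~288 K
```

    Turning the emissivity term on and off reproduces, in miniature, the deconstruction exercise the web interface offers for the full GREB model.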

  7. Scaling laws between population and facility densities.

    PubMed

    Um, Jaegon; Son, Seung-Woo; Lee, Sung-Ik; Jeong, Hawoong; Kim, Beom Jun

    2009-08-25

    When a new facility like a grocery store, a school, or a fire station is planned, its location should ideally be determined by the necessities of people who live nearby. Empirically, it has been found that there exists a positive correlation between facility and population densities. In the present work, we investigate the ideal relation between the population and the facility densities within the framework of an economic mechanism governing microdynamics. In previous studies based on the global optimization of facility positions in minimizing the overall travel distance between people and facilities, it was shown that the facility density D and the population density ρ should follow a simple power law D ∝ ρ^(2/3). In our empirical analysis, on the other hand, the power-law exponent α in D ∝ ρ^α is not a fixed value but spreads in a broad range depending on facility types. To explain this discrepancy in α, we propose a model based on economic mechanisms that mimic the competitive balance between the profit of the facilities and the social opportunity cost for populations. Through our simple, microscopically driven model, we show that commercial facilities driven by the profit of the facilities have α = 1, whereas public facilities driven by the social opportunity cost have α = 2/3. We simulate this model to find the optimal positions of facilities on a real U.S. map and show that the results are consistent with the empirical data.
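
    The exponent in D ∝ ρ^α is typically estimated as the slope of a log-log regression; a sketch on synthetic data (illustrative, not the empirical U.S. dataset):

```python
import numpy as np

# Synthetic facility-vs-population densities obeying D = c * rho^alpha with
# multiplicative noise, mimicking a "public facility" exponent of 2/3
rng = np.random.default_rng(0)
rho = 10.0 ** rng.uniform(1, 4, 300)          # population densities
alpha_true = 2.0 / 3.0
D = 0.05 * rho ** alpha_true * np.exp(rng.normal(0.0, 0.1, 300))

# The exponent is the slope of the regression on log-log axes
slope, intercept = np.polyfit(np.log(rho), np.log(D), 1)
print(slope)
```

    Repeating the fit per facility type is how a spread of α values across types, as reported in the abstract, would be detected.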

  8. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

    System dynamics models are usually used to investigate aggregate-level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. In particular, alteration of Poisson assumptions, adding heterogeneity to the decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.

  9. Relationships between autofocus methods for SAR and self-survey techniques for SONAR. [Synthetic Aperture Radar (SAR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.

    1991-01-01

    Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.

  10. Diffusion processes in tumors: A nuclear medicine approach

    NASA Astrophysics Data System (ADS)

    Amaya, Helman

    2016-07-01

    The number of counts used in nuclear medicine imaging techniques only provides physical information about the disintegration of the nuclei present in the radiotracer molecules that were taken up in a particular anatomical region; it is not real metabolic information. For this reason, a mathematical method was used to find a correlation between the number of counts and 18F-FDG mass concentration. This correlation allows a better interpretation of the results obtained in the study of diffusive processes in an agar phantom, and based on it, an image from the PETCETIX DICOM sample image set from the OsiriX-viewer software was processed. PET-CT gradient magnitude and Laplacian images can show direct information on diffusive processes for radiopharmaceuticals that enter the cells by simple diffusion. In the case of the radiopharmaceutical 18F-FDG, it is necessary to include pharmacokinetic models to make a correct interpretation of the gradient magnitude and Laplacian-of-counts images.

  11. Computation of fluid flow and pore-space properties estimation on micro-CT images of rock samples

    NASA Astrophysics Data System (ADS)

    Starnoni, M.; Pokrajac, D.; Neilson, J. E.

    2017-09-01

    Accurate determination of the petrophysical properties of rocks, namely REV, mean pore and grain size and absolute permeability, is essential for a broad range of engineering applications. Here, the petrophysical properties of rocks are calculated using an integrated approach comprising image processing, statistical correlation and numerical simulations. The Stokes equations of creeping flow for incompressible fluids are solved using the Finite-Volume SIMPLE algorithm. Simulations are then carried out on three-dimensional digital images obtained from micro-CT scanning of two rock formations: one sandstone and one carbonate. Permeability is predicted from the computed flow field using Darcy's law. It is shown that REV, REA and mean pore and grain size are effectively estimated using the two-point spatial correlation function. Homogeneity and anisotropy are also evaluated using the same statistical tools. A comparison of different absolute permeability estimates is also presented, revealing a good agreement between the numerical value and the experimentally determined one for the carbonate sample, but a large discrepancy for the sandstone. Finally, a new convergence criterion for the SIMPLE algorithm, and more generally for the family of pressure-correction methods, is presented. This criterion is based on satisfaction of bulk momentum balance, which makes it particularly useful for pore-scale modelling of reservoir rocks.
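
    The two-point spatial correlation function used above for REV and pore/grain-size estimation can be sketched for a 2D binary image. Here a random uncorrelated image stands in for a micro-CT slice, just to exhibit the limits S2(0) = porosity and S2(r) → porosity² at large lag:

```python
import numpy as np

def two_point_correlation(image, max_lag):
    """S2(r): probability that two pixels separated by lag r along the
    horizontal axis are both pore (True) in a binary pore/grain image."""
    s2 = []
    for r in range(max_lag + 1):
        a = image[:, : image.shape[1] - r]   # left member of each pair
        b = image[:, r:]                     # right member, shifted by r
        s2.append(np.mean(a & b))
    return np.array(s2)

rng = np.random.default_rng(0)
porosity = 0.3
img = rng.random((400, 400)) < porosity      # uncorrelated synthetic "rock"

s2 = two_point_correlation(img, 20)
print(s2[0], s2[-1])   # near porosity and porosity^2, respectively
```

    For a real rock image, the lag at which S2 decays to the porosity-squared plateau gives the correlation length used to judge mean pore size and the REV.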

  12. Simple prediction method of lumbar lordosis for planning of lumbar corrective surgery: radiological analysis in a Korean population.

    PubMed

    Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee

    2014-01-01

    This study aimed at deriving a lordosis predictive equation using the pelvic incidence and to establish a simple prediction method of lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations of lumbar lordosis through simple regression analysis of the parameters and simple predictive values of lumbar lordosis using PI were derived. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were found: SS = 0.80 + 0.74 PI (r = 0.78, R² = 0.61), LLL = 5.20 + 0.87 SS (r = 0.89, R² = 0.80), MLL = 17.41 + 0.96 SS (r = 0.83, R² = 0.68). When PI was between 30° to 35°, 40° to 50° and 55° to 60°, the equations predicted that MLL would be PI + 10°, PI + 5° and PI, and LLL would be PI - 5°, PI - 10° and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore the lumbar lordosis in lumbar corrective surgery.
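
    The chained regression equations reduce to a short function, and the degree-based rules of thumb quoted above then follow within a couple of degrees. This is a direct transcription of the published coefficients for illustration only, not for clinical use:

```python
def predict_lordosis(pi_deg):
    """Predict sacral slope (SS), lower lumbar lordosis (LLL) and maximal
    lumbar lordosis (MLL), in degrees, from pelvic incidence (PI)."""
    ss = 0.80 + 0.74 * pi_deg
    lll = 5.20 + 0.87 * ss
    mll = 17.41 + 0.96 * ss
    return ss, lll, mll

# For PI = 50 deg the simplified rules give MLL ~ PI + 5 and LLL ~ PI - 10;
# the regression chain lands within a few degrees of those values
ss, lll, mll = predict_lordosis(50.0)
print(ss, lll, mll)
```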

  13. A simple nonlocal damage model for predicting failure of notched laminates

    NASA Technical Reports Server (NTRS)

    Kennedy, T. C.; Nahan, M. F.

    1995-01-01

    The ability to predict failure loads in notched composite laminates is a requirement in a variety of structural design circumstances. A complicating factor is the development of a zone of damaged material around the notch tip. The objective of this study was to develop a computational technique that simulates progressive damage growth around a notch in a manner that allows the prediction of failure over a wide range of notch sizes. This was accomplished through the use of a relatively simple, nonlocal damage model that incorporates strain-softening. This model was implemented in a two-dimensional finite element program. Calculations were performed for two different laminates with various notch sizes under tensile loading, and the calculations were found to correlate well with experimental results.

  14. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users.

    PubMed

    Scheperle, Rachel A; Abbas, Paul J

    2015-01-01

    The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.

  15. Influence of Water Saturation on Thermal Conductivity in Sandstones

    NASA Astrophysics Data System (ADS)

    Fehr, A.; Jorand, R.; Koch, A.; Clauser, C.

    2009-04-01

    Information on thermal conductivity of rocks and soils is essential in applied geothermal and hydrocarbon maturation research. In this study, we investigate the dependence of thermal conductivity on the degree of water saturation. Measurements were made on five sandstones from different outcrops in Germany. In a first step, we characterized the samples with respect to mineralogical composition, porosity, and microstructure by nuclear magnetic resonance (NMR) and mercury injection. We measured thermal conductivity with an optical scanner at different levels of water saturation. Finally, we present a simple model for the correlation of thermal conductivity and water saturation. Thermal conductivity decreases in the course of the drying of the rock. This behaviour is not linear and depends on the microstructure of the studied rock. We studied different mixing models for three phases: mineral skeleton, water and air. For argillaceous sandstones, a modified arithmetic model works best, which considers the irreducible water volume and different pore sizes. For pure quartz sandstones without clay minerals, we use the same model for low water saturations, but a modified geometric model for high water saturations. A clayey sandstone rich in feldspar shows a different behaviour which cannot be explained by simple models. A better understanding will require measurements on additional samples, which will help to improve the derived correlations and substantiate our findings.
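    The unmodified three-phase mixing laws from which the modified models above start are simple volume-fraction means over the mineral skeleton, water, and air; a sketch with illustrative conductivity values (the modified irreducible-water versions used in the study are not reproduced here):

```python
def arithmetic_mean(fractions, conductivities):
    """Volume-weighted arithmetic mixing law (parallel-layer bound)."""
    return sum(f * k for f, k in zip(fractions, conductivities))

def geometric_mean(fractions, conductivities):
    """Volume-weighted geometric mixing law."""
    result = 1.0
    for f, k in zip(fractions, conductivities):
        result *= k ** f
    return result

# Illustrative conductivities: quartz skeleton, water, air (W m^-1 K^-1)
k = [7.7, 0.6, 0.026]
porosity, saturation = 0.2, 0.5
phi = [1 - porosity,                    # mineral skeleton
       porosity * saturation,           # water-filled pore space
       porosity * (1 - saturation)]     # air-filled pore space
k_arith = arithmetic_mean(phi, k)
k_geom = geometric_mean(phi, k)
```

    The geometric mean always falls below the arithmetic mean for the same fractions, which is one reason different microstructures call for different mixing laws.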

  16. Dependence of Thermal Conductivity on Water Saturation of Sandstones

    NASA Astrophysics Data System (ADS)

    Fehr, A.; Jorand, R.; Koch, A.; Clauser, C.

    2008-12-01

    Information on thermal conductivity of rocks and soils is essential in applied geothermal and hydrocarbon maturation research. In this study, we investigate the dependence of thermal conductivity on the degree of water saturation. Measurements were made on five sandstones from different outcrops in Germany. In a first step, we characterized the samples with respect to mineralogical composition, porosity, and microstructure by nuclear magnetic resonance (NMR) and mercury injection. We measured thermal conductivity with an optical scanner at different levels of water saturation. Finally, we present a simple model for the correlation of thermal conductivity and water saturation. Thermal conductivity decreases in the course of the drying of the rock. This behaviour is not linear and depends on the microstructure of the studied rock. We studied different mixing models for three phases: mineral skeleton, water and air. For argillaceous sandstones, a modified arithmetic model works best, which considers the irreducible water volume and different pore sizes. For pure quartz sandstones without clay minerals, we use the same model for low water saturations, but a modified geometric model for high water saturations. A clayey sandstone rich in feldspar shows a different behaviour which cannot be explained by simple models. A better understanding will require measurements on additional samples, which will help to improve the derived correlations and substantiate our findings.

  17. Electron Correlation from the Adiabatic Connection for Multireference Wave Functions

    NASA Astrophysics Data System (ADS)

    Pernal, Katarzyna

    2018-01-01

    An adiabatic connection (AC) formula for the electron correlation energy is derived for a broad class of multireference wave functions. The AC expression recovers dynamic correlation energy and assures a balanced treatment of the correlation energy. Coupling the AC formalism with the extended random phase approximation allows one to find the correlation energy only from reference one- and two-electron reduced density matrices. If the generalized valence bond perfect pairing model is employed, a simple closed-form expression for the approximate AC formula is obtained. This results in an overall M⁵ scaling of the computational cost, making the method one of the most efficient multireference approaches accounting for dynamic electron correlation, also for strongly correlated systems.

  18. Coarse-grained hydrodynamics from correlation functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, Bruce

    This paper describes a formalism for using correlation functions between different grid cells as the basis for determining coarse-grained hydrodynamic equations for modeling the behavior of mesoscopic fluid systems. Configurations from a molecular dynamics simulation are projected onto basis functions representing grid cells in a continuum hydrodynamic simulation. Equilibrium correlation functions between different grid cells are evaluated from the molecular simulation and used to determine the evolution operator for the coarse-grained hydrodynamic system. The formalism is applied to some simple hydrodynamic cases to determine the feasibility of applying it to realistic nanoscale systems.

  19. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507
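    In the RMF model, fitness is an additive slope toward a reference genotype plus an independent random component, F(g) = -c·d(g, g*) + η(g), where d is the Hamming distance; the ratio of c to the noise scale tunes the ruggedness. A minimal sketch that generates such a landscape over binary genotypes and counts local fitness maxima (Gaussian noise and all parameter values are illustrative choices):

```python
import itertools
import random

def rmf_landscape(L, c, sigma, seed=0):
    """Rough Mount Fuji fitness over binary genotypes of length L:
    F(g) = -c * d(g, g*) + eta(g), with reference g* = (0, ..., 0)
    and i.i.d. Gaussian eta (illustrative parameterisation)."""
    rng = random.Random(seed)
    return {g: -c * sum(g) + rng.gauss(0.0, sigma)
            for g in itertools.product((0, 1), repeat=L)}

def local_maxima(F, L):
    """Count genotypes fitter than all L single-mutant neighbours."""
    def neighbours(g):
        for i in range(L):
            yield g[:i] + (1 - g[i],) + g[i + 1:]
    return sum(all(F[g] > F[n] for n in neighbours(g)) for g in F)

L = 8
smooth = local_maxima(rmf_landscape(L, c=2.0, sigma=0.1), L)  # c >> sigma: nearly additive
rugged = local_maxima(rmf_landscape(L, c=0.0, sigma=1.0), L)  # c = 0: house-of-cards limit
```

    With a dominant slope the landscape has a single peak at the reference genotype; in the uncorrelated limit the number of maxima is much larger (on average 2^L / (L + 1) for a purely random landscape).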

  20. A Mathematical Model of Neutral Lipid Content in terms of Initial Nitrogen Concentration and Validation in Coelastrum sp. HA-1 and Application in Chlorella sorokiniana

    PubMed Central

    Zhao, Yue; Liu, Zhiyong; Liu, Chenfeng; Hu, Zhipeng

    2017-01-01

    Microalgae are considered to be a potential major biomass feedstock for biofuel due to their high lipid content. However, no correlation equations as a function of initial nitrogen concentration have been developed for lipid accumulation that would make it simple to predict lipid production and optimize the lipid production process. In this study, a lipid accumulation model with simple parameters was developed based on the assumption that protein synthesis shifts to lipid synthesis according to a linear function of the nitrogen quota. The model predictions fitted well the growth, lipid content, and nitrogen consumption of Coelastrum sp. HA-1 under various initial nitrogen concentrations. The model was then applied successfully in Chlorella sorokiniana to predict the lipid content under different light intensities. The quantitative relationship between initial nitrogen concentrations and the final lipid content, along with a sensitivity analysis of the model, is also discussed. Based on the model results, the conversion efficiency from protein synthesis to lipid synthesis increases as nitrogen decreases; however, the carbohydrate content remains essentially unchanged both in HA-1 and in C. sorokiniana. PMID:28194424

  1. Hard and soft acids and bases: structure and process.

    PubMed

    Reed, James L

    2012-07-05

    Under investigation are the structure and process that give rise to hard-soft behavior in simple anionic atomic bases. The current study substantiates that, for simple atomic bases, the chemical hardness is expected to be the only extrinsic component of acid-base strength. A thermochemically based operational scale of chemical hardness was used to identify the structure within anionic atomic bases that is responsible for chemical hardness. The base's responding electrons have been identified as the structure, and the relaxation that occurs during charge transfer has been identified as the process giving rise to hard-soft behavior. This is in contrast to the commonly accepted explanations that attribute hard-soft behavior to varying degrees of electrostatic and covalent contributions to the acid-base interaction. The ability of the atomic ion's responding electrons to cause hard-soft behavior has been assessed by examining the correlation of the estimated relaxation energies of the responding electrons with the operational chemical hardness. It has been demonstrated that the responding electrons are able to give rise to hard-soft behavior in simple anionic bases.

  2. Universal sensitivity of speckle intensity correlations to wavefront change in light diffusers

    PubMed Central

    Kim, KyungDuk; Yu, Hyeonseung; Lee, KyeoReh; Park, YongKeun

    2017-01-01

    Here, we present a concept based on the realization that a complex medium can be used as a simple interferometer. Changes in the wavefront of an incident coherent beam can be retrieved by analyzing changes in speckle patterns when the beam passes through a light diffuser. We demonstrate that the spatial intensity correlations of the speckle patterns are independent of the light diffusers, and are solely determined by the phase changes of an incident beam. With numerical simulations using the random matrix theory, and an experimental pressure-driven wavefront-deforming setup using a microfluidic channel, we theoretically and experimentally confirm the universal sensitivity of speckle intensity correlations, which is attributed to the conservation of optical field correlation despite multiple light scattering. This work demonstrates that a light diffuser works as a simple interferometer, and presents opportunities to retrieve phase information of optical fields with a compact scattering layer in various applications in metrology, analytical chemistry, and biomedicine. PMID:28322268

  3. A fluorescent immunochromatographic strip test using a quantum dot-antibody probe for rapid and quantitative detection of 1-aminohydantoin in edible animal tissues.

    PubMed

    Le, Tao; Zhang, Zhihao; Wu, Juan; Shi, Haixing; Cao, Xudong

    2018-01-01

    A rapid, simple, and sensitive fluorescent immunochromatographic strip test (ICST) based on quantum dots (QDs) has been developed to detect 1-aminohydantoin (AHD), a major metabolite of nitrofurantoin in animal tissues. To achieve this, QD-labeled antibody conjugates, which consist of CdSe/ZnS QDs and monoclonal antibodies, were prepared by an activated ester method. Under optimal conditions, with the nitrophenyl derivative of AHD as the target, the ICST had a linear range from 0.1 to 100 ng/mL, with a correlation coefficient of 0.9656 and a 50% inhibitory concentration of 4.51 ng/mL. The limit of detection was 0.14 ng/g, which was below the minimum required performance limit of 1 μg/kg for AHD established by the European Commission. The recoveries for AHD ranged from 81.5% to 108.2%, with coefficients of variation below 13%, based on intraday and interday analysis. Furthermore, for AHD in real samples, the ICST showed high reliability and high correlation with liquid chromatography-tandem mass spectrometry (correlation coefficient of 0.9916). To the best of our knowledge, this is the first report of a novel and sensitive method based on a fluorescent ICST to detect AHD below the minimum required performance limit. The ICST demonstrated high reliability, and could be ideally suited for rapid, simple, and on-site screening of AHD contamination in animal tissues.

  4. A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia.

    PubMed

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey; Yang, Honghui; Sponheim, Scott R; White, Tonya; Calhoun, Vince D

    2010-05-15

    Collection of multiple-task brain imaging data from the same subject has now become common practice in medical imaging studies. In this paper, we propose a simple yet effective model, "CCA+ICA", as a powerful tool for multi-task data fusion. This joint blind source separation (BSS) model takes advantage of two multivariate methods: canonical correlation analysis and independent component analysis, to achieve both high estimation accuracy and to provide the correct connection between two datasets in which sources can have either common or distinct between-dataset correlation. In both simulated and real fMRI applications, we compare the proposed scheme with other joint BSS models and examine the different modeling assumptions. The contrast images of two tasks: sensorimotor (SM) and Sternberg working memory (SB), derived from a general linear model (GLM), were chosen to contribute real multi-task fMRI data, both of which were collected from 50 schizophrenia patients and 50 healthy controls. When examining the relationship with duration of illness, CCA+ICA revealed a significant negative correlation with temporal lobe activation. Furthermore, CCA+ICA located sensorimotor cortex as the group-discriminative regions for both tasks and identified the superior temporal gyrus in SM and prefrontal cortex in SB as task-specific group-discriminative brain networks. In summary, we compared the new approach to some competitive methods with different assumptions, and found consistent results regarding each of their hypotheses on connecting the two tasks. Such an approach fills a gap in existing multivariate methods for identifying biomarkers from brain imaging data.

  5. Magnitude and sign of long-range correlated time series: Decomposition and surrogate signal generation.

    PubMed

    Gómez-Extremera, Manuel; Carpena, Pedro; Ivanov, Plamen Ch; Bernaola-Galván, Pedro A

    2016-04-01

    We systematically study the scaling properties of the magnitude and sign of the fluctuations in correlated time series, a simple and useful approach for distinguishing between systems with different dynamical properties but the same linear correlations. First, we decompose artificial long-range power-law linearly correlated time series into magnitude and sign series derived from the consecutive increments in the original series, and we study their correlation properties. We find analytical expressions for the correlation exponent of the sign series as a function of the exponent of the original series. Such expressions are necessary for modeling surrogate time series with desired scaling properties. Next, we study linear and nonlinear correlation properties of series composed as products of independent magnitude and sign series. These surrogate series can be considered as a zero-order approximation to the analysis of the coupling of magnitude and sign in real data, a problem still open in many fields. We find analytical results for the scaling behavior of the composed series as a function of the correlation exponents of the magnitude and sign series used in the composition, and we determine the ranges of magnitude and sign correlation exponents leading to either single scaling or to crossover behaviors. Finally, we obtain how the linear and nonlinear properties of the composed series depend on the correlation exponents of their magnitude and sign series. Based on this information we propose a method to generate surrogate series with controlled correlation exponent and multifractal spectrum.
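    The decomposition step described above is straightforward to state in code: take the consecutive increments of the series, then split them into their absolute values (magnitude series) and signs (sign series); the pointwise product of the two derived series recovers the increments. A minimal sketch with a toy random-walk series:

```python
import numpy as np

def magnitude_sign_decompose(x):
    """Split a time series into the magnitude and sign series of its
    consecutive increments, as in the analysis described above."""
    dx = np.diff(x)
    return np.abs(dx), np.sign(dx)

rng = np.random.default_rng(42)
x = np.cumsum(rng.standard_normal(1000))   # toy series (random walk)
mag, sgn = magnitude_sign_decompose(x)
assert np.allclose(mag * sgn, np.diff(x))  # product recovers the increments
```

    The scaling analysis itself (e.g. estimating correlation exponents of mag and sgn) is then applied separately to each derived series.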

  6. Protoplanetary disc `isochrones' and the evolution of discs in the Ṁ-Md plane

    NASA Astrophysics Data System (ADS)

    Lodato, Giuseppe; Scardoni, Chiara E.; Manara, Carlo F.; Testi, Leonardo

    2017-12-01

    In this paper, we compare simple viscous diffusion models for the disc evolution with the results of recent surveys of the properties of young protoplanetary discs. We introduce the useful concept of 'disc isochrones' in the accretion rate-disc mass plane and explore a set of Monte Carlo realizations of disc initial conditions. We find that such simple viscous models can provide a remarkable agreement with the available data in the Lupus star forming region, with the key requirement that the average viscous evolutionary time-scale of the discs is comparable to the cluster age. Our models produce naturally a correlation between mass accretion rate and disc mass that is shallower than linear, contrary to previous results and in agreement with observations. We also predict that a linear correlation, with a tighter scatter, should be found for more evolved disc populations. Finally, we find that such viscous models can reproduce the observations in the Lupus region only in the assumption that the efficiency of angular momentum transport is a growing function of radius, thus putting interesting constraints on the nature of the microscopic processes that lead to disc accretion.
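    As a concrete reference point, for the classical self-similar viscous disc solution (viscosity linear in radius), disc mass and accretion rate evolve as Md(t) = M0 (1 + t/tν)^(-1/2) and Ṁ = Md / [2 (tν + t)]; an 'isochrone' is then the locus traced in the Ṁ-Md plane by discs with different M0 and tν at a common age. A sketch under those textbook assumptions, not the paper's exact Monte Carlo setup (all numerical values illustrative):

```python
def disc_state(m0, t_nu, t):
    """Self-similar viscous disc (viscosity linear in radius):
    return (M_d, Mdot) at age t for initial mass m0 and viscous
    time t_nu (same time units throughout)."""
    m_d = m0 * (1.0 + t / t_nu) ** -0.5
    mdot = m_d / (2.0 * (t_nu + t))   # Mdot = -dM_d/dt for this solution
    return m_d, mdot

# One isochrone: discs of different M0 and t_nu sampled at the same age
age = 2.0e6                                 # yr, illustrative cluster age
isochrone = [disc_state(m0, t_nu, age)
             for m0 in (0.01, 0.03, 0.1)    # Msun, illustrative
             for t_nu in (0.5e6, 1e6, 4e6)] # yr, illustrative
```

    Discs with tν much shorter than the cluster age pile up along the Ṁ ∝ Md/t locus, which is the qualitative behaviour the isochrone concept captures.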

  7. Linear free energy relationships of the 1H and 13C NMR chemical shifts of 3-methylene-2-substituted-1,4-pentadienes

    NASA Astrophysics Data System (ADS)

    Valentić, Nataša V.; Vitnik, Željko; Kozhushkov, Sergei I.; de Meijere, Armin; Ušćumlić, Gordana S.; Juranić, Ivan O.

    2005-06-01

    Linear free energy relationships (LFER) were applied to the 1H and 13C NMR chemical shifts (δN, N = 1H and 13C, respectively) in the unsaturated backbone of the cross-conjugated trienes 3-methylene-2-substituted-1,4-pentadienes. The NMR data were correlated using five different LFER models, based on the mono, dual, and triple substituent parameter (MSP, DSP, and TSP, respectively) treatments. The simple and extended Hammett equations, and three postulated unconventional LFER models obtained by adaptation of the latter, were used. The geometry data, which are needed in Karplus-type and McConnell-type analyses, were obtained using semi-empirical MNDO-PM3 calculations. In correlating the data, the TSP approach was more successful than the MSP and DSP approaches. The fact that the calculated molecular geometries allow accurate prediction of the NMR data confirms the validity of the unconventional LFER models used. These results suggest the s-cis conformation of the cross-conjugated triene as the preferred one. The postulated unconventional DSP and TSP equations enable the assessment of electronic substituent effects in the presence of other interfering influences.

  8. Tuning the overlap and the cross-layer correlations in two-layer networks: Application to a susceptible-infectious-recovered model with awareness dissemination

    NASA Astrophysics Data System (ADS)

    Juher, David; Saldaña, Joan

    2018-03-01

    We study the properties of the potential overlap between two networks A, B sharing the same set of N nodes (a two-layer network) whose respective degree distributions pA(k), pB(k) are given. Defining the overlap coefficient α as the Jaccard index, we prove that α is very close to 0 when A and B are random and independently generated. We derive an upper bound αM for the maximum overlap coefficient permitted in terms of pA(k), pB(k), and N. Then we present an algorithm based on cross rewiring of links to obtain a two-layer network with any prescribed α inside the range (0, αM). A refined version of the algorithm allows us to minimize the cross-layer correlations that unavoidably appear for values of α beyond a critical overlap αc < αM. Finally, we present a very simple example of a susceptible-infectious-recovered epidemic model with information dissemination and use the algorithms to determine the impact of the overlap on the final outbreak size predicted by the model.
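    The overlap coefficient α used above is the Jaccard index of the two layers' edge sets; a minimal sketch for undirected layers (the example edge lists are illustrative):

```python
def jaccard_overlap(edges_a, edges_b):
    """Overlap coefficient alpha between two layers on the same node
    set: |E_A ∩ E_B| / |E_A ∪ E_B| over undirected edge sets."""
    ea = {frozenset(e) for e in edges_a}   # frozenset ignores orientation
    eb = {frozenset(e) for e in edges_b}
    return len(ea & eb) / len(ea | eb)

layer_a = [(0, 1), (1, 2), (2, 3)]
layer_b = [(1, 0), (2, 3), (3, 4)]
alpha = jaccard_overlap(layer_a, layer_b)  # shared {0,1} and {2,3}: 2/4
```

    α = 0 means the layers share no links, α = 1 means identical layers; the paper's rewiring algorithm moves a two-layer network through this range while controlling degree distributions.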

  9. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long-range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  10. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2017-12-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long-range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  11. A one-parametric formula relating the frequencies of twin-peak quasi-periodic oscillations

    NASA Astrophysics Data System (ADS)

    Török, Gabriel; Goluchová, Kateřina; Šrámková, Eva; Horák, Jiří; Bakala, Pavel; Urbanec, Martin

    2017-12-01

    Timing analysis of X-ray flux in more than a dozen low-mass X-ray binary systems containing a neutron star reveals remarkable correlations between frequencies of two characteristic peaks present in the power-density spectra. We find a simple analytic relation that well reproduces all these individual correlations. We link this relation to a physical model which involves accretion rate modulation caused by an oscillating torus.

  12. Predictive and Feedback Performance Errors are Signaled in the Simple Spike Discharge of Individual Purkinje Cells

    PubMed Central

    Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.

    2012-01-01

    The cerebellum has been implicated in processing motor errors required for online control of movement and motor learning. The dominant view is that Purkinje cell complex spike discharge signals motor errors. This study investigated whether errors are encoded in the simple spike discharge of Purkinje cells in monkeys trained to manually track a pseudo-randomly moving target. Four task error signals were evaluated based on cursor movement relative to target movement. Linear regression analyses based on firing residuals ensured that the modulation with a specific error parameter was independent of the other error parameters and kinematics. The results demonstrate that simple spike firing in lobules IV–VI is significantly correlated with position, distance and directional errors. Independent of the error signals, the same Purkinje cells encode kinematics. The strongest error modulation occurs at feedback timing. However, in 72% of cells at least one of the R² temporal profiles resulting from regressing firing with individual errors exhibits two peak R² values. For these bimodal profiles, the first peak is at a negative τ (lead) and the second at a positive τ (lag), implying that Purkinje cells encode both prediction and feedback about an error. For the majority of the bimodal profiles, the signs of the regression coefficients or preferred directions reverse at the times of the peaks. The sign reversal results in opposing simple spike modulation for the predictive and feedback components. Dual error representations may provide the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:23115173
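    The lead/lag analysis described above amounts to regressing firing against an error signal shifted by each lag τ and recording R² at every lag; a sketch with synthetic signals constructed, purely for illustration, to contain both a predictive (negative τ) and a feedback (positive τ) component:

```python
import numpy as np

def r2_profile(firing, error, max_lag):
    """R² of a simple linear regression of firing on error shifted by
    each lag tau in [-max_lag, max_lag]; negative tau means the firing
    leads the error (prediction), positive tau means it lags (feedback)."""
    out = {}
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            f, e = firing[tau:], error[:len(error) - tau]
        else:
            f, e = firing[:tau], error[-tau:]
        r = np.corrcoef(f, e)[0, 1]
        out[tau] = r * r
    return out

rng = np.random.default_rng(1)
error = rng.standard_normal(2000)
# Synthetic firing that leads the error by 5 samples and also reflects
# it 5 samples later with opposite sign, mimicking a bimodal profile
firing = np.roll(error, -5) - np.roll(error, 5) + 0.5 * rng.standard_normal(2000)
profile = r2_profile(firing, error, max_lag=10)   # peaks near tau = -5 and +5
```

    A bimodal R² profile with peaks on both sides of τ = 0 is exactly the signature the study reports for cells carrying both predictive and feedback error information.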

  13. Cross-borehole flow analysis to characterize fracture connections in the Melechov Granite, Bohemian-Moravian Highland, Czech Republic

    USGS Publications Warehouse

    Paillet, Frederick L.; Williams, John H.; Urik, Joseph; Lukes, Joseph; Kobr, Miroslav; Mares, Stanislav

    2012-01-01

    Application of the cross-borehole flow method, in which short pumping cycles in one borehole are used to induce time-transient flow in another borehole, demonstrated that a simple hydraulic model can characterize the fracture connections in the bedrock mass between the two boreholes. The analysis determines the properties of fracture connections rather than those of individual fractures intersecting a single borehole; the model contains a limited number of adjustable parameters so that any correlation between measured and simulated flow test data is significant. The test was conducted in two 200-m deep boreholes spaced 21 m apart in the Melechov Granite in the Bohemian-Moravian Highland, Czech Republic. Transient flow was measured at depth stations between the identified transmissive fractures in one of the boreholes during short-term pumping and recovery periods in the other borehole. Simulated flows, based on simple model geometries, closely matched the measured flows. The relative transmissivity and storage of the inferred fracture connections were corroborated by tracer testing. The results demonstrate that it is possible to assess the properties of a fracture flow network despite being restricted to making measurements in boreholes in which a local population of discrete fractures regulates the hydraulic communication with the larger-scale aquifer system.

  14. An Information Theoretical Analysis of Human Insulin-Glucose System Toward the Internet of Bio-Nano Things.

    PubMed

    Abbasi, Naveed A; Akan, Ozgur B

    2017-12-01

    Molecular communication is an important tool to understand biological communications, with many promising applications in the Internet of Bio-Nano Things (IoBNT). The insulin-glucose system is of key significance among the major intra-body nanonetworks, since it fulfills the metabolic requirements of the body. The study of biological networks from an information and communication theoretical (ICT) perspective is necessary for their introduction into the IoBNT framework. Therefore, the objective of this paper is to provide and analyze, for the first time in the literature, a simple molecular communication model of the human insulin-glucose system from the ICT perspective. The data rate, channel capacity, and group propagation delay are analyzed for a two-cell network between a pancreatic beta cell and a muscle cell that are connected through a capillary. The results point to a correlation between increased insulin resistance and a decreased data rate and channel capacity, an increased insulin transmission rate, and an increased propagation delay. We also propose applications for the introduction of the system into the IoBNT framework. Multi-cell insulin-glucose system models may be based on this simple model to help in the investigation, diagnosis, and treatment of insulin resistance by means of novel IoBNT applications.

  15. Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation

    DTIC Science & Technology

    2010-01-01

    Carrier, Matthew J.; Ngodock, Hans. … (2001), which sought to model error correlations based on the explicit solution of a generalized diffusion equation. The implicit solution is …

  16. Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems

    NASA Technical Reports Server (NTRS)

    Walker, M.; Figueroa, F.

    2015-01-01

    The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. Associated loss in control of pressure also affects the stability and ability to control the phase of cryogenic fluids during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere involving relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.

  17. Statistical theory of correlations in random packings of hard particles.

    PubMed

    Jin, Yuliang; Puckett, James G; Makse, Hernán A

    2014-05-01

    A random packing of hard particles represents a fundamental model for granular matter. Despite its importance, analytical modeling of random packings remains difficult due to the existence of strong correlations which preclude the development of a simple theory. Here, we take inspiration from liquid theories for the n-particle angular correlation function to develop a formalism of random packings of hard particles from the bottom up. A progressive expansion into a shell of particles converges in the large layer limit under a Kirkwood-like approximation of higher-order correlations. We apply the formalism to hard disks and predict the density of two-dimensional random close packing (RCP), ϕ(rcp) = 0.85 ± 0.01, and random loose packing (RLP), ϕ(rlp) = 0.67 ± 0.01. Our theory also predicts a phase diagram and angular correlation functions that are in good agreement with experimental and numerical data.

  18. Black holes from large N singlet models

    NASA Astrophysics Data System (ADS)

    Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico

    2018-03-01

    The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.

  19. Reanalysis, compatibility and correlation in analysis of modified antenna structures

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1989-01-01

    A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.

  20. 92 Years of the Ising Model: A High Resolution Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.

    2018-04-01

    Using extensive Monte Carlo simulations that employ Wolff cluster flipping and data analysis with histogram reweighting and quadruple-precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that improves upon previous Monte Carlo estimates.
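
    The Wolff cluster update named above can be made concrete in a few lines. The sketch below is a minimal two-dimensional illustration (the study itself treats the three-dimensional model on far larger lattices with histogram reweighting and quadruple-precision arithmetic); the lattice size and coupling are arbitrary demonstration values.

```python
import math
import random

def wolff_step(spins, L, beta):
    """Grow and flip one Wolff cluster on an L x L periodic Ising lattice.

    spins is a list of lists holding +1/-1 values. A bond between aligned
    neighbors is added to the cluster with probability p = 1 - exp(-2*beta),
    which makes the whole-cluster flip rejection-free.
    Returns the size of the flipped cluster.
    """
    p_add = 1.0 - math.exp(-2.0 * beta)
    i, j = random.randrange(L), random.randrange(L)
    seed = spins[i][j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L
            if (nx, ny) not in cluster and spins[nx][ny] == seed:
                if random.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
    for x, y in cluster:
        spins[x][y] = -spins[x][y]
    return len(cluster)
```

    A production study would embed this step in a long measurement loop and feed the resulting time series of energies and magnetizations into histogram reweighting.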

  1. Understanding land surface evapotranspiration with satellite multispectral measurements

    NASA Technical Reports Server (NTRS)

    Menenti, M.

    1993-01-01

    Quantitative use of remote multispectral measurements to study and map land surface evapotranspiration has been a challenging issue for the past 20 years. Past work is reviewed against process physics. A simple two-layer combination-type model is used which is applicable to both vegetation and bare soil. A theoretical analysis shows which land surface properties are implicitly defined by such evaporation models and assesses whether they are measurable in principle. Conceptual implications of the spatial correlation of land surface properties, as observed by means of remote multispectral measurements, are illustrated with results of work done in arid zones. A normalization of spatial variability of land surface evaporation is proposed by defining a location-dependent potential evaporation and surface temperature range. Examples of the application of remotely sensed estimates of evaporation to hydrological modeling studies in Egypt and Argentina are presented.

  2. Estimation of Locomotion States of a Rat by Neural Signals from the Motor Cortices Based on a Linear Correlation Model

    NASA Astrophysics Data System (ADS)

    Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    We are developing a brain-machine interface (BMI) called “RatCar,” a small vehicle controlled by the neural signals of a rat's brain. An unconfined adult rat with a set of bundled neural electrodes in the brain rides on the vehicle. Each bundle consists of four tungsten wires insulated with parylene polymer. These bundles were implanted in the primary motor and premotor cortices in both hemispheres of the brain. In this paper, methods and results for estimating locomotion speed and directional changes are described. Neural signals were recorded as the rat moved in a straight line and as it changed direction in a curve. Spike-like waveforms were then detected and classified into several clusters to calculate a firing rate for each neuron. The actual locomotion velocity and directional changes of the rat were recorded concurrently. Finally, the locomotion states were correlated with the neural firing rates using a simple linear model. As a result, an approximate estimation of the locomotion velocity and directional changes was achieved.
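
    The final step, correlating locomotion states with firing rates through a simple linear model, can be sketched as an ordinary least-squares fit. All data below are synthetic stand-ins (no real neural recordings); the neuron count, weights and noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: firing rates of 4 neurons over 200 time bins,
# and a locomotion velocity that is, by construction, a linear mix plus noise.
rates = rng.poisson(5.0, size=(200, 4)).astype(float)
true_w = np.array([0.5, -0.2, 0.1, 0.3])
velocity = rates @ true_w + rng.normal(0.0, 0.1, size=200)

# Fit v ~ X w by ordinary least squares; a column of ones supplies the bias.
X = np.column_stack([rates, np.ones(len(rates))])
w, *_ = np.linalg.lstsq(X, velocity, rcond=None)
pred = X @ w

# Goodness of fit (coefficient of determination).
r2 = 1.0 - np.sum((velocity - pred) ** 2) / np.sum((velocity - velocity.mean()) ** 2)
```

    The same fit, with direction in place of velocity as the dependent variable, covers the directional-change estimate.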

  3. Correlation of microstructure and tempered martensite embrittlement in two 4340 steels

    NASA Astrophysics Data System (ADS)

    Lee, S.; Lee, D. Y.; Asaro, R. J.

    1989-06-01

    This study is concerned with a correlation between the microstructure and fracture behavior of two AISI 4340 steels which were vacuum induction melted and then deoxidized with aluminum and titanium additions. This allowed a comparison between microstructures that underwent large increases in grain size and those that did not. When the steels were tempered at 350°C, K_Ic and Charpy impact energy plots showed troughs which indicated tempered martensite embrittlement (TME). The TME results of plane strain fracture toughness are interpreted using a simple ductile fracture initiation model based on large strain deformation fields ahead of cracks, suggesting that K_Ic scales roughly with the square root of the spacing of cementite particles precipitated during the tempering treatment. The trough in Charpy impact energy is found to coincide well with the amount of intergranular fracture and the effect of segregation of phosphorus on the austenite grain boundaries. In addition, cementite particles are of primary importance in initiating the intergranular cracks and, consequently, reducing the Charpy energy. These findings suggest that TME in the two 4340 steels studied can be explained quantitatively using different fracture models.

  4. A novel health indicator for on-line lithium-ion batteries remaining useful life prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Yapeng; Huang, Miaohua; Chen, Yupu; Tao, Ye

    2016-07-01

    Prediction of the remaining useful life (RUL) of lithium-ion batteries plays an important role in an intelligent battery management system. The capacity and internal resistance are often used as battery health indicators (HIs) for quantifying degradation and predicting RUL. However, on-line measurement of capacity and internal resistance is hardly realizable: the former because batteries are seldom fully charged and discharged in operation, the latter because its measurement is extremely expensive. Therefore, there is a great need to find an alternative way to deal with this plight. In this work, a novel HI is extracted from the operating parameters of lithium-ion batteries for degradation modeling and RUL prediction. Moreover, Box-Cox transformation is employed to improve HI performance. Then Pearson and Spearman correlation analyses are utilized to evaluate the similarity between the real capacity and the estimated capacity derived from the HI. Next, both a simple statistical regression technique and an optimized relevance vector machine are employed to predict the RUL based on the presented HI. The correlation analyses and prediction results show the efficiency and effectiveness of the proposed HI for battery degradation modeling and RUL prediction.
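
    The Box-Cox transformation and the two correlation checks can be sketched with plain NumPy. The degradation curve below is synthetic, not the paper's battery data, and the fixed λ = 0 (log) branch is chosen only because it exactly linearizes this particular indicator.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power transform; x must be strictly positive."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def pearson(a, b):
    """Pearson correlation coefficient between two 1-D sequences."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the ranks
    (the synthetic data below contains no ties)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return pearson(rank(a), rank(b))

# Synthetic degradation: a linearly fading capacity and a raw health
# indicator that tracks it nonlinearly.
cycles = np.arange(1, 101, dtype=float)
capacity = 5.0 - 0.04 * cycles          # fading capacity (arbitrary units)
hi_raw = np.exp(capacity)               # raw indicator, nonlinear in capacity
hi = boxcox(hi_raw, lam=0)              # log branch linearizes it here
```

    Because the transform is monotone, the Spearman score is unchanged by it, while the Pearson score improves once the indicator is linearized.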

  5. Reading Cooperatively or Independently? Study on ELL Student Reading Development

    ERIC Educational Resources Information Center

    Liu, Siping; Wang, Jian

    2015-01-01

    This study examines the effectiveness of cooperative reading teaching activities and independent reading activities for English language learner (ELL) students at 4th grade level. Based on simple linear regression and correlational analyses of data collected from two large data bases, PIRLS and NAEP, the study found that cooperative reading…

  6. Comparing convective heat fluxes derived from thermodynamics to a radiative-convective model and GCMs

    NASA Astrophysics Data System (ADS)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2015-04-01

    The convective transport of heat and moisture plays a key role in the climate system, but the transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system that describe how the temperature difference is affected by convective heat transport, yielding a maximum power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey atmosphere radiative-convective (RC) model as well as Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression as well as the RC model can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a method for improving the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes compare very well among the approaches, as well as their sensitivity to surface warming. What our comparison suggests is that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.

  7. Prescription of land-surface boundary conditions in GISS GCM 2: A simple method based on high-resolution vegetation data bases

    NASA Technical Reports Server (NTRS)

    Matthews, E.

    1984-01-01

    A simple method was developed for improved prescription of seasonal surface characteristics and parameterization of land-surface processes in climate models. This method, developed for the Goddard Institute for Space Studies General Circulation Model II (GISS GCM II), maintains the spatial variability of fine-resolution land-cover data while restricting to 8 the number of vegetation types handled in the model. This was achieved by: redefining the large number of vegetation classes in the 1 deg x 1 deg resolution Matthews (1983) vegetation data base as percentages of 8 simple types; deriving roughness length, field capacity, masking depth and seasonal, spectral reflectivity for the 8 types; and aggregating these surface features from the 1 deg x 1 deg resolution to coarser model resolutions, e.g., 8 deg latitude x 10 deg longitude or 4 deg latitude x 5 deg longitude.
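
    The aggregation step can be sketched as a block average over type fractions. The grid size and the 3-type breakdown below are hypothetical stand-ins for the paper's 8 types and 1 deg x 1 deg data base.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fine grid: 8 x 8 cells, each described by fractions of
# 3 vegetation types that sum to 1 (the paper uses 8 types at 1 degree).
fine = rng.dirichlet(np.ones(3), size=(8, 8))       # shape (8, 8, 3)

def aggregate(frac, block):
    """Average type fractions over block x block windows, i.e. an
    area-weighted aggregation assuming equal-area fine cells."""
    h, w, t = frac.shape
    return frac.reshape(h // block, block, w // block, block, t).mean(axis=(1, 3))

coarse = aggregate(fine, block=4)                    # shape (2, 2, 3)
```

    Because each coarse cell is a plain average of fractions, the fractions still sum to 1 and the global type composition is preserved.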

  8. A robust collagen scoring method for human liver fibrosis by second harmonic microscopy.

    PubMed

    Guilbert, Thomas; Odin, Christophe; Le Grand, Yann; Gailhouste, Luc; Turlin, Bruno; Ezan, Frédérick; Désille, Yoann; Baffet, Georges; Guyader, Dominique

    2010-12-06

    Second Harmonic Generation (SHG) microscopy offers the opportunity to image collagen of type I without staining. We recently showed that a simple scoring method, based on SHG images of histological human liver biopsies, correlates well with the Metavir assessment of fibrosis level (Gailhouste et al., J. Hepatol., 2010). In this article, we present a detailed study of this new scoring method with two different objective lenses. By using measurements of the objectives point spread functions and of the photomultiplier gain, and a simple model of the SHG intensity, we show that our scoring method, applied to human liver biopsies, is robust to the objective's numerical aperture (NA) for low NA, the choice of the reference sample and laser power, and the spatial sampling rate. The simplicity and robustness of our collagen scoring method may open new opportunities in the quantification of collagen content in different organs, which is of main importance in providing diagnostic information and evaluation of therapeutic efficiency.

  9. On heart rate variability and autonomic activity in homeostasis and in systemic inflammation.

    PubMed

    Scheff, Jeremy D; Griffel, Benjamin; Corbett, Siobhan A; Calvano, Steve E; Androulakis, Ioannis P

    2014-06-01

    Analysis of heart rate variability (HRV) is a promising diagnostic technique due to the noninvasive nature of the measurements involved and established correlations with disease severity, particularly in inflammation-linked disorders. However, the complexities underlying the interpretation of HRV complicate understanding the mechanisms that cause variability. Despite this, such interpretations are often found in the literature. In this paper we explored mathematical modeling of the relationship between the autonomic nervous system and the heart, incorporating basic mechanisms such as perturbing mean values of oscillating autonomic activities and saturating signal transduction pathways to explore their impacts on HRV. We focused our analysis on human endotoxemia, a well-established, controlled experimental model of systemic inflammation that provokes changes in HRV representative of acute stress. By contrasting modeling results with published experimental data and analyses, we found that even a simple model linking the autonomic nervous system and the heart confounds the interpretation of HRV changes in human endotoxemia. Multiple plausible alternative hypotheses, encoded in a model-based framework, equally reconciled experimental results. In total, our work illustrates how conventional assumptions about the relationships between autonomic activity and frequency-domain HRV metrics break down, even in a simple model. This underscores the need for further experimental work towards unraveling the underlying mechanisms of autonomic dysfunction and HRV changes in systemic inflammation. Understanding the extent of information encoded in HRV signals is critical in appropriately analyzing prior and future studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. On heart rate variability and autonomic activity in homeostasis and in systemic inflammation

    PubMed Central

    Scheff, Jeremy D.; Griffel, Benjamin; Corbett, Siobhan A.; Calvano, Steve E.; Androulakis, Ioannis P.

    2014-01-01

    Analysis of heart rate variability (HRV) is a promising diagnostic technique due to the noninvasive nature of the measurements involved and established correlations with disease severity, particularly in inflammation-linked disorders. However, the complexities underlying the interpretation of HRV complicate understanding the mechanisms that cause variability. Despite this, such interpretations are often found in the literature. In this paper we explored mathematical modeling of the relationship between the autonomic nervous system and the heart, incorporating basic mechanisms such as perturbing mean values of oscillating autonomic activities and saturating signal transduction pathways to explore their impacts on HRV. We focused our analysis on human endotoxemia, a well-established, controlled experimental model of systemic inflammation that provokes changes in HRV representative of acute stress. By contrasting modeling results with published experimental data and analyses, we found that even a simple model linking the autonomic nervous system and the heart confounds the interpretation of HRV changes in human endotoxemia. Multiple plausible alternative hypotheses, encoded in a model-based framework, equally reconciled experimental results. In total, our work illustrates how conventional assumptions about the relationships between autonomic activity and frequency-domain HRV metrics break down, even in a simple model. This underscores the need for further experimental work towards unraveling the underlying mechanisms of autonomic dysfunction and HRV changes in systemic inflammation. Understanding the extent of information encoded in HRV signals is critical in appropriately analyzing prior and future studies. PMID:24680646

  11. Predicting Hail Size Using Model Vertical Velocities

    DTIC Science & Technology

    2008-03-01

    Updrafts from a simple cloud model using forecasted soundings: the models used MM5 model data coinciding with severe hail events collected from the … determine their accuracy. Plus, they are based primarily on observed upper-air soundings. Obtaining upper-air soundings in proximity to convective …

  12. A bio-inspired kinematic controller for obstacle avoidance during reaching tasks with real robots.

    PubMed

    Srinivasa, Narayan; Bhattacharyya, Rajan; Sundareswara, Rashmi; Lee, Craig; Grossberg, Stephen

    2012-11-01

    This paper describes a redundant robot arm that is capable of learning to reach for targets in space in a self-organized fashion while avoiding obstacles. Self-generated movement commands that activate correlated visual, spatial and motor information are used to learn forward and inverse kinematic control models while moving in obstacle-free space using the Direction-to-Rotation Transform (DIRECT). Unlike prior DIRECT models, the learning process in this work was realized using an online Fuzzy ARTMAP learning algorithm. The DIRECT-based kinematic controller is fault tolerant and can handle a wide range of perturbations such as joint locking and the use of tools despite not having experienced them during learning. The DIRECT model was extended based on a novel reactive obstacle avoidance direction (DIRECT-ROAD) model to enable redundant robots to avoid obstacles in environments with simple obstacle configurations. However, certain configurations of obstacles in the environment prevented the robot from reaching the target with purely reactive obstacle avoidance. To address this complexity, a self-organized process of mental rehearsals of movements was modeled, inspired by human and animal experiments on reaching, to generate plans for movement execution using DIRECT-ROAD in complex environments. These mental rehearsals or plans are self-generated by using the Fuzzy ARTMAP algorithm to retrieve multiple solutions for reaching each target while accounting for all the obstacles in its environment. The key aspects of the proposed novel controller were illustrated first using simple examples. Experiments were then performed on real robot platforms to demonstrate successful obstacle avoidance during reaching tasks in real-world environments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. FT-IR/ATR univariate and multivariate calibration models for in situ monitoring of sugars in complex microalgal culture media.

    PubMed

    Girard, Jean-Michel; Deschênes, Jean-Sébastien; Tremblay, Réjean; Gagnon, Jonathan

    2013-09-01

    The objective of this work is to develop a quick and simple method for the in situ monitoring of sugars in biological cultures. A new technology based on Attenuated Total Reflectance-Fourier Transform Infrared (FT-IR/ATR) spectroscopy in combination with an external light guiding fiber probe was tested, first to build predictive models from solutions of pure sugars, and secondly to use those models to monitor the sugars in the complex culture medium of mixotrophic microalgae. Quantification results from the univariate model were correlated with the total dissolved solids content (R(2)=0.74). A vector normalized multivariate model was used to proportionally quantify the different sugars present in the complex culture medium and showed a predictive accuracy of >90% for sugars representing >20% of the total. This method offers an alternative to conventional sugar monitoring assays and could be used at-line or on-line in commercial scale production systems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. An alternative way to evaluate chemistry-transport model variability

    NASA Astrophysics Data System (ADS)

    Menut, Laurent; Mailler, Sylvain; Bessagnet, Bertrand; Siour, Guillaume; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik

    2017-03-01

    A simple and complementary model evaluation technique for regional chemistry transport is discussed. The methodology is based on the concept that we can learn about model performance by comparing the simulation results with observational data available for time periods other than the period originally targeted. First, the statistical indicators selected in this study (spatial and temporal correlations) are computed for a given time period, using colocated observation and simulation data in time and space. Second, the same indicators are used to calculate scores for several other years while conserving the spatial locations and Julian days of the year. The difference between the results provides useful insights on the model capability to reproduce the observed day-to-day and spatial variability. In order to synthesize the large amount of results, a new indicator is proposed, designed to compare several error statistics between all the years of validation and to quantify whether the period and area being studied were well captured by the model for the correct reasons.
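
    The heart of the technique, scoring the same simulation against observations from the target year and from other years, can be sketched as follows. The daily series are synthetic (phase-shifted seasonal cycles stand in for different years' day-to-day variability); the paper's station data and synthesis indicator are not reproduced.

```python
import numpy as np

def temporal_corr(sim, obs):
    """Pearson correlation between two colocated daily time series."""
    sim = np.asarray(sim, float) - np.mean(sim)
    obs = np.asarray(obs, float) - np.mean(obs)
    return float(sim @ obs / np.sqrt((sim @ sim) * (obs @ obs)))

rng = np.random.default_rng(2)
days = np.arange(365)

# Hypothetical simulation targeting "year 0".
sim = np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.1, days.size)

# Observations for the target year and two other years; phase shifts
# stand in for different day-to-day variability between years.
obs_by_year = {
    0: np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.1, days.size),
    1: np.sin(2 * np.pi * days / 365 + 0.8) + rng.normal(0, 0.1, days.size),
    2: np.sin(2 * np.pi * days / 365 + 1.6) + rng.normal(0, 0.1, days.size),
}

scores = {yr: temporal_corr(sim, obs) for yr, obs in obs_by_year.items()}
# If the model captures year-specific variability, the targeted year
# should score noticeably higher than the alternative years.
```

    A gap between the target-year score and the other-year scores indicates that the simulation captured that year's specific day-to-day variability rather than only the climatological cycle.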

  15. Developing a multipoint titration method with a variable dose implementation for anaerobic digestion monitoring.

    PubMed

    Salonen, K; Leisola, M; Eerikäinen, T

    2009-01-01

    Determination of metabolites from an anaerobic digester with an acid-base titration is considered a superior method for many reasons. This paper describes a practical, at-line-compatible multipoint titration method. The titration procedure was improved in speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This non-linear, PI-controller-like algorithm does not require any preliminary information about the sample. The performance of this controller is superior to that of traditional linear PI-controllers. In addition, a simplification representing polyprotic acids as a sum of multiple monoprotic acids is introduced, along with a mathematical error examination. A method for including the ionic-strength effect with stepwise iteration is shown. The titration model is presented in matrix notation, enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.

  16. Noise Modeling From Conductive Shields Using Kirchhoff Equations.

    PubMed

    Sandin, Henrik J; Volegov, Petr L; Espy, Michelle A; Matlashov, Andrei N; Savukov, Igor M; Schultz, Larry J

    2010-10-09

    Progress in the development of high-sensitivity magnetic-field measurements has stimulated interest in understanding the magnetic noise of conductive materials, especially of magnetic shields based on high-permeability materials and/or high-conductivity materials. For example, SQUIDs and atomic magnetometers have been used in many experiments with mu-metal shields, and additionally SQUID systems frequently have radio frequency shielding based on thin conductive materials. Typical existing approaches to modeling noise only work with simple shield and sensor geometries while common experimental setups today consist of multiple sensor systems with complex shield geometries. With complex sensor arrays used in, for example, MEG and Ultra Low Field MRI studies, knowledge of the noise correlation between sensors is as important as knowledge of the noise itself. This is crucial for incorporating efficient noise cancelation schemes for the system. We developed an approach that allows us to calculate the Johnson noise for arbitrary shaped shields and multiple sensor systems. The approach is efficient enough to be able to run on a single PC system and return results on a minute scale. With a multiple sensor system our approach calculates not only the noise for each sensor but also the noise correlation matrix between sensors. Here we will show how the algorithm can be implemented.

  17. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart to ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross validation. Overall, the Bnet model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models and encourage the development of simple models.
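
    Leave-one-out cross validation itself is straightforward to make concrete. The sketch below uses synthetic one-dimensional features and a nearest-class-mean classifier as a stand-in; it does not reproduce Bnet or the biophysical cardiac models.

```python
import numpy as np

def loo_accuracy(X, y, fit, predict):
    """Leave-one-out cross validation: refit on all-but-one sample,
    score on the held-out sample, and average over all samples."""
    n = len(y)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        hits += int(predict(model, X[i]) == y[i])
    return hits / n

# Toy stand-in for a simple risk model: one scalar feature per compound,
# classified by the nearer training-class mean.
def fit(X, y):
    return X[y == 0].mean(), X[y == 1].mean()

def predict(model, x):
    m0, m1 = model
    return int(abs(x - m1) < abs(x - m0))

rng = np.random.default_rng(3)
X = np.concatenate([rng.normal(0.0, 0.5, 20), rng.normal(3.0, 0.5, 20)])
y = np.concatenate([np.zeros(20, dtype=int), np.ones(20, dtype=int)])
acc = loo_accuracy(X, y, fit, predict)
```

    LOO is attractive for small compound panels like those here because every sample is used for testing exactly once while the training set stays as large as possible.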

  18. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people’s actions through their visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper, aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatiotemporal Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations in time, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround-suppressive operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270

  19. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
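The core linking hypothesis above, that behavior is a learned weighted sum of population firing rates, amounts to fitting a linear readout. A minimal sketch with synthetic data (the "IT-like" rates and the hidden linear readout are constructed for illustration; nothing here is the authors' recordings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: mean firing rates of n_neurons units for n_images images,
# and a per-image behavioral score that is, by construction, a hidden linear
# readout of those rates plus a little noise.
n_images, n_neurons = 200, 50
rates = rng.gamma(shape=2.0, scale=5.0, size=(n_images, n_neurons))  # non-negative rates
true_w = rng.normal(size=n_neurons)
behavior = rates @ true_w + 0.1 * rng.normal(size=n_images)

# "Simple learned weighted sum": fit readout weights on a training split
# by least squares...
train, test = slice(0, 150), slice(150, 200)
w, *_ = np.linalg.lstsq(rates[train], behavior[train], rcond=None)

# ...and evaluate how well the weighted sum predicts held-out behavior.
pred = rates[test] @ w
r = np.corrcoef(pred, behavior[test])[0, 1]
print(round(r, 3))
```

When the underlying code really is a distributed linear readout, as the abstract argues for IT, this held-out correlation approaches 1; for representations that are not linearly decodable (the pixel or V1 controls), it stays low.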

  20. Modeling the pressure-strain correlation of turbulence: An invariant dynamical systems approach

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Sarkar, Sutanu; Gatski, Thomas B.

    1990-01-01

    The modeling of the pressure-strain correlation of turbulence is examined from a basic theoretical standpoint with a view toward developing improved second-order closure models. Invariance considerations along with elementary dynamical systems theory are used in the analysis of the standard hierarchy of closure models. In these commonly used models, the pressure-strain correlation is assumed to be a linear function of the mean velocity gradients with coefficients that depend algebraically on the anisotropy tensor. It is proven that for plane homogeneous turbulent flows the equilibrium structure of this hierarchy of models is encapsulated by a relatively simple model which is only quadratically nonlinear in the anisotropy tensor. This new quadratic model - the SSG model - is shown to outperform the Launder, Reece, and Rodi model (as well as more recent models that have a considerably more complex nonlinear structure) in a variety of homogeneous turbulent flows. Some deficiencies still remain for the description of rotating turbulent shear flows that are intrinsic to this general hierarchy of models and, hence, cannot be overcome by the mere introduction of more complex nonlinearities. It is thus argued that the recent trend of adding substantially more complex nonlinear terms containing the anisotropy tensor may be of questionable value in the modeling of the pressure-strain correlation. Possible alternative approaches are discussed briefly.

  1. Modelling the pressure-strain correlation of turbulence - An invariant dynamical systems approach

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Sarkar, Sutanu; Gatski, Thomas B.

    1991-01-01

    The modeling of the pressure-strain correlation of turbulence is examined from a basic theoretical standpoint with a view toward developing improved second-order closure models. Invariance considerations along with elementary dynamical systems theory are used in the analysis of the standard hierarchy of closure models. In these commonly used models, the pressure-strain correlation is assumed to be a linear function of the mean velocity gradients with coefficients that depend algebraically on the anisotropy tensor. It is proven that for plane homogeneous turbulent flows the equilibrium structure of this hierarchy of models is encapsulated by a relatively simple model which is only quadratically nonlinear in the anisotropy tensor. This new quadratic model - the SSG model - is shown to outperform the Launder, Reece, and Rodi model (as well as more recent models that have a considerably more complex nonlinear structure) in a variety of homogeneous turbulent flows. Some deficiencies still remain for the description of rotating turbulent shear flows that are intrinsic to this general hierarchy of models and, hence, cannot be overcome by the mere introduction of more complex nonlinearities. It is thus argued that the recent trend of adding substantially more complex nonlinear terms containing the anisotropy tensor may be of questionable value in the modeling of the pressure-strain correlation. Possible alternative approaches are discussed briefly.

  2. Theory of inhomogeneous quantum systems. III. Variational wave functions for Fermi fluids

    NASA Astrophysics Data System (ADS)

    Krotscheck, E.

    1985-04-01

    We develop a general variational theory for inhomogeneous Fermi systems such as the electron gas in a metal surface, the surface of liquid 3He, or simple models of heavy nuclei. The ground-state wave function is expressed in terms of two-body correlations, a one-body attenuation factor, and a model-system Slater determinant. Massive partial summations of cluster expansions are performed by means of Born-Green-Yvon and hypernetted-chain techniques. An optimal single-particle basis is generated by a generalized Hartree-Fock equation in which the two-body correlations screen the bare interparticle interaction. The optimization of the pair correlations leads to a state-averaged random-phase-approximation equation and a strictly microscopic determination of the particle-hole interaction.

  3. Modelling fluid accumulation in the neck using simple baseline fluid metrics: implications for sleep apnea.

    PubMed

    Vena, Daniel; Yadollahi, A; Bradley, T Douglas

    2014-01-01

    Obstructive sleep apnea (OSA) is a common respiratory disorder among adults. Recently we have shown that a sedentary lifestyle causes an increase in diurnal leg fluid volume (LFV), which can shift into the neck at night when lying down to sleep and increase OSA severity. The purpose of this work was to investigate various metrics representing baseline fluid retention in the legs, examine their correlation with neck fluid volume (NFV), and develop a robust model for predicting fluid accumulation in the neck. In 13 healthy, awake, non-obese men, LFV and NFV were recorded continuously and simultaneously while standing for 5 minutes and then lying supine for 90 minutes. Simple regression was used to examine correlations of baseline LFV, baseline neck circumference (NC) and change in LFV with the outcome variables: the changes in NC (ΔNC) and in NFV (ΔNFV90) after lying supine for 90 minutes. An exhaustive grid search was implemented to find the combinations of input variables that best modeled the outcomes. We found strong positive correlations between baseline LFV (supine and standing) and ΔNFV90. Models developed for predicting ΔNFV90 included baseline standing LFV, and baseline NC combined with the change in LFV after lying supine for 90 minutes. These correlations and the developed models suggest that a greater baseline LFV might contribute to increased fluid accumulation in the neck. These results provide further evidence that a sedentary lifestyle might play a role in the pathogenesis of OSA by increasing baseline LFV. The best models for predicting ΔNC included baseline LFV and NC; they improved the accuracy of estimating ΔNC over individual predictors, suggesting that a combination of baseline fluid metrics is a good predictor of the change in NC while lying supine. Future work is aimed at adding baseline demographic features to improve model accuracy and eventually using the model as a screening tool to predict OSA severity prior to sleep.
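An "exhaustive grid search over combinations of input variables" of the kind described can be sketched as follows. The data here are synthetic and the variable names (`lfv_stand`, `nc_base`, `d_lfv`) are illustrative stand-ins for the study's metrics; adjusted R² is used so that larger predictor sets are not favored automatically.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Synthetic baseline metrics for 13 hypothetical subjects: standing leg fluid
# volume (mL), neck circumference (cm), and change in leg fluid volume (mL).
n = 13
lfv_stand = rng.normal(3000, 300, n)
nc_base = rng.normal(38, 2, n)
d_lfv = rng.normal(-150, 30, n)
# Synthetic outcome tied (by construction) to two of the three predictors.
d_nfv90 = 0.02 * lfv_stand - 0.3 * d_lfv + rng.normal(0, 5, n)

predictors = {"lfv_stand": lfv_stand, "nc_base": nc_base, "d_lfv": d_lfv}

def adjusted_r2(columns, y):
    """OLS fit with intercept; adjusted R^2 penalizes extra predictors."""
    columns = list(columns)
    p = len(columns)
    A = np.column_stack([np.ones(len(y))] + columns)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    return 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)

# Exhaustive search over every non-empty combination of predictors.
best = max(
    (combo for k in range(1, len(predictors) + 1)
     for combo in itertools.combinations(predictors, k)),
    key=lambda combo: adjusted_r2([predictors[p] for p in combo], d_nfv90),
)
print(best)
```

With only a handful of candidate predictors, the 2^k − 1 combinations are cheap to enumerate, which is why an exhaustive search is feasible in studies of this size.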

  4. A new modal-based approach for modelling the bump foil structure in the simultaneous solution of foil-air bearing rotor dynamic problems

    NASA Astrophysics Data System (ADS)

    Bin Hassan, M. F.; Bonello, P.

    2017-05-01

    Recently-proposed techniques for the simultaneous solution of foil-air bearing (FAB) rotor dynamic problems have been limited to a simple bump foil model in which the individual bumps were modelled as independent spring-damper (ISD) subsystems. The present paper addresses this limitation by introducing a modal model of the bump foil structure into the simultaneous solution scheme. The dynamics of the corrugated bump foil structure are first studied using the finite element (FE) technique. This study is experimentally validated using a purpose-made corrugated foil structure. Based on the findings of this study, it is proposed that the dynamics of the full foil structure, including bump interaction and foil inertia, can be represented by a modal model comprising a limited number of modes. This full foil structure modal model (FFSMM) is then adapted into the rotordynamic FAB problem solution scheme, instead of the ISD model. Preliminary results using the FFSMM under static and unbalance excitation conditions are proven to be reliable by comparison against the corresponding ISD foil model results and by cross-correlating different methods for computing the deflection of the full foil structure. The rotor-bearing model is also validated against experimental and theoretical results in the literature.

  5. Geopressure modeling from petrophysical data: An example from East Kalimantan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herkommer, M.A.

    1994-07-01

    Localized models of abnormal formation pressure (geopressure) are important economic and safety tools frequently used for well planning and drilling operations. Simplified computer-based procedures have been developed that permit these models to be developed more rapidly and with greater accuracy. These techniques are broadly applicable to basins throughout the world where abnormal formation pressures occur. An example from the Attaka field of East Kalimantan, southeast Asia, shows how geopressure models are developed. Using petrophysical and engineering data, empirical correlations between observed pressure and petrophysical logs can be created by computer-assisted data-fitting techniques. These correlations serve as the basis for models of the geopressure. By performing repeated analyses on wells at various locations, contour maps on the top of abnormal geopressure can be created. Methods that are simple in their development and application make the task of geopressure estimation less formidable to the geologist and petroleum engineer. Further, more accurate estimates can significantly improve drilling speeds while reducing the incidence of stuck pipe, kicks, and blowouts. In general, geopressure estimates are used in all phases of drilling operations: to develop mud plans and specify equipment ratings, to assist in the recognition of geopressured formations and determination of mud weights, and to improve predictions at offset locations and geologically comparable areas.
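A computer-assisted data-fitting correlation of the kind described, relating observed pressures to a petrophysical log response, can be sketched with ordinary least squares. All numbers below are synthetic and the compaction-trend model is a simplification for illustration, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical well data: depth (m), sonic transit time (us/ft), and measured
# pore pressure (MPa). Below ~2200 m the interval is overpressured, so the
# transit time stays anomalously high relative to the normal compaction trend.
depth = np.linspace(1000, 3000, 40)
hydrostatic = 0.0098 * depth                          # ~0.98 MPa per 100 m of water
overpressure = np.where(depth > 2200, 0.004 * (depth - 2200), 0.0)
pressure = hydrostatic + overpressure + rng.normal(0, 0.3, depth.size)
transit = 120 - 0.02 * depth + 2.5 * overpressure     # slowed compaction trend

# Empirical correlation: least-squares fit of pressure against depth and
# the log response, usable afterwards to estimate pressure from logs alone.
A = np.column_stack([np.ones_like(depth), depth, transit])
coef, *_ = np.linalg.lstsq(A, pressure, rcond=None)
predicted = A @ coef
rmse = np.sqrt(np.mean((predicted - pressure) ** 2))
print(round(rmse, 2))
```

Repeating such a fit well by well is what allows the tops of abnormal geopressure to be mapped across a field.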

  6. A symmetric multivariate leakage correction for MEG connectomes

    PubMed Central

    Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.

    2015-01-01

    Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
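A symmetric (Löwdin-style) orthogonalisation can be sketched via the singular value decomposition: the closest matrix with mutually orthonormal columns to a data matrix X is U Vᵀ, where X = U S Vᵀ. The toy "leaked" time-courses below are illustrative, not the paper's data or exact pipeline:

```python
import numpy as np

def symmetric_orthogonalise(X):
    """Closest matrix (Frobenius norm) with orthonormal columns to X, via the SVD.
    Treating columns as ROI time-courses, this removes all zero-lag correlations
    at once, without privileging any ROI ordering."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
t = rng.normal(size=(1000, 1))
# Three "ROI" time-courses sharing an artificially leaked common signal.
X = np.hstack([t + 0.5 * rng.normal(size=(1000, 1)) for _ in range(3)])

Y = symmetric_orthogonalise(X)
gram = Y.T @ Y
print(np.allclose(gram, np.eye(3)))
```

After the correction, any remaining dependence between ROI power envelopes cannot be a zero-lag leakage artifact, which is what licenses the subsequent network modelling.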

  7. Solvent Reaction Field Potential inside an Uncharged Globular Protein: A Bridge between Implicit and Explicit Solvent Models?

    PubMed Central

    Baker, Nathan A.; McCammon, J. Andrew

    2008-01-01

    The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 kBT ec−1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, the correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and the correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217

  8. Solvent reaction field potential inside an uncharged globular protein: A bridge between implicit and explicit solvent models?

    NASA Astrophysics Data System (ADS)

    Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew

    2007-10-01

    The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 kBT ec−1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, the correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and the correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.

  9. Polarization effects in low-energy electron-CH4 elastic collisions in an exact exchange treatment

    NASA Astrophysics Data System (ADS)

    Jain, Ashok; Weatherford, C. A.; Thompson, D. G.; McNaughten, P.

    1989-12-01

    We have investigated polarization effects in very-low-energy (below 1 eV) electron-CH4 collisions in an exact-exchange treatment. Two parameter-free models of the polarization potential are employed: the first, the VpolJT potential, introduced by Jain and Thompson [J. Phys. B 15, L631 (1982)], is based on an approximate polarized-orbital method; the second, the correlation-polarization potential VpolCP, first proposed by O'Connell and Lane [Phys. Rev. A 27, 1893 (1983)], is given as a simple analytic form in terms of the charge density of the target. In this very low-energy region, polarization effects play a decisive role, particularly in creating structure in the differential cross section (DCS) and producing the Ramsauer-Townsend minimum in the total cross section. Our DCSs at 0.2, 0.4, and 0.6 eV are compared with recent measurements. We find that a local parameter-free approximation for the polarization potential is quite successful if it is determined with a polarized-orbital-type technique rather than the correlation-polarization approach.

  10. Simple systematization of vibrational excitation cross-section calculations for resonant electron-molecule scattering in the boomerang and impulse models.

    PubMed

    Sarma, Manabendra; Adhikari, S; Mishra, Manoj K

    2007-01-28

    Vibrational excitation (νf ← νi) cross sections σνf←νi(E) in resonant e-N2 and e-H2 scattering are calculated from transition matrix elements Tνf,νi(E) obtained using a Fourier transform of the cross-correlation function ⟨φνf|ψνi(t)⟩, where ψνi(R,t) ≈ exp(−iH(A2−)t/ℏ) φνi(R), with time evolution under the influence of the resonance anionic Hamiltonian H(A2−) (A2− = N2−/H2−) implemented using Lanczos and fast Fourier transform techniques. The target (A2) vibrational eigenfunctions φνi(R) and φνf(R) are calculated using the Fourier grid Hamiltonian method applied to potential energy (PE) curves of the neutral target. Application of this simple systematization to calculate vibrational structure in e-N2 and e-H2 scattering cross sections provides mechanistic insight into the features underlying the presence or absence of structure in those cross sections. The results obtained with approximate PE curves are in reasonable agreement with experimental and calculated cross-section profiles, and the cross-correlation functions provide a simple demarcation between the boomerang and impulse models.

  11. Easy monitoring of velocity fields in microfluidic devices using spatiotemporal image correlation spectroscopy.

    PubMed

    Travagliati, Marco; Girardo, Salvatore; Pisignano, Dario; Beltram, Fabio; Cecchini, Marco

    2013-09-03

    Spatiotemporal image correlation spectroscopy (STICS) is a simple and powerful technique, well established as a tool to probe protein dynamics in cells. Recently, its potential as a tool to map velocity fields in lab-on-a-chip systems was discussed. However, the lack of studies on its performance has prevented its use in microfluidics applications. Here, we systematically and quantitatively explore STICS microvelocimetry in microfluidic devices. We exploit a simple experimental setup, based on a standard bright-field inverted microscope (no fluorescence required) and a high-frame-rate camera, and apply STICS to map liquid flow in polydimethylsiloxane (PDMS) microchannels. Our data demonstrate optimal 2D velocimetry for flows up to 10 mm/s and spatial resolution down to 5 μm.
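The core of correlation-based velocimetry is locating the peak of the spatial cross-correlation between successive frames; dividing the peak displacement by the frame interval gives a velocity. The sketch below shows that kernel only (STICS proper additionally resolves the correlation as a function of lag time); the frame data, pixel size, and frame interval are all hypothetical:

```python
import numpy as np

def shift_between(frame_a, frame_b):
    """Integer-pixel displacement maximising the cross-correlation of two frames,
    computed via FFTs (circular correlation)."""
    corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # Wrap displacements into the range [-N/2, N/2).
    return (dy if dy < ny // 2 else dy - ny, dx if dx < nx // 2 else dx - nx)

rng = np.random.default_rng(2)
frame = rng.random((64, 64))
# A tracer pattern advected 3 px along x and 1 px along y between frames.
moved = np.roll(np.roll(frame, 1, axis=0), 3, axis=1)

dy, dx = shift_between(frame, moved)
dt, pixel_size = 1e-3, 5e-6      # hypothetical: 1 ms between frames, 5 um pixels
vx = dx * pixel_size / dt        # 0.015 m/s, i.e. 15 mm/s along x
print(dy, dx)
```

Sub-pixel refinement (e.g. fitting the correlation peak) and averaging over many frame pairs are what turn this kernel into a practical velocimetry map.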

  12. Agent Model Development for Assessing Climate-Induced Geopolitical Instability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boslough, Mark B.; Backus, George A.

    2005-12-01

    We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.
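A Schelling-style cellular agent model of the kind cited above fits in a few dozen lines. This is a generic textbook variant on a one-dimensional ring, written for illustration; it is not the report's own code, and the neighborhood size and tolerance threshold are arbitrary choices.

```python
import random

random.seed(0)

# Two agent types on a ring of N cells (some empty). Each agent that is unhappy
# at the start of a step (fewer than half of its occupied neighbours share its
# type) relocates to a randomly chosen empty cell.
N = 60
cells = [random.choice(["A", "B", None]) for _ in range(N)]
n_a, n_b = cells.count("A"), cells.count("B")

def neighbours(i):
    occ = [cells[(i + d) % N] for d in (-2, -1, 1, 2)]
    return [a for a in occ if a is not None]

def unhappy(i):
    agent, occ = cells[i], neighbours(i)
    if agent is None or not occ:
        return False
    return sum(a == agent for a in occ) / len(occ) < 0.5

def step():
    movers = [i for i in range(N) if unhappy(i)]
    empties = [i for i in range(N) if cells[i] is None]
    random.shuffle(movers)
    for i in movers:
        if not empties:
            break
        j = empties.pop(random.randrange(len(empties)))
        cells[j], cells[i] = cells[i], None
        empties.append(i)

def mean_like_neighbour_share():
    shares = [sum(a == cells[i] for a in neighbours(i)) / len(neighbours(i))
              for i in range(N) if cells[i] is not None and neighbours(i)]
    return sum(shares) / len(shares)

before = mean_like_neighbour_share()
for _ in range(200):
    step()
after = mean_like_neighbour_share()
print(round(before, 2), round(after, 2))
```

The appeal for hypothesis generation is exactly what the report notes: the behavior rule (`unhappy`) is a single editable function, so users can probe how local preferences drive global clustering without significant computing resources.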

  13. Linking indices for biodiversity monitoring to extinction risk theory.

    PubMed

    McCarthy, Michael A; Moore, Alana L; Krauss, Jochen; Morgan, John W; Clements, Christopher F

    2014-12-01

    Biodiversity indices often combine data from different species when used in monitoring programs. Heuristic properties can suggest preferred indices, but we lack objective ways to discriminate between indices with similar heuristics. Biodiversity indices can be evaluated by determining how well they reflect management objectives that a monitoring program aims to support. For example, the Convention on Biological Diversity requires reporting about extinction rates, so simple indices that reflect extinction risk would be valuable. We developed 3 biodiversity indices that are based on simple models of population viability that relate extinction risk to abundance. We based the first index on the geometric mean abundance of species and the second on a more general power mean. In a third index, we integrated the geometric mean abundance and trend. These indices require the same data as previous indices, but they also relate directly to extinction risk. Field data for butterflies and woodland plants and experimental studies of protozoan communities show that the indices correlate with local extinction rates. Applying the index based on the geometric mean to global data on changes in avian abundance suggested that the average extinction probability of birds has increased approximately 1% from 1970 to 2009. © 2014 The Authors. Conservation Biology published by Wiley Periodicals, Inc., on behalf of the Society for Conservation Biology.
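The first index described, the geometric mean of species abundances relative to a baseline, is simple to compute. A minimal sketch with invented counts (the three-species data below are illustrative, not from the cited field studies):

```python
import numpy as np

def geometric_mean_index(abundances, baseline):
    """Geometric mean of species abundances relative to a baseline year.
    Because it multiplies ratios, a crash in a rare species is not masked by
    gains in common ones -- the property linking this index to extinction risk."""
    ratios = np.asarray(abundances, float) / np.asarray(baseline, float)
    return np.exp(np.mean(np.log(ratios)))

baseline = [100, 50, 8]   # hypothetical counts for three species in year 0
now = [120, 50, 2]        # the rare species has crashed

g = geometric_mean_index(now, baseline)
a = float(np.mean(np.array(now, float) / np.array(baseline, float)))  # arithmetic mean, for contrast
print(round(g, 3), round(a, 3))
```

Here the arithmetic mean of the ratios barely registers the decline, while the geometric mean drops sharply, which is why the latter tracks average extinction probability more faithfully.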

  14. Type II Supernova Energetics and Comparison of Light Curves to Shock-cooling Models

    NASA Astrophysics Data System (ADS)

    Rubin, Adam; Gal-Yam, Avishay; De Cia, Annalisa; Horesh, Assaf; Khazov, Danny; Ofek, Eran O.; Kulkarni, S. R.; Arcavi, Iair; Manulis, Ilan; Yaron, Ofer; Vreeswijk, Paul; Kasliwal, Mansi M.; Ben-Ami, Sagi; Perley, Daniel A.; Cao, Yi; Cenko, S. Bradley; Rebbapragada, Umaa D.; Woźniak, P. R.; Filippenko, Alexei V.; Clubb, K. I.; Nugent, Peter E.; Pan, Y.-C.; Badenes, C.; Howell, D. Andrew; Valenti, Stefano; Sand, David; Sollerman, J.; Johansson, Joel; Leonard, Douglas C.; Horst, J. Chuck; Armen, Stephen F.; Fedrow, Joseph M.; Quimby, Robert M.; Mazzali, Paulo; Pian, Elena; Sternberg, Assaf; Matheson, Thomas; Sullivan, M.; Maguire, K.; Lazarevic, Sanja

    2016-03-01

    During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well monitored during their rise, with >5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) × 10^51 erg/(10 M⊙), and have a mean energy per unit mass of ⟨E/M⟩ = 0.85 × 10^51 erg/(10 M⊙), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of 56Ni produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Δm15), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster-declining SNe are more luminous and have longer rise times. This limits the possible power sources for such events.

  15. Type II supernova energetics and comparison of light curves to shock-cooling models

    DOE PAGES

    Rubin, Adam; Gal-Yam, Avishay; De Cia, Annalisa; ...

    2016-03-16

    During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well monitored during their rise, with >5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) × 10^51 erg/(10 M⊙), and have a mean energy per unit mass of ⟨E/M⟩ = 0.85 × 10^51 erg/(10 M⊙), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of 56Ni produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Δm15), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster-declining SNe are more luminous and have longer rise times. This limits the possible power sources for such events.

  16. Maier-Saupe model of polymer nematics: Comparing free energies calculated with Self Consistent Field theory and Monte Carlo simulations.

    PubMed

    Greco, Cristina; Jiang, Ying; Chen, Jeff Z Y; Kremer, Kurt; Daoulas, Kostas Ch

    2016-11-14

    Self Consistent Field (SCF) theory serves as an efficient tool for studying the mesoscale structure and thermodynamics of polymeric liquid crystals (LC). We investigate how some of the intrinsic approximations of SCF affect the description of the thermodynamics of polymeric LC, using a coarse-grained model. Polymer nematics are represented as discrete worm-like chains (WLC) where non-bonded interactions are defined by combining an isotropic repulsive and an anisotropic attractive Maier-Saupe (MS) potential. The range of the potentials, σ, controls the strength of correlations due to non-bonded interactions. Increasing σ (which can be seen as an increase in coarse-graining) while preserving the integrated strength of the potentials reduces correlations. The model is studied with particle-based Monte Carlo (MC) simulations and SCF theory, which uses partial enumeration to describe discrete WLC. In MC simulations the Helmholtz free energy is calculated as a function of the strength of MS interactions to obtain reference thermodynamic data. To calculate the free energy of the nematic branch with respect to the disordered melt, we employ a special thermodynamic integration (TI) scheme invoking an external field to bypass the first-order isotropic-nematic transition. Methodological aspects that have not been discussed in earlier implementations of TI for LC are considered. Special attention is given to the rotational Goldstone mode. The free-energy landscapes in MC and SCF are directly compared. For moderate σ the differences highlight the importance of local non-bonded orientation correlations between segments, which SCF neglects. Simple renormalization of parameters in SCF cannot compensate for the missing correlations. Increasing σ reduces correlations, and SCF then reproduces well the free energy obtained in MC simulations.

  17. Type II Supernova Energetics and Comparison of Light Curves to Shock-Cooling Models

    NASA Technical Reports Server (NTRS)

    Rubin, Adam; Gal-Yam, Avishay; Cia, Annalisa De; Horesh, Assaf; Khazov, Danny; Ofek, Eran O.; Kulkarni, S. R.; Arcavi, Iair; Manulis, Ilan; Cenko, S. Bradley

    2016-01-01

    During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well monitored during their rise, with greater than 5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) × 10^51 erg/(10 M⊙), and have a mean energy per unit mass of ⟨E/M⟩ = 0.85 × 10^51 erg/(10 M⊙), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of 56Ni produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Δm15), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster-declining SNe are more luminous and have longer rise times. This limits the possible power sources for such events.

  18. Phenomenology of wall-bounded Newtonian turbulence.

    PubMed

    L'vov, Victor S; Pomyalov, Anna; Procaccia, Itamar; Zilitinkevich, Sergej S

    2006-01-01

    We construct a simple analytic model for wall-bounded turbulence, containing only four adjustable parameters. Two of these parameters are responsible for the viscous dissipation of the components of the Reynolds stress tensor. The other two parameters control the nonlinear relaxation of these objects. The model offers an analytic description of the profiles of the mean velocity and the correlation functions of velocity fluctuations in the entire boundary region, from the viscous sublayer, through the buffer layer, and further into the log-law turbulent region. In particular, the model predicts a very simple distribution of the turbulent kinetic energy in the log-law region between the velocity components: the streamwise component contains half of the total energy, whereas the wall-normal and cross-stream components contain a quarter each. In addition, the model predicts a very simple relation between the von Kármán slope k and the turbulent velocity in the log-law region v+ (in wall units): v+ = 6k. These predictions are in excellent agreement with direct numerical simulation data and with recent laboratory experiments.
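The log-law predictions quoted in this abstract are closed-form and easy to encode directly; the sketch below (our own illustration, with the classical von Kármán slope 0.41 supplied as an assumed input, not a result of the paper) just restates the stated energy partition and the v+ = 6k relation.

```python
# Illustrative encoding of the model's closed-form log-law predictions;
# the von Karman slope value 0.41 is an assumed classical input.
def tke_partition(total_k):
    """Split turbulent kinetic energy among velocity components as the
    model predicts: 1/2 streamwise, 1/4 wall-normal, 1/4 cross-stream."""
    return {"streamwise": 0.5 * total_k,
            "wall_normal": 0.25 * total_k,
            "cross_stream": 0.25 * total_k}

def turbulent_velocity(von_karman_slope):
    """Model relation v+ = 6k between the log-law slope and the
    turbulent velocity in wall units."""
    return 6.0 * von_karman_slope

print(tke_partition(1.0))          # streamwise component carries half
print(turbulent_velocity(0.41))    # ~2.46 for the classical k = 0.41
```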

  19. Effect of Stability on Mixing in Open Canopies. Chapter 4

    NASA Technical Reports Server (NTRS)

    Lee, Young-Hee; Mahrt, L.

    2005-01-01

    In open canopies, the within-canopy flux from the ground surface and understory can account for a significant fraction of the total flux above the canopy. This study incorporates the important influence of within-canopy stability on turbulent mixing and subcanopy fluxes into a first-order closure scheme. Toward this goal, we analyze within-canopy eddy-correlation data from the old aspen site in the Boreal Ecosystem-Atmosphere Study (BOREAS) and a mature ponderosa pine site in Central Oregon, USA. A formulation of within-canopy transport is framed in terms of a stability-dependent mixing length, which approaches Monin-Obukhov similarity theory above the canopy roughness sublayer. The new simple formulation is an improvement upon the usual neglect of the influence of within-canopy stability in simple models. However, frequent well-defined cold air drainage within the pine subcanopy inversion reduces the utility of simple models for nocturnal transport. Other shortcomings of the formulation are discussed.

  20. Transport properties of strongly correlated electrons in quantum dots studied with a simple circuit model.

    PubMed

    Martins, G B; Büsser, C A; Al-Hassanieh, K A; Anda, E V; Moreo, A; Dagotto, E

    2006-02-17

    Numerical calculations are shown to reproduce the main results of recent experiments involving nonlocal spin control in quantum dots [Craig et al., Science 304, 565 (2004)]. In particular, the experimentally reported zero-bias-peak splitting is clearly observed in our studies. To understand these results, a simple "circuit model" is introduced and shown to qualitatively describe the experiments. The main idea is that the splitting originates in a Fano antiresonance, caused by one of the quantum dots being side-connected relative to the current's path. This scenario provides an explanation of the results of Craig et al. that is an alternative to the RKKY proposal, which is also addressed here.

  1. Description of quasiparticle and satellite properties via cumulant expansions of the retarded one-particle Green's function

    DOE PAGES

    Mayers, Matthew Z.; Hybertsen, Mark S.; Reichman, David R.

    2016-08-22

    A cumulant-based GW approximation for the retarded one-particle Green's function is proposed, motivated by an exact relation between the improper Dyson self-energy and the cumulant generating function. We explore qualitative aspects of this method within a simple one-electron independent phonon model, where it is seen that the method preserves the energy moment of the spectral weight while also reproducing the exact Green's function in the weak-coupling limit. For the three-dimensional electron gas, this method predicts multiple satellites at the bottom of the band, albeit with inaccurate peak spacing. However, its quasiparticle properties and correlation energies are more accurate than both previous cumulant methods and standard G0W0. These results point to features that may be exploited within the framework of cumulant-based methods and suggest promising directions for future exploration and improvements of cumulant-based GW approaches.

  2. Chaos and simple determinism in reversed field pinch plasmas: Nonlinear analysis of numerical simulation and experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Christopher A.

    In this dissertation the possibility that chaos and simple determinism govern the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low-dimensional chaos in the data. These tools include phase portraits and Poincare sections, correlation dimension, the spectrum of Lyapunov exponents, and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low-dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.

  3. Rapid investigation of α-glucosidase inhibitory activity of Phaleria macrocarpa extracts using FTIR-ATR based fingerprinting.

    PubMed

    Easmin, Sabina; Sarker, Md Zaidul Islam; Ghafoor, Kashif; Ferdosh, Sahena; Jaffri, Juliana; Ali, Md Eaqub; Mirhosseini, Hamed; Al-Juhaimi, Fahad Y; Perumal, Vikneswari; Khatib, Alfi

    2017-04-01

    Phaleria macrocarpa, known as "Mahkota Dewa", is a widely used medicinal plant in Malaysia. This study focused on the characterization of the α-glucosidase inhibitory activity of P. macrocarpa extracts using Fourier transform infrared spectroscopy (FTIR)-based metabolomics. P. macrocarpa and its extracts contain thousands of compounds with synergistic effects. Their composition is variable, and many active components are present only in meager amounts. Thus, conventional single-component measurement methods for quality control are time consuming, laborious, expensive, and unreliable. It is therefore of great interest to develop a rapid prediction method for herbal quality control that assesses the α-glucosidase inhibitory activity of P. macrocarpa by multicomponent analysis. In this study, a rapid and simple analytical method was developed using FTIR spectroscopy-based fingerprinting. A total of 36 extracts of different ethanol concentrations were prepared, tested for inhibitory potential, and fingerprinted using FTIR spectroscopy coupled with the chemometrics of orthogonal partial least squares (OPLS) in the 4000-400 cm-1 frequency region at a resolution of 4 cm-1. The OPLS model generated the highest regression coefficient, with R2Y = 0.98 and Q2Y = 0.70, the lowest root mean square error of estimation (17.17), and a root mean square error of cross validation of 57.29. A five-component (1+4+0) predictive model was built to correlate the FTIR spectra with activity, and the functional groups responsible for the bioactivity, such as -CH, -NH, -COOH, and -OH, were identified. A successful multivariate model was constructed using FTIR-attenuated total reflection as a simple and rapid technique to predict the inhibitory activity. Copyright © 2016. Published by Elsevier B.V.

  4. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)

  5. Cerebellarlike corrective model inference engine for manipulation tasks.

    PubMed

    Luque, Niceto Rafael; Garrido, Jesús Alberto; Carrillo, Richard Rafael; Coenen, Olivier J-M D; Ros, Eduardo

    2011-10-01

    This paper presents how a simple cerebellumlike architecture can infer corrective models in the framework of a control task when manipulating objects that significantly affect the dynamics model of the system. The main motivation of this paper is to evaluate a simplified bio-mimetic approach in the framework of a manipulation task. More concretely, the paper focuses on how the model inference process takes place within a feedforward control loop based on the cerebellar structure and on how these internal models are built up by means of biologically plausible synaptic adaptation mechanisms. This kind of investigation may provide clues on how biology achieves accurate control of non-stiff-joint robots with low-power actuators, which involves controlling systems with large inertial components. This paper studies how a basic temporal-correlation kernel including long-term depression (LTD) and a constant long-term potentiation (LTP) at parallel fiber-Purkinje cell synapses can effectively infer corrective models. We evaluate how this spike-timing-dependent plasticity correlates sensorimotor activity arriving through the parallel fibers with teaching signals (dependent on error estimates) arriving through the climbing fibers from the inferior olive. This paper addresses the study of how these LTD and LTP components need to be well balanced with each other to achieve accurate learning. This is of interest for evaluating the relevant role of homeostatic mechanisms in biological systems where adaptation occurs in a distributed manner. Furthermore, we illustrate how the temporal-correlation kernel can also work in the presence of transmission delays in sensorimotor pathways. We use a cerebellumlike spiking neural network which stores the corrective models as well-structured weight patterns distributed among the parallel fiber to Purkinje cell connections.
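The balance of constant LTP against correlation-driven LTD described above can be caricatured in a few lines. This is our own hedged toy, not the paper's rule: the trace formulation, parameter values, and function name are invented for illustration.

```python
# Hedged toy of the described plasticity scheme: a constant LTP increment,
# balanced by LTD proportional to the correlation of parallel-fiber (PF)
# activity with the delayed climbing-fiber (CF) error signal. All parameter
# values and the trace formulation are invented, not taken from the paper.
def update_pf_weight(w, pf_trace, cf_error, ltp=0.001, ltd=0.01,
                     w_min=0.0, w_max=1.0):
    """One LTP/LTD update of a PF-Purkinje synaptic weight.

    pf_trace: recent PF activity (e.g., an exponentially decaying spike
              trace), giving tolerance to sensorimotor transmission delays.
    cf_error: teaching signal from the inferior olive (error estimate).
    """
    dw = ltp - ltd * pf_trace * cf_error
    return min(w_max, max(w_min, w + dw))
```

When PF activity coincides (within the trace window) with a CF error signal, the LTD term dominates and the weight decreases; otherwise the constant LTP slowly restores it, which is why the two terms must be balanced, as the abstract emphasizes.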

  6. Phase space effects on fast ion distribution function modeling in tokamaks

    NASA Astrophysics Data System (ADS)

    Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.

    2016-05-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining the correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  7. Phase space effects on fast ion distribution function modeling in tokamaks

    DOE Data Explorer

    White, R. B. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Podesta, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkova, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Fredrickson, E. D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkov, N. N. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-06-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining the correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  8. Multiparticle Collectivity from Initial State Correlations in High Energy Proton-Nucleus Collisions

    DOE PAGES

    Dusling, Kevin; Mace, Mark; Venugopalan, Raju

    2018-01-25

    Qualitative features of multiparticle correlations in light-heavy ion (p + A) collisions at RHIC and LHC are reproduced in a simple initial state model of partons in the projectile coherently scattering off localized domains of color charge in the heavy nuclear target. These include (i) the ordering of the magnitudes of the azimuthal angle nth Fourier harmonics of two-particle correlations vn{2}, (ii) the energy and transverse momentum dependence of the four-particle Fourier harmonic v2{4}, and (iii) the energy dependence of four-particle symmetric cumulants measuring correlations between different Fourier harmonics. Similar patterns are seen in an Abelian version of the model, where we observe v2{2} > v2{4} ≈ v2{6} ≈ v2{8} for two-, four-, six-, and eight-particle correlations. While such patterns are often interpreted as signatures of collectivity arising from hydrodynamic flow, our results provide an alternative description of the multiparticle correlations seen in p + A collisions.

  9. Multiparticle Collectivity from Initial State Correlations in High Energy Proton-Nucleus Collisions

    NASA Astrophysics Data System (ADS)

    Dusling, Kevin; Mace, Mark; Venugopalan, Raju

    2018-01-01

    Qualitative features of multiparticle correlations in light-heavy ion (p + A) collisions at RHIC and LHC are reproduced in a simple initial state model of partons in the projectile coherently scattering off localized domains of color charge in the heavy nuclear target. These include (i) the ordering of the magnitudes of the azimuthal angle nth Fourier harmonics of two-particle correlations vn{2}, (ii) the energy and transverse momentum dependence of the four-particle Fourier harmonic v2{4}, and (iii) the energy dependence of four-particle symmetric cumulants measuring correlations between different Fourier harmonics. Similar patterns are seen in an Abelian version of the model, where we observe v2{2} > v2{4} ≈ v2{6} ≈ v2{8} for two-, four-, six-, and eight-particle correlations. While such patterns are often interpreted as signatures of collectivity arising from hydrodynamic flow, our results provide an alternative description of the multiparticle correlations seen in p + A collisions.

  10. Multiparticle Collectivity from Initial State Correlations in High Energy Proton-Nucleus Collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dusling, Kevin; Mace, Mark; Venugopalan, Raju

    Qualitative features of multiparticle correlations in light-heavy ion (p + A) collisions at RHIC and LHC are reproduced in a simple initial state model of partons in the projectile coherently scattering off localized domains of color charge in the heavy nuclear target. These include (i) the ordering of the magnitudes of the azimuthal angle nth Fourier harmonics of two-particle correlations vn{2}, (ii) the energy and transverse momentum dependence of the four-particle Fourier harmonic v2{4}, and (iii) the energy dependence of four-particle symmetric cumulants measuring correlations between different Fourier harmonics. Similar patterns are seen in an Abelian version of the model, where we observe v2{2} > v2{4} ≈ v2{6} ≈ v2{8} for two-, four-, six-, and eight-particle correlations. While such patterns are often interpreted as signatures of collectivity arising from hydrodynamic flow, our results provide an alternative description of the multiparticle correlations seen in p + A collisions.

  11. Binding stability of peptides on major histocompatibility complex class I proteins: role of entropy and dynamics.

    PubMed

    Gul, Ahmet; Erman, Burak

    2018-01-16

    Prediction of peptide binding on specific human leukocyte antigens (HLA) has long been studied with successful results. We herein describe the effects of entropy and dynamics by investigating the binding stabilities of 10 nonapeptides on various HLA Class I alleles using a theoretical model based on molecular dynamics simulations. The fluctuational entropies of the peptides are estimated over a temperature range of 310-460 K. The estimated entropies correlate well with experimental binding affinities of the peptides: peptides that have higher binding affinities have lower entropies compared to non-binders, which have significantly larger entropies. The computation of the entropies is based on a simple model that requires short molecular dynamics trajectories and allows for approximate but rapid determination. The paper draws attention to the long-neglected dynamic aspects of peptide binding, and provides a fast computation scheme that allows for rapid scanning of large numbers of peptides on selected HLA antigens, which may be useful in defining the right peptides for personal immunotherapy.
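The abstract only summarizes the entropy model, so as an illustration of how positional fluctuations from a short MD trajectory can be turned into an entropy, here is a minimal Gaussian ("quasi-harmonic"-style) sketch. This is one standard approximation, not the authors' exact scheme; the function name and the independence assumption are ours.

```python
import math

# Hedged sketch: a Gaussian fluctuational entropy from per-coordinate
# variances of an MD trajectory, in units of k_B. Assumes independent
# Gaussian fluctuations per coordinate (a common simplification).
def fluctuational_entropy(trajectory):
    """trajectory: list of snapshots, each a list of coordinates.
    Returns S / k_B for the fitted independent Gaussians."""
    n = len(trajectory)
    dims = len(trajectory[0])
    means = [sum(frame[d] for frame in trajectory) / n for d in range(dims)]
    variances = [sum((frame[d] - means[d]) ** 2 for frame in trajectory) / n
                 for d in range(dims)]
    # Entropy of a 1-D Gaussian is (1/2) ln(2*pi*e*sigma^2); sum over dims.
    return 0.5 * sum(math.log(2.0 * math.pi * math.e * v) for v in variances)
```

Consistent with the reported trend, a loosely bound peptide (larger fluctuations) yields a larger entropy than a tightly bound one.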

  12. Unified analysis of optical absorption spectra of carotenoids based on a stochastic model.

    PubMed

    Uragami, Chiasa; Saito, Keisuke; Yoshizawa, Masayuki; Molnár, Péter; Hashimoto, Hideki

    2018-05-03

    The chemical structures of carotenoid molecules are simple, and one might expect their electronic features to be easily predicted. In fact, little is established beyond the correlation between the electronic energy states and the effective conjugation length of carotenoids. The most direct way to investigate the electronic features of carotenoids is to measure their optical absorption spectra; simulating these spectra from resonance Raman spectra is also effective. We therefore studied the optical absorption spectra as well as the resonance Raman spectra of 15 different kinds of cyclic carotenoid molecules, recorded in tetrahydrofuran (THF) solutions at room temperature. The whole band shapes of the absorption spectra of all these carotenoid molecules were successfully simulated based on a stochastic model using Brownian oscillators. The parameters obtained from the simulation made it possible to discuss quantitatively the intermolecular interaction between the carotenoids and the solvent THF molecules. Copyright © 2018. Published by Elsevier Inc.

  13. Binding stability of peptides on major histocompatibility complex class I proteins: role of entropy and dynamics

    NASA Astrophysics Data System (ADS)

    Gul, Ahmet; Erman, Burak

    2018-03-01

    Prediction of peptide binding on specific human leukocyte antigens (HLA) has long been studied with successful results. We herein describe the effects of entropy and dynamics by investigating the binding stabilities of 10 nonapeptides on various HLA Class I alleles using a theoretical model based on molecular dynamics simulations. The fluctuational entropies of the peptides are estimated over a temperature range of 310-460 K. The estimated entropies correlate well with experimental binding affinities of the peptides: peptides that have higher binding affinities have lower entropies compared to non-binders, which have significantly larger entropies. The computation of the entropies is based on a simple model that requires short molecular dynamics trajectories and allows for approximate but rapid determination. The paper draws attention to the long-neglected dynamic aspects of peptide binding, and provides a fast computation scheme that allows for rapid scanning of large numbers of peptides on selected HLA antigens, which may be useful in defining the right peptides for personal immunotherapy.

  14. A study of the electric field in an open magnetospheric model

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1973-01-01

    Recently, Svalgaard and Heppner reported two separate features of the polar electromagnetic field that correlate with the dawn-dusk component of the interplanetary magnetic field. This work attempts to explain these findings in terms of properties of the open magnetosphere. The topology and qualitative properties of the open magnetosphere are first studied by means of a simple model, consisting of a dipole in a constant field. Many such properties are found to depend on the separation line, a curve connecting neutral points and separating different field line regimes. In the simple model it turns out that the electric field in the central polar cap tends to point from dawn to dusk for a wide variety of external fields, but, near the boundary of the polar cap, electric equipotentials are deformed into crescents.

  15. Fluorescence Correlation Spectroscopy and Nonlinear Stochastic Reaction-Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Razo, Mauricio; Pan, Wenxiao; Qian, Hong

    2014-05-30

    The currently existing theory of fluorescence correlation spectroscopy (FCS) is based on the linear fluctuation theory originally developed by Einstein, Onsager, Lax, and others as a phenomenological approach to equilibrium fluctuations in bulk solutions. For mesoscopic reaction-diffusion systems with nonlinear chemical reactions among a small number of molecules, a situation often encountered in single-cell biochemistry, it is expected that FCS time correlation functions of a reaction-diffusion system can deviate from the classic results of Elson and Magde [Biopolymers (1974) 13:1-27]. We first discuss this nonlinear effect for reaction systems without diffusion. For nonlinear stochastic reaction-diffusion systems there are no closed-form solutions; therefore, stochastic Monte-Carlo simulations are carried out. We show that the deviation is small for a simple bimolecular reaction; the most significant deviations occur when the number of molecules is small and of the same order. Extending Delbrück-Gillespie theory for stochastic nonlinear reactions with rapid stirring to reaction-diffusion systems provides a mesoscopic model for chemical and biochemical reactions at the nanometric and mesoscopic levels, such as in a single biological cell.

  16. Satellite-based high-resolution mapping of rainfall over southern Africa

    NASA Astrophysics Data System (ADS)

    Meyer, Hanna; Drönner, Johannes; Nauss, Thomas

    2017-06-01

    A spatially explicit mapping of rainfall is necessary for southern Africa for eco-climatological studies or nowcasting, but accurate estimates are still a challenging task. This study presents a method to estimate hourly rainfall based on data from the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI). Rainfall measurements from about 350 weather stations from 2010-2014 served as ground truth for calibration and validation. SEVIRI and weather station data were used to train neural networks that allowed the estimation of rainfall area and rainfall quantities over all times of the day. The results revealed that 60 % of recorded rainfall events were correctly classified by the model (probability of detection, POD). However, the false alarm ratio (FAR) was high (0.80), leading to a Heidke skill score (HSS) of 0.18. Hourly rainfall quantities were estimated with an average hourly correlation of ρ = 0.33 and a root mean square error (RMSE) of 0.72. The correlation increased with temporal aggregation to 0.52 (daily), 0.67 (weekly) and 0.71 (monthly). The main weakness was the overestimation of rainfall events. The model results were compared to the Integrated Multi-satellitE Retrievals for GPM (IMERG) of the Global Precipitation Measurement (GPM) mission. Despite being a comparably simple approach, the presented MSG-based rainfall retrieval outperformed GPM IMERG in terms of rainfall area detection: GPM IMERG had a considerably lower POD. The HSS was not significantly different compared to the MSG-based retrieval due to a lower FAR of GPM IMERG. There were no further significant differences between the MSG-based retrieval and GPM IMERG in terms of correlation with the observed rainfall quantities. The MSG-based retrieval, however, provides rainfall in a higher spatial resolution.
    Though estimating rainfall from satellite data remains challenging, especially at high temporal resolutions, this study showed promising results towards improved spatio-temporal estimates of rainfall over southern Africa.
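The skill scores quoted above (POD, FAR, HSS) are the standard 2x2 contingency-table verification measures. A minimal sketch, with hypothetical event counts chosen only to be consistent with the reported POD = 0.60, FAR = 0.80, and HSS ≈ 0.18 (they are not the study's data):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """POD, FAR and Heidke skill score from a 2x2 contingency table."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    pod = a / (a + c)                 # probability of detection
    far = b / (a + b)                 # false alarm ratio
    # Heidke skill score: improvement of correct classifications (a + d)
    # over the number expected by chance.
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    hss = (a + d - expected) / (n - expected)
    return pod, far, hss

# Hypothetical counts per 100 hours, not the study's data:
pod, far, hss = verification_scores(hits=6, misses=4,
                                    false_alarms=24, correct_negatives=66)
# pod = 0.60, far = 0.80, hss ~ 0.18, matching the reported scores
```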

  17. Unwinding the hairball graph: Pruning algorithms for weighted complex networks

    NASA Astrophysics Data System (ADS)

    Dianati, Navid

    2016-01-01

    Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
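To make the marginal-filter idea concrete, here is a hedged sketch of a configuration-model-style significance test for an integer-weighted edge. The binomial null below (node strengths k_i as total incident weight, T as the total weight of the graph, p_ij = k_i k_j / T^2) captures the spirit of a marginal likelihood filter but is our own guess, not the paper's exact derivation, and the ensemble GLF is not sketched.

```python
import math

# Hedged sketch of a marginal-likelihood-style edge filter. Under an
# assumed configuration-model-like null, an edge's integer weight w_ij is
# compared with Binomial(T, p_ij), where p_ij = k_i * k_j / T**2, k_i is
# node i's strength (total incident weight) and T the graph's total weight.
def edge_pvalue(w_ij, k_i, k_j, T):
    """P(X >= w_ij) for X ~ Binomial(T, k_i * k_j / T**2)."""
    p = (k_i * k_j) / (T * T)
    return sum(math.comb(T, x) * p ** x * (1 - p) ** (T - x)
               for x in range(w_ij, T + 1))

def prune(edges, strengths, T, alpha=0.05):
    """Keep edges whose weight is significantly above the null model."""
    return [(i, j, w) for (i, j, w) in edges
            if edge_pvalue(w, strengths[i], strengths[j], T) < alpha]
```

A heavy edge between two low-strength nodes survives (its weight is unlikely under the null), while a light edge between hubs is pruned even if its raw weight is similar, which is the behavior that distinguishes significance filtering from plain weight thresholding.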

  18. Binding Affinity prediction with Property Encoded Shape Distribution signatures

    PubMed Central

    Das, Sourav; Krein, Michael P.

    2010-01-01

    We report the use of the molecular signatures known as “Property-Encoded Shape Distributions” (PESD) together with standard Support Vector Machine (SVM) techniques to produce validated models that can predict the binding affinity of a large number of protein ligand complexes. This “PESD-SVM” method uses PESD signatures that encode molecular shapes and property distributions on protein and ligand surfaces as features to build SVM models that require no subjective feature selection. A simple protocol was employed for tuning the SVM models during their development, and the results were compared to SFCscore – a regression-based method that was previously shown to perform better than 14 other scoring functions. Although the PESD-SVM method is based on only two surface property maps, the overall results were comparable. For most complexes with a dominant enthalpic contribution to binding (ΔH/-TΔS > 3), a good correlation between true and predicted affinities was observed. Entropy and solvent were not considered in the present approach and further improvement in accuracy would require accounting for these components rigorously. PMID:20095526

  19. Spatial memory in foraging games.

    PubMed

    Kerster, Bryan E; Rhodes, Theo; Kello, Christopher T

    2016-03-01

    Foraging and foraging-like processes are found in spatial navigation, memory, visual search, and many other search functions in human cognition and behavior. Foraging is commonly theorized using either random or correlated movements based on Lévy walks, or a series of decisions to remain or leave proximal areas known as "patches". Neither class of model makes use of spatial memory, but search performance may be enhanced when information about searched and unsearched locations is encoded. A video game was developed to test the role of human spatial memory in a canonical foraging task. Analyses of search trajectories from over 2000 human players yielded evidence that foraging movements were inherently clustered, and that clustering was facilitated by spatial memory cues and influenced by memory for spatial locations of targets found. A simple foraging model is presented in which spatial memory is used to integrate aspects of Lévy-based and patch-based foraging theories to perform a kind of area-restricted search, and thereby enhance performance as search unfolds. Using only two free parameters, the model accounts for a variety of findings that individually support competing theories, but together they argue for the integration of spatial memory into theories of foraging. Copyright © 2015 Elsevier B.V. All rights reserved.
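The Lévy-walk component that the abstract's model builds on reduces to power-law step lengths with random headings. A minimal sketch of that ingredient only (our own illustration with invented parameter values; the paper's spatial-memory mechanism is not reproduced here):

```python
import math
import random

# Hedged sketch: power-law ("Levy") step-length sampling via inverse
# transform, plus a 2-D walk with uniform random headings. mu and l_min
# are illustrative values, not fitted to the game data.
def levy_step(mu=2.0, l_min=1.0, rng=random.random):
    """Sample l = l_min * u**(-1/(mu - 1)): a power law with tail
    exponent mu (1 < mu <= 3), so l >= l_min always."""
    u = rng()
    return l_min * u ** (-1.0 / (mu - 1.0))

def levy_walk(n_steps, mu=2.0, seed=0):
    """2-D Levy walk: heavy-tailed step lengths, uniform headings."""
    random.seed(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        step = levy_step(mu)
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        path.append((x, y))
    return path
```

Smaller mu gives heavier tails (rarer but much longer relocations), which is the knob such models tune between local patch exploitation and long-range search.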

  20. Complexity analysis based on generalized deviation for financial markets

    NASA Astrophysics Data System (ADS)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method is proposed as a measure to investigate the correlation between past price and future volatility for financial time series, known as the complexity analysis based on generalized deviation. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function presents us an exhaustive way showing the quantization of the financial market rules. Robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but provides a comprehensive way distinguishing the different characteristics between stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. Then we study the influence for negative domain of deviation coefficient and differences during the volatile periods and calm periods. after the data analysis of the experimental model, we found that the generalized deviation model has definite advantages in exploring the relationship between the historical returns and future volatility.

  1. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
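
    The key idea, deriving each pixel's band ordering from an already-coded neighbor so that no side information is transmitted, can be sketched as follows. This toy version works on a 1-D row of pixels with simple first-difference prediction; the paper's actual predictor and entropy coder are more elaborate.

```python
def _band_order(neighbor, n_bands):
    """Band ordering derived from the previously coded neighboring pixel."""
    if neighbor is None:
        return list(range(n_bands))        # first pixel: natural band order
    return sorted(range(n_bands), key=lambda b: neighbor[b])

def encode(pixels):
    """Emit prediction residuals for each pixel's spectrum; pixels[i][b] is
    the value of band b at pixel i. The decoder re-derives the same band
    ordering from already-decoded data, so no ordering is transmitted."""
    out = []
    for i, px in enumerate(pixels):
        order = _band_order(pixels[i - 1] if i > 0 else None, len(px))
        prev_val, res = 0, []
        for b in order:
            res.append(px[b] - prev_val)   # predict each band from the
            prev_val = px[b]               # previous band in sorted order
        out.append(res)
    return out

def decode(residual_stream):
    """Invert encode(), deriving each ordering from the decoded neighbor."""
    pixels = []
    for i, res in enumerate(residual_stream):
        order = _band_order(pixels[i - 1] if i > 0 else None, len(res))
        vals, prev_val = [0] * len(res), 0
        for b, r in zip(order, res):
            prev_val += r
            vals[b] = prev_val
        pixels.append(vals)
    return pixels
```

    When adjacent pixels have similar spectral shapes, sorting makes consecutive predictions monotone, so the residuals stay small and compress well under entropy coding.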

  2. Analysis of the correlative factors for velopharyngeal closure of patients with cleft palate after primary repair.

    PubMed

    Chen, Qi; Li, Yang; Shi, Bing; Yin, Heng; Zheng, Guang-Ning; Zheng, Qian

    2013-12-01

    The objective of this study was to analyze the correlative factors for velopharyngeal closure in patients with cleft palate after primary repair. Ninety-five nonsyndromic patients with cleft palate were enrolled. Two surgical techniques were applied: simple palatoplasty and combined palatoplasty with pharyngoplasty. All patients were assessed 6 months after the operation. The postoperative velopharyngeal closure (VPC) rates were compared by χ² test and the correlative factors were analyzed with a logistic regression model. The postoperative VPC rate of young patients was higher than that of old patients, the rate in the group with incomplete cleft palate was higher than in the group with complete cleft palate, and the rate with combined palatoplasty and pharyngoplasty was higher than with simple palatoplasty. Operative age, cleft type, and surgical technique were significant factors influencing the postoperative VPC rate of patients with cleft palate. Copyright © 2013 Elsevier Inc. All rights reserved.
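
    The χ² comparison of postoperative VPC rates between two groups reduces to a Pearson statistic on a 2×2 contingency table. A minimal sketch (the counts in the test are illustrative only, not the study's data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table
        [[a, b],
         [c, d]]
    e.g. rows = surgical technique, columns = VPC achieved / not achieved."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n           # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2
```

    The statistic is compared against the χ² distribution with one degree of freedom; with small expected counts a continuity correction or exact test would be preferred.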

  3. Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models

    ERIC Educational Resources Information Center

    Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro

    2017-01-01

    Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in…

  4. Combinatorial structures to modeling simple games and applications

    NASA Astrophysics Data System (ADS)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, using combinatorial structures. First, we characterize simple games as influence games using influence graphs, which lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms, which lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  5. Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems

    DTIC Science & Technology

    2008-08-25

    primarily the modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events...are identified, we can extract features representing such behavior while auditing the user's behavior. Figure 1: Taxonomy of Linux and Unix...achieved when the features are extracted just from simple commands. Method Hit Rate False Positive Rate ocSVM using simple cmds (freq.-based

  6. Physics of giant electromagnetic pulse generation in short-pulse laser experiments.

    PubMed

    Poyé, A; Hulin, S; Bailly-Grandvaux, M; Dubois, J-L; Ribolzi, J; Raffestin, D; Bardon, M; Lubrano-Lavaderci, F; D'Humières, E; Santos, J J; Nicolaï, Ph; Tikhonchuk, V

    2015-04-01

    In this paper we describe the physical processes that lead to the generation of giant electromagnetic pulses (GEMPs) at powerful laser facilities. Our study is based on experimental measurements of both the charging of a solid target irradiated by an ultra-short, ultra-intense laser and the detection of the electromagnetic emission in the GHz domain. An unambiguous correlation between the neutralization current in the target holder and the electromagnetic emission shows that the source of the GEMP is the remaining positive charge inside the target after the escape of fast electrons accelerated by the ultra-intense laser. A simple model for calculating this charge in the thick target case is presented. From this model and knowing the geometry of the target holder, it becomes possible to estimate the intensity and the dominant frequencies of the GEMP at any facility.

  7. Investigation of a protein complex network

    NASA Astrophysics Data System (ADS)

    Mashaghi, A. R.; Ramezanpour, A.; Karimipour, V.

    2004-09-01

    The budding yeast Saccharomyces cerevisiae is the first eukaryote whose genome has been completely sequenced. It is also the first eukaryotic cell whose proteome (the set of all proteins) and interactome (the network of all mutual interactions between proteins) have been analyzed. In this paper we study the structure of the yeast protein complex network, in which weighted edges between complexes represent the number of shared proteins. It is found that the network of protein complexes is a small-world network with scale-free behavior for many of its distributions. However, we find that there are no strong correlations between the weights and degrees of neighboring complexes. To reveal non-random features of the network we also compare it with a null model in which the complexes randomly select their proteins. Finally we propose a simple evolutionary model based on duplication and divergence of proteins.

  8. Statistical image reconstruction from correlated data with applications to PET

    PubMed Central

    Alessio, Adam; Sauer, Ken; Kinahan, Paul

    2008-01-01

    Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576

  9. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
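
    The one-parameter, two-observation comparison can be reproduced numerically. The sketch below is my paraphrase of the setup, not the authors' exact expression: it contrasts the GLS variance using the full error covariance with the true variance of an estimator that weights as if the errors were uncorrelated.

```python
def param_variance(x1, x2, sigma=1.0, rho=0.0):
    """Variance of the least-squares estimate of a single parameter p in
    y_i = x_i * p + e_i, i = 1, 2, with error covariance
    C = sigma**2 * [[1, rho], [rho, 1]].
    Returns (var_gls, var_ols): GLS using the full C, and the true variance
    of the estimator that ignores the correlation (equal diagonal weights)."""
    s2 = sigma ** 2
    # GLS: var = (x^T C^-1 x)^-1, with C^-1 written out for the 2x2 case.
    det = s2 * s2 * (1.0 - rho ** 2)
    xtcx = (s2 / det) * (x1 * x1 - 2.0 * rho * x1 * x2 + x2 * x2)
    var_gls = 1.0 / xtcx
    # Diagonal-weight estimator p_hat = (x^T y) / (x^T x); its true variance
    # under correlated errors is x^T C x / (x^T x)^2.
    xx = x1 * x1 + x2 * x2
    var_ols = (x1 * x1 * s2 + x2 * x2 * s2 + 2.0 * x1 * x2 * rho * s2) / xx ** 2
    return var_gls, var_ols
```

    With rho = 0 the two variances coincide; as |rho| grows, the gap depends on the ratio of the sensitivities x1 and x2, echoing the abstract's dependence on ρ and rdss.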

  10. Grass Grows, the Cow Eats: A Simple Grazing Systems Model with Emergent Properties

    ERIC Educational Resources Information Center

    Ungar, Eugene David; Seligman, Noam G.; Noy-Meir, Imanuel

    2004-01-01

    We describe a simple, yet intellectually challenging model of grazing systems that introduces basic concepts in ecology and systems analysis. The practical is suitable for high-school and university curricula with a quantitative orientation, and requires only basic skills in mathematics and spreadsheet use. The model is based on Noy-Meir's (1975)…

  11. On the Impact of Electrostatic Correlations on the Double-Layer Polarization of a Spherical Particle in an Alternating Current Field.

    PubMed

    Alidoosti, Elaheh; Zhao, Hui

    2018-05-15

    In concentrated electrolytes, the ion-ion electrostatic correlation effect is considered an important factor in electrokinetics. In this paper, we compute, theoretically and numerically, the dipole moment of a charged, dielectric spherical particle under the action of an alternating electric field, using the modified continuum Poisson-Nernst-Planck (PNP) model of Bazant et al. [Double Layer in Ionic Liquids: Overscreening versus Crowding. Phys. Rev. Lett. 2011, 106, 046102]. We investigate the dependence of the dipole moment on frequency and its variation with quantities such as the ζ-potential, the electrostatic correlation length, and the double-layer thickness. For thin electric double layers, we develop simple models by performing an asymptotic analysis of the modified PNP model. We also present numerical results for arbitrary Debye screening length and electrostatic correlation length. The results reveal a complicated impact of electrostatic correlations on the dipole moment. For instance, as the electrostatic correlation length increases, the dipole moment decreases, reaches a minimum, and then rises again: surface conduction initially decreases and finally increases owing to the impact of ion-ion electrostatic correlations on ion convection and migration. We also show that, in contrast to the standard PNP model, the modified PNP model can qualitatively explain experimental data in multivalent electrolytes.

  12. Simple models to predict grassland ecosystem C exchange and actual evapotranspiration using NDVI and environmental variables

    USDA-ARS?s Scientific Manuscript database

    Semiarid grasslands contribute significantly to net terrestrial carbon flux as plant productivity and heterotrophic respiration in these moisture-limited systems are correlated with metrics related to water availability (e.g., precipitation, Actual EvapoTranspiration or AET). These variables are als...

  13. Characterization of commercial magnetorheological fluids at high shear rate: influence of the gap

    NASA Astrophysics Data System (ADS)

    Golinelli, Nicola; Spaggiari, Andrea

    2018-07-01

    This paper reports experimental tests on the behaviour of a commercial MR fluid at high shear rates and the effect of the gap. Three gaps were considered at multiple magnetic fields and shear rates. From an extended set of almost two hundred experimental flow curves, a set of parameters for the apparent viscosity is retrieved using the Ostwald-de Waele model for non-Newtonian fluids. The parameter correlation can be simplified by making the following observations: the consistency of the model depends only on the magnetic field, the flow index depends on the fluid type, and the gap shows an important effect only at null or very low magnetic fields. This leads to a simple and useful model, especially in the design phase of an MR-based product. In the off state, with no applied field, a standard viscous model can be used. In the active state, with a high magnetic field, a strong non-Newtonian character prevails over the viscous one even at very high shear rates; the magnetic field dominates the apparent viscosity change, while the gap plays no relevant role in the system behaviour. This simple assumption allows the designer to dimension the gap considering only the non-active state, as in standard viscous systems, and to take into account only the magnetic effect in the active state, where the gap does not change the proposed fluid model.
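
    The Ostwald-de Waele constitutive law gives the apparent viscosity in closed form. A minimal sketch (parameter values in the test are illustrative, not the paper's fitted consistency and flow indices):

```python
def apparent_viscosity(gamma_dot, K, n):
    """Ostwald-de Waele (power-law) fluid: shear stress tau = K * gamma_dot**n,
    so apparent viscosity eta = tau / gamma_dot = K * gamma_dot**(n - 1).
    K: consistency index (Pa*s**n); n: flow index (n < 1 -> shear-thinning,
    n = 1 -> Newtonian with eta = K)."""
    return K * gamma_dot ** (n - 1.0)
```

    In the paper's framework, K would be fit as a function of magnetic field and n per fluid type, with the gap mattering only near zero field.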

  14. Energy Weighted Angular Correlations Between Hadrons Produced in Electron-Positron Annihilation.

    NASA Astrophysics Data System (ADS)

    Strharsky, Roger Joseph

    Electron-positron annihilation at large center of mass energy produces many hadronic particles. Experimentalists then measure the energies of these particles in calorimeters. This study investigated correlations between the angular locations of one or two such calorimeters and the angular orientation of the electron beam in the laboratory frame of reference. The calculation of these correlations includes weighting by the fraction of the total center of mass energy which the calorimeter measures. Starting with the assumption that the reaction proceeds through the intermediate production of a single quark/anti-quark pair, a simple statistical model was developed to provide a phenomenological description of the distribution of final state hadrons. The model distributions were then used to calculate the one- and two-calorimeter correlation functions. Results of these calculations were compared with available data and several predictions were made for those quantities which had not yet been measured. Failure of the model to reproduce all of the data was discussed in terms of quantum chromodynamics, a fundamental theory which includes quark interactions.

  15. Associations between Verbal Learning Slope and Neuroimaging Markers across the Cognitive Aging Spectrum.

    PubMed

    Gifford, Katherine A; Phillips, Jeffrey S; Samuels, Lauren R; Lane, Elizabeth M; Bell, Susan P; Liu, Dandan; Hohman, Timothy J; Romano, Raymond R; Fritzsche, Laura R; Lu, Zengqi; Jefferson, Angela L

    2015-07-01

    A symptom of mild cognitive impairment (MCI) and Alzheimer's disease (AD) is a flat learning profile. Learning slope calculation methods vary, and the optimal method for capturing neuroanatomical changes associated with MCI and early AD pathology is unclear. This study cross-sectionally compared four different learning slope measures from the Rey Auditory Verbal Learning Test (simple slope, regression-based slope, two-slope method, peak slope) to structural neuroimaging markers of early AD neurodegeneration (hippocampal volume, cortical thickness in parahippocampal gyrus, precuneus, and lateral prefrontal cortex) across the cognitive aging spectrum [normal control (NC); (n=198; age=76±5), MCI (n=370; age=75±7), and AD (n=171; age=76±7)] in ADNI. Within diagnostic group, general linear models related slope methods individually to neuroimaging variables, adjusting for age, sex, education, and APOE4 status. Among MCI, better learning performance on simple slope, regression-based slope, and late slope (Trial 2-5) from the two-slope method related to larger parahippocampal thickness (all p-values<.01) and hippocampal volume (p<.01). Better regression-based slope (p<.01) and late slope (p<.01) were related to larger ventrolateral prefrontal cortex in MCI. No significant associations emerged between any slope and neuroimaging variables for NC (p-values ≥.05) or AD (p-values ≥.02). Better learning performances related to larger medial temporal lobe (i.e., hippocampal volume, parahippocampal gyrus thickness) and ventrolateral prefrontal cortex in MCI only. Regression-based and late slope were most highly correlated with neuroimaging markers and explained more variance above and beyond other common memory indices, such as total learning. Simple slope may offer an acceptable alternative given its ease of calculation.
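
    The four slope measures compared in the study can all be computed from per-trial recall scores. The definitions below follow common conventions for the AVLT's five learning trials and may differ in detail from the study's exact formulas:

```python
def learning_slopes(scores):
    """Summaries of per-trial recall scores (e.g., five AVLT learning trials).
    Returns simple slope, OLS regression slope over trial number, the
    early/late pair of the two-slope method, and peak trial-to-trial gain."""
    n = len(scores)
    trials = list(range(1, n + 1))
    simple = (scores[-1] - scores[0]) / (n - 1)
    mt, ms = sum(trials) / n, sum(scores) / n
    regression = (sum((t - mt) * (s - ms) for t, s in zip(trials, scores))
                  / sum((t - mt) ** 2 for t in trials))
    gains = [b - a for a, b in zip(scores, scores[1:])]
    early = gains[0]                               # Trial 1 -> 2
    late = (scores[-1] - scores[1]) / (n - 2)      # Trial 2 -> n
    peak = max(gains)
    return {"simple": simple, "regression": regression,
            "early": early, "late": late, "peak": peak}
```

    On a perfectly linear learning curve all five summaries agree; they diverge precisely on the flat or uneven profiles that distinguish MCI and AD from normal aging.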

  16. Patterns of arm muscle activation involved in octopus reaching movements.

    PubMed

    Gutfreund, Y; Flash, T; Fiorito, G; Hochner, B

    1998-08-01

    The extreme flexibility of the octopus arm allows it to perform many different movements, yet octopuses reach toward a target in a stereotyped manner using a basic invariant motor structure: a bend traveling from the base of the arm toward the tip (Gutfreund et al., 1996a). To study the neuronal control of these movements, arm muscle activation [electromyogram (EMG)] was measured together with the kinematics of reaching movements. The traveling bend is associated with a propagating wave of muscle activation, with maximal muscle activation slightly preceding the traveling bend. Tonic activation was occasionally maintained afterward. Correlation of the EMG signals with the kinematic variables (velocities and accelerations) reveals that a significant part of the kinematic variability can be explained by the level of muscle activation. Furthermore, the EMG level measured during the initial stages of movement predicts the peak velocity attained toward the end of the reaching movement. These results suggest that feed-forward motor commands play an important role in the control of movement velocity and that simple adjustment of the excitation levels at the initial stages of the movement can set the velocity profile of the whole movement. A simple model of octopus arm extension is proposed in which the driving force is set initially and is then decreased in proportion to arm diameter at the bend. The model qualitatively reproduces the typical velocity profiles of octopus reaching movements, suggesting a simple control mechanism for bend propagation in the octopus arm.
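
    The proposed mechanism, a driving force set at movement onset and decreasing in proportion to arm diameter at the bend, can be caricatured with a one-dimensional Euler integration. This is a toy sketch under assumed parameter values (linear taper, viscous drag, unit mass), not the authors' model equations; it only illustrates how such a force law yields a rise-then-fall velocity profile.

```python
def arm_extension_velocity(f0=1.0, drag=2.0, arm_len=1.0, dt=0.001, steps=2000):
    """Toy bend-propagation model: the drive scales with local arm diameter,
    which tapers linearly from base to tip, and a viscous term opposes
    motion. Integrates bend position s and velocity v with explicit Euler
    and returns the velocity profile."""
    s, v = 0.0, 0.0
    profile = []
    for _ in range(steps):
        diameter = max(0.0, 1.0 - s / arm_len)   # linear taper, base -> tip
        a = f0 * diameter - drag * v             # unit mass
        v = max(0.0, v + a * dt)
        s += v * dt
        profile.append(v)
    return profile
```

    The velocity rises while the drive dominates and falls as the bend reaches the thinner distal arm, qualitatively matching the bell-shaped profiles reported for octopus reaching.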

  17. Parallel Representation of Value-Based and Finite State-Based Strategies in the Ventral and Dorsal Striatum

    PubMed Central

    Ito, Makoto; Doya, Kenji

    2015-01-01

    Previous theoretical studies of animal and human behavioral learning have focused on the dichotomy of the value-based strategy using action value functions to predict rewards and the model-based strategy using internal models to predict environmental states. However, animals and humans often take simple procedural behaviors, such as the “win-stay, lose-switch” strategy without explicit prediction of rewards or states. Here we consider another strategy, the finite state-based strategy, in which a subject selects an action depending on its discrete internal state and updates the state depending on the action chosen and the reward outcome. By analyzing choice behavior of rats in a free-choice task, we found that the finite state-based strategy fitted their behavioral choices more accurately than value-based and model-based strategies did. When fitted models were run autonomously with the same task, only the finite state-based strategy could reproduce the key feature of choice sequences. Analyses of neural activity recorded from the dorsolateral striatum (DLS), the dorsomedial striatum (DMS), and the ventral striatum (VS) identified significant fractions of neurons in all three subareas for which activities were correlated with individual states of the finite state-based strategy. The signal of internal states at the time of choice was found in DMS, and that for clusters of states in VS. In addition, action values and state values of the value-based strategy were encoded in DMS and VS, respectively. These results suggest that both the value-based strategy and the finite state-based strategy are implemented in the striatum. PMID:26529522
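
    The simplest finite state-based strategy mentioned, win-stay lose-switch, needs only one bit of internal state. A minimal sketch with assumed reward probabilities (the task parameters below are illustrative, not those of the rat experiment):

```python
import random

def win_stay_lose_switch(n_trials, p_reward=(0.7, 0.3), seed=0):
    """Finite state-based agent: the internal state is just the action to
    repeat; a reward keeps the state, an omission flips it.
    Returns a list of (action, rewarded) pairs."""
    rng = random.Random(seed)
    state = rng.randrange(2)          # current preferred action: 0 or 1
    history = []
    for _ in range(n_trials):
        action = state
        rewarded = rng.random() < p_reward[action]
        history.append((action, rewarded))
        if not rewarded:
            state = 1 - state         # lose-switch; win-stay otherwise
    return history
```

    Unlike a value-based learner, this agent carries no reward estimates; its entire policy is the state-transition rule, which is what makes the strategy class easy to fit and to distinguish behaviorally.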

  18. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
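
    As a self-contained illustration of the neuron model class only, here is a single current-driven leaky integrate-and-fire unit with assumed constants. The paper's network is conductance-based and recurrent with ≈4,000 such cells; none of that structure is reproduced here.

```python
def lif_spike_times(i_ext, v_th=1.0, v_reset=0.0, tau=0.02, dt=0.0001, t_max=0.2):
    """Minimal leaky integrate-and-fire neuron:
    tau * dV/dt = -V + i_ext; fire and reset when V crosses v_th.
    Units are dimensionless; returns the list of spike times (s)."""
    v = v_reset
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt / tau * (-v + i_ext)   # explicit Euler step
        t += dt
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes
```

    With constant drive below threshold the unit never fires; above threshold the firing rate grows with input, the basic nonlinearity that, embedded in a recurrent circuit, shapes the simple/complex distinction the model studies.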

  19. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

    Simple cells in primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model combined with non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model better imitates the simple cell's physiological structure by taking into account the facilitation and suppression of non-classical receptive fields. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  20. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
