NASA Astrophysics Data System (ADS)
Inc, Mustafa; Yusuf, Abdullahi; Isa Aliyu, Aliyu; Baleanu, Dumitru
2018-03-01
This research presents the symmetry analysis, explicit solutions, and convergence analysis for the time-fractional Cahn-Allen (CA) and time-fractional Klein-Gordon (KG) equations with the Riemann-Liouville (RL) derivative. The time-fractional CA and KG equations are reduced to nonlinear ordinary differential equations of fractional order. We solve the reduced fractional ODEs using an explicit power series method. The convergence analysis for the obtained explicit solutions is investigated. Some figures for the obtained explicit solutions are also presented.
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Yusuf, Abdullahi; Aliyu, Aliyu Isa; Baleanu, Dumitru
2018-04-01
This paper studies the symmetry analysis, explicit solutions, convergence analysis, and conservation laws (CLs) for two different space-time fractional nonlinear evolution equations with the Riemann-Liouville (RL) derivative. The governing equations are reduced to nonlinear ordinary differential equations (ODEs) of fractional order using their Lie point symmetries. In the reduced equations the derivative is in the Erdélyi-Kober (EK) sense, and a power series technique is applied to derive explicit solutions for the reduced fractional ODEs. The convergence of the obtained power series solutions is also presented. Moreover, the new conservation theorem and the generalization of the Noether operators are developed to construct the nonlocal CLs for the equations. Some interesting figures for the obtained explicit solutions are presented.
Charles H. Luce; Daniele Tonina; Frank Gariglio; Ralph Applebee
2013-01-01
Work over the last decade has documented methods for estimating fluxes between streams and streambeds from time series of temperature at two depths in the streambed. We present a substantial extension to the existing theory and practice of using temperature time series to estimate streambed water fluxes and thermal properties, including (1) a new explicit analytical...
Scott L. Powell; Warren B. Cohen; Sean P. Healey; Robert E. Kennedy; Gretchen G. Moisen; Kenneth B. Pierce; Janet L. Ohmann
2010-01-01
Spatially and temporally explicit knowledge of biomass dynamics at broad scales is critical to understanding how forest disturbance and regrowth processes influence carbon dynamics. We modeled live, aboveground tree biomass using Forest Inventory and Analysis (FIA) field data and applied the models to 20+ year time-series of Landsat satellite imagery to...
Volatility of linear and nonlinear time series
NASA Astrophysics Data System (ADS)
Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo
2005-07-01
Previous studies indicated that nonlinear properties of Gaussian-distributed time series with long-range correlations, u_i, can be detected and quantified by studying the correlations in the magnitude series |u_i|, the "volatility." However, the origin of this empirical observation remains unclear, and the exact relation between the correlations in u_i and the correlations in |u_i| is still unknown. Here we develop analytical relations between the scaling exponent of a linear series u_i and that of its magnitude series |u_i|. Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long-range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (which represents the magnitude series) with an uncorrelated time series (which represents the sign series sgn(u_i)). We apply our techniques to daily deep-ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) a broad multifractal spectrum.
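The multiplicative sign-magnitude construction described above lends itself to a compact numerical sketch. The generator below uses Fourier filtering for the long-range correlated magnitude series, which is our assumption (the abstract does not prescribe a particular generator), so treat it as illustrative rather than the authors' procedure:

```python
import numpy as np

def fourier_filtered_noise(n, beta, seed=0):
    """Gaussian series with power spectrum ~ 1/f^beta (long-range correlated for beta > 0)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    freqs[0] = freqs[1]                      # avoid division by zero at f = 0
    spectrum = freqs ** (-beta / 2.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
    series = np.fft.irfft(spectrum, n)
    return (series - series.mean()) / series.std()

n = 2 ** 14
magnitude = np.abs(fourier_filtered_noise(n, beta=0.6))  # long-range correlated magnitudes
signs = np.random.default_rng(1).choice([-1.0, 1.0], n)  # uncorrelated sign series
u = magnitude * signs                                    # candidate multifractal series
```

Fluctuation analysis (e.g., DFA) of u and |u| would then quantify the linear and volatility correlations, respectively.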
Detecting unstable periodic orbits in chaotic time series using synchronization
NASA Astrophysics Data System (ADS)
Olyaei, Ali Azimi; Wu, Christine; Kinsner, Witold
2017-07-01
An alternative approach for detecting unstable periodic orbits in chaotic time series is proposed using synchronization techniques. A master-slave synchronization scheme is developed, in which the chaotic system drives a system of harmonic oscillators through a proper coupling condition. The proposed scheme is designed so that the power of the coupling signal exhibits notches that drop to zero once the system approaches an unstable orbit, yielding an explicit indication of the presence of a periodic motion. The results show that the proposed approach is particularly suitable in practical situations, where the time series is short and noisy, or is obtained from high-dimensional chaotic systems.
NASA Astrophysics Data System (ADS)
Gibbes, C.; Southworth, J.; Waylen, P. R.
2013-05-01
How do climate variability and climate change influence vegetation cover and vegetation change in savannas? A landscape-scale investigation of the effect of changes in precipitation on vegetation is undertaken using time series analysis. The multi-national study region is located within the Kavango-Zambezi region and is delineated by the Okavango, Kwando, and Zambezi watersheds. A mean-variance time-series analysis quantifies vegetation dynamics and characterizes vegetation response to climate. The spatially explicit approach used to quantify the persistence of vegetation productivity permits the extraction of information regarding long-term climate-landscape dynamics. Results show a pattern of reduced mean annual precipitation and increased precipitation variability across key social and ecological areas within the study region. Despite decreased mean annual precipitation since the mid-to-late 1970s, vegetation trends predominantly indicate increasing biomass. The limited areas with diminished vegetative cover relate to specific vegetation types and are associated with declines in precipitation variability. Results indicate that in addition to short-term changes in vegetation cover, long-term trends in productive biomass are apparent, relate to spatial differences in precipitation variability, and potentially represent shifts in vegetation composition. This work highlights the importance of time-series analyses for examining climate-vegetation linkages in a spatially explicit manner within a highly vulnerable region of the world.
Time since maximum of Brownian motion and asymmetric Lévy processes
NASA Astrophysics Data System (ADS)
Martin, R. J.; Kearney, M. J.
2018-07-01
Motivated by recent studies of record statistics in relation to strongly correlated time series, we consider explicitly the drawdown time of a Lévy process, which is defined as the time since it last achieved its running maximum when observed over a fixed time period. We show that the density function of this drawdown time, in the case of a completely asymmetric jump process, may be factored as a function of t multiplied by a function of T − t. This extends a known result for the case of pure Brownian motion. We state the factors explicitly for the cases of exponential down-jumps with drift, and for the downward inverse Gaussian Lévy process with drift.
Testing and validating environmental models
Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.
1996-01-01
Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series against data time series, and plotting predicted versus observed values) have little diagnostic power. We propose that it may be more useful to statistically extract the relationships of primary interest from the time series, and test the model directly against them.
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
NASA Astrophysics Data System (ADS)
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
Characterization of chaotic attractors under noise: A recurrence network perspective
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2016-12-01
We undertake a detailed numerical investigation to understand how the addition of white and colored noise to a chaotic time series changes the topology and the structure of the underlying attractor reconstructed from the time series. We use the methods and measures of the recurrence plot and the recurrence network generated from the time series for this analysis. We explicitly show that the addition of noise obscures the property of recurrence of trajectory points in the phase space, which is the hallmark of every dynamical system. However, the structure of the attractor is found to be robust even up to high noise levels of 50%. An advantage of recurrence network measures over the conventional nonlinear measures is that they can be applied on short and nonstationary time series data. By using the results obtained from the above analysis, we go on to analyse the light curves from a dominant black hole system and show that the recurrence network measures are capable of identifying the nature of noise contamination in a time series.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
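A minimal sketch of how the barycentric step can be posed as a linear program, here with scipy.optimize.linprog: find nonnegative weights over candidate neighbor points that sum to one and minimize the L1 approximation error through explicit slack variables. The variable layout and neighbor selection below are our assumptions, not necessarily the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(neighbors, target):
    """Nonnegative weights summing to 1 that reproduce `target` from the rows of
    `neighbors` (k x d), minimizing the L1 error via slacks e >= |target - w @ neighbors|."""
    k, d = neighbors.shape
    c = np.concatenate([np.zeros(k), np.ones(d)])        # minimize total slack
    A_ub = np.block([[neighbors.T, -np.eye(d)],          #  Xw - e <= target
                     [-neighbors.T, -np.eye(d)]])        # -Xw - e <= -target
    b_ub = np.concatenate([target, -target])
    A_eq = np.concatenate([np.ones(k), np.zeros(d)])[None, :]   # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], method="highs")
    return res.x[:k]

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))                            # five neighbors in a 3-d phase space
w = barycentric_weights(pts, pts.mean(axis=0))
print(w, w.sum())                                        # exact reconstruction, weights sum to 1
```

A free-running predictor would re-solve this program at each step, mapping the weighted neighbors one step forward.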
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1989-01-01
This paper develops techniques to evaluate the discrete Fourier transform (DFT), the autocorrelation function (ACF), and the cross-correlation function (CCF) of time series which are not evenly sampled. The series may consist of quantized point data (e.g., yes/no processes such as photon arrival). The DFT, which can be inverted to recover the original data and the sampling, is used to compute correlation functions by means of a procedure which is effectively, but not explicitly, an interpolation. The CCF can be computed for two time series not even sampled at the same set of times. Techniques for removing the distortion of the correlation functions caused by the sampling, determining the value of a constant component to the data, and treating unequally weighted data are also discussed. FORTRAN code for the Fourier transform algorithm and numerical examples of the techniques are given.
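The core quantity in this construction, the DFT of unevenly sampled data, can be written down directly. A short numpy sketch (not Scargle's FORTRAN code, and omitting his normalization, inversion, and correlation machinery):

```python
import numpy as np

def uneven_dft(t, x, freqs):
    """DFT of samples x taken at arbitrary times t, evaluated at the given
    frequencies (cycles per unit time): F(f) = sum_k x_k exp(-2*pi*i*f*t_k)."""
    t, x = np.asarray(t), np.asarray(x)
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) for f in freqs])

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 300))             # irregular sampling times
x = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.normal(size=t.size)
freqs = np.linspace(0.001, 0.2, 400)
power = np.abs(uneven_dft(t, x, freqs)) ** 2
print(freqs[np.argmax(power)])                    # peak should land near 0.05
```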
Phenomapping of rangelands in South Africa using time series of RapidEye data
NASA Astrophysics Data System (ADS)
Parplies, André; Dubovyk, Olena; Tewes, Andreas; Mund, Jan-Peter; Schellberg, Jürgen
2016-12-01
Phenomapping is an approach which allows the derivation of spatial patterns of vegetation phenology and rangeland productivity based on time series of vegetation indices. In our study, we propose a new spatial mapping approach which combines phenometrics derived from high resolution (HR) satellite time series with spatial logistic regression modeling to discriminate land management systems in rangelands. From the RapidEye time series for selected rangelands in South Africa, we calculated bi-weekly noise-reduced Normalized Difference Vegetation Index (NDVI) images. For the growing season of 2011-2012, we further derived principal phenology metrics such as start, end and length of growing season and related phenological variables such as amplitude, left derivative and small integral of the NDVI curve. We then mapped these phenometrics across two different tenure systems, communal and commercial, at the very detailed spatial resolution of 5 m. The result of a binary logistic regression (BLR) showed that the amplitude and the left derivative of the NDVI curve were statistically significant. These indicators are useful to discriminate commercial from communal rangeland systems. We conclude that phenomapping combined with spatial modeling is a powerful tool that allows efficient aggregation of phenology and productivity metrics for spatially explicit analysis of the relationships of crop phenology with site conditions and management. This approach has particular potential for disaggregated and patchy environments such as in farming systems in semi-arid South Africa, where phenology varies considerably among and within years. Further, we see a strong perspective for phenomapping to support spatially explicit modelling of vegetation.
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov's integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written by using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation allows us to formulate the explicit model using a fractional exponential Rabotnov's integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparing with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all possible time domains.
Robust extrema features for time-series data analysis.
Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N
2013-06-01
The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
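A toy version of the pipeline the abstract describes, filter then threshold extrema. The fixed moving-average filter below is a stand-in assumption; the paper's contribution is to learn the filter from training series by solving an eigenvalue problem, which this sketch does not reproduce:

```python
import numpy as np

def robust_extrema(x, width=25, thresh=0.2):
    """Smooth with a moving-average filter, then keep local extrema whose
    prominence over a +/- width window exceeds `thresh` (a simple proxy for
    the paper's robustness criterion)."""
    s = np.convolve(x, np.ones(width) / width, mode="same")
    idx = []
    for i in range(width, len(s) - width):
        win = s[i - width : i + width + 1]
        if s[i] == win.max() and s[i] - win.min() > thresh:
            idx.append(i)                      # prominent local maximum
        elif s[i] == win.min() and win.max() - s[i] > thresh:
            idx.append(i)                      # prominent local minimum
    return np.array(idx), s

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
extrema, smoothed = robust_extrema(x)          # indices of robust peaks/troughs
```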
Distinguishing time-delayed causal interactions using convergent cross mapping
Ye, Hao; Deyle, Ethan R.; Gilarranz, Luis J.; Sugihara, George
2015-01-01
An important problem across many scientific fields is the identification of causal effects from observational data alone. Recent methods (convergent cross mapping, CCM) have made substantial progress on this problem by applying the idea of nonlinear attractor reconstruction to time series data. Here, we expand upon the technique of CCM by explicitly considering time lags. Applying this extended method to representative examples (model simulations, a laboratory predator-prey experiment, temperature and greenhouse gas reconstructions from the Vostok ice core, and long-term ecological time series collected in the Southern California Bight), we demonstrate the ability to identify different time-delayed interactions, distinguish between synchrony induced by strong unidirectional forcing and true bidirectional causality, and resolve transitive causal chains. PMID:26435402
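A heavily simplified skeleton of lagged cross mapping, for intuition only: delay-embed the putative effect y, use its nearest neighbors to estimate the putative cause x shifted by each lag, and score the estimate by correlation. Library-size convergence (the "convergent" in CCM) and the exact simplex weighting of the published method are omitted:

```python
import numpy as np

def lagged_ccm_skill(x, y, E=3, tau=1, lags=range(-8, 9)):
    """Cross-map skill of y -> x at each lag: embed y, predict x[t + lag] from
    y's neighborhood structure (simplified weights, fixed library size)."""
    n = len(y) - (E - 1) * tau
    M = np.column_stack([y[i * tau : i * tau + n] for i in range(E)])  # shadow manifold of y
    skills = []
    for lag in lags:
        lo, hi = max(0, -lag), min(n, len(x) - lag)
        pred, true = [], []
        for t in range(lo, hi):
            d = np.linalg.norm(M - M[t], axis=1)
            d[t] = np.inf                                  # exclude the point itself
            nn = np.argsort(d)[: E + 1]                    # E+1 nearest neighbors
            nn = nn[(nn + lag >= 0) & (nn + lag < len(x))]
            if nn.size == 0:
                continue
            w = np.exp(-d[nn] / max(d[nn][0], 1e-12))      # exponential distance weights
            pred.append(np.sum(w * x[nn + lag]) / w.sum())
            true.append(x[t + lag])
        skills.append(np.corrcoef(pred, true)[0, 1])
    return np.array(skills)
```

A peak in skill at a nonzero lag is the signature of a time-delayed interaction the paper exploits.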
NASA Astrophysics Data System (ADS)
Piburn, J.; Stewart, R.; Morton, A.
2017-10-01
Identifying erratic or unstable time series is an area of interest to many fields. Recently, there have been successful developments towards this goal. These newly developed methodologies, however, come from domains where it is typical to have several thousand or more temporal observations. This creates a challenge when attempting to apply these methodologies to time series with far fewer temporal observations, such as in socio-cultural understanding, a domain where a typical time series of interest might consist of only 20-30 annual observations. Most existing methodologies simply cannot say anything interesting with so few data points, yet researchers are still tasked to work within the confines of the data. Recently a method for characterizing instability in a time series with limited temporal observations was published. This method, the Attribute Stability Index (ASI), uses an approximate-entropy-based method to characterize a time series' instability. In this paper we propose an explicitly spatially weighted extension of the Attribute Stability Index. By including a mechanism to account for spatial autocorrelation, this work represents a novel approach for the characterization of space-time instability. As a case study we explore national youth male unemployment across the world from 1991-2014.
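For reference, the approximate-entropy core that the ASI builds on can be written compactly; the rescaling that turns this into the published index, and the spatial weighting proposed here, are not shown. The series values below are invented for illustration:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a short series, usable down to a few
    dozen observations (the regime discussed in the abstract)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                     # common default tolerance
    def phi(m):
        emb = np.array([x[i : i + m] for i in range(len(x) - m + 1)])
        # Chebyshev distance between all template pairs (self-matches included)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

unemployment = np.array([9.1, 9.4, 9.9, 10.3, 9.8, 9.2, 8.7, 8.9,
                         9.6, 10.4, 11.0, 10.1, 9.3, 8.8, 8.5, 8.2,
                         8.0, 8.3, 8.9, 9.5, 9.0, 8.6, 8.4, 8.1])  # ~24 annual points
print(approximate_entropy(unemployment))
```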
Time series models of environmental exposures: Good predictions or good understanding.
Barnett, Adrian G; Stephen, Dimity; Huang, Cunrui; Wolkewitz, Martin
2017-04-01
Time series data are popular in environmental epidemiology as they make use of the natural experiment of how changes in exposure over time might impact on disease. Many published time series papers have used parameter-heavy models that fully explained the second order patterns in disease to give residuals that have no short-term autocorrelation or seasonality. This is often achieved by including predictors of past disease counts (autoregression) or seasonal splines with many degrees of freedom. These approaches give great residuals, but add little to our understanding of cause and effect. We argue that modelling approaches should rely more on good epidemiology and less on statistical tests. This includes thinking about causal pathways, making potential confounders explicit, fitting a limited number of models, and not over-fitting at the cost of under-estimating the true association between exposure and disease.
Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu
2016-09-01
The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm monotonically declined with the reduction in irregularity in time series, whereas the CLZ and MLZ approaches yielded overlapped values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the effect of sequence length on the ELZ algorithm was more stable compared with those on CLZ and MLZ, especially when the sequence length was longer than 300. A sensitivity analysis for all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to the change in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values yielded from a congestive heart failure group versus those yielded from a normal sinus rhythm group (p < 0.01).
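The classic LZ76 parsing that underlies all three variants fits in a few lines (CLZ, MLZ, and ELZ differ in how the series is encoded into symbols before parsing; the paper's ELZ encoding is not reproduced here):

```python
def lz76_complexity(s):
    """Classic Lempel-Ziv (1976) complexity: the number of distinct phrases
    found by exhaustive-history parsing of a symbol string."""
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # grow the candidate phrase while it already occurs in s[:i + k - 1]
        while i + k <= n and s[i : i + k] in s[: i + k - 1]:
            k += 1
        c += 1                      # a new phrase ends here
        i += k
    return c

binary = "0100110101110010"         # e.g., a coarse-grained RR-interval series
print(lz76_complexity(binary))
```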
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms, which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in a power grid application.
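One standard data-driven route to the Koopman spectral objects mentioned above is dynamic mode decomposition. The sketch below shows exact DMD on snapshot pairs, which is in the spirit of, though not necessarily identical to, the identification procedure used in the paper:

```python
import numpy as np

def dmd(X, rank=None):
    """Exact dynamic mode decomposition on snapshot pairs (X[:, t], X[:, t+1]):
    approximates Koopman eigenvalues and modes from data alone."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s      # low-rank linear operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T / s @ W                # exact DMD modes
    return eigvals, modes

t = np.linspace(0, 8 * np.pi, 400)
X = np.vstack([np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)])  # toy multichannel series
eigvals, _ = dmd(X, rank=4)
print(np.abs(eigvals))    # all ~1: purely oscillatory Koopman eigenvalues
```

The eigenvalues (growth/frequency content) and modes are exactly the kind of spectral summary on which distances between model forms can be defined.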
Ensemble Deep Learning for Biomedical Time Series Classification
2016-01-01
Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828
NASA Astrophysics Data System (ADS)
Hermosilla, Txomin; Wulder, Michael A.; White, Joanne C.; Coops, Nicholas C.; Hobart, Geordie W.
2017-12-01
The use of time series satellite data allows for the temporally dense, systematic, transparent, and synoptic capture of land dynamics over time. Subsequent to the opening of the Landsat archive, several time series approaches for characterizing landscape change have been developed, often representing a particular analytical time window. The information richness and widespread utility of these time series data have created a need to maintain the currency of time series information via the addition of new data, as it becomes available. When an existing time series is temporally extended, it is critical that previously generated change information remains consistent, thereby not altering reported change statistics or science outcomes based on that change information. In this research, we investigate the impacts and implications of adding additional years to an existing 29-year annual Landsat time series for forest change. To do so, we undertook a spatially explicit comparison of the 29 overlapping years of a time series representing 1984-2012, with a time series representing 1984-2016. Surface reflectance values, and presence, year, and type of change were compared. We found that the addition of years to extend the time series had minimal effect on the annual surface reflectance composites, with slight band-specific differences (r ≥ 0.1) in the final years of the original time series being updated. The area of stand replacing disturbances and determination of change year are virtually unchanged for the overlapping period between the two time-series products. Over the overlapping temporal period (1984-2012), the total area of change differs by 0.53%, equating to an annual difference in change area of 0.019%. Overall, the spatial and temporal agreement of the changes detected by both time series was 96%. Further, our findings suggest that the entire pre-existing historic time series does not need to be re-processed during the update process. Critically, given the time series change detection and update approach followed here, science outcomes or reports representing one temporal epoch can be considered stable and will not be altered when a time series is updated with newly available data.
Explicit Scaffolding Increases Simple Helping in Younger Infants
ERIC Educational Resources Information Center
Dahl, Audun; Satlof-Bedrick, Emma S.; Hammond, Stuart I.; Drummond, Jesse K.; Waugh, Whitney E.; Brownell, Celia A.
2017-01-01
Infants become increasingly helpful during the second year. We investigated experimentally whether adults' explicit scaffolding influences this development. Infants (N = 69, 13-18 months old) participated in a series of simple helping tasks. Half of infants received explicit scaffolding (encouragement and praise), whereas the other half did not.…
The Response of Abortion Demand to Changes in Abortion Costs
ERIC Educational Resources Information Center
Medoff, Marshall H.
2008-01-01
This study uses pooled cross-section time-series data, over the years 1982, 1992 and 2000, to estimate the impact of various restrictive abortion laws on the demand for abortion. This study complements and extends prior research by explicitly including the price of obtaining an abortion in the estimation. The empirical results show that the real…
Power laws reveal phase transitions in landscape controls of fire regimes
Donald McKenzie; Maureen C. Kennedy
2012-01-01
Understanding the environmental controls on historical wildfires, and how they changed across spatial scales, is difficult because there are no surviving explicit records of either weather or vegetation (fuels). Here we show how power laws associated with fire-event time series arise in limited domains of parameters that represent critical transitions in the controls...
NASA Astrophysics Data System (ADS)
Zhong, Jiaqi; Zeng, Cheng; Yuan, Yupeng; Zhang, Yuzhe; Zhang, Ye
2018-04-01
The aim of this paper is to present an explicit numerical algorithm based on improved spectral Galerkin method for solving the unsteady diffusion-convection-reaction equation. The principal characteristics of this approach give the explicit eigenvalues and eigenvectors based on the time-space separation method and boundary condition analysis. With the help of Fourier series and Galerkin truncation, we can obtain the finite-dimensional ordinary differential equations which facilitate the system analysis and controller design. By comparing with the finite element method, the numerical solutions are demonstrated via two examples. It is shown that the proposed method is effective.
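On a periodic domain the idea becomes very concrete: each Fourier mode of a diffusion-convection-reaction equation evolves independently with an explicit eigenvalue. A numpy sketch under that periodic-boundary assumption (the paper treats more general boundary conditions and an improved Galerkin truncation):

```python
import numpy as np

# Fourier-Galerkin sketch for u_t = D*u_xx - v*u_x - k*u with periodic BCs:
# mode q evolves with the explicit eigenvalue lam(q) = -D*q**2 - 1j*v*q - k.
D, v, k = 0.01, 0.5, 0.1
N, L = 128, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
u0 = np.exp(-10 * (x - np.pi) ** 2)            # initial condition
q = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # wavenumbers of the truncated basis
lam = -D * q**2 - 1j * v * q - k               # explicit eigenvalues of the ODE system

def solve(t):
    """Advance the truncated system analytically: each coefficient is scaled by exp(lam*t)."""
    return np.real(np.fft.ifft(np.fft.fft(u0) * np.exp(lam * t)))

u_at_1 = solve(1.0)                            # solution at t = 1 in one shot, no time stepping
```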
Estimating survival rates with time series of standing age‐structure data
Udevitz, Mark S.; Gogan, Peter J.
2012-01-01
It has long been recognized that age‐structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age‐specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age‐specific survival rates can be estimated using techniques such as inverse methods that combine time series of age‐structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age‐structure data with other demographic data to provide explicit maximum likelihood estimators of age‐specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age‐structure data.
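The two-year building block behind these estimators is the cohort ratio: with consecutive standing age-structure counts, survival from age x to x+1 is estimated by following each cohort across years. Illustrative numbers below, not the Yellowstone bison data:

```python
import numpy as np

n_t  = np.array([120, 95, 80, 61, 40])   # counts at ages 0..4 in year t
n_t1 = np.array([118, 98, 78, 66, 48])   # counts at ages 0..4 in year t+1

# Cohort ratio estimator: S_x = n[t+1][x+1] / n[t][x]. The paper's likelihood
# framework generalizes this to sampled (not fully counted) age distributions
# and possibly unstable populations.
survival = n_t1[1:] / n_t[:-1]
print(survival)                           # S_0..S_3
```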
Time Series Expression Analyses Using RNA-seq: A Statistical Approach
Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P.
2013-01-01
RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021
Time series expression analyses using RNA-seq: a statistical approach.
Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P
2013-01-01
RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis.
A time-spectral approach to numerical weather prediction
NASA Astrophysics Data System (ADS)
Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai
2018-05-01
Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
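For orientation, here are the Lorenz (1984) test equations and an explicit finite-difference baseline of the kind the GWRM is benchmarked against; the GWRM itself (a Chebyshev expansion in time) is considerably more involved and is not shown. Parameter values are the commonly used ones:

```python
import numpy as np

def lorenz84(state, a=0.25, b=4.0, F=8.0, G=1.0):
    """Right-hand side of the Lorenz (1984) low-order atmospheric model."""
    x, y, z = state
    return np.array([-y**2 - z**2 - a*x + a*F,
                     x*y - b*x*z - y + G,
                     b*x*y + x*z - z])

def rk4(f, state, dt):
    """One explicit RK4 step: the CFL-like step limit discussed above applies
    to schemes of this type, not to the time-spectral GWRM."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

state, dt = np.array([1.0, 0.0, 0.0]), 0.01
traj = [state]
for _ in range(5000):                      # 50 time units of the chaotic flow
    state = rk4(lorenz84, state, dt)
    traj.append(state)
```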
Incompressible spectral-element method: Derivation of equations
NASA Technical Reports Server (NTRS)
Deanna, Russell G.
1993-01-01
A fractional-step splitting scheme breaks the full Navier-Stokes equations into explicit and implicit portions amenable to the calculus of variations. Beginning with the functional forms of the Poisson and Helmholtz equations, we substitute finite expansion series for the dependent variables and derive the matrix equations for the unknown expansion coefficients. This method employs a new splitting scheme which differs from conventional three-step (nonlinear, pressure, viscous) schemes. The nonlinear step appears in the conventional, explicit manner; the difference occurs in the pressure step. Instead of solving for the pressure gradient using the nonlinear velocity, we add the viscous portion of the Navier-Stokes equation from the previous time step to the velocity before solving for the pressure gradient. By combining this 'predicted' pressure gradient with the nonlinear velocity in an explicit term, and the Crank-Nicolson method for the viscous terms, we develop a Helmholtz equation for the final velocity.
Statistical modeling of isoform splicing dynamics from RNA-seq time series data.
Huang, Yuanhua; Sanguinetti, Guido
2016-10-01
Isoform quantification is an important goal of RNA-seq experiments, yet it remains problematic for genes with low expression or several isoforms. These difficulties may in principle be ameliorated by exploiting correlated experimental designs, such as time series or dosage response experiments. Time series RNA-seq experiments, in particular, are becoming increasingly popular, yet there are no methods that explicitly leverage the experimental design to improve isoform quantification. Here, we present DICEseq, the first isoform quantification method tailored to correlated RNA-seq experiments. DICEseq explicitly models the correlations between different RNA-seq experiments to aid the quantification of isoforms across experiments. Numerical experiments on simulated datasets show that DICEseq yields more accurate results than state-of-the-art methods, an advantage that can become considerable at low coverage levels. On real datasets, our results show that DICEseq provides substantially more reproducible and robust quantifications, increasing the correlation of estimates from replicate datasets by up to 10% on genes with low or moderate expression levels (bottom third of all genes). Furthermore, DICEseq makes it possible to quantify the trade-off between temporal sampling of RNA and depth of sequencing, frequently an important choice when planning experiments. Our results have strong implications for the design of RNA-seq experiments, and offer a novel tool for improved analysis of such datasets. Python code is freely available at http://diceseq.sf.net. Contact: G.Sanguinetti@ed.ac.uk. Supplementary data are available at Bioinformatics online.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1995-01-01
When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.
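To convey the flavor of a normalized-likelihood-ratio change index in the Gaussian case, here is a loose sketch (our own simplification, not the paper's exact small-sample formulae): the per-observation likelihood ratio of the new values under the base-sample fit versus their own fit stays near 1 while the series is stable and drops sharply after a parameter change:

```python
import numpy as np
from scipy import stats

def change_index(base, new):
    """Gaussian change-index sketch: per-observation normalized likelihood ratio
    of `new` under the base fit versus its own fit (~1 = compatible, <<1 = change)."""
    mu0, sd0 = np.mean(base), np.std(base, ddof=1)
    mu1, sd1 = np.mean(new), np.std(new, ddof=1)
    ll0 = stats.norm.logpdf(new, mu0, sd0).sum()   # likelihood under base parameters
    ll1 = stats.norm.logpdf(new, mu1, sd1).sum()   # likelihood under refit parameters
    return np.exp((ll0 - ll1) / len(new))

rng = np.random.default_rng(0)
base = rng.normal(0.0, 1.0, 50)
print(change_index(base, rng.normal(0.0, 1.0, 10)))   # ~1: no change
print(change_index(base, rng.normal(2.0, 1.0, 10)))   # << 1: mean has shifted
```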
NASA Astrophysics Data System (ADS)
Huttenlau, Matthias; Schneeberger, Klaus; Winter, Benjamin; Pazur, Robert; Förster, Kristian; Achleitner, Stefan; Bolliger, Janine
2017-04-01
Devastating flood events have caused substantial economic damage across Europe during past decades. Flood risk management has therefore become a topic of crucial interest across state agencies, research communities and the public sector, including insurers. There is consensus that mitigating flood risk relies on impact assessments which quantitatively account for a broad range of aspects in a (changing) environment. Flood risk assessments which take into account the interaction between the drivers climate change, land-use change and socio-economic change might bring new insights to the understanding of the magnitude and spatial characteristic of flood risks. Furthermore, the comparative assessment of different adaptation measures can give valuable information for decision-making. With this contribution we present an inter- and transdisciplinary research project aiming at developing and applying such an impact assessment relying on a coupled modelling framework for the Province of Vorarlberg in Austria. Stakeholder engagement ensures that the final outcomes of our study are accepted and successfully implemented in flood management practice. The study addresses three key questions: (i) What are scenarios of land-use and climate change for the study area? (ii) How will the magnitude and spatial characteristic of future flood risk change as a result of changes in climate and land use? (iii) Are there spatial planning and building-protection measures which effectively reduce future flood risk? The modelling framework has a modular structure comprising the modules (i) climate change, (ii) land-use change, (iii) hydrologic modelling, (iv) flood risk analysis, and (v) adaptation measures. Meteorological time series are coupled with spatially explicit scenarios of land-use change to model runoff time series. The runoff time series are combined with impact indicators such as building damages, and the results are statistically assessed to analyse flood risk scenarios. Thus, the regional flood risk can be expressed in terms of expected annual damage and damages associated with a low probability of occurrence. We consider building-protection measures explicitly as part of the consequence analysis of flood risk, whereas spatial planning measures are already considered as explicit scenarios in the course of land-use change modelling.
Geometric Series via Probability
ERIC Educational Resources Information Center
Tesman, Barry
2012-01-01
Infinite series is a challenging topic in the undergraduate mathematics curriculum for many students. In fact, there is a vast literature in mathematics education research on convergence issues. One of the most important types of infinite series is the geometric series. Their beauty lies in the fact that they can be evaluated explicitly and that…
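The explicit evaluation the abstract alludes to is the classic telescoping identity, e.g. in LaTeX:

```latex
% Multiply the partial sum by r and subtract: the middle terms cancel.
S_n = \sum_{k=0}^{n-1} a r^k,
\qquad (1 - r)\,S_n = a - a r^n
\;\Longrightarrow\;
S_n = a\,\frac{1 - r^n}{1 - r} \quad (r \neq 1),
\qquad
\sum_{k=0}^{\infty} a r^k = \frac{a}{1 - r} \quad (|r| < 1).
```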
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition via the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and failure detection in a polycrystalline alloy material.
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for the quantitative periodicity analysis of non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is to obtain the set of sine functions embedded in the series analyzed in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for a deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
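A greedy, stripped-down version of the sine-extraction loop, to show the shape of the computation: grid-search the frequency whose least-squares sine/cosine fit explains the most variance, subtract the fit, and repeat. The published method additionally works on principal components and refines each sine by iterative regression, which is not reproduced here:

```python
import numpy as np

def extract_sines(x, dt=1.0, n_components=3, n_freqs=800):
    """Greedily extract dominant sine components (frequency, amplitude) by
    frequency grid search plus linear least squares on sin/cos terms."""
    n = len(x)
    t = np.arange(n) * dt
    resid = x - x.mean()
    freqs = np.linspace(1.0 / (n * dt), 0.5 / dt, n_freqs)   # up to Nyquist
    comps = []
    for _ in range(n_components):
        best = None
        for f in freqs:
            A = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)])
            coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
            sse = np.sum((resid - A @ coef) ** 2)
            if best is None or sse < best[0]:
                best = (sse, f, coef, A @ coef)
        _, f, coef, fit = best
        comps.append((f, np.hypot(*coef)))    # frequency and amplitude
        resid = resid - fit                   # remove the fitted sine, repeat
    return comps

x = np.sin(2*np.pi*np.arange(256)/11.0) + 0.2*np.random.default_rng(0).normal(size=256)
print(extract_sines(x, n_components=1))       # recovered frequency ~ 1/11
```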
NASA Astrophysics Data System (ADS)
Diao, Chunyuan
In today's big data era, the increasing availability of satellite and airborne platforms at various spatial and temporal scales creates unprecedented opportunities to understand complex and dynamic systems (e.g., plant invasion). Time series remote sensing is becoming more and more important to monitor the earth system dynamics and interactions. To date, most time series remote sensing studies have been conducted with images acquired at coarse spatial scales, due to their relatively high temporal resolution. The construction of time series at fine spatial scale, however, is limited to a few discrete images acquired within or across years. The objective of this research is to advance time series remote sensing at fine spatial scale, particularly to shift from discrete time series remote sensing to continuous time series remote sensing. The objective will be achieved through the following aims: 1) Advance intra-annual time series remote sensing under the pure-pixel assumption; 2) Advance intra-annual time series remote sensing under the mixed-pixel assumption; 3) Advance inter-annual time series remote sensing in monitoring the land surface dynamics; and 4) Advance the species distribution model with time series remote sensing. Taking invasive saltcedar as an example, four methods (i.e., a phenological time series remote sensing model, a temporal partial unmixing method, a multiyear spectral angle clustering model, and a time series remote sensing-based spatially explicit species distribution model) were developed to achieve the objectives. Results indicated that the phenological time series remote sensing model could effectively map saltcedar distributions through characterizing the seasonal phenological dynamics of plant species throughout the year. The proposed temporal partial unmixing method, compared to conventional unmixing methods, could more accurately estimate saltcedar abundance within a pixel by exploiting the adequate temporal signatures of saltcedar. The multiyear spectral angle clustering model could guide the selection of the most representative remotely sensed image for repetitive saltcedar mapping over space and time. Through incorporating spatial autocorrelation, the species distribution model developed in the study could identify the suitable habitats of saltcedar at a fine spatial scale and locate appropriate areas at high risk of saltcedar infestation. Among 10 environmental variables, the distance to the river and the phenological attributes summarized by the time series remote sensing were regarded as the most important. These methods developed in the study provide new perspectives on how continuous time series can be leveraged under various conditions to investigate plant invasion dynamics.
The study of Thai stock market across the 2008 financial crisis
NASA Astrophysics Data System (ADS)
Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik
2016-11-01
Cohomology theory for financial markets allows us to deform the Kolmogorov space of time series data over a time period, with an explicit definition of eight market states in a grand unified theory. The anti-de Sitter space induced from a coupling behavior field among traders in the case of a financial market crash acts like a gravitational field in financial market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix by using the Pauli matrices and a modified Wilson loop for time series data. We use it to detect the 2008 financial market crash by means of the degree of a cohomology group of the sphere over a tensor field in the correlation matrix over all possible dominated stocks underlying the Thai SET50 Index Futures. The empirical analysis of the financial tensor network was performed with the help of empirical mode decomposition and intrinsic time-scale decomposition of the correlation matrix, and the calculation of the closeness centrality of a planar graph.
Baker, Nathan A.; McCammon, J. Andrew
2008-01-01
The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 k_B T e_c^-1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 e_c/Å^3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217
NASA Astrophysics Data System (ADS)
Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew
2007-10-01
The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 k_B T e_c^-1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 e_c/Å^3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.
State-space based analysis and forecasting of macroscopic road safety trends in Greece.
Antoniou, Constantinos; Yannis, George
2013-11-01
In this paper, macroscopic road safety trends in Greece are analyzed using state-space models and data for 52 years (1960-2011). Seemingly unrelated time series equations (SUTSE) models are developed first, followed by richer latent risk time-series (LRT) models. As reliable estimates of vehicle-kilometers are not available for Greece, the number of vehicles in circulation is used as a proxy for exposure. The alternative models considered are presented and discussed, including diagnostics for the assessment of model quality and recommendations for further enrichment. Important interventions were incorporated in the models developed (1986 financial crisis, 1991 old-car exchange scheme, 1996 new road fatality definition) and were found to be statistically significant. Furthermore, the forecasting results using data up to 2008 were compared with final actual data (2009-2011), indicating that the models perform properly even in unusual situations, like the current strong financial crisis in Greece. Forecasting results up to 2020 are also presented and compared with the forecasts of a model that explicitly considers the currently ongoing recession. Modeling the recession, and assuming that it will end by 2013, results in more reasonable estimates of risk and vehicle-kilometers for the 2020 horizon. This research demonstrates the benefits of using advanced state-space modeling techniques for modeling macroscopic road safety trends, such as allowing the explicit modeling of interventions. The challenges associated with the application of such state-of-the-art models to macroscopic phenomena, such as traffic fatalities in a region or country, are also highlighted. Furthermore, it is demonstrated that it is possible to apply such complex models using the relatively short time-series that are available in macroscopic road safety analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
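A minimal sketch of the structural time-series machinery described above, using statsmodels (the local linear trend stands in for the richer SUTSE/LRT models, and the fatality series is synthetic, since the Greek data are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# synthetic log-fatality series, 1960-2011: rise then decline, plus noise
log_fat = np.concatenate([np.linspace(6.8, 7.6, 30), np.linspace(7.6, 7.0, 22)])
log_fat += rng.normal(0, 0.03, size=log_fat.size)

# stochastic level + stochastic slope: the 'local linear trend' model
model = sm.tsa.UnobservedComponents(log_fat, level='local linear trend')
res = model.fit(disp=False)
print(res.summary().tables[1])
print("3-step-ahead forecast (log scale):", res.forecast(3))
```

Interventions such as the 1996 change in the fatality definition would enter such a model as exogenous dummy regressors.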
C. E. Naficy; T. T. Veblen; P. F. Hessburg
2015-01-01
Within the last decade, mixed-severity fire regimes (MSFRs) have gained increasing attention in both the scientific and management communities (Arno and others 2000, Baker and others 2007, Hessburg and others 2007, Perry and others 2011, Halofsky and others 2011, Stine and others 2014). The growing influence of the MSFR model derives from several factors including: (1...
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.
2012-12-01
Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) it explicitly adjusts the statistical significance for any bias introduced by variance-reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computer-intensive permutation test. Thus, the cross-spectrum, the squared-coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Different examples (with simulated and real data) are described in this paper to corroborate the methodology and the implementation of these two new programs.
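For readers who want to experiment before obtaining the Fortran programs, the following sketch reproduces the two key ingredients, a Lomb-Scargle periodogram on uneven samples and a permutation test for significance, using SciPy (the series, frequency grid, and permutation count are illustrative choices, not the authors'):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 200))            # uneven sampling times
y = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.5, t.size)
y -= y.mean()                                    # lombscargle assumes zero mean

freqs = np.linspace(0.01, 0.5, 500)              # cycles per unit time
omega = 2 * np.pi * freqs                        # lombscargle expects angular freq
pgram = lombscargle(t, y, omega)

# permutation test: scramble the values over the fixed sampling times and
# record the maximum periodogram value of each scrambled series
n_perm = 500
max_null = np.array([lombscargle(t, rng.permutation(y), omega).max()
                     for _ in range(n_perm)])
level95 = np.quantile(max_null, 0.95)            # 95% global confidence level
f_peak = freqs[pgram.argmax()]
print(f"peak at f = {f_peak:.3f}, significant: {pgram.max() > level95}")
```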
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies on photovoltaic systems have focused on how they operate and on the energy required to operate them. Little attention has been paid to their configurations, to modeling of the mean time to system failure, availability and cost benefit, or to comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. A comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, the steady-state availability and the cost-benefit analysis were derived and compared. A ranking method was used to determine the optimal configuration of the systems. The results of analytical and numerical solutions for system availability and mean time to system failure show that configuration I is the optimal configuration.
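The Chapman-Kolmogorov computation behind such results can be sketched in a few lines. The example below derives the mean time to system failure (MTTF) of a two-unit active-parallel block, a simplification of configuration I with an assumed constant failure rate, from the transient part of the Markov generator:

```python
import numpy as np

lam = 0.01  # failure rate per unit, per hour (assumed value)

# transient states: 0 = both units up, 1 = one unit up; system down is absorbing
Q_T = np.array([[-2 * lam, 2 * lam],
                [0.0,     -lam    ]])

# expected times to absorption t solve  -Q_T @ t = 1
t = np.linalg.solve(-Q_T, np.ones(2))
print("MTTF from the both-up state:", t[0])   # equals 1/(2*lam) + 1/lam
print("analytic check, 3/(2*lam):", 3 / (2 * lam))
```

Repairable configurations add repair rates to the off-diagonal entries of the generator, and the steady-state availability follows from its stationary distribution.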
Enabling Web-Based Analysis of CUAHSI HIS Hydrologic Data Using R and Web Processing Services
NASA Astrophysics Data System (ADS)
Ames, D. P.; Kadlec, J.; Bayles, M.; Seul, M.; Hooper, R. P.; Cummings, B.
2015-12-01
The CUAHSI Hydrologic Information System (CUAHSI HIS) provides open access to a large number of hydrological time series of observed and modeled data from many parts of the world. Several software tools have been designed to simplify searching and access to the CUAHSI HIS datasets. These software tools include: desktop client software (HydroDesktop, HydroExcel), developer libraries (WaterML R Package, OWSLib, ulmo), and the new interactive search website, http://data.cuahsi.org. An issue with using time series data from the CUAHSI HIS for further analysis by hydrologists (for example, for verification of hydrological and snowpack models) is the large heterogeneity of the data. The time series may be regular or irregular, contain missing data, have different time support, and be recorded in different units. R is a widely used computational environment for statistical analysis of time series and spatio-temporal data that can be used to assess fitness and perform scientific analyses on observation data. R includes the ability to record a data analysis in the form of a reusable script. The R script together with the input time series dataset can be shared with other users, making the analysis more reproducible. The major goal of this study is to examine the use of R as a Web Processing Service for transforming time series data from the CUAHSI HIS and sharing the results on the Internet within HydroShare. HydroShare is an online data repository and social network for sharing large hydrological data sets such as time series, raster datasets, and multi-dimensional data. It can be used as a permanent cloud storage space for saving time series analysis results. We examine the issues associated with running R scripts online, including code validation, saving of outputs, reporting progress, and provenance management. An explicit goal is that a script run locally should produce exactly the same results as the script run on the Internet. Our design can be used as a model for other studies that need to run R scripts on the web.
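The heterogeneity problem is easy to make concrete. The sketch below (written in Python with pandas rather than R; the values and units are assumed) shows the kind of harmonization such a script performs before any analysis:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# an irregularly sampled discharge series, in cubic feet per second
times = pd.to_datetime("2015-01-01") + pd.to_timedelta(
    np.sort(rng.uniform(0, 60, 80)), unit="D")
cfs = pd.Series(10 + rng.normal(0, 1, 80), index=times)

daily = cfs.resample("D").mean()      # put the series on a regular daily grid
daily = daily.interpolate(limit=3)    # fill gaps of at most 3 days
cms = daily * 0.0283168               # convert ft^3/s to m^3/s
print(cms.head())
```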
NASA Astrophysics Data System (ADS)
Nikolaev, A. S.
2015-03-01
We study the structure of the canonical Poincaré-Lindstedt perturbation series in the Deprit operator formalism and establish its connection to the Kato resolvent expansion. A discussion of invariant definitions for averaging and integrating perturbation operators and their canonical identities reveals a regular pattern in the series for the Deprit generator. This regularity is explained using Kato series and the relation of the perturbation operators to the Laurent coefficients for the resolvent of the Liouville operator. This purely canonical approach systematizes the series and leads to an explicit expression for the Deprit generator in any order of the perturbation theory, written in terms of the partial pseudoinverse of the perturbed Liouville operator. The corresponding Kato series provides a reasonably effective computational algorithm. The canonical connection of the perturbed and unperturbed averaging operators allows describing ambiguities in the generator and transformed Hamiltonian, while Gustavson integrals turn out to be insensitive to the normalization style. We use nonperturbative examples for illustration.
Thermal form-factor approach to dynamical correlation functions of integrable lattice models
NASA Astrophysics Data System (ADS)
Göhmann, Frank; Karbach, Michael; Klümper, Andreas; Kozlowski, Karol K.; Suzuki, Junji
2017-11-01
We propose a method for calculating dynamical correlation functions at finite temperature in integrable lattice models of Yang-Baxter type. The method is based on an expansion of the correlation functions as a series over matrix elements of a time-dependent quantum transfer matrix rather than the Hamiltonian. In the infinite Trotter-number limit the matrix elements become time independent and turn into the thermal form factors studied previously in the context of static correlation functions. We make this explicit with the example of the XXZ model. We show how the form factors can be summed utilizing certain auxiliary functions solving finite sets of nonlinear integral equations. The case of the XX model is worked out in more detail leading to a novel form-factor series representation of the dynamical transverse two-point function.
Explicit analytical tuning rules for digital PID controllers via the magnitude optimum criterion.
Papadopoulos, Konstantinos G; Yadav, Praveen K; Margaris, Nikolaos I
2017-09-01
Analytical tuning rules for digital PID type-I controllers are presented regardless of the process complexity. This explicit solution allows control engineers (1) to make an accurate examination of the effect of the controller's sampling time on the control loop's performance, both in the time and the frequency domain, (2) to decide when the control action has to be I or PI, and when the derivative (D) term has to be added or omitted, and (3) to apply this control action to a series of stable benchmark processes regardless of their complexity. These advantages are considered critical in industry applications, since (1) most of the time the choice of the digital controller's sampling time is based on heuristics and past criteria, (2) there is little a priori knowledge of the controlled process, making the choice of the type of the controller a trial-and-error exercise, and (3) model parameters often change depending on the control loop's operating point, making the problem of retuning the controller's parameters a much more challenging issue. The basis of the proposed control law is the principle of PID tuning via the Magnitude Optimum criterion. The final control law involves the controller's sampling time Ts within the explicit solution for the controller's parameters. Finally, the potential of the proposed method is justified by comparing its performance with conventional PID tuning when controlling the same process. Further investigation regarding the choice of the controller's sampling time Ts is also presented and useful conclusions for control engineers are derived. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
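The magnitude-optimum tuning law itself is not reproduced here, but the role of Ts in a digital PID can be made explicit with a generic velocity-form discretization (the gains and the first-order test process below are assumed for the demonstration):

```python
def make_digital_pid(kp, ki, kd, Ts):
    """Velocity-form digital PID; Ts enters the I and D terms explicitly."""
    state = {"e1": 0.0, "e2": 0.0, "u": 0.0}
    def step(e):
        du = (kp * (e - state["e1"])                            # proportional
              + ki * Ts * e                                     # integral
              + kd * (e - 2 * state["e1"] + state["e2"]) / Ts)  # derivative
        state["e2"], state["e1"] = state["e1"], e
        state["u"] += du
        return state["u"]
    return step

# closed loop with a first-order process y' = (-y + u)/tau, Euler-discretized
Ts, tau, y, sp = 0.1, 1.0, 0.0, 1.0
pid = make_digital_pid(kp=2.0, ki=1.0, kd=0.1, Ts=Ts)
for _ in range(100):
    y += Ts * (-y + pid(sp - y)) / tau
print("output after 10 s:", round(y, 3))
```

Halving Ts and re-running shows directly how the sampling time shifts the loop's behavior, which is the effect the tuning rules quantify analytically.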
Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong
2015-07-01
The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
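A minimal sketch of difference-signal feedback in the spirit of the method above (the SNURBS-generated reference is replaced by a simple assumed periodic target, and derivative feedback is added for stability): the control input is a weighted difference between the reference and the current output of a Van der Pol oscillator:

```python
import numpy as np

def vdp_step(x, v, u, dt, mu=1.0):
    # Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = u
    a = mu * (1.0 - x * x) * v - x + u
    return x + dt * v, v + dt * a

dt, kp, kd = 1e-3, 25.0, 10.0
x, v = 2.0, 0.0
for k in range(30000):                      # 30 s of simulated time
    t = k * dt
    r, rdot = np.cos(t), -np.sin(t)         # assumed target periodic orbit
    u = kp * (r - x) + kd * (rdot - v)      # feedback from the difference signal
    x, v = vdp_step(x, v, u, dt)
print("final tracking error:", abs(np.cos(30.0) - x))
```

Replacing the cosine with a reference sampled from the system's own attractor is the step the SNURBS construction automates.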
Phase correlation of foreign exchange time series
NASA Astrophysics Data System (ADS)
Wu, Ming-Chya
2007-03-01
The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of USD/DEM and USD/JPY exchange rates for the period from February 1, 1986 to December 31, 1996. The returns of the exchange-rate time series are first decomposed into a number of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The instantaneous phases of the resultant IMFs, calculated by the Hilbert transform, are then used to characterize the behaviors of pricing transmissions, and the correlation is probed by measuring the phase differences between two IMFs of the same order. From the distribution of phase differences, our results show explicitly that the correlations are stronger on the daily time scale than on longer time scales. The comparison of the periods 1986-1989 and 1990-1993 indicates that the two exchange rates were more correlated in the former period than in the latter. This result is consistent with the observations from the cross-correlation calculation.
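The phase-statistics step is straightforward to sketch. Below, the EMD stage is omitted and two synthetic narrow-band signals stand in for a pair of same-order IMFs; the instantaneous phases come from the Hilbert transform and their wrapped differences are histogrammed (all parameters are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
n = 4000
common = np.sin(2 * np.pi * 0.05 * np.arange(n))       # shared oscillatory mode
imf_a = common + 0.3 * rng.normal(size=n)
imf_b = np.roll(common, 2) + 0.3 * rng.normal(size=n)  # lagged, noisy copy

phase_a = np.angle(hilbert(imf_a))                     # instantaneous phases
phase_b = np.angle(hilbert(imf_b))
dphi = np.angle(np.exp(1j * (phase_a - phase_b)))      # wrap to (-pi, pi]

# a sharply peaked histogram of dphi indicates strong phase correlation
hist, _ = np.histogram(dphi, bins=21, range=(-np.pi, np.pi), density=True)
print("density in the central bin:", round(hist[10], 3))
```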
Explicit Constructivism: A Missing Link in Ineffective Lectures?
ERIC Educational Resources Information Center
Prakash, E. S.
2010-01-01
This study tested the possibility that interactive lectures explicitly based on activating learners' prior knowledge and driven by a series of logical questions might enhance the effectiveness of lectures. A class of 54 students doing the respiratory system course in the second year of the Bachelor of Medicine and Bachelor of Surgery program in my…
Sean Healey; Gretchen Moisen; Jeff Masek; Warren Cohen; Sam Goward; et al.
2007-01-01
The Forest Inventory and Analysis (FIA) program has partnered with researchers from the National Aeronautics and Space Administration, the University of Maryland, and other U.S. Department of Agriculture Forest Service units to identify disturbance patterns across the United States using FIA plot data and time series of Landsat satellite images. Spatially explicit...
Wind Tunnel Simulations of the Mock Urban Setting Test - Experimental Procedures and Data Analysis
2004-07-01
depends on the subjective choice of points to include in the constant stress region. This is demonstrated by the marked difference in the slope for the two...designed explicitly for the analysis of time series and signal processing, particularly for atmospheric dispersion experiments. The scripts developed...below. Processing scripts are available for all these analyses in the /scripts directory. All files of figures and processed data resulting from these
1976-08-01
foreign policy dynamics, the structure of a theory cannot in general be derived from statistical analysis of time series data (Brunner (1971), Thorson...and where such scientific knowledge is applicable. Recent attention in theory and research on the bureaucratic handling of foreign policy...process. Some of the elements of these concerns can be made explicit if we introduce modern systems theories which seek to treat organizations as
Ecological change points: The strength of density dependence and the loss of history.
Ponciano, José M; Taper, Mark L; Dennis, Brian
2018-05-01
Change points in the dynamics of animal abundances have extensively been recorded in historical time series records. Little attention has been paid to the theoretical dynamic consequences of such change-points. Here we propose a change-point model of stochastic population dynamics. This investigation embodies a shift of attention from the problem of detecting when a change will occur to another non-trivial puzzle: using ecological theory to understand and predict the post-breakpoint behavior of the population dynamics. The proposed model and the explicit expressions derived here predict and quantify how density dependence modulates the influence of the pre-breakpoint parameters on the post-breakpoint dynamics. Time series transitioning from one stationary distribution to another contain information about where the process was before the change-point, where it is heading and how long it will take to transition, and here this information is explicitly stated. Importantly, our results provide a direct connection of the strength of density dependence with theoretical properties of dynamic systems, such as the concept of resilience. Finally, we illustrate how to harness such information through maximum likelihood estimation for state-space models, and test the model robustness to widely different forms of compensatory dynamics. The model can be used to estimate important quantities in the theory and practice of population recovery. Copyright © 2018 Elsevier Inc. All rights reserved.
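A minimal sketch of the idea (a stochastic Gompertz model on the log scale is assumed here; the paper's full state-space machinery is not reproduced): log-abundance follows x[t+1] = a + b*x[t] + noise, the intercept shifts at a change-point, and the strength of density dependence b sets how fast the old stationary distribution is forgotten:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(a1, a2, b, t_break=200, T=400, sigma=0.1):
    """Gompertz-type dynamics with an intercept change-point at t_break."""
    x = np.empty(T)
    x[0] = a1 / (1 - b)                     # start at the old stationary mean
    for t in range(T - 1):
        a = a1 if t < t_break else a2       # the change-point
        x[t + 1] = a + b * x[t] + rng.normal(0, sigma)
    return x

for b in (0.5, 0.9):                        # strong vs weak density dependence
    x = simulate(a1=1.0, a2=0.5, b=b)
    halflife = np.log(2) / -np.log(b)       # memory of the pre-break state
    print(f"b={b}: new stationary mean={0.5 / (1 - b):.2f}, "
          f"half-life={halflife:.1f} steps, x at end={x[-1]:.2f}")
```

The half-life printed above is the explicit link between the strength of density dependence and the speed at which pre-breakpoint history is lost.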
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostova, T; Carlsen, T
2003-11-21
We present a spatially-explicit individual-based computational model of rodent dynamics, customized for the prairie vole species, M. ochrogaster. The model is based on trophic relationships and represents important features such as territorial competition, mating behavior, density-dependent predation and dispersal out of the modeled spatial region. Vegetation growth and vole fecundity are dependent on climatic components. The results of simulations show that the model correctly predicts the overall temporal dynamics of the population density. Time-series analysis shows a very good match between the periods corresponding to the peak population density frequencies predicted by the model and the ones reported in the literature. The model is used to study the relation between persistence, landscape area and predation. We introduce the notions of average time to extinction (ATE) and persistence frequency to quantify persistence. While the ATE decreases with decrease of area, it is a bell-shaped function of the predation level: increasing for 'small' and decreasing for 'large' predation levels.
NASA Astrophysics Data System (ADS)
Sultana, Tahmina; Takagi, Hiroaki; Morimatsu, Miki; Teramoto, Hiroshi; Li, Chun-Biu; Sako, Yasushi; Komatsuzaki, Tamiki
2013-12-01
We present a novel scheme to extract a multiscale state space network (SSN) from single-molecule time series. The multiscale SSN is a type of hidden Markov model that takes into account both multiple states buried in the measurement and memory effects in the process of the observable whenever they exist. Most biological systems function in a nonstationary manner across multiple timescales. Combined with a recently established nonlinear time series analysis based on information theory, a simple scheme is proposed to deal with the properties of multiscale and nonstationarity for a discrete time series. We derived an explicit analytical expression of the autocorrelation function in terms of the SSN. To demonstrate the potential of our scheme, we investigated single-molecule time series of dissociation and association kinetics between epidermal growth factor receptor (EGFR) on the plasma membrane and its adaptor protein Ash/Grb2 (Grb2) in an in vitro reconstituted system. We found that our formula successfully reproduces their autocorrelation function for a wide range of timescales (up to 3 s), and the underlying SSNs change their topographical structure as a function of the timescale; while the corresponding SSN is simple at the short timescale (0.033-0.1 s), the SSN at the longer timescales (0.1 s to ˜3 s) becomes rather complex in order to capture multiscale nonstationary kinetics emerging at longer timescales. It is also found that visiting the unbound form of the EGFR-Grb2 system approximately resets all information of history or memory of the process.
Moderating Effects of Mathematics Anxiety on the Effectiveness of Explicit Timing
ERIC Educational Resources Information Center
Grays, Sharnita D.; Rhymer, Katrina N.; Swartzmiller, Melissa D.
2017-01-01
Explicit timing is an empirically validated intervention to increase problem completion rates by exposing individuals to a stopwatch and explicitly telling them of the time limit for the assignment. Though explicit timing has proven to be effective for groups of students, some students may not respond well to explicit timing based on factors such…
Meyniel, Florent; Safra, Lou; Pessiglione, Mathias
2014-01-01
A pervasive cost-benefit problem is how to allocate effort over time, i.e., deciding when to work and when to rest. An economic decision perspective would suggest that the duration of effort is determined beforehand, depending on expected costs and benefits. However, the literature on exercise performance emphasizes that decisions are made on the fly, depending on physiological variables. Here, we propose and validate a general model of effort allocation that integrates these two views. In this model, a single variable, termed cost evidence, accumulates during effort and dissipates during rest, triggering effort cessation and resumption when reaching bounds. We assumed that such a basic mechanism could explain implicit adaptation, whereas the latent parameters (slopes and bounds) could be amenable to explicit anticipation. A series of behavioral experiments manipulating effort duration and difficulty was conducted in a total of 121 healthy humans to dissociate implicit-reactive from explicit-predictive computations. Results show 1) that effort and rest durations are adapted on the fly to variations in cost-evidence level, 2) that the cost-evidence fluctuations driving the behavior do not match explicit ratings of exhaustion, and 3) that actual difficulty impacts effort duration whereas expected difficulty impacts rest duration. Taken together, our findings suggest that cost evidence is implicitly monitored online, with an accumulation rate proportional to actual task difficulty. In contrast, cost-evidence bounds and dissipation rate might be adjusted in anticipation, depending on explicit task difficulty. PMID:24743711
A∞-Algebra of an Elliptic Curve and Eisenstein Series
NASA Astrophysics Data System (ADS)
Polishchuk, Alexander
2011-02-01
We compute explicitly the A∞-structure on the algebra Ext*(O_C ⊕ L, O_C ⊕ L), where L is a line bundle of degree 1 on an elliptic curve C. The answer involves higher derivatives of Eisenstein series.
Selecting and applying indicators of ecosystem collapse for risk assessments.
Rowland, Jessica A; Nicholson, Emily; Murray, Nicholas J; Keith, David A; Lester, Rebecca E; Bland, Lucie M
2018-03-12
Ongoing ecosystem degradation and transformation are key threats to biodiversity. Measuring ecosystem change towards collapse relies on monitoring indicators that quantify key ecological processes. Yet little guidance is available on selecting and implementing indicators for ecosystem risk assessment. Here, we reviewed indicator use in ecological studies of decline towards collapse in marine pelagic and temperate forest ecosystems. We evaluated the use of indicator selection methods, indicator types (geographic distribution, abiotic, biotic), methods of assessing multiple indicators, and the temporal quality of time series. We compared these ecological studies to risk assessments in the International Union for Conservation of Nature Red List of Ecosystems (RLE), where indicators are used to estimate ecosystem collapse risk. We found that ecological studies and RLE assessments rarely reported how indicators were selected, particularly in terrestrial ecosystems. Few ecological studies and RLE assessments quantified ecosystem change with all three indicator types, and the indicator types used varied between marine and terrestrial ecosystems. Several studies used indices or multivariate analyses to assess multiple indicators simultaneously, but RLE assessments did not, as RLE guidelines advise against them. Most studies and RLE assessments used time series spanning at least 30 years, increasing the chance of reliably detecting change. Limited use of indicator selection protocols and infrequent use of all three indicator types may hamper the ability to accurately detect changes. To improve the value of risk assessments for informing policy and management, we recommend using: (i) explicit protocols, including conceptual models, to identify and select indicators; (ii) a range of indicators spanning distributional, abiotic and biotic features; (iii) indices and multivariate analyses with extreme care until guidelines are developed; (iv) time series with sufficient data to increase the ability to accurately diagnose directional change; (v) data from multiple sources to support assessments; and (vi) explicit reporting of each step in the assessment process. This article is protected by copyright. All rights reserved.
Studying the effect of weather conditions on daily crash counts using a discrete time-series model.
Brijs, Tom; Karlis, Dimitris; Wets, Geert
2008-05-01
In previous research, significant effects of weather conditions on car crashes have been found. However, most studies use monthly or yearly data, and only few studies are available analyzing the impact of weather conditions on daily car crash counts. Furthermore, the studies that are available at the daily level do not explicitly model the data in a time-series context, thereby ignoring the temporal serial correlation that may be present in the data. In this paper, we introduce an integer autoregressive model for modelling count data with time interdependencies. The model is applied to daily car crash data, meteorological data and traffic exposure data from the Netherlands, with the aim of examining the risk impact of weather conditions on the observed counts. The results show that several assumptions related to the effect of weather conditions on crash counts are found to be significant in the data and that, if serial temporal correlation is not accounted for in the model, this may produce biased results.
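A minimal sketch of the integer autoregressive (INAR(1)) structure such a model builds on (the covariate effect and all parameter values below are assumed for illustration): counts evolve by binomial thinning of yesterday's count plus Poisson innovations whose mean depends on a weather covariate:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 365
rain = rng.random(T) < 0.3                # assumed daily rain indicator
alpha = 0.4                               # thinning (serial correlation) parameter
lam = np.where(rain, 12.0, 9.0)           # innovation mean, higher on rainy days

y = np.empty(T, dtype=int)
y[0] = rng.poisson(lam[0] / (1 - alpha))  # rough stationary starting count
for t in range(1, T):
    survivors = rng.binomial(y[t - 1], alpha)  # thinned carry-over from yesterday
    y[t] = survivors + rng.poisson(lam[t])     # plus new weather-driven crashes
print("mean crashes, rainy vs dry days:",
      np.round(y[rain].mean(), 1), np.round(y[~rain].mean(), 1))
print("lag-1 autocorrelation:", np.round(np.corrcoef(y[:-1], y[1:])[0, 1], 2))
```

Dropping the thinning term and fitting an ordinary Poisson regression to such data is exactly the kind of misspecification that ignores the serial correlation and can bias the estimated weather effects.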
Indirect Goal Priming Is More Powerful than Explicit Instruction in Children
ERIC Educational Resources Information Center
Kesek, Amanda; Cunningham, William A.; Packer, Dominic J.; Zelazo, Philip David
2011-01-01
This study examined the relative efficacy of explicit instruction and indirect priming on young children's behavior in a task that required a series of choices between a small immediate reward and a larger delayed reward. One hundred and six 4-year-old children were randomly assigned to one of four conditions involving one of two goals (maximize…
ERIC Educational Resources Information Center
Fang, Su-Chi; Hsu, Ying-Shao; Hsu, Wei Hsiu
2016-01-01
The study explored how to best use scaffolds for supporting students' inquiry practices in computer-supported learning environments. We designed a series of inquiry units assisted with three versions of written inquiry prompts (generic and context-specific); that is, three scaffold-fading conditions: implicit, explicit, and fading. We then…
van de Kamp, Marie-Thérèse; Admiraal, Wilfried; van Drie, Jannet; Rijlaarsdam, Gert
2015-03-01
The main purposes of visual arts education concern the enhancement of students' creative processes and the originality of their art products. Divergent thinking is crucial for finding original ideas in the initial phase of a creative process that aims to result in an original product. This study aims to examine the effects of explicit instruction of meta-cognition on students' divergent thinking. A quasi-experimental design was implemented with 147 secondary school students in visual arts education. In the experimental condition, students attended a series of regular lessons with assignments on art reception and production, and they attended one intervention lesson with explicit instruction of meta-cognition. In the control condition, students attended a series of regular lessons only. Pre-tests and post-tests measured fluency, flexibility, and originality as indicators of divergent thinking. Explicit instruction of meta-cognitive knowledge had a positive effect on fluency and flexibility, but not on originality. This study implies that in the domain of visual arts, instructional support in building up meta-cognitive knowledge about divergent thinking may improve students' creative processes. This study also discusses possible reasons for the demonstrated lack of effect for originality. © 2014 The British Psychological Society.
Mapping and spatial-temporal modeling of Bromus tectorum invasion in central Utah
NASA Astrophysics Data System (ADS)
Jin, Zhenyu
Cheatgrass, or Downy Brome, is an exotic winter annual weed native to the Mediterranean region. Since its introduction to the U.S., it has become a significant weed and aggressive invader of sagebrush, pinyon-juniper, and other shrub communities, where it can completely out-compete native grasses and shrubs. In this research, remotely sensed data combined with field-collected data are used to investigate the distribution of cheatgrass in central Utah, to characterize the trend of the NDVI time-series of cheatgrass, and to construct a spatially explicit population-based model to simulate the spatial-temporal dynamics of cheatgrass. This research proposes a method for mapping the canopy closure of invasive species using remotely sensed data acquired on different dates. Different invasive species have their own distinct phenologies, and satellite images from different dates can be used to capture this phenology. The results of cheatgrass abundance prediction fit the field data well for both the linear regression and regression tree models, although the regression tree model performs better than the linear regression model. To characterize the trend of the NDVI time-series of cheatgrass, a novel smoothing algorithm named RMMEH is presented in this research to overcome some drawbacks of many other algorithms. By comparing the performance of RMMEH in smoothing a 16-day composite of the MODIS NDVI time-series with that of two other methods, 4253EH,twice and the MVI, we have found that RMMEH not only keeps the original valid NDVI points, but also effectively removes spurious spikes. The reconstructed NDVI time-series of different land covers are of higher quality and have a smoother temporal trend. To simulate the spatial-temporal dynamics of cheatgrass, a spatially explicit population-based model is built applying remotely sensed data. The comparison between the model output and the ground truth of cheatgrass closure demonstrates that the model can successfully simulate the spatial-temporal dynamics of cheatgrass in a simple cheatgrass-dominant environment. The simulation of the functional response to different prescribed fire rates also shows that this model is helpful for answering management questions like, "What are the effects of prescribed fire on invasive species?" It demonstrates that a medium fire rate of 10% can successfully prevent cheatgrass invasion.
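A minimal sketch of median-based NDVI despiking in the spirit of the smoothers compared above (a generic upper-envelope filter under our own assumptions, not the RMMEH algorithm itself): since cloud contamination biases NDVI downward, a point is raised to the local median only when the median exceeds it, which removes negative spikes at the cost of slightly lifting true local minima:

```python
import numpy as np

def despike_upper_envelope(ndvi, window=3):
    """Lift downward spikes to the running median; leave other points alone."""
    half = window // 2
    out = ndvi.astype(float).copy()
    for i in range(ndvi.size):
        lo, hi = max(0, i - half), min(ndvi.size, i + half + 1)
        med = np.median(ndvi[lo:hi])
        if med > out[i]:              # only lift cloud-induced drops
            out[i] = med
    return out

# toy 16-day composite NDVI with two cloud-contaminated drops
ndvi = np.array([0.20, 0.25, 0.30, 0.10, 0.45, 0.50, 0.15, 0.50, 0.40, 0.30])
print(despike_upper_envelope(ndvi))
```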
Temporal structure and gain-loss asymmetry for real and artificial stock indices
NASA Astrophysics Data System (ADS)
Siven, Johannes Vitalis; Lins, Jeffrey Todd
2009-11-01
Previous research has shown that for stock indices, the most likely time until a return of a particular size has been observed is longer for gains than for losses. We demonstrate that this so-called gain-loss asymmetry vanishes if the temporal dependence structure is destroyed by scrambling the time series. We also show that an artificial index constructed as a simple average of a number of individual stocks displays gain-loss asymmetry; this allows us to explicitly analyze the dependence between the index constituents. We consider mutual information and correlation-based measures and show that the stock returns indeed have a higher degree of dependence in times of market downturns than upturns.
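A minimal sketch of the gain-loss asymmetry statistic discussed above, applied to synthetic returns with volatility clustering (real index returns would replace them; on symmetric synthetic data the two medians come out similar, whereas for real indices losses are reached faster and scrambling removes the effect):

```python
import numpy as np

rng = np.random.default_rng(6)

def median_passage_times(r, rho, horizon=500):
    """Median waiting time until cumulative return first hits +rho / -rho."""
    gains, losses = [], []
    for s0 in range(r.size - horizon):
        c = np.cumsum(r[s0:s0 + horizon])
        up, dn = np.flatnonzero(c >= rho), np.flatnonzero(c <= -rho)
        if up.size:
            gains.append(up[0] + 1)
        if dn.size:
            losses.append(dn[0] + 1)
    return np.median(gains), np.median(losses)

# crude volatility-clustered returns (GARCH-like recursion, assumed parameters)
n = 4000
r, s = np.zeros(n), 0.01
for t in range(1, n):
    s = 0.9 * s + 0.1 * (0.005 + 0.5 * abs(r[t - 1]))   # volatility clustering
    r[t] = s * rng.normal()

print("original (gain, loss) median times:", median_passage_times(r, 0.05))
print("scrambled:", median_passage_times(rng.permutation(r), 0.05))
```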
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivasseau, Vincent, E-mail: vincent.rivasseau@th.u-psud.fr; Tanasa, Adrian, E-mail: adrian.tanasa@ens-lyon.org
The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.
NASA Technical Reports Server (NTRS)
Hailperin, M.
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forest, E.; Bengtsson, J.; Reusch, M.F.
1991-04-01
The full power of Yoshida's technique is exploited to produce an arbitrary-order implicit symplectic integrator and a multi-map explicit integrator. This implicit integrator uses a characteristic function involving the force term alone. We also point out the usefulness of the plain Ruth algorithm in computing Taylor series maps using the techniques first introduced by Berz in his 'COSY-INFINITY' code.
ERIC Educational Resources Information Center
Colangelo, A.; Buchanan, L.
2005-01-01
We report evidence for dissociation between explicit and implicit access to word representations in a deep dyslexic patient (JO). JO read aloud a series of ambiguous (e.g., bank) and unambiguous (e.g., food) words and performed a lexical decision task using these same items. When required to explicitly access the items (i.e., naming), JO showed…
The New Tropospheric Product of the International GNSS Service
NASA Technical Reports Server (NTRS)
Byun, Sung H.; Bar-Sever, Yoaz E.; Gendt, Gerd
2005-01-01
We compare this new approach for generating the IGS tropospheric products with the previous approach, which was based on explicit combination of total zenith delay contributions from the IGS ACs. The new approach enables the IGS to rapidly generate highly accurate and highly reliable total zenith delay time series for many hundreds of sites, thus increasing the utility of the products to weather modelers, climatologists, and GPS analysts. In this paper we describe this new method, and discuss issues of accuracy, quality control, utility of the new products and assess its benefits.
NASA Astrophysics Data System (ADS)
Gerber, Christoph; Purtschert, Roland; Hunkeler, Daniel; Hug, Rainer; Sültenfuss, Jürgen
2018-06-01
Groundwater quality in many regions with intense agriculture has deteriorated due to the leaching of nitrate and other agricultural pollutants. Modified agricultural practices can reduce the input of nitrate to groundwater bodies, but it is crucial to determine the time span over which these measures become effective at reducing nitrate levels in pumping wells. Such estimates can be obtained from hydrogeological modeling or lumped-parameter models (LPM) in combination with environmental tracer data. Two challenges in such tracer-based estimates are (i) accounting for the different modes of transport in the unsaturated zone (USZ), and (ii) assessing uncertainties. Here we extend a recently published Bayesian inference scheme for simple LPMs to include an explicit USZ model and apply it to the Dünnerngäu aquifer, Switzerland. Compared to a previous estimate of travel times in the aquifer based on a 2D hydrogeological model, our approach provides a more accurate assessment of the dynamics of nitrate concentrations in the aquifer. We find that including tracer measurements (3H/3He, 85Kr, 39Ar, 4He) reduces uncertainty in nitrate predictions if nitrate time series at wells are not available or short, but does not necessarily lead to better predictions if long nitrate time series are available. Additionally, the combination of tracer data with nitrate time series allows for a separation of the travel times in the unsaturated and saturated zone.
Esser, Sarah; Haider, Hilde
2017-01-01
The Serial Reaction Time Task (SRTT) is an important paradigm for studying the properties of unconscious learning processes. One specifically interesting and still controversially discussed topic is the set of conditions under which unconsciously acquired knowledge becomes conscious knowledge. The different assumptions about the underlying mechanisms can contrastively be separated into two accounts: single-system views, in which the strengthening of associative weights throughout training gradually turns implicit knowledge into explicit knowledge, and dual-system views, in which implicit knowledge itself does not become conscious. Rather, a second process is required which detects changes in performance and is able to acquire conscious knowledge. In a series of three experiments, we manipulated the arrangement of sequential and deviant trials. In SRTT training, participants either received mini-blocks of sequential trials followed by mini-blocks of deviant trials (22 trials each) or they received sequential and deviant trials mixed randomly. Importantly, the number of correct and deviant transitions was the same for both conditions. Experiment 1 showed that both conditions acquired a comparable amount of implicit knowledge, expressed in different test tasks. Experiment 2 further demonstrated that the conditions differed in the subjectively experienced fluency of the task, with more fluency experienced when trained with mini-blocks. Lastly, Experiment 3 revealed that the participants trained with longer mini-blocks of sequential and deviant material developed more explicit knowledge. Results are discussed regarding their compatibility with different assumptions about the emergence of explicit knowledge in an implicit learning situation, especially with respect to the role of metacognitive judgements and, more specifically, the Unexpected-Event Hypothesis.
Barbu, Corentin; Dumonteil, Eric; Gourbière, Sébastien
2010-01-01
Background Chagas disease is a major parasitic disease in Latin America, prevented in part by vector control programs that reduce domestic populations of triatomines. However, the design of control strategies adapted to non-domiciliated vectors, such as Triatoma dimidiata, remains a challenge because it requires an accurate description of their spatio-temporal distributions, and a proper understanding of the underlying dispersal processes. Methodology/Principal Findings We combined extensive spatio-temporal data sets describing house infestation dynamics by T. dimidiata within a village, and spatially explicit population dynamics models in a selection model approach. Several models were implemented to provide theoretical predictions under different hypotheses on the origin of the dispersers and their dispersal characteristics, which we compared with the spatio-temporal pattern of infestation observed in the field. The best models fitted the dynamic of infestation described by a one year time-series, and also predicted with a very good accuracy the infestation process observed during a second replicate one year time-series. The parameterized models gave key insights into the dispersal of these vectors. i) About 55% of the triatomines infesting houses came from the peridomestic habitat, the rest corresponding to immigration from the sylvatic habitat, ii) dispersing triatomines were 5–15 times more attracted by houses than by peridomestic area, and iii) the moving individuals spread on average over rather small distances, typically 40–60 m/15 days. Conclusion/Significance Since these dispersal characteristics are associated with much higher abundance of insects in the periphery of the village, we discuss the possibility that spatially targeted interventions allow for optimizing the efficacy of vector control activities within villages. Such optimization could prove very useful in the context of limited resources devoted to vector control. PMID:20689823
A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics
NASA Astrophysics Data System (ADS)
Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno
2017-07-01
In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time-step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as in the case of a bridge crane under seismic loading, multiple time scales coexist in the same problem. In that case, the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts in order to reproduce the much lower-frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate: the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also provides an excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated in comparison to a fine-scale fully explicit computation. The energy dissipated in the implicit-explicit interface is well controlled and the computational time is lower than for a fully explicit simulation.
NASA Technical Reports Server (NTRS)
Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.
2017-01-01
There are many flare forecasting models. For an excellent review and comparison of some of them see Barnes et al. (2016). All these models are successful to some degree, but there is a need for better models. We claim the most successful models explicitly or implicitly base their forecasts on various estimates of components of the photospheric current density J, based on observations of the photospheric magnetic field B. However, none of the models we are aware of compute the complete J. We seek to develop a better model based on computing the complete photospheric J. Initial results from this model are presented in this talk. We present a data-driven, near-photospheric, 3-D, non-force-free magnetohydrodynamic (MHD) model that computes time series of the total J, and associated resistive heating rate in each pixel at the photosphere in the neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of B measured by the Helioseismic & Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series of B in every AR pixel. Errors in B due to these periods can be significant.
Steele, Vaughn R.; Staley, Cameron; Sabatinelli, Dean
2015-01-01
Risky sexual behaviors typically occur when a person is sexually motivated by potent sexual reward cues. Yet, individual differences in sensitivity to sexual cues have not been examined with respect to sexual risk behaviors. A greater responsiveness to sexual cues might provide greater motivation for a person to act sexually; a lower responsiveness to sexual cues might lead a person to seek more intense, novel, possibly risky, sexual acts. In this study, event-related potentials were recorded in 64 men and women while they viewed a series of emotional photographs, including sexually explicit ones. The motivational salience of the sexual cues was varied by including more and less explicit sexual images. Indeed, the more explicit sexual stimuli resulted in enhanced late positive potentials (LPP) relative to the less explicit sexual images. Participants with fewer sexual intercourse partners in the last year showed reduced LPP amplitude to the less explicit sexual images relative to the more explicit sexual images, whereas participants with more partners responded similarly to the more and less explicit sexual images. This pattern of results is consistent with a greater responsivity model. Those who engage in more sexual behaviors consistent with risk are also more responsive to less explicit sexual cues. PMID:24526189
Portrat, Sophie; Guida, Alessandro; Phénix, Thierry; Lemaire, Benoît
2016-04-01
Working memory (WM) is a cognitive system allowing short-term maintenance and processing of information. Maintaining information in WM consists, classically, in rehearsing or refreshing it. Chunking could also be considered as a maintenance mechanism. However, in the literature, it is more often used to explain performance than explicitly investigated within WM paradigms. Hence, the aim of the present paper was (1) to strengthen the experimental dialogue between WM and chunking, by studying the effect of acronyms in a computer-paced complex span task paradigm and (2) to formalize explicitly this dialogue within a computational model. Young adults performed a WM complex span task in which they had to maintain series of 7 letters for further recall while performing a concurrent location judgment task. The series to be remembered were either random strings of letters or strings containing a 3-letter acronym that appeared in position 1, 3, or 5 in the series. Together, the data and simulations provide a better understanding of the maintenance mechanisms taking place in WM and its interplay with long-term memory. Indeed, the behavioral WM performance lends evidence to the functional characteristics of chunking that seems to be, especially in a WM complex span task, an attentional time-based mechanism that certainly enhances WM performance but also competes with other processes at hand in WM. Computational simulations support and delineate such a conception by showing that searching for a chunk in long-term memory involves attentionally demanding subprocesses that essentially take place during the encoding phases of the task.
Optimal estimation of diffusion coefficients from single-particle trajectories
NASA Astrophysics Data System (ADS)
Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik
2014-02-01
How does one optimally determine the diffusion coefficient of a diffusing particle from a single time-lapse-recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given a sufficiently long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of the bias caused by substrate fluctuations in the CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycosylase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
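The covariance-based estimator is simple enough to state in two lines. The sketch below follows the published form D = <dx^2>/(2*dt) + <dx_n dx_{n+1}>/dt, in which the covariance term cancels the bias that localization noise adds to the naive MSD estimate; the trajectory and noise level are simulated (motion blur is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(7)
D_true, dt, sigma, n = 0.5, 0.05, 0.1, 10000

x = np.cumsum(rng.normal(0, np.sqrt(2 * D_true * dt), n))  # pure 1-D diffusion
obs = x + rng.normal(0, sigma, n)                          # localization noise

dx = np.diff(obs)
msd_term = dx @ dx / dx.size / (2 * dt)             # naive estimate, biased up
cov_term = (dx[:-1] @ dx[1:]) / (dx.size - 1) / dt  # negative, cancels the bias
print(f"true D = {D_true}")
print(f"naive MSD estimate = {msd_term:.3f}")       # approx D + sigma^2/dt
print(f"CVE = {msd_term + cov_term:.3f}")
```

With these parameters the naive estimate is inflated by sigma^2/dt = 0.2, while the CVE recovers D to within sampling error.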
The Levy sections theorem revisited
NASA Astrophysics Data System (ADS)
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio
2007-06-01
This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.
Exact models for isotropic matter
NASA Astrophysics Data System (ADS)
Thirukkanesh, S.; Maharaj, S. D.
2006-04-01
We study the Einstein-Maxwell system of equations in spherically symmetric gravitational fields for static interior spacetimes. The condition for pressure isotropy is reduced to a recurrence equation with variable, rational coefficients. We demonstrate that this difference equation can be solved in general using mathematical induction. Consequently, we can find an explicit exact solution to the Einstein-Maxwell field equations. The metric functions, energy density, pressure and the electric field intensity can be found explicitly. Our result contains models found previously, including the neutron star model of Durgapal and Bannerji. By placing restrictions on parameters arising in the general series, we show that the series terminate and there exist two linearly independent solutions. Consequently, it is possible to find exact solutions in terms of elementary functions, namely polynomials and algebraic functions.
Time series inversion of spectra from ground-based radiometers
NASA Astrophysics Data System (ADS)
Christensen, O. M.; Eriksson, P.
2013-02-01
Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the OSO water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
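The temporal augmentation is easy to prototype. The sketch below uses the standard linear MAP (optimal estimation) update and builds the augmented a priori covariance as a Kronecker product of a temporal correlation matrix and a per-profile covariance; the exponential correlation scale, channel counts, and forward model are all invented for illustration.

```python
import numpy as np

def map_retrieval(y, K, xa, Sa, Se):
    """MAP / optimal-estimation update for a linear forward model
    y = K x + noise, with prior (xa, Sa) and measurement covariance Se."""
    G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)   # gain matrix
    return xa + G @ (y - K @ xa)

# Retrieve n_alt altitudes jointly at n_t times, with exponential temporal
# correlation (scale tau) written into the a priori covariance.
rng = np.random.default_rng(0)
n_alt, n_t, tau = 5, 4, 2.0
t = np.arange(n_t)
S_time = np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
Sa = np.kron(S_time, np.eye(n_alt))   # coupling across time -> adaptive smoothing
K = np.kron(np.eye(n_t), rng.standard_normal((3, n_alt)))  # 3 channels per time
Se = 0.01 * np.eye(3 * n_t)
x_true = rng.standard_normal(n_alt * n_t)
y = K @ x_true + 0.1 * rng.standard_normal(3 * n_t)
x_hat = map_retrieval(y, K, np.zeros(n_alt * n_t), Sa, Se)
print(x_hat.round(2))
```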
Embedded-explicit emergent literacy intervention I: Background and description of approach.
Justice, Laura M; Kaderavek, Joan N
2004-07-01
This article, the first of a two-part series, provides background information and a general description of an emergent literacy intervention model for at-risk preschoolers and kindergartners. The embedded-explicit intervention model emphasizes the dual importance of providing young children with socially embedded opportunities for meaningful, naturalistic literacy experiences throughout the day, in addition to regular structured therapeutic interactions that explicitly target critical emergent literacy goals. The role of the speech-language pathologist (SLP) in the embedded-explicit model encompasses both indirect and direct service delivery: The SLP consults and collaborates with teachers and parents to ensure the highest quality and quantity of socially embedded literacy-focused experiences and serves as a direct provider of explicit interventions using structured curricula and/or lesson plans. The goal of this integrated model is to provide comprehensive emergent literacy interventions across a spectrum of early literacy skills to ensure the successful transition of at-risk children from prereaders to readers.
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
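A characteristic-function comparison of this kind can be sketched in a few lines. The model CF used below is the standard one for a symmetric alpha-stable sub-Gaussian vector; the L2 discrepancy on a grid of test points is a generic stand-in for the authors' estimation and testing statistics.

```python
import numpy as np

def empirical_cf(X, T):
    """Empirical characteristic function of sample X (n x d) at rows of T (m x d)."""
    return np.exp(1j * X @ T.T).mean(axis=0)

def subgaussian_cf(T, alpha, Sigma):
    """CF of a symmetric alpha-stable sub-Gaussian vector with scale matrix Sigma."""
    q = np.einsum('ij,jk,ik->i', T, Sigma, T)   # t^T Sigma t for each row t
    return np.exp(-(0.5 * q) ** (alpha / 2.0))

def cf_distance(X, alpha, Sigma, T):
    """L2 discrepancy between empirical and model CFs on a grid T, the kind of
    closed-form-free comparison usable when no density is available."""
    return np.mean(np.abs(empirical_cf(X, T) - subgaussian_cf(T, alpha, Sigma)) ** 2)

T = np.random.default_rng(0).normal(size=(64, 2))
X = np.random.default_rng(1).standard_t(df=3, size=(1000, 2))  # heavy-tailed sample
print(cf_distance(X, alpha=1.7, Sigma=np.eye(2), T=T))
```

Minimizing cf_distance over (alpha, Sigma) gives a simple estimator in this spirit.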
EverVIEW: a visualization platform for hydrologic and Earth science gridded data
Romañach, Stephanie S.; McKelvy, James M.; Suir, Kevin J.; Conzelmann, Craig
2015-01-01
The EverVIEW Data Viewer is a cross-platform desktop application that combines and builds upon multiple open source libraries to help users to explore spatially-explicit gridded data stored in Network Common Data Form (NetCDF). Datasets are displayed across multiple side-by-side geographic or tabular displays, showing colorized overlays on an Earth globe or grid cell values, respectively. Time-series datasets can be animated to see how water surface elevation changes through time or how habitat suitability for a particular species might change over time under a given scenario. Initially targeted toward Florida's Everglades restoration planning, EverVIEW has been flexible enough to address the varied needs of large-scale planning beyond Florida, and is currently being used in biological planning efforts nationally and internationally.
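Reading such NetCDF time-series grids programmatically is straightforward; a small sketch using the netCDF4 library follows, with the file and variable names purely hypothetical.

```python
import numpy as np
from netCDF4 import Dataset   # pip install netCDF4

# Hypothetical file and variable names, for illustration only.
with Dataset("everglades_stage.nc") as nc:
    stage = nc.variables["water_surface_elevation"]   # dims: (time, y, x)
    times = nc.variables["time"][:]
    for k in range(len(times)):                       # step through animation frames
        frame = np.ma.filled(stage[k, :, :], np.nan)  # grid-cell values, one time step
        print(k, np.nanmin(frame), np.nanmax(frame))
```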
Energy efficient model based algorithm for control of building HVAC systems.
Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N
2015-11-01
Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on its state space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is provided from the real time data during closed loop control. The control strategies are ported to a PIC32mx series microcontroller platform. The building model is implemented in MATLAB and hardware-in-the-loop (HIL) testing of the strategies is executed over a USB port. Results indicate that compared to traditional proportional integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
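As a rough illustration of the model-plus-controller pipeline, the sketch below discretizes a first-order RC model and solves an unconstrained finite-horizon quadratic cost (comfort tracking plus an energy penalty) by least squares. All parameter values are assumed, and this is a simple receding-horizon stand-in, not the paper's explicit MPC or its PSO-estimated building model.

```python
import numpy as np

# First-order RC thermal model (assumed parameters for illustration):
# C*dT/dt = (T_amb - T)/R + k_h*u, discretized with step dt.
R, C, k_h, dt = 2.0, 5.0, 1.0, 0.1
a = 1.0 - dt / (R * C)          # state transition
b = dt * k_h / C                # input gain
c = dt / (R * C)                # ambient-temperature gain

def mpc_first_move(T0, T_amb, T_ref, N=20, rho=0.05):
    """Minimize sum (T_k - T_ref)^2 (comfort) + rho*u_k^2 (energy) over a
    horizon of N steps and return the first control move."""
    F = np.array([a ** (k + 1) for k in range(N)])        # free response
    Phi = np.zeros((N, N))                                 # forced response
    for i in range(N):
        for j in range(i + 1):
            Phi[i, j] = a ** (i - j) * b
    d = np.array([sum(a ** (k - j) * c * T_amb for j in range(k + 1))
                  for k in range(N)])                      # ambient contribution
    U = np.linalg.solve(Phi.T @ Phi + rho * np.eye(N),
                        Phi.T @ (T_ref - F * T0 - d))
    return U[0]

print(mpc_first_move(T0=18.0, T_amb=10.0, T_ref=21.0))
```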
López-Ibáñez, Manuel; Prasad, T Devi; Paechter, Ben
2011-01-01
Reducing the energy consumption of water distribution networks has never had more significance. The greatest energy savings can be obtained by carefully scheduling the operations of pumps. Schedules can be defined either implicitly, in terms of other elements of the network such as tank levels, or explicitly, by specifying the time during which each pump is on/off. The traditional representation of explicit schedules is a string of binary values, with each bit representing pump on/off status during a particular time interval. In this paper, we formally define and analyze two new explicit representations based on time-controlled triggers, where the maximum number of pump switches is established beforehand and the schedule may contain fewer than the maximum number of switches. In these representations, a pump schedule is divided into a series of integers, with each integer representing the number of hours for which a pump is active/inactive. This reduces the number of potential schedules compared to the binary representation, and allows the algorithm to operate on the feasible region of the search space. We propose evolutionary operators for these two new representations. The new representations and their corresponding operators are compared with the two most-used representations in pump scheduling, namely binary representation and level-controlled triggers. A detailed statistical analysis of the results indicates which parameters have the greatest effect on the performance of the evolutionary algorithms. The empirical results show that an evolutionary algorithm using the proposed representations improves over the results obtained by a recent state-of-the-art hybrid genetic algorithm for pump scheduling using level-controlled triggers.
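The duration-based encoding is easy to make concrete: a schedule is a list of integer run lengths for alternating off/on periods, which bounds the number of switches by construction. A minimal sketch (padding and switch-limit details are simplified relative to the paper):

```python
import numpy as np

def durations_to_binary(durations, start_on=False):
    """Expand a time-controlled-trigger representation (integer run lengths of
    alternating off/on periods) into the traditional binary on/off schedule.
    Encoding run lengths bounds the number of switches by len(durations) - 1."""
    schedule, state = [], start_on
    for d in durations:
        schedule.extend([int(state)] * d)
        state = not state
    return np.array(schedule)

# e.g. off 6 h, on 10 h, off 8 h: a 24-hour schedule with exactly 2 switches
print(durations_to_binary([6, 10, 8]))
```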
Global Warming Estimation From Microwave Sounding Unit
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Dalu, G.
1998-01-01
Microwave Sounding Unit (MSU) Ch 2 data sets, collected from sequential, polar-orbiting, Sun-synchronous National Oceanic and Atmospheric Administration operational satellites, contain systematic calibration errors that are coupled to the diurnal temperature cycle over the globe. Since these coupled errors in MSU data differ between successive satellites, it is necessary to make compensatory adjustments to these multisatellite data sets in order to determine long-term global temperature change. With the aid of the observations during overlapping periods of successive satellites, we can determine such adjustments and use them to account for the coupled errors in the long-term time series of MSU Ch 2 global temperature. In turn, these adjusted MSU Ch 2 data sets can be used to yield the global temperature trend. In a pioneering study, Spencer and Christy (SC) (1990) developed a procedure to derive the global temperature trend from MSU Ch 2 data. In their procedure, the magnitude of the coupled errors is not determined explicitly; instead, based on some assumptions, these coupled errors are eliminated in three separate steps. Such a procedure can leave unaccounted residual errors in the time series of the temperature anomalies deduced by SC, which could lead to a spurious long-term temperature trend derived from their analysis. In the present study, we have developed a method that avoids the shortcomings of the SC procedure. Based on our analysis, we find there is a global warming of 0.23+/-0.12 K between 1980 and 1991. Also, in this study, the time series of global temperature anomalies constructed by removing the global mean annual temperature cycle compares favorably with a similar time series obtained from conventional observations of temperature.
Gambini, R; Pullin, J
2000-12-18
We consider general relativity with a cosmological constant as a perturbative expansion around a completely solvable diffeomorphism invariant field theory. This theory is the lambda --> infinity limit of general relativity. This allows an explicit perturbative computational setup in which the quantum states of the theory and the classical observables can be explicitly computed. An unexpected relationship arises at a quantum level between the discrete spectrum of the volume operator and the allowed values of the cosmological constant.
Improving carbon monitoring and reporting in forests using spatially-explicit information.
Boisvenue, Céline; Smiley, Byron P; White, Joanne C; Kurz, Werner A; Wulder, Michael A
2016-12-01
Understanding and quantifying carbon (C) exchanges between the biosphere and the atmosphere, specifically the process of C removal from the atmosphere and how this process is changing, is the basis for developing appropriate adaptation and mitigation strategies for climate change. Monitoring forest systems and reporting on greenhouse gas (GHG) emissions and removals are now required components of international efforts aimed at mitigating rising atmospheric GHG. Spatially-explicit information about forests can improve the estimates of GHG emissions and removals. However, at present, remotely-sensed information on forest change is not commonly integrated into GHG reporting systems. New, detailed (30-m spatial resolution) forest change products derived from satellite time series, informing on location, magnitude, and type of change at an annual time step, have recently become available. Here we estimate the forest GHG balance using these new Landsat-based change data, a spatial forest inventory, and newly developed yield curves as inputs to the Carbon Budget Model of the Canadian Forest Sector (CBM-CFS3), producing GHG emissions and removals at 30 m resolution for a 13 Mha pilot area in Saskatchewan, Canada. Our results depict the forests as a cumulative C sink (17.98 Tg C, or 0.64 Tg C year⁻¹) between 1984 and 2012, with an average C density of 206.5 (±0.6) Mg C ha⁻¹. Comparisons between our estimates and estimates from Canada's National Forest Carbon Monitoring, Accounting and Reporting System (NFCMARS) were possible only on a subset of our study area. In our simulations the area was a C sink, while in the official reporting simulations it was a C source. Forest area and overall C stock estimates also differ between the two simulated estimates. Both estimates have similar uncertainties, but the spatially-explicit results we present here better quantify the potential improvement brought on by spatially-explicit modelling. We discuss the sources of the differences between these estimates. This study represents an important first step towards the integration of spatially-explicit information into Canada's NFCMARS.
Young Adults' Implicit and Explicit Attitudes towards the Sexuality of Older Adults.
Thompson, Ashley E; O'Sullivan, Lucia F; Byers, E Sandra; Shaughnessy, Krystelle
2014-09-01
Sexual interest and capacity can extend far into later life and result in many positive health outcomes. Yet there is little support for sexual expression in later life, particularly among young adults. This study assessed and compared young adults' explicit and implicit attitudes towards older adult sexuality. A sample of 120 participants (18-24 years; 58% female) completed a self-report (explicit) measure and a series of Implicit Association Tests capturing attitudes towards sexuality among older adults. Despite reporting positive explicit attitudes, young people revealed an implicit bias against the sexual lives of older adults. In particular, young adults demonstrated implicit biases favouring general, as compared to sexual, activities and young adults as compared to older adults. Moreover, the bias favouring general activities was amplified with regard to older adults as compared to younger adults. Our findings challenge the validity of research relying on self-reports of attitudes about older adult sexuality.
Combining satellite derived phenology with climate data for climate change impact assessment
NASA Astrophysics Data System (ADS)
Ivits, E.; Cherlet, M.; Tóth, G.; Sommer, S.; Mehl, W.; Vogt, J.; Micale, F.
2012-05-01
The projected influence of climate change on the timing and volume of phytomass production is expected to affect a number of ecosystem services. In order to develop coherent and locally effective adaptation and mitigation strategies, spatially explicit information on the observed changes is needed. Long-term variations of the vegetative growing season in different environmental zones of Europe for 1982-2006 have been derived by analysing time series of GIMMS NDVI data. The associations of phenologically homogeneous spatial clusters with time series of temperature and precipitation data were evaluated. North-east Europe showed a trend towards an earlier and longer growing season, particularly in the northern Baltic areas. Despite the earlier greening up, large areas of Europe exhibited a rather stable season length, indicating a shift of the entire growing season to an earlier period. The northern Mediterranean displayed a growing season shift towards later dates, while some agglomerations of earlier and shorter growing seasons were also seen. The correlation of phenological time series with climate data shows a cause-and-effect relationship over semi-natural areas consistent with results in the literature. Managed ecosystems, however, appear to have heterogeneous change patterns with little or no correlation to climatic trends. Over these areas climatic trends seem to overlap in a complex manner with the more pronounced effects of local biophysical conditions and/or land management practices. Our results underline the importance of satellite-derived phenological observations in explaining local nonconformities to climatic trends for climate change impact assessment.
Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization
2012-01-01
Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
High-frequency measurements of aeolian saltation flux: Field-based methodology and applications
NASA Astrophysics Data System (ADS)
Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.
2018-02-01
Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
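Steps 1 and 2 of the methodology can be sketched directly: fit an exponential to the low-frequency trap profile, integrate it for the total flux, and form a calibration factor for a high-frequency counter. The numbers below are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def flux_profile(z, q0, zq):
    """Exponentially decaying height profile of saltation flux (step 1)."""
    return q0 * np.exp(-z / zq)

# Hypothetical LF trap data: heights (m) and measured fluxes (g m^-2 s^-1).
z = np.array([0.05, 0.10, 0.20, 0.30, 0.50])
q = np.array([21.0, 14.8, 7.1, 3.5, 0.9])
(q0, zq), _ = curve_fit(flux_profile, z, q, p0=(30.0, 0.1))

# Vertically integrated (total) flux: integral of q0*exp(-z/zq) from 0 to infinity.
Q_total = q0 * zq
print(q0, zq, Q_total)

# Step 2: calibration factor for one HF counter at height z_i over a shared
# interval, as the ratio of the LF-fit flux there to the HF count rate (assumed).
hf_count_rate = 120.0   # counts per second, hypothetical
cal = flux_profile(0.10, q0, zq) / hf_count_rate
```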
An Automatic Cloud Mask Algorithm Based on Time Series of MODIS Measurements
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Wang, Yujie; Frey, R.
2008-01-01
Quality of aerosol retrievals and atmospheric correction depends strongly on the accuracy of the cloud mask (CM) algorithm. The heritage CM algorithms developed for AVHRR and MODIS use the latest sensor measurements of spectral reflectance and brightness temperature and perform processing at the pixel level. The algorithms are threshold-based and empirically tuned. They do not explicitly address the classical problem of cloud search, wherein a baseline clear-skies scene is defined for comparison. Here, we report on a new CM algorithm which explicitly builds and maintains a reference clear-skies image of the surface (refcm) using a time series of MODIS measurements. The new algorithm, developed as part of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm for MODIS, relies on the fact that clear-skies images of the same surface area have a common textural pattern, defined by the surface topography, boundaries of rivers and lakes, distribution of soils and vegetation, etc. This pattern changes slowly given the daily rate of global Earth observations, whereas clouds introduce high-frequency random disturbances. Under clear skies, consecutive gridded images of the same surface area have a high covariance, whereas in the presence of clouds the covariance is usually low. This idea is central to the initialization of refcm, which is used to derive the cloud mask in combination with spectral and brightness temperature tests. The refcm is continuously updated with the latest clear-skies MODIS measurements, thus adapting to seasonal and rapid surface changes. The algorithm is enhanced by an internal dynamic land-water-snow classification coupled with a surface change mask. An initial comparison shows that the new algorithm offers the potential to perform better than the MODIS MOD35 cloud mask in situations where the land surface is changing rapidly, and over Earth regions covered by snow and ice.
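The covariance idea can be illustrated with a toy patch-wise test: compare each new gridded image against the reference clear-skies image and flag patches whose spatial correlation drops. Patch size and threshold are arbitrary here, and the real algorithm combines this with spectral and brightness temperature tests.

```python
import numpy as np

def clear_sky_covariance_test(refcm, image, patch=16, threshold=0.6):
    """Flag patches whose spatial correlation with the clear-skies reference
    image falls below a threshold: clouds destroy the stable surface texture,
    so low covariance/correlation suggests cloud contamination."""
    ny, nx = refcm.shape
    cloudy = np.zeros((ny // patch, nx // patch), dtype=bool)
    for i in range(0, ny - patch + 1, patch):
        for j in range(0, nx - patch + 1, patch):
            a = refcm[i:i + patch, j:j + patch].ravel()
            b = image[i:i + patch, j:j + patch].ravel()
            r = np.corrcoef(a, b)[0, 1]
            cloudy[i // patch, j // patch] = r < threshold
    return cloudy
```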
The full spectrum of AdS5/CFT4 I: representation theory and one-loop Q-system
NASA Astrophysics Data System (ADS)
Marboe, Christian; Volin, Dmytro
2018-04-01
With the formulation of the quantum spectral curve for the AdS5/CFT4 integrable system, it became potentially possible to compute its full spectrum with high efficiency. This is the first paper in a series devoted to the explicit design of such computations, with no restrictions to particular subsectors being imposed. We revisit the representation theoretical classification of possible states in the spectrum and map the symmetry multiplets to solutions of the quantum spectral curve at zero coupling. To this end it is practical to introduce a generalisation of Young diagrams to the case of non-compact representations and define algebraic Q-systems directly on these diagrams. Furthermore, we propose an algorithm to explicitly solve such Q-systems that circumvents the traditional usage of Bethe equations and simplifies the computation effort. For example, our algorithm quickly obtains explicit analytic results for all 495 multiplets that accommodate single-trace operators in N=4 SYM with classical conformal dimension up to 13/2. We plan to use these results as the seed for solving the quantum spectral curve perturbatively to high loop orders in the next paper of the series.
Bonding nature and electron delocalization of An(COT)2, An = Th, Pa, U.
Páez-Hernández, Dayán; Murillo-López, Juliana A; Arratia-Pérez, Ramiro
2011-08-18
A systematic study of a series of An(COT)2 compounds, where An = Th, Pa, U, and COT represents cyclooctatetraene, has been performed using relativistic density functional theory. The ZORA Hamiltonian was applied for the inclusion of relativistic effects, taking into account all of the electrons for the optimization and explicitly including spin-orbit coupling effects. Time-dependent density functional theory (TDDFT) was used to calculate the excitation energies with the GGA SAOP functional, and the electronic transitions were analyzed using double group irreducible representations. The calculated excitation energies are in perfect correlation with the increment of the ring delocalization as it increases along the actinide series. These results are sufficient to ensure that, for these complexes, the increment in delocalization, as indicated by ELF bifurcation and NICS analysis, leads to a shift in the maximum wavelength of absorption in the visible region. Also, delocalization in the COT ring increases along the actinide series, so the systems become more aromatic because of a modulation induced by the actinides. © 2011 American Chemical Society
Ramseyer, Fabian; Kupper, Zeno; Caspar, Franz; Znoj, Hansjörg; Tschacher, Wolfgang
2014-10-01
Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective by its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA-models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. TSPA is based on vector auto-regression (VAR), an extension of univariate auto-regression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored by postsession questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of psychotherapy process. In a 2nd step, the associations between mechanisms of change (TSPA) and pre- to postsymptom change were explored. TSPA allowed a prototypical process pattern to be identified, where patient's alliance and self-efficacy were linked by a temporal feedback-loop. Furthermore, therapist's stability over time in both mastery and clarification interventions was positively associated with better outcomes. TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy. PsycINFO Database Record (c) 2014 APA, all rights reserved.
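Because TSPA builds on vector auto-regression, the individual-level first step can be illustrated with an off-the-shelf VAR(1) fit; the simulated "alliance" and "self-efficacy" series below are placeholders with a built-in lagged link.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical post-session ratings for ONE patient (session-to-session series).
rng = np.random.default_rng(3)
n_sessions = 40
alliance = np.cumsum(rng.normal(0, 0.3, n_sessions)) + 5
self_eff = 0.5 * np.roll(alliance, 1) + rng.normal(0, 0.3, n_sessions)  # lagged link
data = pd.DataFrame({"alliance": alliance,
                     "self_efficacy": self_eff}).iloc[1:]  # drop wrapped first row

# Step 1 of TSPA: an individual-level VAR(1); the cross-lagged coefficients play
# the role of "mechanisms of change". Aggregation across patients would follow.
res = VAR(data).fit(1)
print(res.params)   # lag-1 coefficient matrix, one equation per variable
```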
Cheung, Vincent C. K.; Devarajan, Karthik; Severini, Giacomo; Turolla, Andrea; Bonato, Paolo
2017-01-01
The non-negative matrix factorization algorithm (NMF) decomposes a data matrix into a set of non-negative basis vectors, each scaled by a coefficient. In its original formulation, the NMF assumes the data samples and dimensions to be independently distributed, making it a less-than-ideal algorithm for the analysis of time series data with temporal correlations. Here, we seek to derive an NMF that accounts for temporal dependencies in the data by explicitly incorporating a very simple temporal constraint for the coefficients into the NMF update rules. We applied the modified algorithm to 2 multi-dimensional electromyographic data sets collected from the human upper-limb to identify muscle synergies. We found that because it reduced the number of free parameters in the model, our modified NMF made it possible to use the Akaike Information Criterion to objectively identify a model order (i.e., the number of muscle synergies composing the data) that is more functionally interpretable, and closer to the numbers previously determined using ad hoc measures. PMID:26737046
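A generic way to fold a temporal constraint into multiplicative NMF updates is to penalize first differences of the coefficients and split the resulting Laplacian into positive and negative parts so non-negativity is preserved. The sketch below implements that standard device; the authors' specific constraint may differ.

```python
import numpy as np

def smooth_nmf(V, k, lam=0.1, n_iter=500, seed=0):
    """NMF with a temporal-smoothness penalty lam*||H D^T||_F^2 on the
    coefficients (D = first-difference operator along time). Multiplicative
    updates; the penalty's Laplacian is split into positive/negative parts
    so all factors stay non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    # Laplacian of the first-difference penalty: L = D^T D (tridiagonal)
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    Lp = np.diag(np.diag(L))   # positive (diagonal) part
    Lm = Lp - L                # negative (off-diagonal) part, entries >= 0
    for _ in range(n_iter):
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
        H *= (W.T @ V + lam * H @ Lm) / (W.T @ W @ H + lam * H @ Lp + 1e-9)
    return W, H
```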
NASA Astrophysics Data System (ADS)
Watkins, N. W.; Rypdal, M.; Lovsletten, O.
2012-12-01
For all natural hazards, the question of when the next "extreme event" (cf. Taleb's "black swans") is expected is of obvious importance. In the environmental sciences users often frame such questions in terms of average "return periods", e.g. "is an X meter rise in the Thames water level a 1-in-Y year event?". Frequently, however, we also care about the emergence of correlation, and whether the probability of several big events occurring in close succession is truly independent, i.e. are the black swans "bunched"? A "big event", or a "burst", defined by its integrated signal above a threshold, might be a single, very large, event, or, instead, could in fact be a correlated series of "smaller" (i.e. less wildly fluctuating) events. Several available stochastic approaches provide quantitative information about such bursts, including Extreme Value Theory (EVT); the theory of records; level sets; sojourn times; and models of space-time "avalanches" of activity in non-equilibrium systems. Some focus more on the probability of single large events. Others are more concerned with extended dwell times above a given spatiotemporal threshold. However, the state of the art is not yet fully integrated, and the above-mentioned approaches differ in fundamental aspects. EVT is perhaps the best known in the geosciences. It is concerned with the distribution obeyed by the extremes of datasets, e.g. the 100 values obtained by considering the largest daily temperature recorded in each of the years of a century. However, the pioneering work from the 1920s on which EVT was originally built was based on independent identically distributed samples, and took no account of the memory and correlation that characterise many natural hazard time series. Ignoring this would fundamentally limit our ability to forecast, so much subsequent activity has been devoted to extending EVT to encompass dependence. A second group of approaches, by contrast, has notions of time, and thus possible non-stationarity, explicitly built in. In record-breaking statistics, a record is defined in the sense used in everyday language, to be the largest value yet recorded in a time series; for example, the 2004 Sumatran Boxing Day earthquake was at the time the largest to be digitally recorded. The third group of approaches (e.g. avalanches) are explicitly spatiotemporal and so also include spatial structure. This presentation will discuss two examples of our recent work on the burst problem. We will show numerical results extending the preliminary results presented in [Watkins et al, PRE, 2009] using a standard additive model, linear fractional stable motion (LFSM). LFSM explicitly includes both heavy tails and long range dependence, allowing us to study how these two effects compete in determining the burst duration and size exponent probability distributions. We will contrast these simulations with new analytical studies of bursts in a multiplicative process, the multifractal random walk (MRW). We will present an analytical derivation for the scaling of the burst durations and make a preliminary comparison with data from the AE index from solar-terrestrial physics. We believe our result is more generally applicable than the MRW model, and that it applies to a broad class of multifractal processes.
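The basic burst quantities are simple to compute from any series: a minimal sketch that extracts durations and integrated sizes of excursions above a threshold, demonstrated on an AR(1) surrogate as a stand-in for the correlated processes discussed.

```python
import numpy as np

def bursts(x, threshold):
    """Durations and sizes (integrated signal above threshold) of contiguous
    excursions of a series over a level: the 'burst' quantities discussed."""
    above = np.concatenate(([False], x > threshold, [False]))
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2], edges[1::2]          # run boundaries
    durations = ends - starts
    sizes = np.array([np.sum(x[s:e] - threshold) for s, e in zip(starts, ends)])
    return durations, sizes

# Burst statistics of a correlated (AR(1)) surrogate series:
rng = np.random.default_rng(1)
x = np.zeros(50_000)
for i in range(1, x.size):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()
d, s = bursts(x, threshold=2.0)
print(d.mean(), np.median(s))
```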
Generating series for GUE correlators
NASA Astrophysics Data System (ADS)
Dubrovin, Boris; Yang, Di
2017-11-01
We extend to the Toda lattice hierarchy the approach of Bertola et al. (Phys D Nonlinear Phenom 327:30-57, 2016; IMRN, 2016) to computation of logarithmic derivatives of tau-functions in terms of the so-called matrix resolvents of the corresponding difference Lax operator. As a particular application we obtain explicit generating series for connected GUE correlators. On this basis an efficient recursive procedure for computing the correlators in full genera is developed.
Heideman, Simone G; van Ede, Freek; Nobre, Anna C
2018-05-24
In daily life, temporal expectations may derive from incidental learning of recurring patterns of intervals. We investigated the incidental acquisition and utilisation of combined temporal-ordinal (spatial/effector) structure in complex visual-motor sequences using a modified version of a serial reaction time (SRT) task. In this task, not only the series of targets/responses, but also the series of intervals between subsequent targets was repeated across multiple presentations of the same sequence. Each participant completed three sessions. In the first session, only the repeating sequence was presented. During the second and third session, occasional probe blocks were presented, where a new (unlearned) spatial-temporal sequence was introduced. We first confirm that participants not only got faster over time, but that they were slower and less accurate during probe blocks, indicating that they incidentally learned the sequence structure. Having established a robust behavioural benefit induced by the repeating spatial-temporal sequence, we next addressed our central hypothesis that implicit temporal orienting (evoked by the learned temporal structure) would have the largest influence on performance for targets following short (as opposed to longer) intervals between temporally structured sequence elements, paralleling classical observations in tasks using explicit temporal cues. We found that indeed, reaction time differences between new and repeated sequences were largest for the short interval, compared to the medium and long intervals, and that this was the case, even when comparing late blocks (where the repeated sequence had been incidentally learned), to early blocks (where this sequence was still unfamiliar). We conclude that incidentally acquired temporal expectations that follow a sequential structure can have a robust facilitatory influence on visually-guided behavioural responses and that, like more explicit forms of temporal orienting, this effect is most pronounced for sequence elements that are expected at short inter-element intervals. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series
NASA Astrophysics Data System (ADS)
Czerwinski, Fabian; Oddershede, Lene B.
2011-02-01
With modern data acquisition devices that work fast and very precisely, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series of MHz sampling. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification.
Program summary:
Program title: TimeSeriesStreaming.VI
Catalogue identifier: AEHT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 250
No. of bytes in distributed program, including test data, etc.: 63 259
Distribution format: tar.gz
Programming language: LabVIEW (http://www.ni.com/labview/)
Computer: Any machine running LabVIEW 8.6 or higher
Operating system: Windows XP and Windows 7
RAM: 60-360 Mbyte
Classification: 3
Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods.
Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses.
Restrictions: Only tested in Windows-operating LabVIEW environments; must use TDMS format; acquisition cards must be LabVIEW compatible; driver DAQmx installed.
Running time: As desirable: microseconds to hours
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1986-01-01
An explicit-implicit and an implicit two-dimensional Navier-Stokes code, along with various grid generation capabilities, were developed. A series of classical benchmark cases was simulated using these codes.
Particle Filtering for Model-Based Anomaly Detection in Sensor Networks
NASA Technical Reports Server (NTRS)
Solano, Wanda; Banerjee, Bikramjit; Kraemer, Landon
2012-01-01
A novel technique has been developed for anomaly detection of rocket engine test stand (RETS) data. The objective was to develop a system that postprocesses a CSV file containing the sensor readings and activities (time-series) from a rocket engine test, and detects any anomalies that might have occurred during the test. The output consists of the names of the sensors that show anomalous behavior, and the start and end time of each anomaly. In order to reduce the involvement of domain experts significantly, several data-driven approaches have been proposed where models are automatically acquired from the data, thus bypassing the cost and effort of building system models. Many supervised learning methods can efficiently learn operational and fault models, given large amounts of both nominal and fault data. However, for domains such as RETS data, the amount of anomalous data that is actually available is relatively small, making most supervised learning methods rather ineffective and, in general, of limited success in anomaly detection. The fundamental problem with existing approaches is that they assume that the data are iid, i.e., independent and identically distributed, which is violated in typical RETS data. None of these techniques naturally exploit the temporal information inherent in time series data from the sensor networks. There are correlations among the sensor readings, not only at the same time, but also across time. However, these approaches have not explicitly identified and exploited such correlations. Given these limitations of model-free methods, there has been renewed interest in model-based methods, specifically graphical methods that explicitly reason temporally. The Gaussian Mixture Model (GMM) in a Linear Dynamic System approach assumes that the multi-dimensional test data is a mixture of multi-variate Gaussians, and fits a given number of Gaussian clusters with the help of the well-known Expectation Maximization (EM) algorithm. The parameters thus learned are used for calculating the joint distribution of the observations. However, this GMM assumption is essentially an approximation and signals the potential viability of non-parametric density estimators. This is the key idea underlying the new approach.
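The GMM-based baseline described here is a few lines with scikit-learn: fit a mixture to nominal data via EM, then score test samples by negative log-likelihood. The component count and simulated data are arbitrary.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_anomaly_scores(train, test, n_components=4):
    """Fit a GMM to nominal multivariate sensor data via EM and score test
    samples by negative log-likelihood (higher = more anomalous)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(train)
    return -gmm.score_samples(test)

rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 3))                         # nominal readings
test = np.vstack([rng.normal(size=(95, 3)),
                  rng.normal(6, 1, size=(5, 3))])          # 5 injected anomalies
scores = gmm_anomaly_scores(train, test)
print(np.argsort(scores)[-5:])   # indices of the most anomalous readings
```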
Time series inversion of spectra from ground-based radiometers
NASA Astrophysics Data System (ADS)
Christensen, O. M.; Eriksson, P.
2013-07-01
Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the Onsala Space Observatory (OSO) water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
Multiple GISS AGCM Hindcasts and MSU Versions of 1979-1998
NASA Technical Reports Server (NTRS)
Shah, Kathryn Pierce; Rind, David; Druyan, Leonard; Lonergan, Patrick; Chandler, Mark
1998-01-01
Multiple realizations of the 1979-1998 time period have been simulated by the Goddard Institute for Space Studies Atmospheric General Circulation Model (GISS AGCM) to explore its responsiveness to accumulated forcings, particularly over sensitive agricultural regions. A microwave radiative transfer postprocessor has produced the AGCM's lower tropospheric, tropospheric and lower stratospheric brightness temperature (Tb) time series for correlations with the various Microwave Sounding Unit (MSU) time series available. MSU maps of monthly means and anomalies were also used to assess the AGCM's mean annual cycle and regional variability. Seven realizations by the AGCM were forced by observed sea surface temperatures (sst) through 1992 to gather rough standard deviations associated with internal model variability. Subsequent runs hindcast January 1979 through April 1998 with an accumulation of forcings: observed ssts, greenhouse gases, stratospheric volcanic aerosols, stratospheric and tropospheric ozone, and tropospheric sulfate and black carbon aerosols. The goal of narrowing gaps between AGCM and MSU time series was complicated by uncertainties in the MSU time series themselves, by Tb simulation concerns, and by unforced climatic variability in the AGCM and in the real world. Lower stratospheric Tb correlations between the AGCM and MSU for 1979-1998 reached as high as 0.91 +/-0.16 globally with sst, greenhouse gas, volcanic aerosol, stratospheric ozone and tropospheric aerosol forcings. Mid-tropospheric Tb correlations reached as high as 0.66 +/-.04 globally and 0.84 +/-.02 in the tropics. Oceanic lower tropospheric Tb correlations similarly reached 0.61 +/-.06 globally and 0.79 +/-.02 in the tropics. Of the sensitive agricultural areas considered, Nordeste in northeastern Brazil was simulated best, with mid-tropospheric Tb correlations up to 0.75 +/- .03. The two other agricultural regions, in Africa and in the northern mid-latitudes, suffered from higher levels of non-sst variability. Zimbabwe had a maximum mid-tropospheric correlation of 0.54 +/- 0.11, while the U.S. Cornbelt had only 0.25 +/- .10. Precipitation and surface temperature performance are also examined over these regions. Correlations of MSU and AGCM time series mostly improved with the addition of explicit atmospheric forcings in zonal bands, but not in agricultural regional bins each encompassing only six AGCM gridcells.
On vector-valued Poincaré series of weight 2
NASA Astrophysics Data System (ADS)
Meneses, Claudio
2017-10-01
Given a pair (Γ , ρ) of a Fuchsian group of the first kind, and a unitary representation ρ of Γ of arbitrary rank, the problem of construction of vector-valued Poincaré series of weight 2 is considered. Implications in the theory of parabolic bundles are discussed. When the genus of the group is zero, it is shown how an explicit basis for the space of these functions can be constructed.
Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology
NASA Astrophysics Data System (ADS)
Jin, Z.; Azzari, G.; Lobell, D. B.
2016-12-01
Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.
Explicit and implicit anti-fat attitudes in children and their relationships with their body images.
Solbes, Irene; Enesco, Ileana
2010-02-01
This study aimed to explore the prevalence of negative attitudes toward overweight peers among children using different explicit and implicit measures, and to analyze their relationships with some aspects of children's body image. A total of 120 children aged 6-11 years were interviewed using a computer program that simulated a game containing several tasks. Specifically, we applied multiple measures of explicit attitudes toward average-weight/overweight peers, several personal body attitude questions, and a child-oriented version of the Implicit Association Test. Our participants showed substantial prejudice and stereotyping against overweight children, both at the explicit and implicit levels. However, we found important differences in the intensity of prejudice and its developmental course as a function of the tasks and the type of measurement used to assess it. Children who grow up in Western societies idealize thinness from an early age and denigrate overweight, with which they explicitly and implicitly associate a series of negative traits that have nothing to do with weight. As they grow older, they seem to reduce their levels of explicit prejudice, but not the intensity of implicit bias. More research is needed to study in depth prejudice and discrimination toward overweight children from a developmental point of view. Copyright 2010 S. Karger AG, Basel.
Modeling Individual Cyclic Variation in Human Behavior.
Pierson, Emma; Althoff, Tim; Leskovec, Jure
2018-04-01
Cycles are fundamental to human health and behavior. Examples include mood cycles, circadian rhythms, and the menstrual cycle. However, modeling cycles in time series data is challenging because in most cases the cycles are not labeled or directly observed and need to be inferred from multidimensional measurements taken over time. Here, we present Cyclic Hidden Markov Models (CyHMMs) for detecting and modeling cycles in a collection of multidimensional heterogeneous time series data. In contrast to previous cycle modeling methods, CyHMMs deal with a number of challenges encountered in modeling real-world cycles: they can model multivariate data with both discrete and continuous dimensions; they explicitly model and are robust to missing data; and they can share information across individuals to accommodate variation both within and between individual time series. Experiments on synthetic and real-world health-tracking data demonstrate that CyHMMs infer cycle lengths more accurately than existing methods, with 58% lower error on simulated data and 63% lower error on real-world data compared to the best-performing baseline. CyHMMs can also perform functions which baselines cannot: they can model the progression of individual features/symptoms over the course of the cycle, identify the most variable features, and cluster individual time series into groups with distinct characteristics. Applying CyHMMs to two real-world health-tracking datasets (human menstrual cycle symptoms and physical activity tracking data) yields important insights, including which symptoms to expect at each point during the cycle. We also find that people fall into several groups with distinct cycle patterns, and that these groups differ along dimensions not provided to the model. For example, by modeling missing data in the menstrual cycles dataset, we are able to discover a medically relevant group of birth control users even though information on birth control is not given to the model.
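The flavor of the approach can be conveyed with a toy "cyclic" HMM: hidden phases arranged on a ring, each either repeating or advancing, with Gaussian emissions; the forward recursion below filters the phase. This is an illustration of the idea only, not the authors' model (which also handles missing data and discrete dimensions).

```python
import numpy as np

def cyclic_forward(obs, means, stds, p_advance=0.3):
    """Forward algorithm for a minimal cyclic HMM: K hidden phases on a ring,
    each either repeating or advancing to the next (the last wraps to the
    first), with 1-D Gaussian emissions. Returns filtered phase probabilities."""
    K = len(means)
    A = (1 - p_advance) * np.eye(K) + p_advance * np.roll(np.eye(K), 1, axis=1)
    alpha = np.full(K, 1.0 / K)
    path = []
    for x in obs:
        like = np.exp(-0.5 * ((x - means) / stds) ** 2) / stds  # Gaussian, up to a constant
        alpha = like * (alpha @ A)
        alpha /= alpha.sum()
        path.append(alpha.copy())
    return np.array(path)

# Three phases with different feature levels; noisy observations of one cycle.
means, stds = np.array([0.0, 2.0, 4.0]), np.array([1.0, 1.0, 1.0])
obs = np.concatenate([np.random.default_rng(2).normal(m, 1.0, 30) for m in means])
print(cyclic_forward(obs, means, stds)[-1].round(2))  # ends near the last phase
```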
Lavis, John N; Oxman, Andrew D; Lewin, Simon; Fretheim, Atle
2009-12-16
This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers. Policymakers have limited resources for developing--or supporting the development of--evidence-informed policies and programmes. These required resources include staff time, staff infrastructural needs (such as access to a librarian or journal article purchasing), and ongoing professional development. They may therefore prefer instead to contract out such work to independent units with more suitably skilled staff and appropriate infrastructure. However, policymakers may only have limited financial resources to do so. Regardless of whether the support for evidence-informed policymaking is provided in-house or contracted out, or whether it is centralised or decentralised, resources always need to be used wisely in order to maximise their impact. Examples of undesirable practices in a priority-setting approach include timelines to support evidence-informed policymaking being negotiated on a case-by-case basis (instead of having clear norms about the level of support that can be provided for each timeline), implicit (rather than explicit) criteria for setting priorities, ad hoc (rather than systematic and explicit) priority-setting process, and the absence of both a communications plan and a monitoring and evaluation plan. In this article, we suggest questions that can guide those setting priorities for finding and using research evidence to support evidence-informed policymaking. These are: 1. Does the approach to prioritisation make clear the timelines that have been set for addressing high-priority issues in different ways? 2. Does the approach incorporate explicit criteria for determining priorities? 3. Does the approach incorporate an explicit process for determining priorities? 4. Does the approach incorporate a communications strategy and a monitoring and evaluation plan?
NASA Astrophysics Data System (ADS)
Firdausi, N.; Prabawa, H. W.; Sutarno, H.
2017-02-01
In an effort to maximize a student's academic growth, one of the tools available to educators is explicit instruction. Explicit instruction is marked by a series of supports or scaffolds: students are guided through the learning process with a clear statement of purpose and a rationale for learning the new skill, clear explanations and demonstrations of the learning target, and supported practice with feedback until independent mastery is achieved. Today's technological trends require a corresponding adjustment in the development of learning objects that support the achievement of explicit instruction targets. This is where gamification comes in. As a pedagogical strategy, the use of gamification in the classroom is still relatively new. Gamification not only uses game elements and game-design techniques in non-game contexts; it also empowers and engages learners, supports motivation, and maintains a relaxed atmosphere. Using Research and Development methods, this paper presents the integration of technology (in this case, the concept of gamification) into explicit instruction settings and its impact on the improvement of students' understanding.
Studying Activity Series of Metals.
ERIC Educational Resources Information Center
Hoon, Tien-Ghun; And Others
1995-01-01
Presents teaching strategies that illustrate the linking together of numerous chemical concepts involving the activity of metals (quantitative analysis, corrosion, and electrolysis) through the use of deep-level processing strategies. Concludes that making explicit links in the process of teaching chemistry can lead effectively to meaningful…
Elementary functions in thermodynamic Bethe ansatz
NASA Astrophysics Data System (ADS)
Suzuki, J.
2015-05-01
Some years ago, Fendley found an explicit solution to the thermodynamic Bethe ansatz (TBA) equation for an N=2 supersymmetric theory in 2D with a specific F-term. Motivated by this, we seek explicit solutions for other super-potential cases utilizing the idea from the ODE/IM correspondence. We find that the TBA equations, corresponding to a wider class of super-potentials, admit solutions in terms of elementary functions such as modified Bessel functions and confluent hypergeometric series. Based on talks given at ‘Infinite Analysis 2014’ (Tokyo, 2014) and at ‘Integrable lattice models and quantum field theories’ (Bad Honnef, 2014).
NASA Astrophysics Data System (ADS)
Luce, C.; Tonina, D.; Gariglio, F. P.; Applebee, R.
2012-12-01
Differences in the diurnal variations of temperature at different depths in streambed sediments are commonly used for estimating vertical fluxes of water in the streambed. We applied spatial and temporal rescaling of the advection-diffusion equation to derive two new relationships that greatly extend the kinds of information that can be derived from streambed temperature measurements. The first equation provides a direct estimate of the Peclet number from the amplitude decay and phase delay information. The analytical equation is explicit (i.e., no numerical root-finding is necessary) and invertible. The thermal front velocity can be estimated from the Peclet number when the thermal diffusivity is known. The second equation allows for an independent estimate of the thermal diffusivity directly from the amplitude decay and phase delay information. Several improvements are available with the new information. The first equation uses a ratio of the amplitude decay and phase delay information; thus Peclet number calculations are independent of depth. The explicit form also makes it somewhat faster and easier to calculate estimates from a large number of sensors or multiple positions along one sensor. Where current practice requires a priori estimation of streambed thermal diffusivity, the new approach allows an independent calculation, improving the precision of estimates. Furthermore, when many measurements are made over space and time, expectations of the spatial correlation and temporal invariance of thermal diffusivity are valuable for validation of measurements. Finally, the closed-form explicit solution allows for direct calculation of the propagation of measurement errors and parameter uncertainties, providing insight about error expectations for sensors placed at different depths in different environments as a function of surface temperature variation amplitudes. The improvements are expected to increase the utility of temperature measurement methods for studying groundwater-surface water interactions across space and time scales. We discuss the theoretical implications of the new solutions supported by examples with data for illustration and validation.
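The paper's explicit formulas are not reproduced in this abstract, but the inputs they require, the amplitude ratio and phase delay of the diurnal signal at two depths, are straightforward to extract. Below is a minimal sketch, assuming evenly sampled temperature series and a 24 h target period; the function name and the least-squares harmonic fit are illustrative choices, not the authors' code.

```python
import numpy as np

def diurnal_amplitude_phase(temps, dt_hours, period_hours=24.0):
    """Amplitude and phase of the diurnal harmonic of a temperature series,
    obtained by a least-squares fit of one sine/cosine pair."""
    temps = np.asarray(temps, dtype=float) - np.mean(temps)
    t = np.arange(len(temps)) * dt_hours
    omega = 2.0 * np.pi / period_hours
    X = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    (a, b), *_ = np.linalg.lstsq(X, temps, rcond=None)
    return np.hypot(a, b), np.arctan2(-b, a)   # amplitude, phase (radians)

# For sensors at two depths, the quantities entering the explicit equations
# are the amplitude ratio A_deep / A_shallow and the phase delay
# (phase_deep - phase_shallow) / omega, expressed in hours.
```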
Hedged Monte-Carlo: low variance derivative pricing with objective probabilities
NASA Astrophysics Data System (ADS)
Potters, Marc; Bouchaud, Jean-Philippe; Sestovic, Dragan
2001-01-01
We propose a new ‘hedged’ Monte-Carlo (HMC) method to price financial derivatives, which allows one to determine the optimal hedge simultaneously. The inclusion of the optimal hedging strategy allows one to reduce the financial risk associated with option trading and, for the very same reason, considerably reduces the variance of our HMC scheme as compared to previous methods. The explicit accounting of the hedging cost naturally converts the objective probability into the ‘risk-neutral’ one. This allows a consistent use of purely historical time series to price derivatives and obtain their residual risk. The method can be used to price a large class of exotic options, including those with path-dependent and early exercise features.
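The core of the scheme is a backward-in-time least-squares problem in which the option value and the hedge at each step are expanded on a common set of basis functions and fitted jointly, so the residual (unhedged) risk is minimized. The sketch below is a minimal reading of that idea, assuming a polynomial basis and a toy GBM path generator of our own choosing; it is not the authors' implementation.

```python
import numpy as np

def hedged_mc_price(paths, payoff, r, dt, n_basis=6):
    """One backward pass of a hedged Monte-Carlo scheme.
    paths: (n_paths, n_steps+1) array of prices simulated under the
    objective (historical) measure."""
    n_paths, n_cols = paths.shape
    disc = np.exp(-r * dt)
    C_next = payoff(paths[:, -1])           # option value at maturity
    for k in range(n_cols - 2, -1, -1):
        S = paths[:, k]
        basis = np.vander(S / S.mean(), n_basis)   # polynomial basis in S
        dS = disc * paths[:, k + 1] - S            # discounted price change
        # Joint regressors: [basis for price C, basis * dS for hedge phi]
        X = np.hstack([basis, basis * dS[:, None]])
        coef, *_ = np.linalg.lstsq(X, disc * C_next, rcond=None)
        C_next = basis @ coef[:n_basis]            # updated value at time k
    return C_next.mean()

# Illustration: a European put on GBM paths with an objective drift;
# the result should land near the risk-neutral value.
rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20000, 20, 0.01
S = 100 * np.exp(np.cumsum(rng.normal(0.10 * dt, 0.2 * np.sqrt(dt),
                                      (n_paths, n_steps)), axis=1))
S = np.hstack([np.full((n_paths, 1), 100.0), S])
print(hedged_mc_price(S, lambda s: np.maximum(110 - s, 0.0), r=0.04, dt=dt))
```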
Persistence in eye movement during visual search
NASA Astrophysics Data System (ADS)
Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.
2016-02-01
Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this sense, the movement of the eyes represents the most explicit and direct connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we show that the probability distributions of these measures reveal a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent greater than 1/2, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.
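For readers unfamiliar with the fluctuation analysis used here, the monofractal special case (DFA) can be sketched in a few lines; the multifractal version generalizes the same construction with q-th order moments. This is a generic illustration, not the authors' code.

```python
import numpy as np

def dfa_hurst(x, scales=None):
    """Detrended fluctuation analysis (DFA1); the scaling exponent equals
    the Hurst exponent for a stationary series."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                 # integrated profile
    n = len(y)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(n // 4),
                                       20).astype(int))
    F = []
    for s in scales:
        n_seg = n // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        coef = np.polyfit(t, segs.T, 1)          # linear trend per window
        trend = np.outer(coef[0], t) + coef[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

# White noise gives ~0.5; persistent series give values above 0.5.
print(dfa_hurst(np.random.default_rng(1).normal(size=10000)))
```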
Video Game Rehabilitation of Velopharyngeal Dysfunction: A Case Series.
Cler, Gabriel J; Mittelman, Talia; Braden, Maia N; Woodnorth, Geralyn Harvey; Stepp, Cara E
2017-06-22
Video games provide a promising platform for rehabilitation of speech disorders. Although video games have been used to train speech perception in foreign language learners and have been proposed for aural rehabilitation, their use in speech therapy has been limited thus far. We present feasibility results from at-home use in a case series of children with velopharyngeal dysfunction (VPD) using an interactive video game that provided real-time biofeedback to facilitate appropriate nasalization. Five participants were recruited across a range of ages, VPD severities, and VPD etiologies. Participants completed multiple weeks of individual game play with a video game that provides feedback on nasalization measured via nasal accelerometry. Nasalization was assessed before and after training by using nasometry, aerodynamic measures, and expert perceptual judgments. Four participants used the game at home or school, with the remaining participant unwilling to have the nasal accelerometer secured to his nasal skin, perhaps due to his young age. These four participants showed a tendency toward decreased nasalization after training, particularly for the words explicitly trained in the video game. Results suggest that video game-based systems may provide a useful rehabilitation platform for providing real-time feedback of speech nasalization in VPD. https://doi.org/10.23641/asha.5116828.
Emergence of patterns in random processes
NASA Astrophysics Data System (ADS)
Newman, William I.; Turcotte, Donald L.; Malamud, Bruce D.
2012-08-01
Sixty years ago, it was observed that any sequence of independent and identically distributed (i.i.d.) random variables would produce a pattern of peak-to-peak sequences with, on average, three events per sequence. This outcome was employed to show that randomness could yield, as a null hypothesis for animal populations, an explanation for their apparent 3-year cycles. We show how we can explicitly obtain a universal distribution of the lengths of peak-to-peak sequences in time series and that this can be employed for long data sets as a test of their i.i.d. character. We illustrate the validity of our analysis utilizing the peak-to-peak statistics of a Gaussian white noise. We also consider the nearest-neighbor cluster statistics of point processes in time. If the time intervals are random, we show that cluster size statistics are identical to the peak-to-peak sequence statistics of time series. In order to study the influence of correlations in a time series, we determine the peak-to-peak sequence statistics for the Langevin equation of kinetic theory leading to Brownian motion. To test our methodology, we consider a variety of applications. Using a global catalog of earthquakes, we obtain the peak-to-peak statistics of earthquake magnitudes and the nearest-neighbor interoccurrence time statistics. In both cases, we find good agreement with the i.i.d. theory. We also consider the interval statistics of the Old Faithful geyser in Yellowstone National Park. In this case, we find a significant deviation from the i.i.d. theory, which we attribute to antipersistence. We consider the interval statistics using the AL index of geomagnetic substorms. We again find a significant deviation from i.i.d. behavior that we attribute to mild persistence. Finally, we examine the behavior of Standard & Poor's 500 stock index daily returns from 1928 to 2011 and show that, while it is close to being i.i.d., there is, again, significant persistence. We expect that there will be many other applications of our methodology both to interoccurrence statistics and to time series.
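The headline i.i.d. result is easy to verify numerically: for i.i.d. data the probability that an interior point is a local maximum is 1/3, so peak-to-peak sequences contain three events on average. A quick check with Gaussian white noise (illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1_000_000)            # i.i.d. Gaussian white noise

# A peak at index i: x[i] exceeds both neighbours
peaks = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
lengths = np.diff(peaks)                  # events per peak-to-peak sequence

print(lengths.mean())                     # -> close to 3 for i.i.d. data
vals, counts = np.unique(lengths, return_counts=True)
print(dict(zip(vals[:5], counts[:5] / len(lengths))))  # empirical distribution
```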
Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1997-01-01
A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrid is considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin-Lomax algebraic equilibrium model and the Johnson-King one-half-equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
Computationally intensive econometrics using a distributed matrix-programming language.
Doornik, Jurgen A; Hendry, David F; Shephard, Neil
2002-06-15
This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.
NASA Astrophysics Data System (ADS)
Manohar, A. V.
2003-02-01
These lecture notes present some of the basic ideas of heavy quark effective theory. The topics covered include the classification of states, the derivation of the HQET Lagrangian at tree level, hadron masses, meson form factors, Luke's theorem, reparameterization invariance and inclusive decays. Radiative corrections are discussed in some detail, including an explicit computation of a matching correction for HQET. Borel summability, renormalons, and their connection with the QCD perturbation series is covered, as well as the use of the upsilon expansion to improve the convergence of the perturbation series.
Rubinstein, Robert; Kurien, Susan; Cambon, Claude
2015-06-22
The representation theory of the rotation group is applied to construct a series expansion of the correlation tensor in homogeneous anisotropic turbulence. The resolution of angular dependence is the main analytical difficulty posed by anisotropic turbulence; representation theory parametrises this dependence by a tensor analogue of the standard spherical harmonics expansion of a scalar. As a result, the series expansion is formulated in terms of explicitly constructed tensor bases with scalar coefficients determined by angular moments of the correlation tensor.
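For reference, the scalar expansion that the tensor construction generalizes is the standard spherical-harmonics series for a function on the unit sphere:

```latex
f(\hat{\mathbf{k}}) \;=\; \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} f_{\ell m}\, Y_{\ell m}(\hat{\mathbf{k}})
```

In the anisotropic-turbulence setting described above, the scalar coefficients become angular moments of the correlation tensor, and the spherical harmonics are replaced by explicitly constructed tensor bases.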
Determination of the expansion of the potential of the earth's normal gravitational field
NASA Astrophysics Data System (ADS)
Kochiev, A. A.
The potential of the generalized problem of 2N fixed centers is expanded in a polynomial and Legendre function series. Formulas are derived for the expansion coefficients, and the disturbing function of the problem is constructed in an explicit form.
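For context, a standard reference form for such expansions of the Earth's external axisymmetric potential is the zonal-harmonic Legendre series (shown here only for orientation; the paper works with the generalized problem of 2N fixed centres):

```latex
V(r,\theta) \;=\; \frac{GM}{r}\left[\,1 \;-\; \sum_{n=2}^{\infty} J_n \left(\frac{a}{r}\right)^{n} P_n(\cos\theta)\right]
```

where a is the equatorial radius, J_n are the zonal coefficients, and P_n are the Legendre polynomials.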
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and an M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
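The coloring step can be illustrated with a generic greedy scheme: each point receives the smallest color not already taken by a neighbor, after which points of equal color have no data dependencies and can be swept in parallel. The sketch below is a standard greedy coloring assuming an adjacency list for the point cloud; the paper's rainbow coloring may differ in detail.

```python
def greedy_coloring(neighbors):
    """Assign each point the smallest color unused by its already-colored
    neighbors; points sharing a color can then be updated concurrently
    without thread races in the LU-SGS sweeps."""
    color = {}
    # Color high-degree points first (a common heuristic to limit color count)
    for p in sorted(neighbors, key=lambda q: -len(neighbors[q])):
        used = {color[q] for q in neighbors[p] if q in color}
        c = 0
        while c in used:
            c += 1
        color[p] = c
    return color

# Toy point cloud: chain 0-1-2 plus point 3 adjacent to all of them
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(greedy_coloring(adj))   # -> {3: 0, 1: 1, 0: 2, 2: 2}
```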
Implicit and explicit motor sequence learning in children born very preterm.
Jongbloed-Pereboom, Marjolein; Janssen, Anjo J W M; Steiner, K; Steenbergen, Bert; Nijhuis-van der Sanden, Maria W G
2017-01-01
Motor skills can be learned explicitly (dependent on working memory (WM)) or implicitly (relatively independent of WM). Children born very preterm (VPT) often have working memory deficits. Explicit learning may be compromised in these children. This study investigated implicit and explicit motor learning and the role of working memory in VPT children and controls. Three groups (6-9 years) participated: 20 VPT children with motor problems, 20 VPT children without motor problems, and 20 controls. A nine-button sequence was learned implicitly (pressing the lighted button as quickly as possible) and explicitly (discovering the sequence via trial-and-error). Children learned implicitly and explicitly, as evidenced by decreased movement duration of the sequence over time. In the explicit condition, children also reduced the number of errors over time. Controls made more errors than VPT children without motor problems. Visual WM had positive effects on both explicit and implicit performance. VPT birth and low motor proficiency did not negatively affect implicit or explicit learning. Visual WM was positively related to both implicit and explicit performance, but did not influence learning curves. These findings question the theoretical difference between implicit and explicit learning and the proposed role of visual WM therein.
NASA Astrophysics Data System (ADS)
Oriani, Fabio
2017-04-01
The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
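The core of the Direct Sampling loop is compact enough to sketch: to simulate the next value, scan random positions of the training series until the pattern preceding a candidate matches the recently simulated pattern within a distance threshold, then copy the candidate. The code below is a minimal one-dimensional illustration with parameter names of our own choosing; the published algorithm adds conditioning data, multivariate patterns and spatial supports.

```python
import numpy as np

def direct_sampling_ts(train, n_sim, n_neighbors=5, threshold=0.1,
                       n_tries=200, seed=0):
    """Simulate a series by sampling the training data wherever the recent
    pattern matches; no explicit probability model is inferred."""
    rng = np.random.default_rng(seed)
    train = np.asarray(train, float)
    scale = train.std()
    sim = list(train[:n_neighbors])             # seed with a training snippet
    for _ in range(n_sim):
        pattern = np.array(sim[-n_neighbors:])
        best_val, best_d = train[rng.integers(n_neighbors, len(train))], np.inf
        for _ in range(n_tries):
            i = rng.integers(n_neighbors, len(train))
            d = np.mean(np.abs(train[i - n_neighbors:i] - pattern)) / scale
            if d < best_d:
                best_val, best_d = train[i], d
            if d <= threshold:                   # accept first good match
                break
        sim.append(best_val)
    return np.array(sim[n_neighbors:])

# Example: resimulate a noisy seasonal, rainfall-like signal
t = np.arange(3650)
train = np.maximum(0, np.sin(2 * np.pi * t / 365)
                   + np.random.default_rng(1).normal(0, 0.5, t.size))
print(direct_sampling_ts(train, 100)[:10])
```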
Further summation formulae related to generalized harmonic numbers
NASA Astrophysics Data System (ADS)
Zheng, De-Yin
2007-11-01
By employing the univariate series expansion of classical hypergeometric series formulae, Shen [L.-C. Shen, Remarks on some integrals and series involving the Stirling numbers and ζ(n), Trans. Amer. Math. Soc. 347 (1995) 1391-1399] and Choi and Srivastava [J. Choi, H.M. Srivastava, Certain classes of infinite series, Monatsh. Math. 127 (1999) 15-25; J. Choi, H.M. Srivastava, Explicit evaluation of Euler and related sums, Ramanujan J. 10 (2005) 51-70] investigated the evaluation of infinite series related to generalized harmonic numbers. More summation formulae have systematically been derived by Chu [W. Chu, Hypergeometric series and the Riemann Zeta function, Acta Arith. 82 (1997) 103-118], who developed fully this approach to the multivariate case. The present paper will explore the hypergeometric series method further and establish numerous summation formulae expressing infinite series related to generalized harmonic numbers in terms of the Riemann Zeta function ζ(m) with m=5,6,7, including several known ones as examples.
Multi-year mapping of irrigated croplands over the US High Plains Aquifer using satellite data
NASA Astrophysics Data System (ADS)
Deines, J.; Kendall, A. D.; Hyndman, D. W.
2016-12-01
Irrigated agriculture is the largest consumer of freshwater globally. Effective water management is crucial to support ongoing agricultural intensification to meet increasing demand for food, fuel, and fiber production. Knowledge of where and when irrigation occurs is critical for effective management and hydrological modeling, yet data on patterns of irrigation through time are surprisingly rare. Existing regional datasets in the United States tend to be either aspatial county-level estimates or static, single-year remotely sensed products with relatively low spatial resolution (250 m or coarser). Spatially explicit, dynamic maps are needed to understand water use trends, create accurate hydrological models, and inform forecasts of future water availability under projected climate change. In the High Plains Aquifer (HPA), repeat mapping efforts in 2002 and 2007 indicated only 60% of irrigated lands were static between these periods. To better understand annual irrigation dynamics, we used remote sensing to produce annual maps of irrigated cropland across the HPA region from a data fusion of Landsat satellites, annual time series of vegetation indices, and ancillary data such as precipitation, soil properties, and terrain slope. We performed machine learning classification using Google Earth Engine, allowing efficient image processing over a large region for multiple years. We then analyzed maps for water use trends and found that although total irrigated area has increased only slightly, there was substantial variability in the spatial pattern of irrigated lands over time. This dataset will support efforts towards groundwater sustainability by providing consistent, spatially explicit tracking of irrigation dynamics over time.
Including the Learner in Personalized Learning. Connect: Making Learning Personal
ERIC Educational Resources Information Center
Rickabaugh, Jim
2015-01-01
This issue is in response to Janet Twyman's brief, "Competency-Based Education: Supporting Personalized Learning" in the "Connect: Making Learning Personal" series. The discussion in Twyman's brief stopped short of being explicit regarding the aspect of personalized learning that Wisconsin's education innovation lab, the…
NASA Astrophysics Data System (ADS)
Malanson, G. P.; DeRose, R. J.; Bekker, M. F.
2016-12-01
The consequences of increasing climatic variance, while including variability among individuals and populations, are explored for range margins of species with a spatially explicit simulation. The model has a single environmental gradient and a single species, and is then extended to two species. Each species' response to the environment is a Gaussian function with a peak of 1.0 at its point of peak fitness on the gradient. The variance in the environment is taken from the total variance in the tree ring series of 399 individuals of Pinus edulis in FIA plots in the western USA. The variability is increased by a multiplier of the standard deviation for various doubling times. The variance of individuals in the simulation is drawn from these same series. Inheritance of individual variability is based on the geographic locations of the individuals. The variance for P. edulis is recomputed as time-dependent conditional standard deviations using the GARCH procedure. Establishment and mortality are simulated in a Monte Carlo process with individual variance. Variance for P. edulis does not show a consistent pattern of heteroscedasticity. An obvious result is that increasing variance has deleterious effects on species persistence because extreme events that result in extinctions cannot be balanced by positive anomalies; even less extreme negative events also cannot be balanced by positive anomalies because of biological and spatial constraints. In the two-species model the superior competitor is more affected by increasing climatic variance because its response function is steeper at the point of intersection with the other species, and so the uncompensated effects of negative anomalies are greater for it. These theoretical results can guide the anticipated need to mitigate the effects of increasing climatic variability on P. edulis range margins. The trailing edge, here subject to increasing drought stress with increasing temperatures, will be more affected by negative anomalies.
Egger, Michael E; Ohlendorf, Joanna M; Scoggins, Charles R; McMasters, Kelly M; Martin, Robert C G
2015-01-01
Background The aim of this paper is to assess the current state of quality and outcomes measures being reported for hepatic resections in the recent literature. Methods Medline and PubMed databases were searched for English language articles published between 1 January 2002 and 30 April 2013. Two examiners reviewed each article and relevant citations for appropriateness of inclusion, excluding papers on living-donor hepatic resections, repeat hepatectomies, and meta-analyses. Data were extracted and summarized by two examiners for analysis. Results Fifty-five studies were identified with suitable reporting to assess peri-operative mortality in hepatic resections. In only 35% (19/55) of the studies was the follow-up time explicitly stated, and in 47% (26/55) of studies peri-operative mortality was limited to in-hospital or 30 days. The time period in which complications were captured was not explicitly stated in 19 out of 28 studies. The remaining studies only captured complications within 30 days of the index operation (8/28). There was a paucity of quality literature addressing truly patient-centred outcomes. Conclusion Quality outcomes after a hepatic resection are inconsistently reported in the literature. Quality outcome studies for a hepatectomy should report mortality and morbidity at a minimum of 90 days after surgery. PMID:26228262
BATS: a Bayesian user-friendly software for analyzing time series microarray experiments.
Angelini, Claudia; Cutillo, Luisa; De Canditiis, Daniela; Mutarelli, Margherita; Pensky, Marianna
2008-10-06
Gene expression levels in a given cell can be influenced by different factors, namely pharmacological or medical treatments. The response to a given stimulus is usually different for different genes and may depend on time. One of the goals of modern molecular biology is the high-throughput identification of genes associated with a particular treatment or a biological process of interest. From a methodological and computational point of view, analyzing high-dimensional time course microarray data requires a very specific set of tools which are usually not included in standard software packages. Recently, the authors of this paper developed a fully Bayesian approach which allows one to identify differentially expressed genes in a 'one-sample' time-course microarray experiment, to rank them and to estimate their expression profiles. The method is based on explicit expressions for calculations and is, hence, very computationally efficient. The software package BATS (Bayesian Analysis of Time Series) presented here implements the methodology described above. It allows a user to automatically identify and rank differentially expressed genes and to estimate their expression profiles when at least 5-6 time points are available. The package has a user-friendly interface. BATS successfully manages various technical difficulties which arise in time-course microarray experiments, such as a small number of observations, non-uniform sampling intervals and replicated or missing data. BATS is a free user-friendly software for the analysis of both simulated and real microarray time course experiments. The software, the user manual and a brief illustrative example are freely available online at the BATS website: http://www.na.iac.cnr.it/bats.
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2018-04-01
In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, can be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
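To make the Nash cascade substitution concrete, the sketch below routes an input through a chain of linear reservoirs and integrates each reservoir with an implicit Euler step, which remains stable regardless of the step size. This is a generic illustration of the idea, not the GR4J implementation.

```python
import numpy as np

def nash_cascade_implicit(inflow, n_res=3, k=2.0, dt=1.0):
    """Route an inflow series through a cascade of n_res linear reservoirs
    (dS_i/dt = q_{i-1} - S_i/k), each advanced with an implicit Euler step,
    which keeps the routing stable for any dt."""
    S = np.zeros(n_res)
    out = np.empty(len(inflow))
    for t, q_in in enumerate(inflow):
        q = q_in
        for i in range(n_res):
            # Implicit Euler: S_new = (S + dt*q) / (1 + dt/k)
            S[i] = (S[i] + dt * q) / (1.0 + dt / k)
            q = S[i] / k            # outflow feeds the next reservoir
        out[t] = q
    return out

pulse = np.zeros(50); pulse[0] = 10.0
print(nash_cascade_implicit(pulse)[:8])   # smoothed, delayed response
```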
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Isa Aliyu, Aliyu; Yusuf, Abdullahi; Baleanu, Dumitru
2017-12-01
This paper obtains the dark, bright, dark-bright or combined optical and singular solitons to the nonlinear Schrödinger equation (NLSE) with group velocity dispersion coefficient and second-order spatio-temporal dispersion coefficient, which arises in photonics and waveguide optics and in optical fibers. The integration algorithm is the sine-Gordon equation method (SGEM). Furthermore, the explicit solutions of the equation are derived by considering the power series solutions (PSS) theory and the convergence of the solutions is guaranteed. Lastly, the modulation instability analysis (MI) is studied based on the standard linear-stability analysis and the MI gain spectrum is obtained.
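For orientation, one commonly studied form of the NLSE with group velocity dispersion and second-order spatio-temporal dispersion reads as follows; the coefficient convention shown here is our assumption and may differ from the paper's:

```latex
i\,q_t + a\,q_{xx} + b\,q_{xt} + c\,|q|^2 q = 0
```

where a is the group velocity dispersion coefficient, b the spatio-temporal dispersion coefficient, and c the Kerr nonlinearity coefficient.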
Heteroskedasticity as a leading indicator of desertification in spatially explicit data.
Seekell, David A; Dakos, Vasilis
2015-06-01
Regime shifts are abrupt transitions between alternate ecosystem states, including desertification in arid regions due to drought or overgrazing. Regime shifts may be preceded by statistical anomalies such as increased autocorrelation, indicating declining resilience and warning of an impending shift. Tests for conditional heteroskedasticity, a type of clustered variance, have proven powerful leading indicators for regime shifts in time series data, but an analogous indicator for spatial data has not been evaluated. A spatial analog for conditional heteroskedasticity might be especially useful in arid environments where spatial interactions are critical in structuring ecosystem pattern and process. We tested the efficacy of a test for spatial heteroskedasticity as a leading indicator of regime shifts with simulated data from spatially extended vegetation models with regular and scale-free patterning. These models simulate shifts from extensive vegetative cover to bare, desert-like conditions. The magnitude of spatial heteroskedasticity increased consistently as the modeled systems approached a regime shift from a vegetated to a desert state. Relative to spatial autocorrelation, spatial heteroskedasticity increased earlier and more consistently. We conclude that tests for spatial heteroskedasticity can contribute to the growing toolbox of early warning indicators for regime shifts analyzed with spatially explicit data.
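A plausible spatial analog of the time-series Lagrange multiplier test can be sketched by regressing each cell's squared deviation on the mean squared deviation of its neighbors; the authors' exact statistic may differ, so treat this as an assumption-laden illustration rather than their method.

```python
import numpy as np

def spatial_arch_stat(field):
    """Lagrange-multiplier-style check for spatial conditional
    heteroskedasticity: regress each interior cell's squared deviation on
    the mean squared deviation of its 4 neighbors and return n * R^2,
    compared against a chi-squared distribution with 1 d.o.f."""
    z = np.asarray(field, float)
    r = (z - z.mean()) ** 2
    nbr = (np.roll(r, 1, 0) + np.roll(r, -1, 0) +
           np.roll(r, 1, 1) + np.roll(r, -1, 1)) / 4.0
    x, y = nbr[1:-1, 1:-1].ravel(), r[1:-1, 1:-1].ravel()
    X = np.column_stack([np.ones_like(x), x])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - res[0] / np.sum((y - y.mean()) ** 2)
    return len(y) * r2

# A homogeneous random field should give a small statistic
print(spatial_arch_stat(np.random.default_rng(0).normal(size=(50, 50))))
```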
Math In-Service Training for Adult Educators.
ERIC Educational Resources Information Center
Llorente, Juan Carlos; Porras, Marta; Martinez, Rosa
In a series of mathematics education workshops, teachers from adult basic education and vocational education worked together to design teaching situations on particular mathematical contents, in order to make explicit and bring into reflection the teaching strategies used by each group. The workshops constituted a common space of…
ERIC Educational Resources Information Center
Binder, P.-M.; Richert, A.
2011-01-01
A series of papers have recently addressed the mechanism by which a siphon works. While all this started as an effort to clarify words--namely, dictionary definitions--the authors feel that words, along with the misguided use of physical concepts, are currently contributing to considerable confusion and casuistry on this subject. They wish to make…
Introducing Analysis of Conflict Theory Into the Social Science Classroom.
ERIC Educational Resources Information Center
Harris, Thomas E.
The paper provides a simplified introduction to conflict theory through a series of in-class exercises. Conflict resolution, defined as negotiated settlement, can occur through three forms of communication: tacit, implicit, and explicit. Tacit communication, taking place without face-to-face or written interaction, refers to inferences made and…
Mapping of forest disturbance magnitudes across the US National Forest System
NASA Astrophysics Data System (ADS)
Hernandez, A. J.; Healey, S. P.; Ramsey, R. D.; McGinty, C.; Garrard, C.; Lu, N.; Huang, C.
2013-12-01
A precise record, in conjunction with ongoing monitoring of carbon pools, constitutes an essential input for the continuous advancement of an ever-dynamic science such as climate change. This is particularly important in forested ecosystems, for which accurate field archives are available and can be combined with historic satellite imagery to obtain spatially explicit estimates of several indicators used in the assessment of said carbon pools. Many forest disturbance processes limit storage of carbon in forested ecosystems and thereby reduce those systems' capacity to mitigate changes in the global climate system. A component of the US National Forest System's (NFS) comprehensive plan for carbon monitoring includes accounting for mapped disturbances, such as fires, harvests, and insect activity. A long-term time series of maps that show the timing, extent, type, and magnitude of disturbances going back to 1990 has been prepared for the United States Forest Service (USFS) Northern Region, and is currently under preparation for the rest of the NFS regions, covering more than 75 million hectares. Our mapping approach starts with an automated initial detection of annual disturbances using imagery captured within the growing season from the Landsat archive. Through a meticulous process, the initial detections are then visually inspected, manually corrected and labeled using various USFS ancillary datasets and Google Earth high-resolution historic imagery. We prepared multitemporal models of percent canopy cover and live tree carbon (t/ha) that were calibrated with extensive (in excess of 2000 locations) field data from the US Forest Service Forest Inventory and Analysis program (FIA). The models were then applied to all the years of the radiometrically corrected and normalized Landsat time series in order to provide annual spatially explicit estimates of the magnitude of change in terms of these two attributes. Our results provide objective, widely interpretable estimates of per-year disturbance effects across large areas. Different stakeholders (scientists, managers, policymakers) should benefit from this broad survey of disturbance processes affecting US federal forests over the last 20 years.
Cave men: stone tools, Victorian science, and the 'primitive mind' of deep time.
Pettitt, Paul B; White, Mark J
2011-03-20
Palaeoanthropology, the study of the evolution of humanity, arose in the nineteenth century. Excavations in Europe uncovered a series of archaeological sediments which provided proof that the antiquity of human life on Earth was far longer than the biblical six thousand years, and by the 1880s authors had constructed a basic paradigm of what 'primitive' human life was like. Here we examine the development of Victorian palaeoanthropology for what it reveals about emerging notions of cognitive evolution. It seems that Victorian specialists rarely addressed cognitive evolution explicitly, although several assumptions were generally made that arose from preconceptions derived from contemporary 'primitive' peoples. We identify three main phases in the development of notions of the primitive mind in the period.
NASA Technical Reports Server (NTRS)
Usab, William J., Jr.; Jiang, Yi-Tsann
1991-01-01
The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.
SparkClouds: visualizing trends in tag clouds.
Lee, Bongshin; Riche, Nathalie Henry; Karlson, Amy K; Carpendale, Sheelagh
2010-01-01
Tag clouds have proliferated over the web over the last decade. They provide a visual summary of a collection of texts by visually depicting the tag frequency by font size. In use, tag clouds can evolve as the associated data source changes over time. Interesting discussions around tag clouds often include a series of tag clouds and consider how they evolve over time. However, since tag clouds do not explicitly represent trends or support comparisons, the cognitive demands placed on the person for perceiving trends in multiple tag clouds are high. In this paper, we introduce SparkClouds, which integrate sparklines into a tag cloud to convey trends between multiple tag clouds. We present results from a controlled study that compares SparkClouds with two traditional trend visualizations—multiple line graphs and stacked bar charts—as well as Parallel Tag Clouds. Results show that SparkClouds' ability to show trends compares favourably to the alternative visualizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbert, J.M.
1997-02-01
Perturbation theory has long been utilized by quantum chemists as a method for approximating solutions to the Schrödinger equation. Perturbation treatments represent a system's energy as a power series in which each additional term further corrects the total energy; it is therefore convenient to have an explicit formula for the nth-order energy correction term. If all perturbations are collected into a single Hamiltonian operator, such a closed-form expression for the nth-order energy correction is well known; however, use of a single perturbed Hamiltonian often leads to divergent energy series, while superior convergence behavior is obtained by expanding the perturbed Hamiltonian in a power series. This report presents a closed-form expression for the nth-order energy correction obtained using Rayleigh-Schrödinger perturbation theory and a power series expansion of the Hamiltonian.
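For reference, the familiar low-order Rayleigh-Schrödinger corrections for a single perturbation V (the special case that the report generalizes to arbitrary order and to a power-series Hamiltonian) are:

```latex
E_n^{(1)} = \langle n^{(0)} | V | n^{(0)} \rangle, \qquad
E_n^{(2)} = \sum_{m \neq n} \frac{\left|\langle m^{(0)} | V | n^{(0)} \rangle\right|^2}{E_n^{(0)} - E_m^{(0)}}
```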
NASA Astrophysics Data System (ADS)
Mao, Jin-Jin; Tian, Shou-Fu; Zou, Li; Zhang, Tian-Tian
2018-05-01
In this paper, we consider a generalized Hirota equation with a bounded potential, which can be used to describe the propagation properties of optical soliton solutions. By employing the hypothetical method and the sub-equation method, we construct the bright soliton, dark soliton, complexitons and Gaussian soliton solutions of the Hirota equation. Moreover, we explicitly derive the power series solutions with their convergence analysis. Finally, we provide the graphical analysis of such soliton solutions in order to better understand their dynamical behavior.
Forrest, Lauren N; Smith, April R; Fussner, Lauren M; Dodd, Dorian R; Clerkin, Elise M
2016-01-01
"Fast" (i.e., implicit) processing is relatively automatic; "slow" (i.e., explicit) processing is relatively controlled and can override automatic processing. These different processing types often produce different responses that uniquely predict behaviors. In the present study, we tested if explicit, self-reported symptoms of exercise dependence and an implicit association of exercise as important predicted exercise behaviors and change in problematic exercise attitudes. We assessed implicit attitudes of exercise importance and self-reported symptoms of exercise dependence at Time 1. Participants reported daily exercise behaviors for approximately one month, and then completed a Time 2 assessment of self-reported exercise dependence symptoms. Undergraduate males and females (Time 1, N = 93; Time 2, N = 74) tracked daily exercise behaviors for one month and completed an Implicit Association Test assessing implicit exercise importance and subscales of the Exercise Dependence Questionnaire (EDQ) assessing exercise dependence symptoms. Implicit attitudes of exercise importance and Time 1 EDQ scores predicted Time 2 EDQ scores. Further, implicit exercise importance and Time 1 EDQ scores predicted daily exercise intensity while Time 1 EDQ scores predicted the amount of days exercised. Implicit and explicit processing appear to uniquely predict exercise behaviors and attitudes. Given that different implicit and explicit processes may drive certain exercise factors (e.g., intensity and frequency, respectively), these behaviors may contribute to different aspects of exercise dependence.
Nonequilibrium transitions driven by external dichotomous noise
NASA Astrophysics Data System (ADS)
Behn, U.; Schiele, K.; Teubel, A.; Kühnel, A.
1987-06-01
The stationary probability density P_s for a class of nonlinear one-dimensional models driven by a dichotomous Markovian process (DMP) I_t can be calculated explicitly. For the specific case of the Stratonovich model, dx/dt = ax − x³ + I_t x, the qualitative shape of P_s and its support are discussed in the whole parameter region. The location of the maxima of P_s shows a behavior similar to order parameters in continuous phase transitions. The possibility of a noise-induced change from a continuous to a discontinuous transition in an extended model, in which the DMP couples also to the cubic term, is discussed. The time-dependent moments…
NASA Astrophysics Data System (ADS)
Irvine, Brian; Fleskens, Luuk; Kirkby, Mike
2016-04-01
Stakeholders in recent EU projects identified soil erosion as the most frequent driver of land degradation in semi-arid environments. In a number of sites, historic land management and rainfall variability are recognised as contributing to the serious environmental impact. In order to consider the potential of sustainable land management and water harvesting techniques, stakeholders and study sites from the projects selected and trialled both local technologies and promising technologies reported from other sites. The combined PESERA and DESMICE modelling approach considered the regional effects of the technologies in combating desertification in both environmental and socio-economic terms. Initial analysis was based on long-term average climate data with the model run to equilibrium. Current analysis, primarily based on the WAHARA study sites, considers rainfall variability more explicitly in time series mode. The PESERA-DESMICE approach considers the difference between a baseline scenario and a (water harvesting) technology scenario, typically in terms of productivity, financial viability and scope for reducing erosion risk. A series of 50-year rainfall realisations are generated from observed data to capture the full range of climatic variability. Each realisation provides a unique time series of rainfall and, through modelling, can provide a simulated time series of crop yield and erosion risk for both baseline conditions and technology scenarios. Subsequent realisations and model simulations add to an envelope of the potential crop yield and cost-benefit relations. The development of such envelopes helps express the agricultural and erosional risk associated with climate variability and the potential for conservation measures to absorb the risk, highlighting the probability of achieving a given crop yield or erosion limit. Information of this kind can directly inform or influence the local adoption of conservation measures under the climatic variability of semi-arid areas.
Wang, Qian; Molenaar, Peter; Harsh, Saurabh; Freeman, Kenneth; Xie, Jinyu; Gold, Carol; Rovine, Mike; Ulbrecht, Jan
2014-03-01
An essential component of any artificial pancreas is the prediction of blood glucose levels as a function of exogenous and endogenous perturbations such as insulin dose, meal intake, physical activity, and emotional tone under natural living conditions. In this article, we present a new data-driven state-space dynamic model with time-varying coefficients that are used to explicitly quantify the time-varying patient-specific effects of insulin dose and meal intake on blood glucose fluctuations. Using the 3-variate time series of glucose level, insulin dose, and meal intake of an individual type 1 diabetic subject, we apply an extended Kalman filter (EKF) to estimate the time-varying coefficients of the patient-specific state-space model. We evaluate our empirical modeling using (1) the FDA-approved UVa/Padova simulator with 30 virtual patients and (2) clinical data of 5 type 1 diabetic patients under natural living conditions. Compared to a forgetting-factor-based recursive ARX model of the same order, the EKF model predictions have a higher fit, and significantly better temporal gain and J index, and thus are superior in early detection of upward and downward trends in glucose. The EKF-based state-space model developed in this article is particularly suitable for model-based state-feedback control designs, since the Kalman filter estimates the state variable of the glucose dynamics based on the measured glucose time series. In addition, since the model parameters are estimated in real time, this model is also suitable for adaptive control.
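The estimation idea can be illustrated in its linear special case: stack the time-varying coefficients in the state vector, let them follow a random walk, and apply a Kalman filter with the inputs (e.g., insulin and meal signals) as regressors. The sketch below uses invented variable names and noise levels and is not the paper's model; the full approach applies an extended Kalman filter to a richer state-space.

```python
import numpy as np

def tv_coeff_kf(y, u, q=1e-4, r=1.0):
    """Estimate time-varying regression coefficients beta_t in
    y_t = u_t . beta_t + noise, with beta_t following a random walk.
    This is the linear special case of EKF-based coefficient tracking."""
    n, p = u.shape
    beta = np.zeros(p)
    P = np.eye(p)
    betas = np.empty((n, p))
    for t in range(n):
        P = P + q * np.eye(p)              # random-walk prediction
        H = u[t:t + 1]                     # 1 x p observation row
        S = H @ P @ H.T + r                # innovation variance
        K = (P @ H.T) / S                  # Kalman gain
        beta = beta + (K * (y[t] - H @ beta)).ravel()
        P = P - K @ H @ P
        betas[t] = beta
    return betas

# Synthetic check: a coefficient drifting over time is tracked
rng = np.random.default_rng(3)
n = 500
u = np.column_stack([np.ones(n), rng.normal(size=n)])
true_b = np.column_stack([np.full(n, 5.0), np.linspace(-1, 1, n)])
y = np.sum(u * true_b, axis=1) + rng.normal(0, 0.1, n)
print(tv_coeff_kf(y, u)[-1])   # approximately [5, 1]
```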
Finite-time singularities in the dynamics of hyperinflation in an economy
NASA Astrophysics Data System (ADS)
Szybisz, Martín A.; Szybisz, Leszek
2009-08-01
The dynamics of hyperinflation episodes is studied by applying a theoretical approach based on collective “adaptive inflation expectations” with a positive nonlinear feedback proposed in the literature. In such a description it is assumed that the growth rate of the logarithmic price, r(t), changes with a velocity obeying a power law, which leads to a finite-time singularity at a critical time t_c. By revising that model we found that, indeed, there are two types of singular solutions for the logarithmic price, p(t). One is given by the already reported form p(t) ≈ (t_c − t)^(−α) (with α > 0) and the other exhibits a logarithmic divergence, p(t) ≈ ln[1/(t_c − t)]. The singularity is a signature for an economic crash. In the present work we express p(t) explicitly in terms of the parameters introduced throughout the formulation, avoiding the use of any combination of them defined in the original paper. This procedure allows one to examine simultaneously the time series of r(t) and p(t), performing a linked error analysis of the determined parameters. For the first time this approach is applied to analyzing the very extreme historical hyperinflations that occurred in Greece (1941-1944) and Yugoslavia (1991-1994). The case of Greece is compatible with a logarithmic singularity. The study is completed with an analysis of the hyperinflation spiral currently experienced in Zimbabwe. According to our results, an economic crash in this country is predicted for these very days. The robustness of the results to changes of the initial time of the series and the differences with a linear feedback are discussed.
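Fitting such singular laws to data is routine with nonlinear least squares; the sketch below fits the power-law form to synthetic data and propagates parameter uncertainties from the covariance matrix, in the spirit of the linked error analysis described above. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_singularity(t, A, B, tc, alpha):
    """Logarithmic price with a finite-time power-law singularity at tc."""
    return A + B * (tc - t) ** (-alpha)

# Synthetic log-price series approaching a crash at tc = 10
t = np.linspace(0.0, 9.5, 200)
p = 2.0 + 0.5 * (10.0 - t) ** (-0.8)
p += np.random.default_rng(0).normal(0.0, 0.01, t.size)

popt, pcov = curve_fit(power_singularity, t, p, p0=(1.0, 1.0, 10.5, 0.5),
                       bounds=([-10, 0, 9.6, 0.01], [10, 10, 20, 5]))
perr = np.sqrt(np.diag(pcov))   # linked (correlated) parameter uncertainties
print("tc = %.2f +/- %.2f, alpha = %.2f +/- %.2f"
      % (popt[2], perr[2], popt[3], perr[3]))
```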
Grech, Alana; Sheppard, James; Marsh, Helene
2011-01-01
Background Conservation planning and the design of marine protected areas (MPAs) requires spatially explicit information on the distribution of ecological features. Most species of marine mammals range over large areas and across multiple planning regions. The spatial distributions of marine mammals are difficult to predict using habitat modelling at ecological scales because of insufficient understanding of their habitat needs; however, relevant information may be available from surveys conducted to inform mandatory stock assessments. Methodology and Results We use a 20-year time series of systematic aerial surveys of dugong (Dugong dugon) abundance to create spatially explicit models of dugong distribution and relative density at the scale of the coastal waters of northeast Australia (∼136,000 km²). We interpolated the corrected data at the scale of 2 km × 2 km planning units using geostatistics. Planning units were classified as low, medium, high and very high dugong density on the basis of the relative density of dugongs estimated from the models and a frequency analysis. Torres Strait was identified as the most significant dugong habitat in northeast Australia and the most globally significant habitat known for any member of the Order Sirenia. The models are used by local, State and Federal agencies to inform management decisions related to the Indigenous harvest of dugongs, gill-net fisheries and Australia's National Representative System of Marine Protected Areas. Conclusion/Significance In this paper we demonstrate that spatially explicit population models add value to data collected for stock assessments, provide a robust alternative to predictive habitat distribution models, and inform species conservation at multiple scales. PMID:21464933
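A minimal geostatistical interpolation of survey densities onto planning units might look like the following, assuming the third-party pykrige package and invented example data; the authors' actual variogram modelling, survey corrections and classification rules are more involved.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # assumed third-party dependency

# Hypothetical corrected sighting densities at survey locations (km units)
rng = np.random.default_rng(7)
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
density = (np.exp(-((x - 40) ** 2 + (y - 60) ** 2) / 800)
           + rng.normal(0, 0.05, 200))

ok = OrdinaryKriging(x, y, density, variogram_model="spherical")
gridx = np.arange(0.0, 100.0, 2.0)       # 2 km x 2 km planning units
gridy = np.arange(0.0, 100.0, 2.0)
z, ss = ok.execute("grid", gridx, gridy) # interpolated density + variance

# Classify planning units into relative-density bands (quantiles illustrative)
bands = np.digitize(z.data, np.quantile(z.data, [0.5, 0.75, 0.9]))
print(bands.shape, np.bincount(bands.ravel()))
```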
The Use (and Misuse) of PISA in Guiding Policy Reform: The Case of Spain
ERIC Educational Resources Information Center
Choi, Álvaro; Jerrim, John
2016-01-01
In 2013 Spain introduced a series of educational reforms explicitly inspired by the Programme for International Student Assessment (PISA) 2012 results. These reforms were mainly implemented in secondary education--based upon the assumption that this is where Spain's educational problems lie. This paper questions this assumption by attempting to…
Integrating Program Assessment and a Career Focus into a Research Methods Course
ERIC Educational Resources Information Center
Senter, Mary Scheuer
2017-01-01
Sociology research methods students in 2013 and 2016 implemented a series of "real world" data gathering activities that enhanced their learning while assisting the department with ongoing program assessment and program review. In addition to the explicit collection of program assessment data on both students' development of sociological…
Directed Self-Inquiry: A Scaffold for Teaching Laboratory Report Writing
ERIC Educational Resources Information Center
Deiner, L. Jay; Newsome, Daniel; Samaroo, Diana
2012-01-01
A scaffold was created for the explicit instruction of laboratory report writing. The scaffold breaks the laboratory report into sections and teaches students to ask and answer questions in order to generate section-appropriate content and language. Implementation of the scaffold is done through a series of section-specific worksheets that are…
Teacher Compensation: Performance Pay and Other Issues. The Informed Educator Series
ERIC Educational Resources Information Center
Protheroe, Nancy
2011-01-01
This "Informed Educator" examines the issue of performance pay for teachers. Research looking for a possible link between performance pay and student learning is examined, and implementation issues are addressed. Finally, the need to broaden the discussion of performance pay to a more comprehensive review that explicitly connects the structure of…
Cross-Cultural Evidence that the Nonverbal Expression of Pride Is an Automatic Status Signal
ERIC Educational Resources Information Center
Tracy, Jessica L.; Shariff, Azim F.; Zhao, Wanying; Henrich, Joseph
2013-01-01
To test whether the pride expression is an implicit, reliably developing signal of high social status in humans, the authors conducted a series of experiments that measured implicit and explicit cognitive associations between pride displays and high-status concepts in two culturally disparate populations--North American undergraduates and Fijian…
NASA Astrophysics Data System (ADS)
Howe, Eric Michael; Wÿss Rudge, David
This paper provides an argument in favor of a specific pedagogical method of using the history of science to help students develop more informed views about nature of science (NOS) issues. The paper describes a series of lesson plans devoted to encouraging students to engage, unbeknownst to them, in reasoning similar to that which led scientists to understand sickle-cell anemia from the perspective of multiple subdisciplines in biology. Students pursue their understanding of a "mystery disease" by means of a series of open-ended problems that invite them to discuss it from the perspective of anatomy, physiology, ecology, evolution, and molecular and cell biology. Throughout this unit, instructors incorporate techniques that invite students to explicitly and reflectively discuss various NOS issues with reference to this example and more generally. It is argued on the grounds of constructivist tenets that this pedagogy has substantial advantages over more implicit approaches. The findings of an empirical study using an open-ended survey and follow-up, semi-structured interviews to assess students' pre- and post-instruction NOS conceptions support the efficacy of this approach.
Explicit expressions of quantum mechanical rotation operators for spins 1 to 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kocakoç, Mehpeyker, E-mail: mkocakoc@cu.edu.tr; Tapramaz, Recep, E-mail: recept@omu.edu.tr
2016-03-25
Quantum mechanical rotation operators are the subject of quantum mechanics, mathematics and pulsed magnetic resonance spectroscopies, namely NMR, EPR and ENDOR. They are also necessary for spin-based quantum information systems. The rotation operators of spin 1/2 are well known and can be found in related textbooks. But rotation operators of spins greater than 1/2 can be found numerically by evaluating the series expansion of the exponential operator obtained from the Schrödinger equation, by evaluating the Wigner-d formula, or by evaluating recently established expressions in polynomial forms discussed in the text. In this work, explicit symbolic expressions of the x, y and z components of rotation operators for spins 1 to 2 are worked out by evaluating the series expansion of the exponential operator for each element of the operators and utilizing a linear curve fitting process. The procedure yields exact expressions for each element of the rotation operators. The operators of spins greater than 2 are under study and will be published in a separate paper.
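For comparison, the textbook spin-1/2 case mentioned above is, for a rotation about x (with S_x = ħσ_x/2):

```latex
R_x(\theta) = e^{-i\theta S_x/\hbar}
            = \cos\tfrac{\theta}{2}\,\mathbb{1} - i\sin\tfrac{\theta}{2}\,\sigma_x
            = \begin{pmatrix} \cos\frac{\theta}{2} & -i\sin\frac{\theta}{2} \\[2pt]
                              -i\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix}
```

Analogous closed forms for R_y and R_z follow by replacing σ_x with σ_y or σ_z.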
Modeling effects of climate change on Yakima River salmonid habitats
Hatten, James R.; Batt, Thomas R.; Connolly, Patrick J.; Maule, Alec G.
2014-01-01
We evaluated the potential effects of two climate change scenarios on salmonid habitats in the Yakima River by linking the outputs from a watershed model, a river operations model, a two-dimensional (2D) hydrodynamic model, and a geographic information system (GIS). The watershed model produced a discharge time series (hydrograph) in two study reaches under three climate scenarios: a baseline (1981–2005), a 1 °C increase in mean air temperature (the "plus one" scenario), and a 2 °C increase (the "plus two" scenario). A river operations model modified the discharge time series with Yakima River operational rules, a 2D model provided spatially explicit depth and velocity grids for two floodplain reaches, and an expert panel provided habitat criteria for four life stages of coho and fall Chinook salmon. We generated discharge-habitat functions for each salmonid life stage (e.g., spawning, rearing) in main stem and side channels, and habitat time series for the baseline, plus one (P1), and plus two (P2) scenarios. The spatial and temporal patterns in salmonid habitats differed by reach, life stage, and climate scenario. Seventy-five percent of the 28 discharge-habitat responses exhibited a decrease in habitat quantity, with the P2 scenario producing the largest changes, followed by P1. Fry and spring/summer rearing habitats were the most sensitive to warming and flow modification for both species. Side channels generally produced more habitat than the main stem and were more responsive to flow changes, demonstrating the importance of lateral connectivity in the floodplain. A discharge-habitat sensitivity analysis revealed that proactive management of regulated surface waters (i.e., increasing or decreasing flows) might lessen the impacts of climate change on salmonid habitats.
Spectral Analysis: From Additive Perspective to Multiplicative Perspective
NASA Astrophysics Data System (ADS)
Wu, Z.
2017-12-01
The early usage of trigonometric functions can be traced back to at least the 17th century BC. It was Bhaskara II of the 12th century CE who first proved the mathematical equivalence between the sum of two trigonometric functions of any given angles and the product of two trigonometric functions of related angles, a result taught nowadays in middle school classrooms. The additive perspective of trigonometric functions led to the development of the Fourier transform, which is used to express any function as the sum of a set of trigonometric functions and opened a new mathematical field called harmonic analysis. Unfortunately, Fourier's sum cannot directly express nonlinear interactions between trigonometric components of different periods, and thereby lacks the capability of quantifying nonlinear interactions in dynamical systems. In this talk, the speaker will introduce the Huang transform and Holo-spectrum, pioneered by Norden Huang, which emphasize the multiplicative perspective of trigonometric functions in expressing any function. Holo-spectrum is a multi-dimensional spectral expression of a time series that explicitly identifies the interactions among different scales and quantifies nonlinear interactions hidden in a time series. Along with this introduction, the developing concepts of physical, rather than mathematical, analysis of data will be explained. Various enlightening applications of Holo-spectrum analysis in atmospheric and climate studies will also be presented.
Design of fuzzy cognitive maps using neural networks for predicting chaotic time series.
Song, H J; Miao, C Y; Shen, Z Q; Roel, W; Maja, D H; Francky, C
2010-12-01
As a powerful paradigm for knowledge representation and a simulation mechanism applicable to numerous research and application fields, Fuzzy Cognitive Maps (FCMs) have attracted a great deal of attention from various research communities. However, traditional FCMs do not provide efficient methods to determine the states of the investigated system and to quantify causalities, which are the very foundation of FCM theory. Therefore, in many cases, constructing FCMs for complex causal systems greatly depends on expert knowledge. Such manually developed models have a substantial shortcoming due to model subjectivity and difficulties with assessing their reliability. In this paper, we propose a fuzzy neural network to enhance the learning ability of FCMs so that the automatic determination of membership functions and quantification of causalities can be incorporated with the inference mechanism of conventional FCMs. In this manner, FCM models of the investigated systems can be automatically constructed from data, and are therefore independent of the experts. Furthermore, we employ mutual subsethood to define and describe the causalities in FCMs. It provides a more explicit interpretation of causalities in FCMs and makes the inference process easier to understand. To validate the performance, the proposed approach is tested in predicting chaotic time series. The simulation studies show the effectiveness of the proposed approach. Copyright © 2010 Elsevier Ltd. All rights reserved.
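For context on the inference mechanism referred to above, a conventional FCM update step can be written in a few lines. This sketch shows only the standard squashed weighted-sum iteration with hypothetical weights; it does not reproduce the paper's fuzzy-neural learning of membership functions or its mutual-subsethood causality measure.

```python
import numpy as np

def fcm_step(state, W):
    """One conventional FCM inference step: each concept's next activation is a
    sigmoid-squashed, causally weighted sum of the current activations."""
    return 1.0 / (1.0 + np.exp(-(W @ state)))

# W[i, j] encodes the causal influence of concept j on concept i (illustrative values).
W = np.array([[0.0,  0.6, -0.3],
              [0.2,  0.0,  0.5],
              [0.0, -0.4,  0.0]])
state = np.array([0.5, 0.2, 0.8])
for _ in range(20):                      # iterate until the map settles
    state = fcm_step(state, W)
print(state)
```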
Spurious One-Month and One-Year Periods in Visual Observations of Variable Stars
NASA Astrophysics Data System (ADS)
Percy, J. R.
2015-12-01
Visual observations of variable stars, when analyzed with some time-series algorithms such as DC-DFT in vstar, show spurious periods at or close to one synodic month (29.5306 days), and also at about a year, with an amplitude of typically a few hundredths of a magnitude. The one-year periods have been attributed to the Ceraski effect, which was believed to be a physiological effect of the visual observing process. This paper reports on time-series analysis, using DC-DFT in vstar, of visual observations (and in some cases, V observations) of a large number of stars in the AAVSO International Database, initially to investigate the one-month periods. The results suggest that both the one-month and one-year periods are actually due to aliasing of the stars' very low-frequency variations, though they do not rule out very low-amplitude signals (typically 0.01 to 0.02 magnitude) which may be due to a different process, such as a physiological one. Most or all of these aliasing effects may be avoided by using a different algorithm, which takes explicit account of the window function of the data, and/or by being fully aware of the possible presence of and aliasing by very low-frequency variations.
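The aliasing mechanism described here is easy to reproduce. The sketch below uses astropy's Lomb-Scargle periodogram as a stand-in for DC-DFT in vstar (the sampling pattern and periods are illustrative): a slow stellar variation observed with seasonal gaps shows spurious power offset from the true frequency by one cycle per year.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(4)
t = np.sort(rng.choice(np.arange(0.0, 3650.0), size=900, replace=False))
t = t[(t % 365.25) < 240]                # star unobservable for part of each year
y = np.sin(2*np.pi*t / 1500.0)           # slow, low-frequency variation (P = 1500 d)

freq, power = LombScargle(t, y).autopower(maximum_frequency=0.05)
# Besides the true peak at 1/1500 d^-1, expect alias peaks offset by 1/365.25 d^-1.
print(freq[np.argsort(power)[-3:]])
```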
Quantum correlations in non-inertial cavity systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harsij, Zeynab, E-mail: z.harsij@ph.iut.ac.ir; Mirza, Behrouz, E-mail: b.mirza@cc.iut.ac.ir
2016-10-15
Non-inertial cavities are utilized to store and send Quantum Information between mode pairs. A two-cavity system is considered where one is inertial and the other accelerated for a finite time. Maclaurin series are applied to expand the related Bogoliubov coefficients and the problem is treated perturbatively. It is shown that Quantum Discord, which is a measure of the quantumness of correlations, is degraded periodically. This is almost in agreement with previous results reached in accelerated systems, where an increase in acceleration decreases the degree of quantum correlations. As another finding of the study, it is explicitly shown that the degradation of Quantum Discord disappears when the state is in a single cavity which is accelerated for a finite time. This feature makes accelerating cavities useful instruments in Quantum Information Theory. Highlights: • Non-inertial cavities are utilized to store and send information in Quantum Information Theory. • Cavities include boundary conditions which will protect the entanglement once it has been created. • The problem is treated perturbatively and Maclaurin series are applied to expand the related Bogoliubov coefficients. • When two cavities are considered, degradation in the degree of quantum correlation happens and it appears periodically. • The interesting issue is that when a single cavity is studied, the degradation in quantum correlations disappears.
Video Game Rehabilitation of Velopharyngeal Dysfunction: A Case Series
Mittelman, Talia; Braden, Maia N.; Woodnorth, Geralyn Harvey; Stepp, Cara E.
2017-01-01
Purpose: Video games provide a promising platform for rehabilitation of speech disorders. Although video games have been used to train speech perception in foreign language learners and have been proposed for aural rehabilitation, their use in speech therapy has been limited thus far. We present feasibility results from at-home use in a case series of children with velopharyngeal dysfunction (VPD) using an interactive video game that provided real-time biofeedback to facilitate appropriate nasalization. Method: Five participants were recruited across a range of ages, VPD severities, and VPD etiologies. Participants completed multiple weeks of individual game play with a video game that provides feedback on nasalization measured via nasal accelerometry. Nasalization was assessed before and after training by using nasometry, aerodynamic measures, and expert perceptual judgments. Results: Four participants used the game at home or school; the remaining participant was unwilling to have the nasal accelerometer secured to his nasal skin, perhaps due to his young age. The four participants who completed training showed a tendency toward decreased nasalization, particularly for the words explicitly trained in the video game. Conclusion: Results suggest that video game–based systems may provide a useful rehabilitation platform for providing real-time feedback of speech nasalization in VPD. Supplemental Material https://doi.org/10.23641/asha.5116828 PMID:28655049
Dricu, Mihai; Frühholz, Sascha
2016-12-01
We conducted a series of activation likelihood estimation (ALE) meta-analyses to determine the commonalities and distinctions between separate levels of emotion perception, namely incidental perception, passive perception, and explicit evaluation of emotional expressions. Pooling together more than 180 neuroimaging experiments using facial, vocal or body expressions, our results are threefold. First, explicitly evaluating the emotions of others recruits brain regions associated with the sensory processing of expressions, such as the inferior occipital gyrus, middle fusiform gyrus and the superior temporal gyrus, and brain regions involved in low-level and high-level mindreading, namely the posterior superior temporal sulcus, the inferior frontal cortex and dorsomedial frontal cortex. Second, we show that only the sensory regions were also consistently active during the passive perception of emotional expressions. Third, we show that the brain regions involved in mindreading were active during the explicit evaluation of both facial and vocal expressions. We discuss these results in light of the existing literature and conclude by proposing a cognitive model for perceiving and evaluating the emotions of others. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cross-cultural evidence that the nonverbal expression of pride is an automatic status signal.
Tracy, Jessica L; Shariff, Azim F; Zhao, Wanying; Henrich, Joseph
2013-02-01
To test whether the pride expression is an implicit, reliably developing signal of high social status in humans, the authors conducted a series of experiments that measured implicit and explicit cognitive associations between pride displays and high-status concepts in two culturally disparate populations--North American undergraduates and Fijian villagers living in a traditional, small-scale society. In both groups, pride displays produced strong implicit associations with high status, despite Fijian social norms discouraging overt displays of pride. Also in both groups, implicit and explicit associations between emotion expressions and status were dissociated; despite the cross-cultural implicit association between pride displays and high status, happy displays were, cross-culturally, the more powerful status indicator at an explicit level, and among Fijians, happy and pride displays were equally strongly implicitly associated with status. Finally, a cultural difference emerged: Fijians viewed happy displays as more deserving of high status than did North Americans, both implicitly and explicitly. Together, these findings suggest that the display and recognition of pride may be part of a suite of adaptations for negotiating status relationships, but that the high-status message of pride is largely communicated through implicit cognitive processes. 2013 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Doha, E. H.; Ahmed, H. M.
2004-08-01
A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) for building and solving recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel polynomials are also developed.
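Although the paper's closed-form coefficient formulas are not reproduced here, the underlying claim, that the derivative of a Bessel polynomial expands exactly in lower-degree Bessel polynomials, can be checked symbolically. A hedged sympy sketch using the standard explicit sum for y_n(x):

```python
import sympy as sp
from math import factorial

x = sp.symbols('x')

def bessel_poly(n):
    """Bessel polynomial y_n(x) = sum_k (n+k)! / ((n-k)! k! 2^k) x^k."""
    return sum(sp.Rational(factorial(n + k), factorial(n - k) * factorial(k) * 2**k) * x**k
               for k in range(n + 1))

n = 4
c = sp.symbols(f'c0:{n}')                        # unknown expansion coefficients
basis = [bessel_poly(k) for k in range(n)]       # y_0 ... y_{n-1}, degrees 0..n-1
residual = sp.expand(sp.diff(bessel_poly(n), x) - sum(ci*bi for ci, bi in zip(c, basis)))
sol = sp.solve(sp.Poly(residual, x).coeffs(), c)  # match coefficients power by power
print(sol)                                        # exact rational coefficients
```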
NASA Astrophysics Data System (ADS)
Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.
2012-12-01
In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment, whereby the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSAR model framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km2 experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs, often observed in transport models in the form of long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity of the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short-term (event) and longer-term (inter-event), wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool to analyze the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
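For readers unfamiliar with the model class, the generative structure of an MSARM is compact: a hidden Markov chain selects, at each time step, which autoregressive regime produces the observation. A minimal simulation sketch with illustrative parameters (not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05],            # hidden-state transition matrix
              [0.10, 0.90]])
mu, phi, sigma = [0.0, 2.0], [0.8, 0.3], [0.2, 0.6]   # per-regime AR(1) parameters

T = 1000
s = np.zeros(T, dtype=int)             # hidden regime sequence (e.g. dry vs wet state)
y = np.zeros(T)                        # observed series (e.g. streamflow)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t-1]])  # evolve the hidden chain
    y[t] = mu[s[t]] + phi[s[t]] * (y[t-1] - mu[s[t-1]]) + sigma[s[t]] * rng.normal()
```

Fitting such a model in the Bayesian MCMC framework the authors describe would additionally sample the hidden states, the autoregressive order, and the state-space dimension.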
Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne
2016-11-03
Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow-up. The Cox regression model has been used widely in analyses of time to diagnosis or death from disease. The associations between the survival outcome and time-dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time-to-event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time-dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time-adjusted CSP provided identical results to the TDCM, while the PLR showed larger parameter estimates compared to the time-adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time-adjusted PLR had a positive bias in the time-dependent Age effect, with reduced bias when the event rate is low. The PLR methods showed a negative bias in the Sex effect, a subject-level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time-dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. We present results from the Framingham Heart Study, in which lipid measurements and myocardial infarction events were collected over a period of 26 years.
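As a concrete reference point for a Cox model with explicitly time-stamped covariates, a long-format fit can be sketched with the lifelines package (assuming lifelines' CoxTimeVaryingFitter; the toy data frame below is hypothetical, not Framingham data):

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per subject per measurement interval; the covariate value applies over (start, stop].
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2],
    "start": [0, 2, 4, 0, 3],
    "stop":  [2, 4, 6, 3, 5],
    "age":   [50, 52, 54, 61, 64],   # time-dependent covariate, re-measured each exam
    "sex":   [0, 0, 0, 1, 1],        # subject-level covariate
    "event": [0, 0, 1, 0, 1],        # event indicator at the end of each interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```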
Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility
NASA Astrophysics Data System (ADS)
Mitchell, J.; Harris, S.
DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.
Booker, Nancy Achieng'; Miller, Ann Neville; Ngure, Peter
2016-12-01
Extremely popular with Kenyan youth, the entertainment-education drama Shuga was designed with specific goals of promoting condom use, single versus multiple sexual partners, and destigmatization of HIV. Almost as soon as it aired, however, it generated controversy due to its extensive sexual themes and relatively explicit portrayal of sexual issues. To determine how safer sex, antistigma messages, and overall sexual content were integrated into Shuga, we conducted a content analysis. Results indicated that condom use and HIV destigmatization messages were frequently and clearly communicated. Negative consequences for risky sexual behavior were communicated over the course of the entire series. Messages about multiple concurrent partnerships were not evident. In addition, in terms of scenes per hour of programming, Shuga had 10.3 times the amount of sexual content overall, 8.2 times the amount of sexual talk, 17.8 times the amount of sexual behavior, and 9.4 times the amount of sexual intercourse as found in previous analysis of U.S. entertainment programming. Research is needed to determine how these factors may interact to influence adolescent viewers of entertainment education dramas.
Empirical STORM-E Model. [I. Theoretical and Observational Basis
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III
2013-01-01
Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
Comment on ‘Special-case closed form of the Baker-Campbell-Hausdorff formula’
NASA Astrophysics Data System (ADS)
Lo, C. F.
2016-05-01
Recently, Van-Brunt and Visser (2015 J. Phys. A: Math. Theor. 48 225207) succeeded in explicitly evaluating the Baker-Campbell-Hausdorff (BCH) expansion series for the noncommuting operators X and Y, provided that the two operators satisfy the commutation relation [X,Y] = uX + vY + cI, and the operator I commutes with both of them. In this comment we show that the closed-form BCH formula of this special case can be straightforwardly derived by means of the Wei-Norman theorem, and no summation of the infinite series is needed.
NASA Astrophysics Data System (ADS)
Naserpour, Mahin; Zapata-Rodríguez, Carlos J.
2018-01-01
The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally we discuss how our procedure can be favorably implemented in scattering problems.
Assimilation of LAI time-series in crop production models
NASA Astrophysics Data System (ADS)
Kooistra, Lammert; Rijk, Bert; Nannes, Louis
2014-05-01
Agriculture is worldwide a large consumer of freshwater, nutrients and land. Spatially explicit agricultural management activities (e.g., fertilization, irrigation) could significantly improve efficiency in resource use. In previous studies and operational applications, remote sensing has shown to be a powerful method for spatio-temporal monitoring of actual crop status. As a next step, yield forecasting by assimilating remote sensing based plant variables in crop production models would improve agricultural decision support both at the farm and field level. In this study we investigated the potential of remote sensing based Leaf Area Index (LAI) time-series assimilated in the crop production model LINTUL to improve yield forecasting at field level. The effect of assimilation method and amount of assimilated observations was evaluated. The LINTUL-3 crop production model was calibrated and validated for a potato crop on two experimental fields in the south of the Netherlands. A range of data sources (e.g., in-situ soil moisture and weather sensors, destructive crop measurements) was used for calibration of the model for the experimental field in 2010. LAI from Cropscan field radiometer measurements and actual LAI measured with the LAI-2000 instrument were used as input for the LAI time-series. The LAI time-series were assimilated in the LINTUL model and validated for a second experimental field on which potatoes were grown in 2011. Yield in 2011 was simulated with an R2 of 0.82 when compared with field-measured yield. Furthermore, we analysed the potential of assimilation of LAI into the LINTUL-3 model through the 'updating' assimilation technique. The deviation between measured and simulated yield decreased from 9371 kg/ha to 8729 kg/ha when assimilating weekly LAI measurements in the LINTUL model over the season of 2011. LINTUL-3 furthermore shows the main growth-reducing factors, which are useful for farm decision support. The combination of crop models and sensor techniques shows promising results for precision agriculture applications and thereby for reduction of the footprint agriculture has on the world's resources.
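The 'updating' assimilation technique mentioned above amounts to nudging the model state toward each observation as it arrives. A generic sketch (the gain value and the coupling to LINTUL-3 are illustrative assumptions, not the study's implementation):

```python
def assimilate_lai(lai_model, lai_obs, gain=0.5):
    """One 'updating' step: move the modelled LAI part-way toward the observed LAI."""
    return lai_model + gain * (lai_obs - lai_model)

# Toy weekly cycle: lai_sim would come from the crop model, lai_obs from the radiometer.
lai_sim = 0.5
for lai_obs in [0.6, 1.2, 2.0, 3.1, 3.6]:
    lai_sim = assimilate_lai(lai_sim, lai_obs)
    # ... the crop model would then continue its simulation from the updated state
```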
Alice Doesn't Live Here Anymore: What to Do when a Good Series Contains Explicit Language
ERIC Educational Resources Information Center
Scales, Pat
2007-01-01
Pat Scales, a spokesperson for First Amendment issues, provides school librarians with guidance on: (1) how to handle questions from parents about books that employ offensive language; (2) selecting age-appropriate books for middle school libraries; (3) the difference between banning books and removing reevaluated books; and (4) whether books…
ERIC Educational Resources Information Center
Shcheglova, Irina A.; Thomson, Gregg E.; Merrill, Martha C.
2017-01-01
American research universities have recently joined the march for internationalization and now are putting explicit efforts into finding ways to create an international focus. Within a short number of years, their missions have been transformed, incorporating elements of globalization. Universities now declare the importance of preparing students…
Design and analysis for detection monitoring of forest health
F. A. Roesch
1995-01-01
An analysis procedure is proposed for the sample design of the Forest Health Monitoring Program (FHM) in the United States. The procedure is intended to provide increased sensitivity to localized but potentially important changes in forest health by explicitly accounting for the spatial relationships between plots in the FHM design. After a series of median sweeps...
ERIC Educational Resources Information Center
Chorpita, Bruce F.
2006-01-01
This clinically wise and pragmatic book presents a systematic approach for treating any form of childhood anxiety using proven exposure-based techniques. What makes this rigorously tested modular treatment unique is that it is explicitly designed with flexibility and individualization in mind. Developed in a real-world, highly diverse community…
ERIC Educational Resources Information Center
Higgs, Theodore V.
Teaching grammar for its own sake is largely counterproductive when the goal of instruction is to have students communicate spontaneously, fluently, and accurately in the target language. The ideal foreign language program is one providing the best possible environment for language acquisition to take place. Explicit teaching about the language…
Picasso, "Arte Joven", and Unamuno's "Niebla"
ERIC Educational Resources Information Center
Franz, Thomas R.
2009-01-01
By means of his association with the short-lived newspaper "Arte Joven", Unamuno had unique access to a series of sexually charged drawings by the young Pablo Picasso. While he initally objected to the explicit nature of one of these drawings, he seems ultimately to have taken inspiration from both it and others in its group on the way to…
Faculty Definitions of Desirable Teacher Beliefs. Program Evaluation Series No. 17.
ERIC Educational Resources Information Center
Brousseau, Bruce; Freeman, Donald J.
This paper describes the results of a survey of teacher education faculty that was prompted by efforts to make educational beliefs an explicit component of the curricula of teacher preparation programs at Michigan State University. The analysis sought to determine: (1) the extent to which faculty agree on the ways beliefs should be shaped within a…
ERIC Educational Resources Information Center
Edge, Karen
2013-01-01
Grounded within knowledge management (KM) theory and conceptions of tacit and explicit knowledge, this article draws on historical evidence from the Early Years Literacy Project (EYLP), a four-year instructional renewal strategy implemented across 100 schools in a large Canadian school district. The EYLP management approach included a series of…
Literacy for a Changing World. A Fresh Look at the Basics Series.
ERIC Educational Resources Information Center
Christie, Frances, Ed.
Arguing that being "literate" carries a different expectation from even the recent past, this book asserts that schools must explicitly teach the nature of language, and that students must be given clear criteria for, and support in, achieving excellence in controlling the different types of written language used in their various fields…
Dombert, Beate; Mokros, Andreas; Brückner, Eva; Schlegl, Verena; Antfolk, Jan; Bäckström, Anna; Zappalà, Angelo; Osterheider, Michael; Santtila, Pekka
2013-12-01
The implicit assessment of pedophilic sexual interest through viewing-time methods necessitates visual stimuli. There are grave ethical and legal concerns against using pictures of real children, however. The present report is a summary of findings on a new set of 108 computer-generated stimuli. The images vary in terms of gender (female/male), explicitness (naked/clothed), and physical maturity (prepubescent, pubescent, and adult) of the persons depicted. A series of three studies tested the internal and external validity of the picture set. Studies 1 and 2 yielded good-to-high estimates of observer agreement with regard to stimulus maturity levels by two methods (categorization and paired comparison). Study 3 extended these findings with regard to judgments made by convicted child sexual offenders.
Correcting GOES-R Magnetometer Data for Stray Fields
NASA Technical Reports Server (NTRS)
Carter, Delano; Freesland, Douglas; Tadikonda, Sivakumar; Kronenwetter, Jeffrey; Todirita, Monica; Dahya, Melissa; Chu, Donald
2016-01-01
Time-varying spacecraft magnetic fields, i.e. stray fields, are a problem for magnetometer systems. While constant fields can be removed by calibration, stray fields are difficult to distinguish from ambient field variations. Putting two magnetometers on a long boom and solving for both the ambient and stray fields can help, but this gradiometer solution is more sensitive to noise than a single magnetometer. As shown here for the R-series Geostationary Operational Environmental Satellites (GOES-R), unless the stray fields are larger than the noise, simply averaging the two magnetometer readings gives a more accurate solution. If averaging is used, it may be worthwhile to estimate and remove stray fields explicitly. Models and estimation algorithms to do so are provided for solar array, arcjet and reaction wheel fields.
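The trade-off between averaging and the gradiometer solution can be made concrete. If the stray source is modelled as a dipole at the spacecraft (a common assumption for illustration, not a detail taken from this paper), two boom magnetometers at distances r_in < r_out give a small linear system per axis:

```python
def gradiometer(b_in, b_out, r_in, r_out):
    """Solve b = ambient + stray/r^3 from two boom-mounted magnetometer readings,
    assuming a dipole (1/r^3) falloff of the spacecraft stray field."""
    w_in, w_out = 1.0 / r_in**3, 1.0 / r_out**3
    ambient = (w_in * b_out - w_out * b_in) / (w_in - w_out)
    return ambient

def averaged(b_in, b_out):
    """Simple average: lower noise, but stray fields leak into the estimate."""
    return 0.5 * (b_in + b_out)
```

Because the gradiometer estimate differences two noisy readings and divides by a geometry factor, its noise is amplified, which is why simple averaging wins when the stray fields are smaller than the noise, as the abstract notes.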
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Baumeister, Kenneth J.
1996-01-01
An explicit finite difference real-time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time-dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
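As a simplified stand-in for the scheme described (the paper's parabolic-in-time potential formulation is not reproduced here), an explicit time-marched solution of the plain 1D wave equation with a harmonic source illustrates the basic iteration and the CFL stability constraint:

```python
import numpy as np

c, L, nx = 340.0, 1.0, 201
dx = L / (nx - 1)
dt = 0.9 * dx / c                    # explicit schemes require c*dt/dx <= 1 (CFL)
freq = 1000.0
lam2 = (c * dt / dx)**2

p_old, p, p_new = np.zeros(nx), np.zeros(nx), np.zeros(nx)
for n in range(5000):                # march in time toward the harmonic steady state
    p[0] = np.sin(2*np.pi*freq*n*dt)                 # harmonic source at the inlet
    p_new[1:-1] = 2*p[1:-1] - p_old[1:-1] + lam2*(p[2:] - 2*p[1:-1] + p[:-2])
    p_new[-1] = p[-2]                                # crude non-reflecting outlet
    p_old, p = p.copy(), p_new.copy()
```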
Symbolic programming language in molecular multicenter integral problem
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Bouferguene, Ahmed
It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which the computation of three-center nuclear attraction and Coulomb integrals is the most frequently encountered. As the molecular system becomes larger, computation of these integrals becomes one of the most laborious and time-consuming steps in molecular systems calculation. Improvement of the computational methods for molecular integrals would be indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations for improving the convergence of highly oscillatory integrals. These transformations form the basis of new methods for solving various problems that were otherwise unsolvable and have many applications as well. To apply these nonlinear transformations, the integrands should satisfy linear differential equations with coefficients having asymptotic power series in the sense of Poincaré, which in turn should satisfy certain limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied. Differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.
Duality between QCD perturbative series and power corrections
NASA Astrophysics Data System (ADS)
Narison, S.; Zakharov, V. I.
2009-08-01
We elaborate on the relation between perturbative and power-like corrections to short-distance sensitive QCD observables. We confront theoretical expectations with explicit perturbative calculations existing in the literature. As expected, the quadratic correction is dual to a long perturbative series, and one should use one of them but not both. However, this might be true only for very long perturbative series, with the number of terms needed in most cases exceeding the number of terms available. What had not been foreseen is that the quartic corrections might also be dual to the perturbative series. If confirmed, this would imply a crucial modification of the dogma. We confront this quadratic correction against existing phenomenology (QCD (spectral) sum rules scales, determinations of light quark masses and of αs from τ-decay). We find no contradiction and (to some extent) better agreement with the data and with recent lattice calculations.
Special solutions to Chazy equation
NASA Astrophysics Data System (ADS)
Varin, V. P.
2017-02-01
We consider the classical Chazy equation, which is known to be integrable in hypergeometric functions. But this solution has remained purely existential and was never used numerically. We give explicit formulas for hypergeometric solutions in terms of initial data. A special solution was found in the upper half plane H with the same tessellation of H as that of the modular group. This allowed us to derive some new identities for the Eisenstein series. We constructed a special solution in the unit disk and gave an explicit description of singularities on its natural boundary. A global solution to Chazy equation in elliptic and theta functions was found that allows parametrization of an arbitrary solution to Chazy equation. The results have applications to analytic number theory.
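The classical Chazy equation referred to above is y''' = 2*y*y'' - 3*(y')^2. While the paper's interest is in exact solutions, the equation is easy to integrate numerically over a short interval (solutions generally blow up at a movable natural boundary, so the interval must be kept small); a sketch with illustrative initial data:

```python
from scipy.integrate import solve_ivp

def chazy(t, u):
    """State u = (y, y', y''); Chazy equation y''' = 2*y*y'' - 3*(y')^2."""
    y, yp, ypp = u
    return [yp, ypp, 2.0*y*ypp - 3.0*yp**2]

sol = solve_ivp(chazy, (0.0, 0.9), [1.0, 0.5, 0.25], rtol=1e-10, dense_output=True)
print(sol.y[0, -1])   # y at t = 0.9
```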
Dual little strings and their partition functions
NASA Astrophysics Data System (ADS)
Bastian, Brice; Hohenegger, Stefan; Iqbal, Amer; Rey, Soo-Jong
2018-05-01
We study the topological string partition function of a class of toric, double elliptically fibered Calabi-Yau threefolds X_{N,M} at a generic point in the Kähler moduli space. These manifolds engineer little string theories in five dimensions or lower and are dual to stacks of M5-branes probing a transverse orbifold singularity. Using the refined topological vertex formalism, we explicitly calculate a generic building block which allows us to compute the topological string partition function of X_{N,M} as a series expansion in different Kähler parameters. Using this result, we give further explicit proof for a duality found previously in the literature, which relates X_{N,M} ~ X_{N',M'} for NM = N'M' and gcd(N,M) = gcd(N',M').
NASA Astrophysics Data System (ADS)
Bonini, Alfredo; Fioravanti, Davide; Piscaglia, Simone; Rossi, Marco
2018-06-01
We disentangle the contribution of scalars to the OPE series of null hexagonal Wilson loops/MHV gluon scattering amplitudes in multicolour N = 4 SYM. Specifically, we develop a systematic computation of the SU(4) matrix part of the Wilson loop by means of Young tableaux (with several examples). Then, we use a peculiar factorisation property (when a group of rapidities becomes large) to deduce an explicit polar form. Furthermore, we emphasise the advantages of expanding the logarithm of the Wilson loop in terms of 'connected functions' as we apply this procedure to find an explicit strong coupling expansion (definitively proving that the leading order can prevail over the classical AdS5 string contribution).
Scattering on two Aharonov-Bohm vortices
NASA Astrophysics Data System (ADS)
Bogomolny, E.
2016-12-01
The problem of two Aharonov-Bohm (AB) vortices for the Helmholtz equation is examined in detail. It is demonstrated that the method proposed by Myers (1963 J. Math. Phys. 6 1839) for slit diffraction can be generalised to obtain an explicit solution for AB vortices. Due to the singular nature of the AB interaction, the Green function and scattering amplitude for two AB vortices obey a series of partial differential equations. Coefficients entering these equations fulfil ordinary non-linear differential equations whose solutions can be obtained by solving the Painlevé III equation. The asymptotics of the necessary functions for very large and very small vortex separations are calculated explicitly. Taken together, this means that the problem of two AB vortices is exactly solvable.
Implicit and Explicit Memory for Affective Passages in Temporal Lobectomy Patients
ERIC Educational Resources Information Center
Burton, Leslie A.; Rabin, Laura; Vardy, Susan Bernstein; Frohlich, Jonathan; Porter, Gwinne Wyatt; Dimitri, Diana; Cofer, Lucas; Labar, Douglas
2008-01-01
Eighteen temporal lobectomy patients (9 left, LTL; 9 right, RTL) were administered four verbal tasks, an Affective Implicit Task, a Neutral Implicit Task, an Affective Explicit Task, and a Neutral Explicit Task. For the Affective and Neutral Implicit Tasks, participants were timed while reading aloud passages with affective or neutral content,…
Implicit timing activates the left inferior parietal cortex.
Wiener, Martin; Turkeltaub, Peter E; Coslett, H Branch
2010-11-01
Coull and Nobre (2008) suggested that tasks that employ temporal cues might be divided on the basis of whether these cues are explicitly or implicitly processed. Furthermore, they suggested that implicit timing preferentially engages the left cerebral hemisphere. We tested this hypothesis by conducting a quantitative meta-analysis of eleven neuroimaging studies of implicit timing using the activation-likelihood estimation (ALE) algorithm (Turkeltaub, Eden, Jones, & Zeffiro, 2002). Our analysis revealed a single but robust cluster of activation-likelihood in the left inferior parietal cortex (supramarginal gyrus). This result is in accord with the hypothesis that the left hemisphere subserves implicit timing mechanisms. Furthermore, in conjunction with a previously reported meta-analysis of explicit timing tasks, our data support the claim that implicit and explicit timing are supported by at least partially distinct neural structures. Copyright © 2010 Elsevier Ltd. All rights reserved.
The time course of explicit and implicit categorization.
Smith, J David; Zakrzewski, Alexandria C; Herberger, Eric R; Boomer, Joseph; Roeder, Jessica L; Ashby, F Gregory; Church, Barbara A
2015-10-01
Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization.
Finite-time singularities in the dynamics of hyperinflation in an economy.
Szybisz, Martín A; Szybisz, Leszek
2009-08-01
The dynamics of hyperinflation episodes is studied by applying a theoretical approach based on collective "adaptive inflation expectations" with a positive nonlinear feedback proposed in the literature. In such a description it is assumed that the growth rate of the logarithmic price, r(t), changes with a velocity obeying a power law, which leads to a finite-time singularity at a critical time t_c. By revising that model we found that, indeed, there are two types of singular solutions for the logarithmic price, p(t). One is given by the already reported form p(t) ~ (t_c - t)^(-alpha) (with alpha > 0) and the other exhibits a logarithmic divergence, p(t) ~ ln[1/(t_c - t)]. The singularity is a signature of an economic crash. In the present work we express p(t) explicitly in terms of the parameters introduced throughout the formulation, avoiding the use of any combination of them defined in the original paper. This procedure allows us to examine simultaneously the time series of r(t) and p(t), performing a linked error analysis of the determined parameters. For the first time this approach is applied to analyzing the very extreme historical hyperinflations that occurred in Greece (1941-1944) and Yugoslavia (1991-1994). The case of Greece is compatible with a logarithmic singularity. The study is completed with an analysis of the hyperinflation spiral currently experienced in Zimbabwe. According to our results, an economic crash in this country is predicted for the present period. The robustness of the results to changes of the initial time of the series and the differences with a linear feedback are discussed.
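The two singular forms can be fit directly to a log-price series with standard least squares, and the parameter covariance gives the kind of linked error analysis the abstract mentions. A sketch on synthetic data (the generated series and starting values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, A, B, tc, alpha):      # p(t) = A + B*(tc - t)^(-alpha)
    return A + B * (tc - t)**(-alpha)

def log_div(t, A, B, tc):               # p(t) = A + B*ln(1/(tc - t))
    return A + B * np.log(1.0 / (tc - t))

t = np.linspace(0.0, 0.9, 50)
p_obs = log_div(t, 1.0, 2.0, 1.0) + 0.01*np.random.default_rng(1).normal(size=t.size)

popt, pcov = curve_fit(log_div, t, p_obs, p0=[0.5, 1.5, 1.05],
                       bounds=([-10, 0, 0.91], [10, 10, 2.0]))  # keep t_c beyond the data
perr = np.sqrt(np.diag(pcov))           # linked uncertainties of A, B, t_c
```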
Estimating urban vegetation fraction across 25 cities in pan-Pacific using Landsat time series data
NASA Astrophysics Data System (ADS)
Lu, Yuhao; Coops, Nicholas C.; Hermosilla, Txomin
2017-04-01
Urbanization globally is consistently reshaping the natural landscape to accommodate the growing human population. Urban vegetation plays a key role in moderating environmental impacts caused by urbanization and is critically important for local economic, social and cultural development. The differing patterns of human population growth and varying urban structures and development stages result in highly varied spatial and temporal vegetation patterns, particularly in the pan-Pacific region, which has some of the fastest urbanization rates globally. Yet spatially explicit temporal information on the amount and change of urban vegetation is rarely documented, particularly in less developed nations. Remote sensing offers an exceptional data source and a unique perspective to map urban vegetation and change due to its consistency and ubiquitous nature. In this research, we assess the vegetation fractions of 25 cities across 12 pan-Pacific countries using annual gap-free Landsat surface reflectance products acquired from 1984 to 2012, using sub-pixel spectral unmixing approaches. Vegetation change trends were then analyzed using Mann-Kendall statistics and Theil-Sen slope estimators. Unmixing results successfully mapped urban vegetation for pixels located in urban parks, forested mountainous regions, as well as agricultural land (correlation coefficient ranging from 0.66 to 0.77). The greatest vegetation loss from 1984 to 2012 was found in Shanghai, Tianjin, and Dalian in China. In contrast, cities including Vancouver (Canada) and Seattle (USA) showed stable vegetation trends through time. Using temporal trend analysis, our results suggest that it is possible to reduce noise and outliers caused by phenological changes, particularly in cropland, using dense Landsat time series approaches. We conclude that simple yet effective approaches to unmixing Landsat time series data for assessing spatial and temporal changes of urban vegetation at regional scales can provide critical information for urban planners and anthropogenic studies globally.
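The trend statistics used here are available off the shelf; the Mann-Kendall trend test is equivalent to testing Kendall's tau of the series against time. A sketch on a synthetic per-city vegetation-fraction series (the numbers are illustrative):

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

years = np.arange(1984, 2013)
frac = 0.45 - 0.004*(years - 1984) + 0.01*np.random.default_rng(2).normal(size=years.size)

slope, intercept, lo, hi = theilslopes(frac, years)   # robust Theil-Sen slope + 95% CI
tau, p_value = kendalltau(years, frac)                # Mann-Kendall-style trend test
print(f"trend {slope:.4f} per year (95% CI {lo:.4f} to {hi:.4f}), p = {p_value:.3g}")
```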
Change Mechanisms of Schema-Centered Group Psychotherapy with Personality Disorder Patients
Tschacher, Wolfgang; Zorn, Peter; Ramseyer, Fabian
2012-01-01
Background: This study addressed the temporal properties of personality disorders and their treatment by schema-centered group psychotherapy. It investigated the change mechanisms of psychotherapy using a novel method by which psychotherapy can be modeled explicitly in the temporal domain. Methodology and Findings: 69 patients were assigned to a specific schema-centered behavioral group psychotherapy, 26 to social skills training as a control condition. The largest diagnostic subgroups were narcissistic and borderline personality disorder. Both treatments offered 30 group sessions of 100 min duration each, at a frequency of two sessions per week. Therapy process was described by components resulting from principal component analysis of patients' session-reports that were obtained after each session. These patient-assessed components were Clarification, Bond, Rejection, and Emotional Activation. The statistical approach focused on time-lagged associations of components using time-series panel analysis. This method provided a detailed quantitative representation of therapy process. It was found that Clarification played a core role in schema-centered psychotherapy, reducing rejection and regulating the emotion of patients. This was also a change mechanism linked to therapy outcome. Conclusions/Significance: The introduced process-oriented methodology allowed us to highlight the mechanisms by which psychotherapeutic treatment became effective. Additionally, process models depicted the actual patterns that differentiated specific diagnostic subgroups. Time-series analysis explores Granger causality, a non-experimental approximation of causality based on temporal sequences. This methodology, resting upon naturalistic data, can explicate mechanisms of action in psychotherapy research and illustrate the temporal patterns underlying personality disorders. PMID:22745811
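Time-series panel analysis of time-lagged associations is closely related to fitting lagged regressions across the session series. The authors' specific implementation is not reproduced here; as a generic stand-in, a vector autoregression of two session-report components (toy data) shows how a lag-1 cross-effect such as "Clarification reduces later Rejection" would surface:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)
n = 30                                   # one observation per session
clar, rej = np.zeros(n), np.zeros(n)
for t in range(1, n):                    # toy coupling: clarification damps rejection
    clar[t] = 0.5*clar[t-1] + rng.normal(scale=0.5)
    rej[t] = -0.4*clar[t-1] + 0.3*rej[t-1] + rng.normal(scale=0.5)

res = VAR(pd.DataFrame({"Clarification": clar, "Rejection": rej})).fit(maxlags=1)
print(res.params)                        # lag-1 (Granger-style) cross-associations
```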
Martin, Sherry L; Hayes, Daniel B; Kendall, Anthony D; Hyndman, David W
2017-02-01
Numerous studies have linked land use/land cover (LULC) to aquatic ecosystem responses; however, only a few have included the dynamics of changing LULC in their analysis. In this study, we explicitly recognize changing LULC by linking mechanistic groundwater flow and travel time models to a historical time series of LULC, creating a land-use legacy map. We then illustrate the utility of legacy maps to explore relationships between dynamic LULC and lake water chemistry. We tested two main concepts about mechanisms linking LULC and lake water chemistry: groundwater pathways are an important mechanism driving legacy effects; and LULC over multiple spatial scales is more closely related to lake chemistry than LULC over a single spatial scale. We applied statistical models to twelve water chemistry variables, ranging from nutrients to relatively conservative ions, to better understand the roles of biogeochemical reactivity and solubility in connections between LULC and aquatic ecosystem response. Our study illustrates how different areas can have long groundwater pathways that represent different LULC than what can be seen on the landscape today. These groundwater pathways delay the arrival of nutrients and other water quality constituents, thus creating a legacy of historic land uses that eventually reaches surface water. We find that: 1) several water chemistry variables are best fit by legacy LULC while others have a stronger link to current LULC, and 2) single spatial scales of LULC analysis performed worse for most variables. Our novel combination of temporal and spatial scales was the best overall model fit for most variables, including SRP, where this model explained 54% of the variation. We show that it is important to explicitly account for temporal and spatial context when linking LULC to ecosystem response. Copyright © 2016. Published by Elsevier B.V.
Norman, Elisabeth; Price, Mark C.
2012-01-01
In the current paper, we first evaluate the suitability of traditional serial reaction time (SRT) and artificial grammar learning (AGL) experiments for measuring implicit learning of social signals. We then report the results of a novel sequence learning task which combines aspects of the SRT and AGL paradigms to meet our suggested criteria for how implicit learning experiments can be adapted to increase their relevance to situations of social intuition. The sequences followed standard finite-state grammars. Sequence learning and consciousness of acquired knowledge were compared between 2 groups of 24 participants viewing either sequences of individually presented letters or sequences of body-posture pictures, which were described as series of yoga movements. Participants in both conditions showed above-chance classification accuracy, indicating that sequence learning had occurred in both stimulus conditions. This shows that sequence learning can still be found when learning procedures reflect the characteristics of social intuition. Rule awareness was measured using trial-by-trial evaluation of decision strategy (Dienes & Scott, 2005; Scott & Dienes, 2008). For letters, sequence classification was best on trials where participants reported responding on the basis of explicit rules or memory, indicating some explicit learning in this condition. For body-posture, classification was not above chance on these types of trial, but instead showed a trend to be best on those trials where participants reported that their responses were based on intuition, familiarity, or random choice, suggesting that learning was more implicit. Results therefore indicate that the use of traditional stimuli in research on sequence learning might underestimate the extent to which learning is implicit in domains such as social learning, contributing to ongoing debate about levels of conscious awareness in implicit learning. PMID:22679467
Landsat phenological metrics and their relation to aboveground carbon in the Brazilian Savanna.
Schwieder, M; Leitão, P J; Pinto, J R R; Teixeira, A M C; Pedroni, F; Sanchez, M; Bustamante, M M; Hostert, P
2018-05-15
The quantification and spatially explicit mapping of carbon stocks in terrestrial ecosystems is important to better understand the global carbon cycle and to monitor and report change processes, especially in the context of international policy mechanisms such as REDD+ or the implementation of Nationally Determined Contributions (NDCs) and the UN Sustainable Development Goals (SDGs). Accurate carbon quantifications are still lacking especially in heterogeneous ecosystems such as Savannas, where highly variable vegetation densities occur and strong seasonality hinders consistent data acquisition. In order to account for these challenges we analyzed the potential of land surface phenological metrics derived from gap-filled 8-day Landsat time series for carbon mapping. We selected three areas located in different subregions in the central Brazil region, which is a prominent example of a Savanna with significant carbon stocks that has been undergoing extensive land cover conversions. Here, phenological metrics from the season 2014/2015 were combined with aboveground carbon field samples of cerrado sensu stricto vegetation using Random Forest regression models to map the regional carbon distribution and to analyze the relation between phenological metrics and aboveground carbon. The gap-filling approach enabled accurate approximation of the original Landsat ETM+ and OLI EVI values and the subsequent derivation of annual phenological metrics. Random Forest model performances varied between the three study areas, with RMSE values of 1.64 t/ha (mean relative RMSE 30%), 2.35 t/ha (46%) and 2.18 t/ha (45%). Comparable relationships between remote sensing based land surface phenological metrics and aboveground carbon were observed in all study areas. Aboveground carbon distributions could be mapped and revealed comprehensible spatial patterns. Phenological metrics were derived from 8-day Landsat time series with a spatial resolution that is sufficient to capture gradual changes in carbon stocks of heterogeneous Savanna ecosystems. These metrics revealed the relationship between aboveground carbon and the phenology of the observed vegetation. Our results suggest that metrics relating to the seasonal minimum and maximum values were the most influential variables and bear potential to improve spatially explicit mapping approaches in heterogeneous ecosystems, where both spatial and temporal resolutions are critical.
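The modelling step, regressing field-sampled aboveground carbon on per-pixel phenological metrics, maps onto a standard Random Forest workflow. A sketch with synthetic stand-ins for the metrics (the data and relationship are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X = rng.random((120, 4))                 # per-plot phenological metrics (synthetic)
y = 5.0 + 10.0*X[:, 1] + rng.normal(scale=1.5, size=120)   # aboveground carbon, t/ha

rf = RandomForestRegressor(n_estimators=500, random_state=0)
pred = cross_val_predict(rf, X, y, cv=10)
rmse = float(np.sqrt(np.mean((pred - y)**2)))
print(f"RMSE = {rmse:.2f} t/ha, relative RMSE = {100*rmse/y.mean():.0f}%")
```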
Ewolds, Harald E.; Bröker, Laura; de Oliveira, Rita F.; Raab, Markus; Künzell, Stefan
2017-01-01
The goal of this study was to investigate the effect of predictability on dual-task performance in a continuous tracking task. Participants practiced either informed (explicit group) or uninformed (implicit group) about a repeated segment in the curves they had to track. In Experiment 1 participants practiced the tracking task only; dual-task performance was assessed afterward by combining the tracking task with an auditory reaction time task. Results showed both groups learned equally well, and tracking performance on a predictable segment in the dual-task condition was better than on random segments. However, reaction times did not benefit from a predictable tracking segment. To investigate the effect of learning under dual-task conditions, participants in Experiment 2 practiced the tracking task while simultaneously performing the auditory reaction time task. No learning of the repeated segment could be demonstrated for either group during the training blocks, in contrast to the test block and retention test, where participants performed better on the repeated segment in both dual-task and single-task conditions. Only the explicit group improved from test block to retention test. As in Experiment 1, reaction times while tracking a predictable segment were no better than reaction times while tracking a random segment. We concluded that predictability has a positive effect only on the predictable task itself, possibly because of a task-shielding mechanism. For dual-task training there seems to be an initial negative effect of explicit instructions, possibly because of fatigue, but the advantage of explicit instructions was demonstrated in a retention test. This might be due to the explicit memory system informing or aiding the implicit memory system. PMID:29312083
High-Order Space-Time Methods for Conservation Laws
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2013-01-01
Current high-order methods such as discontinuous Galerkin and flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step size is typically proportional to 1/p^2.) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.
NASA Astrophysics Data System (ADS)
Czuba, Jonathan A.; Foufoula-Georgiou, Efi; Gran, Karen B.; Belmont, Patrick; Wilcock, Peter R.
2017-05-01
Understanding how sediment moves along source to sink pathways through watersheds—from hillslopes to channels and in and out of floodplains—is a fundamental problem in geomorphology. We contribute to advancing this understanding by modeling the transport and in-channel storage dynamics of bed material sediment on a river network over a 600 year time period. Specifically, we present spatiotemporal changes in bed sediment thickness along an entire river network to elucidate how river networks organize and process sediment supply. We apply our model to sand transport in the agricultural Greater Blue Earth River Basin in Minnesota. By casting the arrival of sediment to links of the network as a Poisson process, we derive analytically (under supply-limited conditions) the time-averaged probability distribution function of bed sediment thickness for each link of the river network for any spatial distribution of inputs. Under transport-limited conditions, the analytical assumptions of the Poisson arrival process are violated (due to in-channel storage dynamics), and we find large fluctuations and periodicity in the time series of bed sediment thickness. The time series of bed sediment thickness is the result of dynamics on a network in propagating, altering, and amalgamating sediment inputs in sometimes unexpected ways. One key insight gleaned from the model is that there can be a small fraction of reaches with relatively low transport capacity within a nonequilibrium river network acting as "bottlenecks" that control sediment delivery to downstream reaches, whereby fluctuations in bed elevation can dissociate from signals in sediment supply.
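The supply-limited case can be illustrated with a toy simulation: sediment parcels arrive to a link as a Poisson process and reside for a fixed travel time, so the stationary number of parcels in the link (a proxy for bed sediment thickness) is Poisson distributed. The rates below are assumptions for illustration, not values from the Greater Blue Earth model.

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 2.0          # parcel arrivals per year (assumed)
travel_time = 0.5   # years a parcel spends in the link (assumed constant)
T = 5000.0          # simulation horizon

arrivals = np.cumsum(rng.exponential(1 / rate, int(rate * T * 1.2)))
arrivals = arrivals[arrivals < T]
departures = arrivals + travel_time

# Number of parcels in the link, sampled on a regular grid.
t_grid = np.linspace(travel_time, T, 100_000)
in_link = (np.searchsorted(arrivals, t_grid)
           - np.searchsorted(departures, t_grid))

# For Poisson arrivals the stationary count is Poisson(rate * travel_time),
# so mean and variance should both approach rate * travel_time.
print("empirical mean:", in_link.mean(), " theory:", rate * travel_time)
print("empirical var: ", in_link.var(),  " theory:", rate * travel_time)
```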
ERIC Educational Resources Information Center
Howe, Eric Michael; Rudge, David Wyss
2005-01-01
This paper provides an argument in favor of a specific pedagogical method of using the history of science to help students develop more informed views about nature of science (NOS) issues. The paper describes a series of lesson plans devoted to encouraging students to engage, "unbeknownst to them", in similar reasoning that led…
ERIC Educational Resources Information Center
Keeling, Charles D.
This booklet is one of a series intended to provide explicit instructions for the collection of oceanographic data and samples at sea. The methods and procedures described have been used by the Scripps Institution of Oceanography and found reliable and up-to-date. Instructions are given for taking air samples on board ship to determine the…
ERIC Educational Resources Information Center
Temes, Peter S., Ed.
All the essays in this collection explicitly or implicitly discuss the ethics of leadership. Paul Johnson's "Plato's Republic as Leadership Text" is an essay on Plato and Nietzsche that considers two fundamental issues: the use of force and persuasion and the tension between the actions that lead to a position of leadership and the actions after…
ERIC Educational Resources Information Center
Pap, Leo
The bibliography is a nearly exhaustive listing of books and articles in American libraries which make some explicit reference to the Portuguese in the United States. The 800 entries, citing sources published during the 20th century, are mainly written in English or Portuguese, although a few are in German, French, and Spanish. Coverage extends…
Efficient nonparametric n -body force fields from machine learning
NASA Astrophysics Data System (ADS)
Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro
2018-05-01
We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.
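A minimal sketch of the idea behind the lowest-order kernels of such a series: compare two atomic environments through squared-exponential similarities of their pairwise distances only. The exact kernel forms in the paper differ; this is only an illustration of a 2-body comparison.

```python
import numpy as np

def two_body_kernel(env_a, env_b, sigma=0.5):
    """Compare two local atomic environments (arrays of neighbour
    positions relative to the central atom) through central-atom/
    neighbour distances only -- a 2-body comparison in the spirit of
    n-body GP kernels (the paper's exact expressions differ)."""
    ra = np.linalg.norm(env_a, axis=1)   # distances in environment a
    rb = np.linalg.norm(env_b, axis=1)   # distances in environment b
    diff = ra[:, None] - rb[None, :]
    return np.exp(-diff**2 / (2 * sigma**2)).sum()

rng = np.random.default_rng(2)
env1 = rng.normal(size=(8, 3))    # 8 neighbours, toy configuration
env2 = rng.normal(size=(10, 3))   # 10 neighbours, toy configuration
print(two_body_kernel(env1, env2))
```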
Verburgh, L; Scherder, E J A; van Lange, P A M; Oosterlaan, J
2016-09-01
In sports, fast and accurate execution of movements is required. It has been shown that implicitly learned movements might be less vulnerable than explicitly learned movements to the stressful and fast-changing circumstances that exist at the elite sports level. The present study provides insight into explicit and implicit motor learning in youth soccer players with different expertise levels. Twenty-seven youth elite soccer players and 25 non-elite soccer players (aged 10-12) performed a serial reaction time task (SRTT). In the SRTT, one of the sequences had to be learned explicitly, the other was learned implicitly. No main effect of group was found for implicit and explicit learning on mean reaction time (MRT) and accuracy. However, for MRT, an interaction was found between learning condition, learning phase, and group. Analyses showed no group effects for the explicit learning condition, but youth elite soccer players showed better learning in the implicit learning condition. In particular, during implicit motor learning youth elite soccer players showed faster MRTs in the early learning phase and reached asymptotic performance in terms of MRT earlier. Present findings may be important for sports because children with superior implicit learning abilities in early learning phases may be able to learn more (durable) motor skills in a shorter time period as compared to other children.
Forecasting Lightning Threat Using WRF Proxy Fields
NASA Technical Reports Server (NTRS)
McCaul, E. W., Jr.
2010-01-01
Objectives: Given that high-resolution WRF forecasts can capture the character of convective outbreaks, we seek to: 1. Create WRF forecasts of LTG threat (1-24 h), based on 2 proxy fields from explicitly simulated convection: graupel flux near -15 C (captures LTG time variability) and vertically integrated ice (captures LTG threat area). 2. Calibrate each threat to yield accurate quantitative peak flash rate densities. 3. Also evaluate threats for areal coverage, time variability. 4. Blend threats to optimize results. 5. Examine sensitivity to model mesh, microphysics. Methods: 1. Use high-resolution 2-km WRF simulations to prognose convection for a diverse series of selected case studies. 2. Evaluate graupel fluxes; vertically integrated ice (VII). 3. Calibrate WRF LTG proxies using peak total LTG flash rate densities from NALMA; relationships look linear, with regression line passing through origin. 4. Truncate low threat values to make threat areal coverage match NALMA flash extent density obs. 5. Blend proxies to achieve optimal performance. 6. Study CAPS 4-km ensembles to evaluate sensitivities.
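The calibrate-truncate-blend step can be sketched as follows; the arrays stand in for WRF proxies and NALMA observations, and the truncation threshold and blend weights are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(8)
# Stand-ins: two WRF proxy fields and observed peak flash-rate density.
graupel_flux = rng.gamma(2.0, 1.0, 500)
vii = 0.8 * graupel_flux + rng.normal(0, 0.3, 500)
obs = 2.5 * graupel_flux + rng.normal(0, 0.5, 500)

# Calibrate each proxy with a regression line through the origin,
# mirroring the reported linear fit passing through the origin.
cal = lambda x, y: (x @ y) / (x @ x)
f1, f2 = cal(graupel_flux, obs), cal(vii, obs)
threat1, threat2 = f1 * graupel_flux, f2 * vii

# Blend the calibrated threats and truncate low values so the areal
# coverage can be matched to observations (weights/threshold assumed).
blend = 0.5 * threat1 + 0.5 * threat2
blend[blend < 0.5] = 0.0
print("calibration slopes:", f1, f2)
```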
Discontinuous Galerkin algorithms for fully kinetic plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juno, J.; Hakim, A.; TenBarge, J.
Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.
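The third-order strong-stability-preserving Runge-Kutta update is standard (Shu-Osher form) and compact enough to sketch; the advection operator below is a toy stand-in for the DG spatial discretization.

```python
import numpy as np

def ssp_rk3_step(L, u, dt):
    """One step of the third-order strong-stability-preserving
    Runge-Kutta method (Shu-Osher form) for du/dt = L(u)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Toy usage: linear advection with a periodic first-order upwind
# difference standing in for the DG operator.
N, c = 200, 1.0
dx = 1.0 / N
x = np.arange(N) * dx
u = np.exp(-200 * (x - 0.5) ** 2)
L = lambda v: -c * (v - np.roll(v, 1)) / dx
for _ in range(100):
    u = ssp_rk3_step(L, u, dt=0.4 * dx / c)   # CFL-limited step
print(u.max())
```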
The Time Course of Explicit and Implicit Categorization
Zakrzewski, Alexandria C.; Herberger, Eric; Boomer, Joseph; Roeder, Jessica; Ashby, F. Gregory; Church, Barbara A.
2015-01-01
Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization. PMID:26025556
Polus, Stephanie; Pieper, Dawid; Burns, Jacob; Fretheim, Atle; Ramsay, Craig; Higgins, Julian P T; Mathes, Tim; Pfadenhauer, Lisa M; Rehfuess, Eva A
2017-11-01
The aim of the study was to examine the application, design, and analysis characteristics of controlled before-after (CBA) and interrupted time series (ITS) studies and their use in Cochrane reviews. We searched the Cochrane library for reviews including these study designs from May 2012 to March 2015 and purposively selected, where available, two reviews each across 10 prespecified intervention types. We randomly selected two CBA and two ITS studies from each review. Two researchers independently extracted information from the studies and the respective reviews. Sixty-nine reviews considered CBA and ITS studies for inclusion. We analyzed 21 CBA and 16 ITS studies from 11 and 8 reviews, respectively. Cochrane reviews inconsistently defined and labeled CBA and ITS studies. Many studies did not meet the Cochrane definition or the minimum criteria provided by Cochrane Effective Practice and Organisation of Care. The studies present a heterogeneous set of study features and applied a large variety of analyses. While CBA and ITS studies represent important study designs to evaluate the effects of interventions, especially on a population or organizational level, unclear study design features challenge unequivocal classification and appropriate use. We discuss options for more specific definitions and explicit criteria for CBA and ITS studies.
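For reference, the usual single-group ITS analysis is a segmented regression with level-change and slope-change terms at the interruption; the sketch below uses synthetic monthly data and is not necessarily the analysis applied by the reviewed studies.

```python
import numpy as np

# Hypothetical monthly outcome series with an intervention at month 24.
rng = np.random.default_rng(3)
t = np.arange(48)
post = (t >= 24).astype(float)
y = 10 + 0.1 * t + 3.0 * post + 0.2 * post * (t - 24) + rng.normal(0, 1, 48)

# Segmented regression: intercept, baseline trend, level change at the
# interruption, and change in slope after the interruption.
X = np.column_stack([np.ones_like(t), t, post, post * (t - 24)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("baseline slope %.2f, level change %.2f, slope change %.2f"
      % (beta[1], beta[2], beta[3]))
```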
Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
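The generative side of such a model can be sketched compactly: draw a segment length (the HSMM duration), sample the segment from a GP with an RBF kernel, and concatenate. The inference step (forward filtering-backward sampling) is omitted here, and the kernel parameters are illustrative.

```python
import numpy as np

def rbf(ts, ell=0.3, amp=1.0):
    """Squared-exponential (RBF) covariance over time points ts."""
    d = ts[:, None] - ts[None, :]
    return amp * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(4)
series = []
for _ in range(5):                      # five "unit actions"
    n = int(rng.integers(30, 60))       # segment length (HSMM duration)
    ts = np.linspace(0, 1, n)
    K = rbf(ts) + 1e-8 * np.eye(n)      # jitter for numerical stability
    seg = rng.multivariate_normal(np.zeros(n), K)
    series.append(seg)
motion = np.concatenate(series)         # continuous series of segments
print(motion.shape)
```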
NASA Astrophysics Data System (ADS)
Alakent, Burak; Camurdan, Mehmet C.; Doruker, Pemra
2005-10-01
Time series analysis tools are employed on the principal modes obtained from the Cα trajectories from two independent molecular-dynamics simulations of α-amylase inhibitor (tendamistat). Fluctuations inside an energy minimum (intraminimum motions), transitions between minima (interminimum motions), and relaxations in different hierarchical energy levels are investigated and compared with those encountered in vacuum by using different sampling window sizes and intervals. The low-frequency low-indexed mode relationship, established in vacuum, is also encountered in water, which shows the reliability of the important dynamics information offered by principal components analysis in water. It has been shown that examining a short data collection period (100 ps) may result in a high population of overdamped modes, while some of the low-frequency oscillations (<10 cm⁻¹) can be captured in water by using a longer data collection period (1200 ps). Simultaneous analysis of short and long sampling window sizes gives the following picture of the effect of water on protein dynamics. Water makes the protein lose its memory: future conformations are less dependent on previous conformations due to the lowering of energy barriers in hierarchical levels of the energy landscape. In short-time dynamics (<10 ps), damping factors extracted from time series model parameters are lowered. For tendamistat, the friction coefficient in the Langevin equation is found to be around 40-60 cm⁻¹ for the low-indexed modes, compatible with the literature. That water increases friction while also having a lubricating effect seems contradictory at first sight. However, this comes about because water enhances the transitions between minima and thereby suppresses the sustained oscillations observed in vacuum. Some of the frequencies lower than 10 cm⁻¹ are found to be overdamped, while those higher than 20 cm⁻¹ are slightly increased. As for the long-time dynamics in water, it is found that random-walk motion is maintained for approximately 200 ps (about five times that in vacuum) in the low-indexed modes, showing the lowering of energy barriers between the higher-level minima.
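How damping factors follow from fitted time series model parameters can be illustrated with an AR(2) model: the complex characteristic root r ≈ exp((-γ + iω)Δt) encodes a damping rate γ and frequency ω. A self-consistent toy example (the parameter values are illustrative, not tendamistat's):

```python
import numpy as np

rng = np.random.default_rng(5)
dt, gamma, omega = 0.01, 2.0, 40.0              # toy values, ps units
r = np.exp((-gamma + 1j * omega) * dt)          # target characteristic root
a1, a2 = 2 * r.real, -abs(r) ** 2               # matching AR(2) coefficients

# Noise-driven damped oscillation, like a mode fluctuating in a minimum.
x = np.zeros(20000)
for n in range(2, x.size):
    x[n] = a1 * x[n - 1] + a2 * x[n - 2] + rng.normal()

# Refit AR(2) by least squares and read off damping and frequency.
X = np.column_stack([x[1:-1], x[:-2]])
b1, b2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]
root = np.roots([1.0, -b1, -b2])[0]
print("damping (1/ps):   ", -np.log(abs(root)) / dt)     # ~ gamma
print("frequency (rad/ps):", abs(np.angle(root)) / dt)   # ~ omega
```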
Analytic approximations to the modon dispersion relation. [in oceanography
NASA Technical Reports Server (NTRS)
Boyd, J. P.
1981-01-01
Three explicit analytic approximations are given to the modon dispersion relation developed by Flierl et al. (1980) to describe Gulf Stream rings and related phenomena in the oceans and atmosphere. The solutions are in the form of k(q), and are developed in the form of a power series in q for small q, an inverse power series in 1/q for large q, and a two-point Pade approximant. The low order Pade approximant is shown to yield a solution for the dispersion relation with a maximum relative error for the lowest branch of the function equal to one in 700 in the q interval zero to infinity.
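An ordinary (one-point) Padé approximant can be built from power-series coefficients by solving a small linear system, as sketched below; the two-point construction used in the paper additionally matches the inverse-power series at large q and is more involved.

```python
import numpy as np
from math import factorial

def pade(c, L, M):
    """[L/M] Pade approximant p/q from Taylor coefficients c[0..L+M],
    normalised so q_0 = 1, via the standard linear matching system."""
    c = np.asarray(c, float)
    # Denominator: sum_{j=1..M} q_j c[L+i-j] = -c[L+i], for i = 1..M.
    A = np.array([[c[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    q = np.concatenate([[1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])])
    # Numerator from the convolution p_k = sum_j q_j c[k-j], k = 0..L.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q

# Toy check against exp(x): the [2/2] Pade from its Taylor series.
c = [1 / factorial(k) for k in range(5)]
p, q = pade(c, 2, 2)
x = 0.5
print(np.polyval(p[::-1], x) / np.polyval(q[::-1], x), np.exp(x))
```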
Effective conductivity of a periodic dilute composite with perfect contact and its series expansion
NASA Astrophysics Data System (ADS)
Pukhtaievych, Roman
2018-06-01
We study the asymptotic behavior of the effective thermal conductivity of a periodic two-phase dilute composite obtained by introducing into an infinite homogeneous matrix a periodic set of inclusions of a different material, each of them of size proportional to a positive parameter ɛ. We assume perfect thermal contact at constituent interfaces, i.e., continuity of the normal component of the heat flux and of the temperature. For ɛ small, we prove that the effective conductivity can be represented as a convergent power series in ɛ and we determine the coefficients in terms of the solutions of explicit systems of integral equations.
NASA Astrophysics Data System (ADS)
Sarıaydın, Selin; Yıldırım, Ahmet
2010-05-01
In this paper, we studied the solitary wave solutions of the (2+1)-dimensional Boussinesq equation u_tt - u_xx - u_yy - (u^2)_xx - u_xxxx = 0 and the (3+1)-dimensional Kadomtsev-Petviashvili (KP) equation u_xt - 6u_x^2 + 6uu_xx - u_xxxx - u_yy - u_zz = 0. By using the homotopy perturbation method, an explicit numerical solution is calculated in the form of a convergent power series with easily computable components. To illustrate the application of this method, numerical results are derived by using the calculated components of the homotopy perturbation series. The numerical solutions are compared with the known analytical solutions. Results derived from our method are shown graphically.
NASA Technical Reports Server (NTRS)
Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Different from existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers new perspectives for development and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) through nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.
Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.
2016-12-01
The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
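The essence of an IMEX splitting can be shown with the first-order (Euler) member of the family: the stiff linear term is taken implicitly and the slow term explicitly, so the step is stable far beyond the explicit limit. A toy scalar example follows; ARK schemes generalize this idea to higher-order additive tableaux, and the coefficients here are illustrative.

```python
import numpy as np

# Toy stiff system: u' = N(u) + s*u, with nonstiff N and stiff linear s*u
# (the linear term plays the role of the fast acoustic modes).
s = -1.0e4                       # stiff coefficient (assumed)
N = lambda u: np.cos(u)          # slow nonlinear forcing (assumed)

def imex_euler_step(u, dt):
    """Explicit Euler on N, implicit Euler on the stiff linear term:
    u_new = u + dt*N(u) + dt*s*u_new  =>  solve for u_new."""
    return (u + dt * N(u)) / (1.0 - dt * s)

u, dt = 1.0, 1.0e-2              # dt far above the explicit limit ~ 2/|s|
for _ in range(100):
    u = imex_euler_step(u, dt)
print(u)                         # approaches the balance N(u) + s*u ~ 0
```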
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
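The core idea, augmenting the filter state with a time-correlated (e.g., Ornstein-Uhlenbeck) parameter so the ensemble forecast carries correlated noise, can be sketched with a toy scalar model; the dynamics, observation operator, and noise settings below are stand-ins, not the power network model.

```python
import numpy as np

rng = np.random.default_rng(7)
Ne, T = 100, 200                 # ensemble size, assimilation cycles
dt, tau = 0.1, 5.0               # step size and noise correlation time

# Augmented state per member: [x (dynamic state), p (input-power noise)].
ens = np.zeros((Ne, 2))
truth = np.array([0.0, 1.0])
H = np.array([[1.0, 0.0]])       # observe x only
R = 0.05 ** 2

def forecast(state):
    x, p = state
    p += dt / tau * (-p) + np.sqrt(2 * dt / tau) * 0.5 * rng.normal()  # OU
    x += dt * (-0.5 * x + p)     # toy dynamics driven by correlated input
    return np.array([x, p])

for _ in range(T):
    truth = forecast(truth)
    y = truth[0] + np.sqrt(R) * rng.normal()
    ens = np.array([forecast(m) for m in ens])
    # Stochastic EnKF update with perturbed observations.
    P = np.cov(ens.T)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    ens += (y + np.sqrt(R) * rng.normal(size=(Ne, 1)) - ens @ H.T) @ K.T

print("estimated input-power noise:", ens[:, 1].mean(), " truth:", truth[1])
```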
Berry, Tanya R; Rodgers, Wendy M; Divine, Alison; Hall, Craig
2018-06-19
Discrepancies between automatically activated associations (i.e., implicit evaluations) and explicit evaluations of motives (measured with a questionnaire) could lead to greater information processing to resolve discrepancies or self-regulatory failures that may affect behavior. This research examined the relationship of health and appearance exercise-related explicit-implicit evaluative discrepancies, the interaction between implicit and explicit evaluations, and the combined value of explicit and implicit evaluations (i.e., the summed scores) to dropout from a yearlong exercise program. Participants (N = 253) completed implicit health and appearance measures and explicit health and appearance motives at baseline, prior to starting the exercise program. The sum of implicit and explicit appearance measures was positively related to weeks in the program, and discrepancy between the implicit and explicit health measures was negatively related to length of time in the program. Implicit exercise evaluations and their relationships to oft-cited motives such as appearance and health may inform exercise dropout.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
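The adaptive explicit Heun scheme singled out above is easy to state: the Euler predictor doubles as an embedded first-order estimate, and each step is accepted or resized based on the predictor-corrector difference. A sketch on a toy nonlinear reservoir; the tolerance and step-control constants are illustrative, not the paper's settings.

```python
import numpy as np

def adaptive_heun(f, t, y, t_end, tol=1e-4, dt=1.0):
    """Explicit Heun with embedded error control: the Euler predictor
    serves as the lower-order estimate for step-size adaptation."""
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        y_euler = y + dt * k1                               # 1st order
        y_heun = y + 0.5 * dt * (k1 + f(t + dt, y_euler))   # 2nd order
        err = np.max(np.abs(y_heun - y_euler))
        if err <= tol:                                      # accept step
            t, y = t + dt, y_heun
        # Standard controller: grow/shrink dt toward the tolerance.
        dt *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# Toy conceptual bucket: dS/dt = P - k*S^2 (hypothetical store equation).
f = lambda t, S: 2.0 - 0.3 * S ** 2
print(adaptive_heun(f, 0.0, np.array([1.0]), 100.0))
```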
An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting their estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the compound probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
Momentum-Based Dynamics for Spacecraft with Chained Revolute Appendages
NASA Technical Reports Server (NTRS)
Queen, Steven; London, Ken; Gonzalez, Marcelo
2005-01-01
An efficient formulation is presented for a sub-class of multi-body dynamics problems that involve a six degree-of-freedom base body and a chain of N rigid linkages connected in series by single degree-of-freedom revolute joints. This general method is particularly well suited for simulations of spacecraft dynamics and control that include the modeling of an orbiting platform with or without internal degrees of freedom such as reaction wheels, dampers, and/or booms. In the present work, particular emphasis is placed on dynamic simulation of multi-linkage robotic manipulators. The differential equations of motion are explicitly given in terms of linear and angular momentum states, which can be evaluated recursively along a serial chain of linkages for an efficient real-time solution on par with the best of the O(N^3) methods.
NASA Astrophysics Data System (ADS)
Wijayarathne, D. B.; Gomezdelcampo, E.
2017-12-01
The existence of wet prairies is wholly dependent on the groundwater and surface water interaction. Any process that alters this interaction has a significant impact on the eco-hydrology of wet prairies. The Oak Openings Region (OOR) in Northwest Ohio supports globally rare wet prairie habitats and the precious few remaining have been drained by ditches, altering their natural flow and making them an unusually variable and artificial system. The Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model from the US Army Engineer Research and Development Center was used to assess the long-term impacts of land-use change on wet prairie restoration. This study is the first spatially explicit, continuous, long-term modeling approach for understanding the response of the shallow groundwater system of the OOR to human intervention, both positive and negative. The GSSHA model was calibrated using a 2-year weekly time series of water table elevations collected with an array of piezometers in the field. Basic statistical analysis indicates a good fit between observed and simulated water table elevations on a weekly level, though the model was run on an hourly time step and a pixel size of 10 m. Spatially-explicit results show that removal of a local ditch may not drastically change the amount of ponding in the area during spring storms, but large flooding over the entire area would occur if two other ditches are removed. This model is being used by The Nature Conservancy and Toledo Metroparks to develop different scenarios for prairie restoration that minimize its effect on local homeowners.
Earing Prediction in Cup Drawing using the BBC2008 Yield Criterion
NASA Astrophysics Data System (ADS)
Vrh, Marko; Halilovič, Miroslav; Starman, Bojan; Štok, Boris; Comsa, Dan-Sorin; Banabic, Dorel
2011-08-01
The paper deals with constitutive modelling of highly anisotropic sheet metals. It presents FEM-based earing predictions in cup drawing simulations of highly anisotropic aluminium alloys where more than four ears occur. For that purpose the BBC2008 yield criterion, a plane-stress yield criterion formulated in the form of a finite series, is used. The criterion thus defined can be expanded to retain more or fewer terms, depending on the amount of given experimental data. In order to use the model in sheet metal forming simulations we have implemented it in the general purpose finite element code ABAQUS/Explicit via a VUMAT subroutine, considering alternatively eight or sixteen parameters (8p and 16p versions). For the integration of the constitutive model the explicit NICE (Next Increment Corrects Error) integration scheme has been used. Due to the scheme's effectiveness, the CPU time consumption for a simulation is comparable to that of built-in constitutive models. Two aluminium alloys, namely AA5042-H2 and AA2090-T3, have been used for validation of the model. For both alloys the parameters of the BBC2008 model have been identified with a developed numerical procedure based on minimization of a cost function. For both materials, the predictions of the BBC2008 model prove to be in very good agreement with the experimental results. The flexibility and the accuracy of the model, together with the identification and integration procedures, guarantee the applicability of the BBC2008 yield criterion in industrial applications.
Bondy, Andrew S.
1982-01-01
Twelve preschool children participated in a study of the effects of explicit training on the imitation of modeled behavior. The responses trained involved a marble-dropping pattern that differed from the modeled pattern. Training consisted of physical prompts and verbal praise during a single session. No prompts or praise were used during test periods. After operant levels of the experimental responses were measured, training either preceded or was interposed within a series of exposures to modeled behavior that differed from the trained behavior. Children who were initially exposed to a modeling session immediately imitated, whereas those children who were initially trained immediately performed the appropriate response. Children initially trained on one pattern generally continued to exhibit that pattern even after many modeling sessions. Children who first viewed the modeled response and then were exposed to explicit training of a different response reversed their response pattern from the trained response to the modeled response within a few sessions. The results suggest that under certain conditions explicit training will exert greater control over responding than immediate modeling stimuli. PMID:16812260
Scott, Christina L; Cortez, Angelberto
2011-01-01
Research on sexual arousal and erotica has focused primarily on men and women's responses to erotic films and stories designed for a sex-specific audience. To reduce the confounds of relying on separate materials when evaluating sex differences in arousal, the present study designed suggestive and explicit erotic stories that were rated as being equally appealing to men and women. Participants were 212 undergraduate students who completed self-report measures of sexual self-esteem, sexual desire, and pre- and posttest measures of arousal. As hypothesized, women in the suggestive and explicit conditions reported a significant increase in sexual arousal; however, only men who read the explicit story demonstrated significant elevations in arousal. The creation of "equally appealing" erotic stories has challenged the existing research paradigm and has initiated the investigation of sexual arousal from a set of common materials designed for both sexes. The benefits of creating a series of equally appealing erotic materials extends beyond empirical research and may ultimately facilitate greater openness and communication between heterosexual couples.
NASA Astrophysics Data System (ADS)
Cheng, Shengfeng; Wen, Chengyuan; Egorov, Sergei
2015-03-01
Molecular dynamics simulations and self-consistent field theory calculations are employed to study the interactions between a nanoparticle and a polymer brush at various densities of chains grafted to a plane. Simulations with both implicit and explicit solvent are performed. In either case the nanoparticle is loaded into the brush at a constant velocity. Then a series of simulations is performed to compute the force exerted on the nanoparticle when it is fixed at various distances from the grafting plane. The potential of mean force is calculated and compared to the prediction based on a self-consistent field theory. Our simulations show that the explicit solvent leads to effects that are not captured in simulations with implicit solvent, indicating the importance of including explicit solvent in molecular simulations of such systems. Our results also demonstrate an interesting correlation between the force on the nanoparticle and the density profile of the brush. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.
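The step from mean forces at fixed separations to a potential of mean force is a one-line integration, PMF(z) = ∫_z^∞ F(z') dz' for a force measured along the approach coordinate; a sketch with a stand-in force profile (not the simulated data):

```python
import numpy as np

# Hypothetical mean force F(z) on the nanoparticle at fixed heights z
# above the grafting plane (repulsive as the brush is compressed).
z = np.linspace(2.0, 10.0, 40)            # separations, arbitrary units
F = 5.0 * np.exp(-(z - 2.0))              # stand-in for measured force

# Trapezoid-rule cumulative integral, referenced to the far field:
# PMF(z) = -int_{z0}^{z} F dz', then shifted so PMF -> 0 far away.
pmf = -np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(z))))
pmf -= pmf[-1]                            # zero the PMF far from the brush
print(pmf[:5])                            # largest (repulsive) near the wall
```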
A Generalized Wave Diagram for Moving Sources
NASA Astrophysics Data System (ADS)
Alt, Robert; Wiley, Sam
2004-12-01
Many introductory physics texts [1-5] accompany the discussion of the Doppler effect and the formation of shock waves with diagrams illustrating the effect of a source moving through an elastic medium. Typically these diagrams consist of a series of equally spaced dots, representing the location of the source at different times. These are surrounded by a series of successively smaller circles representing wave fronts (see Fig. 1). While such a diagram provides a clear illustration of the shock wave produced by a source moving at a speed greater than the wave speed, and also the resultant pattern when the source speed is less than the wave speed (the Doppler effect), the texts do not often show the details of the construction. As a result, the key connection between the relative distance traveled by the source and the distance traveled by the wave is not explicitly made. In this paper we describe an approach emphasizing this connection that we have found to be a useful classroom supplement to the usual text presentation. As shown in Fig. 2 and Fig. 3, the Doppler effect and the shock wave can be illustrated by diagrams generated by the construction that follows.
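The construction itself reduces to one rule: a front emitted at time t_emit is a circle of radius v_wave(t_now - t_emit) centered on the source position at emission. A short matplotlib rendering of this geometry (speeds and times are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

v_wave, v_src, t_now = 1.0, 1.5, 5.0     # supersonic case: Mach 1.5
fig, ax = plt.subplots(figsize=(6, 4))
for t_emit in np.arange(0.0, t_now, 1.0):
    x_emit = v_src * t_emit               # where the source was at emission
    r = v_wave * (t_now - t_emit)         # how far that front has spread
    ax.add_patch(plt.Circle((x_emit, 0), r, fill=False))
    ax.plot(x_emit, 0, 'k.')              # source position at emission
ax.plot(v_src * t_now, 0, 'r*')           # source position now
ax.set_aspect('equal')
ax.set_xlim(-6, 10); ax.set_ylim(-6, 6)
plt.show()                                # fronts bunch ahead: shock cone
```

Setting v_src below v_wave in the same script produces the Doppler pattern instead of the shock cone.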
En Route to Depression: Self-Esteem Discrepancies and Habitual Rumination.
Phillips, Wendy J; Hine, Donald W
2016-02-01
Dual-process models of cognitive vulnerability to depression suggest that some individuals possess discrepant implicit and explicit self-views, such as high explicit and low implicit self-esteem (fragile self-esteem) or low explicit and high implicit self-esteem (damaged self-esteem). This study investigated whether individuals with discrepant self-esteem may employ depressive rumination in an effort to reduce discrepancy-related dissonance, and whether the relationship between self-esteem discrepancy and future depressive symptoms varies as a function of rumination tendencies. Hierarchical regressions examined whether self-esteem discrepancy was associated with rumination in an Australian undergraduate sample at Time 1 (N = 306; M(age) = 29.9), and whether rumination tendencies moderated the relationship between self-esteem discrepancy and depressive symptoms assessed 3 months later (n = 160). Damaged self-esteem was associated with rumination at Time 1. As hypothesized, rumination moderated the relationship between self-esteem discrepancy and depressive symptoms at Time 2, where fragile self-esteem and high rumination tendencies at Time 1 predicted the highest levels of subsequent dysphoria. Results are consistent with dual-process propositions that (a) explicit self-regulation strategies may be triggered when explicit and implicit self-beliefs are incongruent, and (b) rumination may increase the likelihood of depression by expending cognitive resources and/or amplifying negative implicit biases.
A point implicit time integration technique for slow transient flow problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-05-01
We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly and handles the rest of the information related to the same or other variables explicitly. The method does not require implicit iteration; instead it advances the solution in time in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
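The point implicit idea can be shown on a single stiff relaxation equation: only the local variable is taken at the new time level, so the update is solved in closed form with no iteration and remains stable for steps far beyond the stiff scale. The coefficients below are illustrative, not from the paper's test problems.

```python
import numpy as np

# Stiff relaxation u' = (u_eq - u)/eps + s(t), driven by a slow source.
eps, u_eq = 1e-6, 1.0
s = lambda t: 0.1 * np.sin(t)            # slow explicit forcing (assumed)

def point_implicit_step(u, t, dt):
    # u_new = u + dt*((u_eq - u_new)/eps + s(t)); only the local u is
    # implicit, so the update is explicit in form (no iteration):
    return (u + dt * (u_eq / eps + s(t))) / (1.0 + dt / eps)

u, t, dt = 0.0, 0.0, 0.1                 # dt >> eps, yet the step is stable
for _ in range(100):
    u = point_implicit_step(u, t, dt)
    t += dt
print(u)                                 # tracks u_eq + eps*s(t) closely
```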
Visibility graphs and symbolic dynamics
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Just, Wolfram
2018-07-01
Visibility algorithms are a family of geometric and ordering criteria by which a real-valued time series of N data is mapped into a graph of N nodes. This graph has been shown to often inherit in its topology nontrivial properties of the series structure, and can thus be seen as a combinatorial representation of a dynamical system. Here we explore in some detail the relation between visibility graphs and symbolic dynamics. To do that, we consider the degree sequence of horizontal visibility graphs generated by the one-parameter logistic map, for a range of values of the parameter for which the map shows chaotic behaviour. Numerically, we observe that in the chaotic region the block entropies of these sequences systematically converge to the Lyapunov exponent of the time series. Hence, Pesin's identity suggests that these block entropies are converging to the Kolmogorov-Sinai entropy of the physical measure, which ultimately suggests that the algorithm is implicitly and adaptively constructing phase space partitions which might have the generating property. To give analytical insight, we explore the relation k(x), x ∈ [0, 1], that, for a given datum with value x, assigns in graph space a node with degree k. In the case of the out-degree sequence, this relation is indeed a piecewise constant function. By making use of explicit methods and tools from symbolic dynamics we are able to analytically show that the algorithm indeed performs an effective partition of the phase space and that such partition is naturally expressed as a countable union of subintervals, where the endpoints of each subinterval are related to the fixed point structure of the iterates of the map and the subinterval enumeration is associated with particular ordering structures that we called motifs.
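The horizontal visibility criterion, nodes i < j are linked iff every sample strictly between them lies below min(x_i, x_j), translates directly into code; a sketch that builds the degree sequence for a chaotic logistic-map series:

```python
import numpy as np

def hvg_degree_sequence(x):
    """Degree sequence of the horizontal visibility graph: nodes i, j
    (i < j) are linked iff every sample strictly between them is lower
    than min(x[i], x[j])."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        top = -np.inf                    # running max of intermediates
        for j in range(i + 1, n):
            if x[j] > top:               # all intermediates < min(x_i, x_j)
                deg[i] += 1
                deg[j] += 1
            top = max(top, x[j])
            if top >= x[i]:              # view from i is now blocked
                break
    return deg

# Degree sequence for a chaotic logistic-map trajectory (r = 4).
x, series = 0.4, []
for _ in range(5000):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
deg = hvg_degree_sequence(np.array(series))
print("mean degree:", deg.mean())   # ~4 for broad classes of aperiodic series
```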
Detecting Potential Synchronization Constraint Deadlocks from Formal System Specifications
1992-03-01
family of languages, consisting of the Larch Shared Language and a series of Larch interface languages, specific to particular programming languages...specify sequential (non-concurrent) programs, and explicitly does not include the ability to specify atomic actions (Guttag, 1985). Larch is therefore...synchronized communication between two such agents is considered as a single action. The transitions in CCS trees are labelled to show how they are
ERIC Educational Resources Information Center
1984
This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 19 titles deal with a variety of topics, including the following: (1) factual, elaborative, and inferential levels of text processing; (2) the effect of explicitly and implicitly presented rhetorical functions on the…
Mine or Yours? Development of Sharing in Toddlers in Relation to Ownership Understanding
ERIC Educational Resources Information Center
Brownell, Celia A.; Iesue, Stephanie S.; Nichols, Sara R.; Svetlova, Margarita
2013-01-01
To examine early developments in other-oriented resource sharing, fifty-one 18- and 24-month-old children were administered 6 tasks with toys or food that could be shared with an adult playmate who had none. On each task the playmate communicated her desire for the items in a series of progressively more explicit cues. Twenty-four-month-olds…
ERIC Educational Resources Information Center
Masouleh, Fatemeh Abdollahizadeh; Arjmandi, Masoumeh; Vahdany, Fereydoon
2014-01-01
This study deals with the application of pragmatics research to EFL teaching. The need for language learners to utilize speech acts such as requests, which involve a series of strategies, was the significance of the study. Although defining different speech acts has been established since the 1960s, recently there has been a shift towards…
ERIC Educational Resources Information Center
ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.
This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 31 titles deal with a variety of topics, including the following: (1) teaching strategies for reading comprehension; (2) strategies for assigning meaning to unfamiliar words in context; (3) differential processing of explicit and…
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally-useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
Implicit and explicit social mentalizing: dual processes driven by a shared neural network
Van Overwalle, Frank; Vandekerckhove, Marie
2013-01-01
Recent social neuroscientific evidence indicates that implicit and explicit inferences on the mind of another person (i.e., intentions, attributions or traits), are subserved by a shared mentalizing network. Under both implicit and explicit instructions, ERP studies reveal that early inferences occur at about the same time, and fMRI studies demonstrate an overlap in core mentalizing areas, including the temporo-parietal junction (TPJ) and the medial prefrontal cortex (mPFC). These results suggest a rapid shared implicit intuition followed by a slower explicit verification processes (as revealed by additional brain activation during explicit vs. implicit inferences). These data provide support for a default-adjustment dual-process framework of social mentalizing. PMID:24062663
La, Y. S.; Camredon, M.; Ziemann, P. J.; ...
2016-02-08
Recent studies have shown that low volatility gas-phase species can be lost onto smog chamber wall surfaces. Although this loss of organic vapors to walls could be substantial during experiments, its effect on secondary organic aerosol (SOA) formation has not yet been well characterized and quantified. Here the potential impact of chamber walls on the loss of gaseous organic species and SOA formation has been explored using the Generator for Explicit Chemistry and Kinetics of the Organics in the Atmosphere (GECKO-A) modeling tool, which explicitly represents SOA formation and gas–wall partitioning. The model was compared with 41 smog chamber experiments of SOA formation under OH oxidation of alkane and alkene series (linear, cyclic and C12-branched alkanes and terminal, internal and 2-methyl alkenes with 7 to 17 carbon atoms) under high NOx conditions. Simulated trends match observed trends within and between homologous series. The loss of organic vapors to the chamber walls is found to affect SOA yields as well as the composition of the gas and particle phases. Simulated distributions of the species in the various phases suggest that nitrates, hydroxynitrates and carbonyl esters could be substantially lost onto walls. The extent of this process depends on the rate of gas–wall mass transfer, the vapor pressure of the species and the duration of the experiments. Furthermore, this work suggests that SOA yields inferred from chamber experiments could be underestimated by up to a factor of 2 due to the loss of organic vapors to chamber walls.
NASA Astrophysics Data System (ADS)
Fuchssteiner, Benno; Carillo, Sandra
1989-01-01
Bäcklund transformations between all known completely integrable third-order differential equations in (1 + 1) dimensions are established, and the corresponding transformation formulas for their hereditary operators and Hamiltonian formulations are exhibited. Some of these Bäcklund transformations are not injective; therefore additional non-commutative symmetry groups are found for some equations. These non-commutative symmetry groups are classified as having a semisimple part isomorphic to the affine algebra A_1^(1). New completely integrable third-order integro-differential equations, some depending explicitly on x, are given. These new equations give rise to nonlinear interacting-soliton equations. Connections between the singularity equations (from the Painlevé analysis) and the nonlinear equations for interacting solitons are established. A common approach to singularity analysis and soliton structure is introduced. The Painlevé analysis is modified in such a way that it carries over directly and without difficulty to the time evolution of singularity manifolds of equations like the sine-Gordon and nonlinear Schrödinger equations. A method to recover the Painlevé series from its constant level term is exhibited. The soliton-singularity transform is recognized to be connected to the Möbius group. This gives rise to a Darboux-like result for the spectral properties of the recursion operator. These connections are used to explain why poles of soliton equations move like trajectories of interacting solitons. Furthermore, it is explicitly computed how solitons of singularity equations behave under the effect of this soliton-singularity transform. This leads to the result that the usual Painlevé analysis can be carried out only for scaling degrees α = -1 and α = -2. A new invariance principle, connected to kernels of differential operators, is discovered. This new invariance, for example, connects the explicit solutions of the Liouville equation with the Miura transform. Simple methods are exhibited which allow one to compute, from N-soliton solutions of the KdV (Bargmann potentials), explicit solutions of equations like the Harry Dym equation. Certain solutions are plotted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallin, Erin L.; Johnson, Timothy C.; Greenwood, William J.
2013-03-29
The Hanford 300 Area is located adjacent to the Columbia River in south-central Washington State, USA, and is a former site for nuclear fuel processing operations. Waste disposal practices resulted in persistent unsaturated zone and groundwater contamination, the primary contaminant of concern being uranium. Uranium behavior at the site is intimately linked with river-stage-driven groundwater-river water exchange, such that understanding the nature of river water intrusion into the 300 Area is critical for predicting uranium desorption and transport. In this paper we use time-lapse electrical resistivity tomography (ERT) to image the inland intrusion of river water during high-stage conditions. We demonstrate a modified time-lapse inversion approach, whereby the transient water table elevation is explicitly modeled by removing regularization constraints across the water table boundary. This implementation was critical for producing meaningful imaging results. We inverted approximately 1200 data sets (400 per line over 3 lines) using high-performance computing resources to produce a time-lapse sequence of changes in bulk conductivity caused by river water intrusion during the 2011 spring runoff cycle over approximately 125 days. The resulting time series for each mesh element was then analyzed using common time series analysis to reveal the timing and location of river water intrusion beneath each line. The results reveal non-uniform flows characterized by preferred flow zones, where river water enters and exits quickly with stage increase and decrease, and low-permeability zones with broader bulk conductivity 'breakthrough' curves and longer river water residence times. The time-lapse ERT inversion approach removes the deleterious effects of changing water table elevation and enables remote and spatially continuous groundwater-river water exchange monitoring using surface-based ERT arrays under conditions where groundwater and river water conductivity are in contrast.
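A minimal sketch of the kind of per-element timing analysis described above, using synthetic stand-ins for the inverted bulk-conductivity time series; the logistic breakthrough shape, the half-change threshold, and the noise level are all assumptions.

```python
import numpy as np

# Minimal sketch: estimate river-water arrival time for each mesh element
# from its bulk-conductivity time series. The data are synthetic stand-ins
# for an inverted time-lapse ERT sequence.

rng = np.random.default_rng(0)
days = np.arange(125)                           # ~125-day monitoring window
arrival_true = rng.uniform(20, 80, size=50)     # hypothetical arrival days
sigma = 1.0 / (1.0 + np.exp(-(days[:, None] - arrival_true[None, :]) / 3.0))
sigma += 0.05 * rng.standard_normal(sigma.shape)    # measurement noise

threshold = 0.5                                 # half of the full change
arrival_est = np.array([days[np.argmax(s >= threshold)] for s in sigma.T])
print("median arrival-day error:",
      np.median(np.abs(arrival_est - arrival_true)))
```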
A recurrent neural network for classification of unevenly sampled variable stars
NASA Astrophysics Data System (ADS)
Naul, Brett; Bloom, Joshua S.; Pérez, Fernando; van der Walt, Stéfan
2018-02-01
Astronomical surveys of celestial sources produce streams of noisy time series measuring flux versus time ('light curves'). Unlike in many other physical domains, however, large (and source-specific) temporal gaps in data arise naturally due to intranight cadence choices as well as diurnal and seasonal constraints [1-5]. With nightly observations of millions of variable stars and transients from upcoming surveys [4,6], efficient and accurate discovery and classification techniques on noisy, irregularly sampled data must be employed with minimal human-in-the-loop involvement. Machine learning for inference tasks on such data traditionally requires the laborious hand-coding of domain-specific numerical summaries of raw data ('features') [7]. Here, we present a novel unsupervised autoencoding recurrent neural network [8] that makes explicit use of sampling times and known heteroskedastic noise properties. When trained on optical variable star catalogues, this network produces supervised classification models that rival other best-in-class approaches. We find that autoencoded features learned in one time-domain survey perform nearly as well when applied to another survey. These networks can continue to learn from new unlabelled observations and may be used in other unsupervised tasks, such as forecasting and anomaly detection.
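A minimal sketch, in PyTorch, of the general idea of an autoencoding recurrent network whose inputs include the sampling intervals and per-point uncertainties; the architecture and dimensions here are assumptions for illustration, not the authors' exact network.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed architecture): a GRU autoencoder whose inputs
# include the sampling interval dt and the per-point uncertainty, so that
# irregular cadence and heteroskedastic noise enter the model explicitly.

class LightCurveAutoencoder(nn.Module):
    def __init__(self, hidden=64, embed=16):
        super().__init__()
        self.encoder = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.to_embed = nn.Linear(hidden, embed)
        self.decoder = nn.GRU(input_size=embed + 1, hidden_size=hidden,
                              batch_first=True)
        self.to_flux = nn.Linear(hidden, 1)

    def forward(self, dt, flux, err):
        x = torch.stack([dt, flux, err], dim=-1)       # (batch, time, 3)
        _, h = self.encoder(x)
        z = self.to_embed(h[-1])                       # fixed-length embedding
        # decode conditioned on the embedding and the sampling intervals
        zrep = z[:, None, :].expand(-1, dt.shape[1], -1)
        out, _ = self.decoder(torch.cat([zrep, dt[..., None]], dim=-1))
        return self.to_flux(out).squeeze(-1), z

model = LightCurveAutoencoder()
dt, flux, err = torch.rand(8, 50), torch.randn(8, 50), torch.rand(8, 50)
recon, embedding = model(dt, flux, err)
# weight the reconstruction loss by the known (heteroskedastic) noise
loss = (((recon - flux) / err.clamp(min=1e-3)) ** 2).mean()
```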
1994-02-01
An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate numerical treatment. [Recoverable contents: cell-centered finite-volume discretization in space; artificial dissipation; time integration; convergence.]
Bullying and defending behavior: The role of explicit and implicit moral cognition.
Pozzoli, Tiziana; Gini, Gianluca; Thornberg, Robert
2016-12-01
Research on bullying has highlighted the role of morality in explaining the different behavior of students during bullying episodes. However, the research has been limited to the analysis of explicit measures of moral characteristics and moral reasoning, whereas implicit measures have yet to be fully considered. To overcome this limitation, this study investigated the association between bullying and defending, on one hand, and both explicit (moral disengagement, self-importance of moral values) and implicit (immediate affect toward moral stimuli [IAMS]) moral components, on the other hand. Young adolescents (N = 279, mean age = 11 years 9 months, 44.4% girls) completed a series of self-report scales and individually performed a computer task investigating the IAMS. Two hierarchical regressions (bootstrapping method) were performed. Results showed that moral disengagement was associated with bullying and defending behavior at high levels of IAMS but not when IAMS was low. In contrast, self-importance of moral values was not significantly associated with the two behaviors when IAMS was high, whereas both associations were significant at low levels of IAMS. These results significantly expand previous knowledge about the role of morality in bullying and defending behavior. In particular, they highlight the role of the interaction between explicit and implicit moral dimensions in predicting bullying and defending behaviors.
Modeling SOA formation from the oxidation of intermediate volatility n-alkanes
NASA Astrophysics Data System (ADS)
Aumont, B.; Valorso, R.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J.; Madronich, S.
2012-08-01
The chemical mechanism leading to SOA formation and ageing is expected to be a multigenerational process, i.e. a successive formation of organic compounds with higher oxidation degree and lower vapor pressure. This process is investigated here with the explicit oxidation model GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere). Gas-phase oxidation schemes are generated for the C8-C24 series of n-alkanes. Simulations are conducted to explore the time evolution of organic compounds and the behavior of secondary organic aerosol (SOA) formation for various preexisting organic aerosol concentrations (COA). As expected, simulation results show that (i) SOA yield increases with the carbon chain length of the parent hydrocarbon, (ii) SOA yield decreases with decreasing COA, (iii) SOA production rates increase with increasing COA and (iv) the number of oxidation steps (i.e. generations) needed to describe SOA formation and evolution grows as COA decreases. The simulated oxidative trajectories are examined in a two-dimensional space defined by the mean carbon oxidation state and the volatility. Most SOA contributors are not oxidized enough to be categorized as highly oxygenated organic aerosols (OOA) but are reduced enough to be categorized as hydrocarbon-like organic aerosols (HOA), suggesting that OOA measurements may underestimate SOA. Results show that the model is unable to produce highly oxygenated aerosols (OOA) with large yields. The limitations of the model are discussed.
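The COA dependence described in points (ii) and (iii) follows from absorptive gas-particle partitioning; a minimal sketch of the standard Pankow-type relation, with illustrative volatilities, is given below.

```python
import numpy as np

# Minimal sketch of absorptive gas-particle partitioning (Pankow theory),
# which underlies the COA dependence of SOA yields: the particle-phase
# fraction of species i is F_i = (1 + C*_i / COA)^(-1), where C*_i is its
# effective saturation concentration [ug/m3]. Values are illustrative.

def particle_fraction(c_star, c_oa):
    return 1.0 / (1.0 + c_star / c_oa)

c_star = np.logspace(-2, 4, 7)        # volatilities from 0.01 to 1e4 ug/m3
for c_oa in (1.0, 10.0, 100.0):       # preexisting organic aerosol loadings
    print(f"COA = {c_oa:6.1f} ug/m3 ->",
          np.round(particle_fraction(c_star, c_oa), 3))
```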
First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet
NASA Astrophysics Data System (ADS)
Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan
2017-04-01
The paper offers a fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to the oblateness and the atmospheric drag. The solution is presented in a time-explicit form and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second is the determination of an infinitesimal contact transformation that allows one to navigate between the new and the original variables. This contact transformation is obtained in exact form, and afterwards a Taylor series approximation is proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km; they show that the incorporation of drag terms into the contact transformation reduces the error in the position vector by a factor of 7. The proposed method aims at improving the accuracy of analytic orbit propagation and transforming it into a viable alternative to computationally intensive numerical methods.
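A minimal sketch of the exponential density law assumed in the paper and the resulting drag deceleration; the reference density, scale height, and ballistic coefficient below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def density(h_km, h_ref=350.0, rho_ref=1.0e-11, H=60.0):
    """Exponential profile rho = rho_ref * exp(-(h - h_ref)/H) [kg/m^3].
    rho_ref and H are assumed, illustrative values near 350 km altitude."""
    return rho_ref * np.exp(-(h_km - h_ref) / H)

def drag_decel(h_km, v, B=0.01):
    """Drag deceleration a = 0.5 * rho * v^2 * B, with B = Cd*A/m [m^2/kg]."""
    return 0.5 * density(h_km) * v**2 * B

v_circ = 7.7e3                    # approximate circular speed at 350 km [m/s]
print(f"{drag_decel(350.0, v_circ):.2e} m/s^2")   # ~3e-6 m/s^2
```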
A Representation of an Instantaneous Unit Hydrograph From Geomorphology
NASA Astrophysics Data System (ADS)
Gupta, Vijay K.; Waymire, Ed; Wang, C. T.
1980-10-01
The channel network and the overland flow regions in a river basin satisfy Horton's empirical geomorphologic laws when ordered according to the Strahler ordering scheme. This setting is employed here in a kinetic-theoretic framework for obtaining an explicit mathematical representation of the instantaneous unit hydrograph (iuh) at the basin outlet. Two examples are developed which lead to explicit formulae for the iuh. These examples are formally analogous to the solutions that would result if a basin were represented in terms of linear reservoirs and channels, respectively in series and in parallel. However, this analogy is only formal, and it does not carry through physically. All but one of the parameters appearing in the iuh formulae are obtained in terms of Horton's bifurcation ratio, stream length ratio, and stream area ratio. The one unknown parameter is obtained by specifying the basin mean lag time independently. Three basins from Illinois are selected to check the theoretical results against the observed direct surface runoff hydrographs. The theory provided excellent agreement for two basins with areas of the order of 1100 mi2 (2850 km2) but underestimates the peak flow for the smaller basin with a 300-mi2 (777-km2) area. This relative lack of agreement for the smaller basin may be used to question the validity of the linearity assumption in the rainfall-runoff transformation which is embedded in the above development.
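Since the first example is formally analogous to linear reservoirs in series, the sketch below evaluates the classical Nash-cascade iuh, u(t) = t^(n-1) e^(-t/k) / (k^n Gamma(n)); the parameters here are illustrative, whereas in the paper they follow from Horton's ratios and an independently specified mean lag time.

```python
import numpy as np
from math import gamma as gamma_fn

# Minimal sketch: the instantaneous unit hydrograph of n equal linear
# reservoirs in series (the Nash cascade), the configuration the first
# example above is formally analogous to. Parameters are illustrative.

def iuh_nash(t, n=3.0, k=4.0):
    """u(t) = t^(n-1) * exp(-t/k) / (k^n * Gamma(n)); t and k in hours."""
    return t**(n - 1.0) * np.exp(-t / k) / (k**n * gamma_fn(n))

t = np.linspace(0.01, 48.0, 200)
u = iuh_nash(t)
print(f"iuh peak at t = {t[np.argmax(u)]:.1f} h")   # (n-1)*k = 8 h
# the mean lag time of this cascade is n*k = 12 h
```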
Resting state neural networks for visual Chinese word processing in Chinese adults and children.
Li, Ling; Liu, Jiangang; Chen, Feiyan; Feng, Lu; Li, Hong; Tian, Jie; Lee, Kang
2013-07-01
This study examined the resting state neural networks for visual Chinese word processing in Chinese children and adults. Both the functional connectivity (FC) and amplitude of low frequency fluctuation (ALFF) approaches were used to analyze fMRI data collected while Chinese participants were not engaged in any specific explicit task. We correlated time series extracted from the visual word form area (VWFA) with those in other regions of the brain. We also performed ALFF analysis in the resting state FC networks. The FC results revealed that, regarding the functionally connected brain regions, similar intrinsically organized resting state networks for visual Chinese word processing exist in adults and children, suggesting that such networks may already be functional after 3-4 years of informal exposure to reading plus 3-4 years of formal schooling. The ALFF results revealed that children appear to recruit more neural resources than adults in generally reading-irrelevant brain regions. Differences between child and adult ALFF results suggest that children's intrinsic word processing network during the resting state, though similar in functional connectivity, is still undergoing development. Further exposure to visual words and experience with reading are needed for children to develop a mature intrinsic network for word processing. The developmental course of the intrinsically organized word processing network may parallel that of the explicit word processing network.
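A minimal sketch of the seed-based functional connectivity computation described above (correlating a seed region's time series with all other regions); the data are synthetic stand-ins for resting-state fMRI.

```python
import numpy as np

# Minimal sketch of seed-based functional connectivity: correlate the time
# series of a seed region (standing in for the VWFA) with every other
# region's time series. Data are synthetic stand-ins for resting-state fMRI.

rng = np.random.default_rng(1)
n_timepoints, n_regions = 200, 90
data = rng.standard_normal((n_timepoints, n_regions))
seed = data[:, 0]                               # hypothetical VWFA series

# Pearson correlation of the seed with all regions at once
z_data = (data - data.mean(0)) / data.std(0)
z_seed = (seed - seed.mean()) / seed.std()
fc_map = z_data.T @ z_seed / n_timepoints       # length-90 connectivity map
print("strongest non-seed connection:", np.abs(fc_map[1:]).max())
```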
ERIC Educational Resources Information Center
Callens, Andy M.; Atchison, Timothy B.; Engler, Rachel R.
2009-01-01
Instructions for the Matrix Reasoning Test (MRT) of the Wechsler Adult Intelligence Scale-Third Edition were modified by explicitly stating that the subtest was untimed or that a per-item time limit would be imposed. The MRT was administered within one of four conditions: with (a) standard administration instructions, (b) explicit instructions…
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas
2015-04-01
Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
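To illustrate the IMEX idea itself (not the paper's low-storage schemes), the sketch below applies first-order IMEX Euler to a scalar model problem du/dt = L*u + N(u), treating the stiff linear term implicitly and the nonstiff nonlinear term explicitly.

```python
import numpy as np

# Minimal sketch of the IMEX idea on a scalar model problem
# du/dt = L*u + N(u): backward Euler on the stiff linear term, forward
# Euler on the nonstiff nonlinear term. This is a first-order illustration,
# not one of the low-storage RK schemes developed in the paper.

L = -1000.0                       # stiff linear coefficient
N = lambda u: np.sin(u)           # nonstiff nonlinear term
dt, u = 0.01, 1.0                 # dt far exceeds the explicit limit 2/|L|

for _ in range(100):
    # (u_new - u)/dt = L*u_new + N(u)  =>  solve the linear stage for u_new
    u = (u + dt * N(u)) / (1.0 - dt * L)

print(f"u(t=1) = {u:.3e}")        # decays stably despite dt*|L| = 10
```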
Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models
NASA Astrophysics Data System (ADS)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.
2018-04-01
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
On Feeling Torn About One’s Sexuality
Windsor-Shellard, Ben
2014-01-01
Three studies offer novel evidence addressing the consequences of explicit–implicit sexual orientation (SO) ambivalence. In Study 1, self-identified straight females completed explicit and implicit measures of SO. The results revealed that participants with greater SO ambivalence took longer responding to explicit questions about their sexual preferences, an effect moderated by the direction of ambivalence. Study 2 replicated this effect using a different paradigm. Study 3 included self-identified straight and gay female and male participants; participants completed explicit and implicit measures of SO, plus measures of self-esteem and affect regarding their SO. Among straight participants, the response time results replicated the findings of Studies 1 and 2. Among gay participants, trends suggested that SO ambivalence influenced time spent deliberating on explicit questions relevant to sexuality, but in a different way. Furthermore, the amount and direction of SO ambivalence was related to self-esteem. PMID:24972940
NASA Astrophysics Data System (ADS)
Abell, J. T.; Jacobsen, J.; Bjorkstedt, E.
2016-02-01
Determining aragonite saturation state (Ω) in seawater requires measurement of two parameters of the carbonate system: most commonly dissolved inorganic carbon (DIC) and total alkalinity (TA). The routine measurement of DIC and TA is not always possible on frequently repeated hydrographic lines or at moored stations that collect hydrographic data at short time intervals. In such cases a proxy can be developed that relates the saturation state derived from one-time or infrequent DIC and TA measurements (Ωmeas) to more frequently measured parameters such as dissolved oxygen (DO) and temperature (Temp). These proxies are generally based on best-fit parameterizations that utilize reference values of DO and Temp and adjust linear coefficients until the error between the proxy-derived saturation state (Ωproxy) and Ωmeas is minimized. Proxies have been used to infer Ω from moored hydrographic sensors and gliders, which routinely collect DO and Temp data but do not include carbonate parameter measurements. Proxies can also provide Ω in regional oceanographic models which do not explicitly include carbonate parameters. Here we examine the variability and accuracy of Ωproxy along a near-shore hydrographic line and at a moored time-series station at Trinidad Head, CA. The saturation state is determined using proxies from different coastal regions of the California Current Large Marine Ecosystem and from different years of sampling along the hydrographic line. We then calculate the variability and error associated with the use of different proxy coefficients, the sensitivity to reference values, and the inclusion of additional variables. We demonstrate how this variability affects estimates of the intensity and duration of exposure to aragonite-corrosive conditions on the near-shore shelf and in the water column.
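A minimal sketch of fitting such a proxy by least squares, Ω ≈ a0 + a1(DO - DO_ref) + a2(T - T_ref); the reference values and the synthetic "measurements" below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a linear aragonite-saturation proxy fitted to
# co-located measurements. Reference values and the synthetic data that
# stand in for one-time DIC/TA-derived Omega are illustrative.

rng = np.random.default_rng(2)
DO = rng.uniform(60.0, 260.0, 100)            # dissolved oxygen [umol/kg]
T = rng.uniform(7.0, 14.0, 100)               # temperature [deg C]
omega_meas = (0.9 + 0.006 * (DO - 140.0) + 0.12 * (T - 10.0)
              + 0.05 * rng.standard_normal(100))

DO_ref, T_ref = 140.0, 10.0                   # assumed reference values
X = np.column_stack([np.ones_like(DO), DO - DO_ref, T - T_ref])
coef, *_ = np.linalg.lstsq(X, omega_meas, rcond=None)
omega_proxy = X @ coef
print("coefficients a0, a1, a2:", np.round(coef, 4))
print("rms proxy error:", np.sqrt(np.mean((omega_proxy - omega_meas) ** 2)))
```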
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Algorithms and software for nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.
1989-01-01
The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS is used, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp. 225-251). Two factors make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, here called mixed delta-t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
NASA Astrophysics Data System (ADS)
Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa
2018-02-01
Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical integration. Explicit symplectic algorithms are therefore much preferable to non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. We then give explicit symplectic algorithms based on the generating functions of orders 2 and 3 for relativistic dynamics of a charged particle. The methodology is not new; it has been applied to non-relativistic dynamics of charged particles. The algorithm for relativistic dynamics, however, has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
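For illustration of explicit symplectic splitting in its simplest setting (a separable, non-relativistic Hamiltonian, not the paper's generating-function construction), the sketch below runs second-order leapfrog on a harmonic oscillator and shows the bounded energy error typical of symplectic schemes.

```python
# Minimal sketch of an explicit symplectic splitting: second-order leapfrog
# (Strang splitting of H = p^2/2 + V(q)) for a separable Hamiltonian. This
# only illustrates the splitting idea, not the paper's relativistic scheme.

def leapfrog(q, p, dV, dt, n_steps):
    for _ in range(n_steps):
        p -= 0.5 * dt * dV(q)     # half kick from the potential flow
        q += dt * p               # full drift from the kinetic flow
        p -= 0.5 * dt * dV(q)     # half kick
    return q, p

# harmonic oscillator V = q^2/2: energy error stays bounded for long times
q, p = 1.0, 0.0
q, p = leapfrog(q, p, dV=lambda q: q, dt=0.1, n_steps=100_000)
print(f"energy after 10^4 time units: {0.5 * (p**2 + q**2):.6f}")  # ~0.5
```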
Why Do Young Children Hide by Closing Their Eyes? Self-Visibility and the Developing Concept of Self
ERIC Educational Resources Information Center
Russell, James; Gee, Brioney; Bullard, Christina
2012-01-01
In a series of four experiments, the authors begin by replicating Flavell, Shipstead, and Croft's (1980) finding that many children between 2 and 4 years of age will affirm the invisibility both of themselves and of others--but "not" of the body--when the person's eyes are closed. The authors also render explicit certain trends in the Flavell et…
Representation of the Coulomb Matrix Elements by Means of Appell Hypergeometric Function F2
NASA Astrophysics Data System (ADS)
Bentalha, Zine el abidine
2018-06-01
An exact analytical representation for the Coulomb matrix elements by means of Appell's double series F2 is derived. The finite sum obtained for the Appell function F2 allows us to evaluate explicitly the matrix elements of the two-body Coulomb interaction in the lowest Landau level. An application requiring the matrix elements of the Coulomb potential in the quantum Hall effect regime is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Henan, E-mail: wuhenanby@163.com; Chen, Qiufan; Yue, Xiaoqing
The Lie conformal algebra of the loop Virasoro algebra, denoted by CW, is introduced in this paper. Explicitly, CW is a Lie conformal algebra with C[∂]-basis {L_i | i ∈ Z} and λ-brackets [L_i λ L_j] = (-∂ - 2λ)L_{i+j}. Then conformal derivations of CW are determined. Finally, rank one conformal modules and Z-graded free intermediate series modules over CW are classified.
ERIC Educational Resources Information Center
Simon, Roger I.
An attempt is made to wrestle with the question of critical pedagogy, trying to refine and make more explicit the author's political vision; the idea of pedagogy as a form of cultural politics; and teachers as cultural workers. Eight essays are divided into two sections. The four essays of Part 1 try to draw together a statement about efforts to…
Modeling SOA production from the oxidation of intermediate volatility alkanes
NASA Astrophysics Data System (ADS)
Aumont, B.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J.; Madronich, S.
2012-12-01
Secondary organic aerosol (SOA) production and ageing is a multigenerational oxidation process involving the successive formation of organic compounds with higher oxidation degree and lower vapour pressure. This process was investigated using the explicit oxidation model GECKO-A (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere). Results for the C8-C24 n-alkane series show the expected trends, i.e. (i) SOA yield grows with the carbon backbone of the parent hydrocarbon, (ii) SOA yield decreases with decreasing pre-existing organic aerosol concentration, (iii) the number of generations required to describe SOA production increases as the pre-existing organic aerosol concentration decreases. Most SOA contributors were found to be not oxidized enough to be categorized as highly oxygenated organic aerosols (OOA) but reduced enough to be categorized as hydrocarbon-like organic aerosols (HOA). Branched alkanes are more prone to fragment in the early stages of oxidation than their linear analogues. Fragmentation is expected to alter both the yield and the mean oxidation state of the SOA. Here, GECKO-A is applied to generate highly detailed oxidation schemes for various series of branched and cyclised alkanes. Branching and cyclisation effects on SOA yields and oxidation states will be examined.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
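The removable-singularity issue can be seen already in the first divided difference of the exponential, phi1(z) = (e^z - 1)/z; a minimal sketch with an assumed series/direct-formula switchover is below, in the spirit of (but much simpler than) the paper's treatment of fourth-order divided differences.

```python
import numpy as np

# Minimal sketch: phi1(z) = (exp(z) - 1)/z loses accuracy in floating point
# as z -> 0; a truncated Taylor series restores it near the removable
# singularity. The switchover radius below is an assumed, illustrative value.

def phi1_series(z, terms=10):
    # phi1(z) = sum_{k>=0} z^k / (k+1)!  (truncated Taylor series)
    acc, term = 1.0, 1.0
    for k in range(1, terms):
        term *= z / (k + 1)
        acc += term
    return acc

def phi1(z, switch=0.1):
    # series in a prescribed neighbourhood of z = 0, direct formula outside
    return phi1_series(z) if abs(z) < switch else (np.exp(z) - 1.0) / z

for z in (1e-12, 1e-4, 0.5):
    naive = (np.exp(z) - 1.0) / z        # degrades as z -> 0
    print(f"z={z:g}  stable={phi1(z):.15f}  naive={naive:.15f}")
```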
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silin, D.; Goloshubin, G.
Analysis of compression wave propagation in a poroelastic medium predicts a peak of reflection from a high-permeability layer in the low-frequency end of the spectrum. An explicit formula expresses the resonant frequency through the elastic moduli of the solid skeleton, the permeability of the reservoir rock, the fluid viscosity and compressibility, and the reservoir thickness. This result is obtained through a low-frequency asymptotic analysis of Biot's model of poroelasticity. A review of the derivation of the main equations from Hooke's law, momentum and mass balance equations, and Darcy's law suggests an alternative new physical interpretation of some coefficients of classical poroelasticity. The velocity of wave propagation, the attenuation factor, and the wave number are expressed in the form of power series with respect to a small dimensionless parameter. The absolute value of this parameter is equal to the product of the kinematic reservoir fluid mobility and the wave frequency. Retaining only the leading terms of the series leads to explicit and relatively simple expressions for the reflection and transmission coefficients for a planar wave crossing an interface between two permeable media, as well as for wave reflection from a thin highly permeable layer (a lens). Practical applications of the obtained asymptotic formulae are seismic modeling, inversion, and attribute analysis.
NASA Astrophysics Data System (ADS)
Cohen, W. B.; Yang, Z.; Stehman, S.; Huang, C.; Healey, S. P.
2013-12-01
Forest ecosystem process models require spatially and temporally detailed disturbance data to accurately predict fluxes of carbon or changes in biodiversity over time. A variety of new mapping algorithms using dense Landsat time series show great promise for providing disturbance characterizations at an annual time step. These algorithms provide unprecedented detail with respect to timing, magnitude, and duration of individual disturbance events, and causal agent. But all maps have error, and disturbance maps in particular can have significant omission error because many disturbances are relatively subtle. Because disturbance, although ubiquitous, can be a relatively rare event spatially in any given year, omission errors can have a great impact on mapped rates. Using a high quality reference disturbance dataset, it is possible not only to characterize map errors but also to adjust mapped disturbance rates to provide unbiased rate estimates with confidence intervals. We present results from a national-level disturbance mapping project (the North American Forest Dynamics project) based on the Vegetation Change Tracker (VCT) with annual Landsat time series and uncertainty analyses that consist of three basic components: response design, statistical design, and analyses. The response design describes the reference data collection, in terms of the tool used (TimeSync), a formal description of interpretations, and the approach for data collection. The statistical design defines the selection of plot samples to be interpreted, whether stratification is used, and the sample size. Analyses involve derivation of standard agreement matrices between the map and the reference data, and use of inclusion probabilities and post-stratification to adjust mapped disturbance rates. Because for NAFD we use annual time series, both mapped and adjusted rates are provided at an annual time step from ~1985 to the present. Preliminary evaluations indicate that VCT captures most of the higher intensity disturbances, but that many of the lower intensity disturbances (thinnings, stress related to insects and disease, etc.) are missed. Because lower intensity disturbances are a large proportion of the total set of disturbances, adjusting mapped disturbance rates to include these can be important for inclusion in ecosystem process models. The described statistical disturbance rate adjustments are aspatial in nature, such that the basic underlying map is unchanged. For spatially explicit ecosystem modeling, such adjustments, although important, can be difficult to directly incorporate. One approach for improving the basic underlying map is an ensemble modeling approach that uses several different complementary maps, each derived from a different algorithm and having their own strengths and weaknesses relative to disturbance magnitude and causal agent of disturbance. We will present results from a pilot study associated with the Landscape Change Monitoring System (LCMS), an emerging national-level program that builds upon NAFD and the well-established Monitoring Trends in Burn Severity (MTBS) program.
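A minimal sketch of the post-stratified rate adjustment described above, with illustrative stratum weights and plot counts; the variance formula is the usual stratified approximation.

```python
import numpy as np

# Minimal sketch of post-stratified estimation of an annual disturbance
# rate from reference (TimeSync-style) plot interpretations. Strata come
# from the map (mapped disturbed / undisturbed); the estimator weights
# per-stratum sample proportions by stratum area. Numbers are illustrative.

W = np.array([0.03, 0.97])      # area weights: mapped disturbed, undisturbed
n = np.array([150, 350])        # reference plots sampled per stratum
y = np.array([120, 21])         # plots interpreted as truly disturbed

p_h = y / n                     # per-stratum disturbance proportions
p_hat = np.sum(W * p_h)         # post-stratified (bias-adjusted) rate
se = np.sqrt(np.sum(W**2 * p_h * (1 - p_h) / n))
print(f"adjusted rate: {p_hat:.4f} +/- {1.96 * se:.4f} (approx. 95% CI)")
```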
A Norm Pairing in Formal Modules
NASA Astrophysics Data System (ADS)
Vostokov, S. V.
1980-02-01
A pairing of the multiplicative group of a local field (a finite extension of the field of p-adic numbers Qp) with the group of points of a Lubin-Tate formal group is defined explicitly. The values of the pairing are roots of an isogeny of the formal group. The main properties of this pairing are established: bilinearity, invariance under the choice of a local uniformizing element, and independence of the method of expanding elements into series with respect to this uniformizing element. These properties of the pairing are used to prove that it agrees with the generalized Hilbert norm residue symbol when the field over whose ring of integers the formal group is defined is totally ramified over Qp. This yields an explicit expression for the generalized Hilbert symbol on the group of points of the formal group. Bibliography: 12 titles.
NASA Astrophysics Data System (ADS)
Grants, Ilmārs; Bojarevičs, Andris; Gerbeth, Gunter
2016-06-01
Powerful forces arise when a pulse of a magnetic field on the order of a few tesla diffuses into a conductor. Such pulses are used in electromagnetic forming, impact welding of dissimilar materials and grain refinement of solidifying alloys. Strong magnetic field pulses are generated by the discharge current of a capacitor bank. We consider analytically the penetration of such a pulse into a conducting half-space. Besides the exact solution, we obtain two simple self-similar approximate solutions for two sequential stages of the initial transient. Furthermore, a general solution is provided for the external field given as a power series in time. Each term of this solution represents a self-similar function for which we obtain an explicit expression. The validity range of the various approximate analytical solutions is evaluated by comparison with the exact solution.
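For the simplest limiting case, a step change in the external field, the penetration profile is the classical error-function solution B(x,t) = B0 erfc(x/(2*sqrt(D*t))) with magnetic diffusivity D = 1/(mu0*sigma); the sketch below evaluates it with illustrative material values, whereas the paper's power-series treatment handles general pulse shapes.

```python
import numpy as np
from scipy.special import erfc

# Minimal sketch: field diffusion into a conducting half-space for a step
# in the external field, B(x, t) = B0 * erfc(x / (2*sqrt(D*t))), with
# magnetic diffusivity D = 1/(mu0*sigma). Material values are illustrative.

mu0 = 4e-7 * np.pi
sigma = 3.5e7                    # conductivity of aluminium [S/m]
D = 1.0 / (mu0 * sigma)          # magnetic diffusivity [m^2/s]

B0, t = 5.0, 20e-6               # 5 T applied for 20 microseconds
x = np.linspace(0.0, 5e-3, 6)    # depth into the conductor [m]
B = B0 * erfc(x / (2.0 * np.sqrt(D * t)))
print("diffusion length sqrt(D*t) =", np.sqrt(D * t), "m")
print(np.round(B, 3))            # field decays within ~1 mm of the surface
```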
NASA Astrophysics Data System (ADS)
Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing
2015-10-01
Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross correlations. However, several important issues about these methods are not well understood and most methods consider only one moment order. We study the joint multifractal analysis based on partition function with two moment orders, which was initially invented to investigate fluid fields, and derive analytically several important properties. We apply the method numerically to binomial measures with multifractal cross correlations and bivariate fractional Brownian motions without multifractal cross correlations. For binomial multifractal measures, the explicit expressions of mass function, singularity strength and multifractal spectrum of the cross correlations are derived, which agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross correlations of index volatilities.
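A minimal numerical sketch of the joint partition-function approach: two synchronized binomial (p-model) cascades are built, chi(q1, q2; s) is summed over boxes at each scale, and the joint mass exponent tau(q1, q2) is read off the log-log slope. The cascade parameters are illustrative.

```python
import numpy as np

# Minimal sketch of the joint partition function for two measures:
# chi(q1, q2; s) = sum over boxes of mu1^q1 * mu2^q2, whose log-log slope
# versus box size gives the joint mass exponent tau(q1, q2). The measures
# are synchronized deterministic binomial cascades (illustrative weights).

levels, p1, p2 = 12, 0.3, 0.4
mu1, mu2 = np.array([1.0]), np.array([1.0])
for _ in range(levels):                        # child boxes in tree order
    mu1 = np.kron(mu1, np.array([p1, 1.0 - p1]))
    mu2 = np.kron(mu2, np.array([p2, 1.0 - p2]))

def tau_joint(q1, q2, m1, m2, levels):
    sizes, chi = [], []
    for lev in range(levels, 1, -1):
        chi.append(np.sum(m1**q1 * m2**q2))
        sizes.append(2.0**(-lev))
        m1 = m1.reshape(-1, 2).sum(1)          # coarse-grain by factor 2
        m2 = m2.reshape(-1, 2).sum(1)
    return np.polyfit(np.log(sizes), np.log(chi), 1)[0]

# analytic value is -log2(p1^2*p2^2 + (1-p1)^2*(1-p2)^2) ~ 2.39
print("tau(2, 2) =", tau_joint(2.0, 2.0, mu1, mu2, levels))
```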
Response-Guided Community Detection: Application to Climate Index Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bello, Gonzalo; Angus, Michael; Pedemane, Navya
Discovering climate indices (time series that summarize spatiotemporal climate patterns) is a key task in the climate science domain. In this work, we approach this task as a problem of response-guided community detection; that is, identifying communities in a graph associated with a response variable of interest. To this end, we propose a general strategy for response-guided community detection that explicitly incorporates information from the response variable during the community detection process, and introduce a graph representation of spatiotemporal data that leverages information from multiple variables. We apply our proposed methodology to the discovery of climate indices associated with seasonal rainfall variability. Our results suggest that our methodology is able to capture the underlying patterns known to be associated with the response variable of interest and to improve its predictability compared to existing methodologies for data-driven climate index discovery and official forecasts.
Niethammer, Marc; Hart, Gabriel L.; Pace, Danielle F.; Vespa, Paul M.; Irimia, Andrei; Van Horn, John D.; Aylward, Stephen R.
2013-01-01
Standard image registration methods do not account for changes in image appearance. Hence, metamorphosis approaches have been developed which jointly estimate a space deformation and a change in image appearance to construct a spatio-temporal trajectory smoothly transforming a source to a target image. For standard metamorphosis, geometric changes are not explicitly modeled. We propose a geometric metamorphosis formulation, which explains changes in image appearance by a global deformation, a deformation of a geometric model, and an image composition model. This work is motivated by the clinical challenge of predicting the long-term effects of traumatic brain injuries based on time-series images. This work is also applicable to the quantification of tumor progression (e.g., estimating its infiltrating and displacing components) and predicting chronic blood perfusion changes after stroke. We demonstrate the utility of the method using simulated data as well as scans from a clinical traumatic brain injury patient. PMID:21995083
An Extensive Study on Data Anonymization Algorithms Based on K-Anonymity
NASA Astrophysics Data System (ADS)
Simi, Ms. M. S.; Sankara Nayaki, Mrs. K.; Sudheep Elayidom, M., Dr.
2017-08-01
For business- and research-oriented work involving data analysis and cloud services that need qualitative data, many organizations release large volumes of microdata. Such microdata exclude an individual's explicit identity marks, like name and address, but comprise specific attributes, like date of birth, PIN code, sex, and marital status, which can be combined with other public data to recognize a person. Such an attack can be exploited to acquire sensitive information from social network platforms, thereby putting a person's privacy in grave danger. To prevent such attacks by modifying microdata, K-anonymization is used. With ever-increasing data volumes, anonymizing them effectively remains challenging. After a series of trials and systematic comparisons, in this paper we propose the three best algorithms along with their efficiency and effectiveness. The study helps researchers identify the relationships among the value of k, the degree of anonymization, the choice of quasi-identifier, and execution time.
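A minimal sketch of the k-anonymity property itself: every combination of quasi-identifier values in the released table must occur at least k times. The records and quasi-identifiers below are hypothetical.

```python
from collections import Counter

# Minimal sketch of checking k-anonymity: each combination of
# quasi-identifier values must occur at least k times in the released
# table. Records and quasi-identifier choice are hypothetical.

records = [
    {"dob": "1990", "zip": "683*", "sex": "F", "disease": "flu"},
    {"dob": "1990", "zip": "683*", "sex": "F", "disease": "cold"},
    {"dob": "1985", "zip": "682*", "sex": "M", "disease": "asthma"},
    {"dob": "1985", "zip": "682*", "sex": "M", "disease": "flu"},
]
quasi_identifiers = ("dob", "zip", "sex")

def is_k_anonymous(rows, qi, k):
    groups = Counter(tuple(r[a] for a in qi) for r in rows)
    return min(groups.values()) >= k

print(is_k_anonymous(records, quasi_identifiers, k=2))   # True
```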
The piecewise-linear predictor-corrector code - A Lagrangian-remap method for astrophysical flows
NASA Technical Reports Server (NTRS)
Lufkin, Eric A.; Hawley, John F.
1993-01-01
We describe a time-explicit finite-difference algorithm for solving the nonlinear fluid equations. The method is similar to existing Eulerian schemes in its use of operator-splitting and artificial viscosity, except that we solve the Lagrangian equations of motion with a predictor-corrector and then remap onto a fixed Eulerian grid. The remap is formulated to eliminate errors associated with coordinate singularities, with a general prescription for remaps of arbitrary order. We perform a comprehensive series of tests on standard problems. Self-convergence tests show that the code has a second-order rate of convergence in smooth, two-dimensional flow, with pressure forces, gravity, and curvilinear geometry included. While not as accurate on idealized problems as high-order Riemann-solving schemes, the predictor-corrector Lagrangian-remap code has great flexibility for application to a variety of astrophysical problems.
Explicit reference governor for linear systems
NASA Astrophysics Data System (ADS)
Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo
2018-06-01
The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored to the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set which is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performance comparable to that of optimisation-based reference governors.
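A minimal sketch of the Lyapunov-based invariant set underlying the first strategy, under assumed illustrative matrices: for stable closed-loop dynamics dx/dt = Ax and a constraint c'x <= d, the largest level set {x : x'Px <= Gamma} contained in the constraint has Gamma = d^2 / (c'P^{-1}c).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Minimal sketch of the Lyapunov-based invariant set: for stable dynamics
# dx/dt = A x and constraint c^T x <= d, the largest level set
# {x : x^T P x <= Gamma} inside the constraint has
# Gamma = d^2 / (c^T P^{-1} c). Matrices below are illustrative.

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # stable closed-loop dynamics
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)       # solves A^T P + P A = -Q

c = np.array([1.0, 0.0])                     # constraint: x1 <= d
d = 1.0
Gamma = d**2 / (c @ np.linalg.solve(P, c))   # largest admissible level set

x = np.array([0.2, -0.1])
print("state inside invariant set:", x @ P @ x <= Gamma)
```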
Path-space variational inference for non-equilibrium coarse-grained systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics; Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr
In this paper we discuss information-theoretic tools for obtaining optimized coarse-grained molecular models for both equilibrium and non-equilibrium molecular simulations. The latter are ubiquitous in physicochemical and biological applications, where they are typically associated with coupling mechanisms, multi-physics and/or boundary conditions. In general the non-equilibrium steady states are not known explicitly, as they do not necessarily have a Gibbs structure. The presented approach can compare the microscopic behavior of molecular systems to parametric and non-parametric coarse-grained models by using the relative entropy between distributions on path space and setting up a corresponding path-space variational inference problem. The methods can become entirely data-driven when the microscopic dynamics are replaced with corresponding correlated data in the form of time series. Furthermore, we present connections and generalizations of force-matching methods in coarse-graining with path-space information methods. We demonstrate the enhanced transferability of information-based parameterizations to different observables, at a specific thermodynamic point, due to information inequalities. We discuss methodological connections between information-based coarse-graining of molecular systems and variational inference methods primarily developed in the machine learning community. However, we note that the work presented here addresses variational inference for correlated time series due to the focus on dynamics. The applicability of the proposed methods is demonstrated on high-dimensional stochastic processes given by overdamped and driven Langevin dynamics of interacting particles.
Explicit and implicit calculations of turbulent cavity flows with and without yaw angle
NASA Astrophysics Data System (ADS)
Yen, Guan-Wei
1989-08-01
Computations were performed to simulate turbulent supersonic flows past three-dimensional deep cavities with and without yaw. Simulations of these self-sustained oscillatory flows were generated through time-accurate solutions of the Reynolds-averaged complete Navier-Stokes equations using two different schemes: (1) an explicit MacCormack finite-difference scheme; and (2) an implicit, upwind, finite-volume scheme. The second scheme, which is approximately 30 percent faster, is found to produce better time-accurate results. The Reynolds stresses were modeled using the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time-averaged flow properties everywhere in the computational domain. Time series analyses were performed for the instantaneous pressure values on the cavity floor. The time-averaged computational results show good agreement with the experimental data along the cavity floor and walls. When the yaw angle is nonzero, there is no longer a single length scale (length-to-depth ratio) for the flow, as is the case for zero-yaw flow. The dominant directions and inclinations of the vortices are dramatically different for this nonsymmetric flow. The vortex shedding from the cavity into the mainstream flow is captured computationally. This phenomenon, which is due to the oscillation of the shear layer, is confirmed by the solutions of both schemes.
Classical space-times from the S-matrix
NASA Astrophysics Data System (ADS)
Neill, Duff; Rothstein, Ira Z.
2013-12-01
We show that classical space-times can be derived directly from the S-matrix for a theory of massive particles coupled to a massless spin-two particle. As an explicit example we derive the Schwarzschild space-time as a series in GN. At no point in the derivation is any use made of the Einstein-Hilbert action or the Einstein equations. The intermediate steps involve only on-shell S-matrix elements which are generated via BCFW recursion relations and unitarity sewing techniques. The notion of a space-time metric is only introduced at the end of the calculation, where it is extracted by matching the potential determined by the S-matrix to the geodesic motion of a test particle. Other stationary space-times such as Kerr follow in a similar manner. Furthermore, given that the procedure is action-independent and depends only upon the choice of the representation of the little group, solutions to Yang-Mills (YM) theory can be generated in the same fashion. Moreover, the squaring relation between the YM and gravity three-point functions shows that the seeds that generate solutions in the two theories are algebraically related. From a technical standpoint our methodology can also be utilized to calculate quantities relevant to the binary inspiral problem more efficiently than the more traditional Feynman diagram approach.
Changes of Explicit and Implicit Stigma in Medical Students during Psychiatric Clerkship.
Wang, Peng-Wei; Ko, Chih-Hung; Chen, Cheng-Sheng; Yang, Yi-Hsin Connine; Lin, Huang-Chi; Cheng, Cheng-Chung; Tsang, Hin-Yeung; Wu, Ching-Kuan; Yen, Cheng-Fang
2016-04-01
This study examines the differences in explicit and implicit stigma between medical and non-medical undergraduate students at baseline; the changes in explicit and implicit stigma in medical and non-medical undergraduate students after a 1-month psychiatric clerkship and a 1-month follow-up period; and the differences in these changes between medical and non-medical undergraduate students. Seventy-two medical undergraduate students and 64 non-medical undergraduate students were enrolled. All participants were interviewed at intake and after 1 month. The Taiwanese version of the Stigma Assessment Scale and the Implicit Association Test were used to measure the participants' explicit and implicit stigma. Neither explicit nor implicit stigma differed between the two groups at baseline. The medical, but not the non-medical, undergraduate students showed a significant decrease in explicit stigma during the 1-month follow-up period. Neither the medical nor the non-medical undergraduate students exhibited a significant change in implicit stigma during the 1-month follow-up, however. There was an interactive effect between group and time on explicit stigma but not on implicit stigma. Explicit, but not implicit, stigma toward mental illness decreased in the medical undergraduate students after a psychiatric clerkship. Further study is needed to examine how to reduce implicit stigma toward mental illness.
Santora, Jarrod A; Schroeder, Isaac D; Field, John C; Wells, Brian K; Sydeman, William J
Studies of predator–prey demographic responses and the physical drivers of such relationships are rare, yet essential for predicting future changes in the structure and dynamics of marine ecosystems. Here, we hypothesize that predator–prey relationships vary spatially in association with underlying physical ocean conditions, leading to observable changes in demographic rates, such as reproduction. To test this hypothesis, we quantified spatio-temporal variability in hydrographic conditions, krill, and forage fish to model predator (seabird) demographic responses over 18 years (1990–2007). We used principal component analysis and spatial correlation maps to assess coherence among ocean conditions, krill, and forage fish, and generalized additive models to quantify interannual variability in seabird breeding success relative to prey abundance. The first principal component of four hydrographic measurements yielded an index that partitioned “warm/weak upwelling” and “cool/strong upwelling” years. Partitioning of krill and forage fish time series among shelf and oceanic regions yielded spatially explicit indicators of prey availability. Krill abundance within the oceanic region was remarkably consistent between years, whereas krill over the shelf showed marked interannual fluctuations in relation to ocean conditions. Anchovy abundance varied on the shelf, and was greater in years of strong stratification, weak upwelling and warmer temperatures. Spatio-temporal variability of juvenile forage fishes co-varied strongly with each other and with krill, but was weakly correlated with hydrographic conditions. Demographic responses of seabirds to prey availability revealed spatially variable associations indicative of the dynamic nature of “predator–habitat” relationships. Quantification of spatially explicit demographic responses, and their variability through time, demonstrates the possibility of delineating specific critical areas where the implementation of protective measures could maintain functions and productivity of central-place foraging predators.
Land use patterns and related carbon losses following deforestation in South America
NASA Astrophysics Data System (ADS)
De Sy, V.; Herold, M.; Achard, F.; Beuchle, R.; Clevers, J. G. P. W.; Lindquist, E.; Verchot, L.
2015-12-01
Land use change in South America, mainly deforestation, is a large source of anthropogenic CO2 emissions. Identifying and addressing the causes or drivers of anthropogenic forest change is considered crucial for global climate change mitigation. Few countries, however, monitor deforestation drivers in a systematic manner. National-level quantitative spatially explicit information on drivers is often lacking. This study quantifies proximate drivers of deforestation and related carbon losses in South America based on remote sensing time series in a systematic, spatially explicit manner. Deforestation areas were derived from the 2010 global remote sensing survey of the Food and Agriculture Organization Forest Resources Assessment. To assess proximate drivers, land use following deforestation was assigned by visual interpretation of high-resolution satellite imagery. To estimate gross carbon losses from deforestation, default Tier 1 biomass levels per country and eco-zone were used. Pasture was the dominant driver of forest area loss (71.2%) and related carbon loss (71.6%) in South America, followed by commercial cropland (14% and 12.1%, respectively). Hotspots of deforestation due to pasture occurred in Northern Argentina, Western Paraguay, and along the arc of deforestation in Brazil, where they gradually moved into higher biomass forests causing additional carbon losses. Deforestation driven by commercial cropland increased in time, with hotspots occurring in Brazil (Mato Grosso State), Northern Argentina, Eastern Paraguay and Central Bolivia. Infrastructure, such as urban expansion and roads, contributed little as a proximate driver of forest area loss (1.7%). Our findings contribute to the understanding of drivers of deforestation and related carbon losses in South America, and are comparable at the national, regional and continental level. In addition, they support the development of national REDD+ interventions and forest monitoring systems, and provide valuable input for statistical analysis and modelling of underlying drivers of deforestation.
Memory-Efficient Analysis of Dense Functional Connectomes.
Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian
2016-01-01
The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
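The on-demand representation described above is easy to sketch. Below is a minimal Python illustration (the class and method names are invented for this example; the published package is Matlab-based): only the z-scored time series are stored, and any correlation matrix element is computed as a dot product at access time.

```python
import numpy as np

class OnDemandConnectome:
    """Dense connectome whose entries are computed on demand:
    memory use stays at the size of the (nodes x timepoints) data,
    and the nodes x nodes correlation matrix is never materialized."""

    def __init__(self, timeseries):
        ts = np.asarray(timeseries, dtype=np.float64)
        ts = ts - ts.mean(axis=1, keepdims=True)                  # demean rows
        self.z = ts / np.linalg.norm(ts, axis=1, keepdims=True)   # unit norm

    def __getitem__(self, idx):
        i, j = idx
        # Pearson correlation of nodes i and j as a dot product.
        return float(self.z[i] @ self.z[j])

    def row(self, i):
        # One full matrix row, recomputed on the fly when needed.
        return self.z @ self.z[i]

# Usage: graph degree of node 0 at threshold 0.3, matrix-free.
rng = np.random.default_rng(0)
conn = OnDemandConnectome(rng.standard_normal((5000, 200)))
degree0 = int((conn.row(0) > 0.3).sum()) - 1   # exclude the self-correlation
```

For n nodes and t timepoints this keeps O(nt) numbers in memory rather than the O(n^2) entries of the explicit matrix, at the price of recomputing elements on access.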
Emotional memory is perceptual.
Arntz, Arnoud; de Groot, Corlijn; Kindt, Merel
2005-03-01
In two experiments it was investigated which aspects of memory are influenced by emotion. Using a framework proposed by Roediger (American Psychologist 45 (1990) 1043-1056), two dimensions relevant for memory were distinguished: the implicit-explicit distinction, and the perceptual versus conceptual distinction. In week 1, subjects viewed a series of slides accompanied by a spoken story in one of two versions: a neutral version, or a version with an emotional mid-phase. In week 2, memory performance for the slides and story was assessed unexpectedly. A free recall test revealed superior memory in the emotional condition for the story's mid-phase stimuli as compared to the neutral condition, replicating earlier findings. Furthermore, memory performance was assessed using tests that systematically assessed all combinations of implicit versus explicit and perceptual versus conceptual memory. Subjects who had listened to the emotional story had superior perceptual memory, at both the implicit and explicit level, compared to those who had listened to the neutral story. Conceptual memory was not superior in the emotional condition. The results suggest that emotion specifically promotes perceptual memory, probably by better encoding of perceptual aspects of emotional experiences. This might be related to the prominent position of perceptual memories in traumatic memory, manifest in intrusions, nightmares and reliving experiences.
Subliminal mere exposure and explicit and implicit positive affective responses.
Hicks, Joshua A; King, Laura A
2011-06-01
Research suggests that repeated subliminal exposure to environmental stimuli enhances positive affective responses. To date, this research has primarily concentrated on the effects of repeated exposure on explicit measures of positive affect (PA). However, recent research suggests that repeated subliminal presentations may increase implicit PA as well. The present study tested this hypothesis. Participants were either subliminally primed with repeated presentations of the same stimuli or only exposed to each stimulus one time. Results confirmed predictions showing that repeated exposure to the same stimuli increased both explicit and implicit PA. Implications for the role of explicit and implicit PA in attitudinal judgements are discussed.
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c < 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ)G(Δc), where N is the integer part of c, and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piece-wise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
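The extension sketched in the abstract is compact enough to demonstrate directly. The following Python fragment (an illustration of the idea, not the paper's code) splits the Courant number as c = N + Δc, advects exactly over the N whole cells by an index shift on a periodic grid, and lets any sub-unity-Courant scheme (first-order upwind here) handle the fractional remainder.

```python
import numpy as np

def upwind_step(u, dc):
    """First-order upwind update, stable for Courant numbers dc <= 1."""
    return u - dc * (u - np.roll(u, 1))

def advect_large_courant(u, c):
    """Advance linear advection by Courant number c, possibly > 1:
    the integer part N is an exact shift by N cells (periodic grid);
    the fractional part dc < 1 is handled by the ordinary scheme."""
    N = int(np.floor(c))
    dc = c - N
    return upwind_step(np.roll(u, N), dc)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)
u_new = advect_large_courant(u, c=3.4)   # remains stable although c > 1
```

The amplitude ratio of the combined update is exactly the exp(-iNθ)G(Δc) factor quoted above, since the integer shift only contributes a phase.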
NASA Astrophysics Data System (ADS)
Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca
2017-12-01
An important ingredient in numerical modelling of high-temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms will considerably reduce the simulation time step, because for stability the time step depends on the square of the grid resolution (Δx²). Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
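The payoff of the RKL super-step can be quantified with its stability bound. The sketch below (an illustration under the assumption that the RKL2 super-step satisfies Δt ≤ Δt_par·(s² + s − 2)/4, as given by Meyer, Balsara & Aslam 2012; not the authors' code) picks the smallest usable stage count: because the covered interval grows like s² while the work grows like s, a handful of stages replaces many explicit substeps.

```python
import math

def rkl2_stages(dt_target, dt_parabolic):
    """Smallest (odd, as commonly preferred) stage count s such that one
    RKL2 super-step of length dt_target is stable, assuming the bound
    dt_target <= dt_parabolic * (s**2 + s - 2) / 4."""
    ratio = dt_target / dt_parabolic
    s = math.ceil(0.5 * (math.sqrt(9.0 + 16.0 * ratio) - 1.0))
    return s + 1 if s % 2 == 0 else s

# Example: the diffusive limit is 100x smaller than the advective time step.
s = rkl2_stages(1.0e-3, 1.0e-5)
print(s)   # 21 stages instead of 100 explicit parabolic substeps
```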
Class of self-limiting growth models in the presence of nonlinear diffusion
NASA Astrophysics Data System (ADS)
Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar
2002-06-01
The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial populations in a culture, on the other hand, are described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial fronts for these models.
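As a concrete toy of kinetics with an explicitly time-dependent source, the Python sketch below (a hypothetical example in the spirit of such models, not the specific systems analyzed in the paper) integrates u_t = ∂x(D u ∂x u) + r0 e^(−αt) u(1 − u): logistic growth whose rate decays explicitly in time, combined with a degenerate nonlinear diffusion term, so the front slows as the growth rate fades.

```python
import numpy as np

def step(u, t, dt, dx, D=1.0, r0=1.0, alpha=0.05):
    """One explicit step of u_t = d/dx(D*u*du/dx) + r0*exp(-alpha*t)*u*(1-u)."""
    uf = 0.5 * (u[1:] + u[:-1])                    # u at cell faces
    flux = D * uf * (u[1:] - u[:-1]) / dx          # nonlinear diffusive flux
    div = np.zeros_like(u)
    div[1:-1] = (flux[1:] - flux[:-1]) / dx        # ends held fixed (Dirichlet)
    growth = r0 * np.exp(-alpha * t) * u * (1.0 - u)   # rate decays in time
    return u + dt * (div + growth)

x = np.linspace(0.0, 50.0, 501)
u = np.where(x < 5.0, 1.0, 0.0)                    # initial front
t, dt, dx = 0.0, 0.002, x[1] - x[0]                # dt below dx**2 / (2*D)
for _ in range(5000):
    u = step(u, t, dt, dx)
    t += dt
```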
ERIC Educational Resources Information Center
Shintani, Natsuko
2017-01-01
This study examines the effects of the timing of explicit instruction (EI) on grammatical accuracy. A total of 123 learners were divided into two groups: those with some productive knowledge of past-counterfactual conditionals (+Prior Knowledge) and those without such knowledge (-Prior Knowledge). Each group was divided into four conditions. Two…
Finite Element Modeling of Coupled Flexible Multibody Dynamics and Liquid Sloshing
2006-09-01
tanks is presented. The semi-discrete combined solid and fluid equations of motion are integrated using a time-accurate parallel explicit solver... Incompressible fluid flow in a moving/deforming container including accurate modeling of the free surface, turbulence, and viscous effects... paper, a single computational code which uses a time-accurate explicit solution procedure is used to solve both the solid and fluid equations of
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than that of the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
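The division of labor in an IMEX step is easiest to see in the simplest member of the family. The Python sketch below (a first-order IMEX Euler toy on Burgers' equation, much simpler than the 6-stage additive RK schemes of the paper) treats the stiff diffusion implicitly, so only the explicit convective part restricts the time step.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

def imex_burgers(u, dt, dx, nu, nsteps):
    """First-order IMEX for u_t + u*u_x = nu*u_xx on a periodic grid:
    explicit convection, implicit diffusion,
    (I - dt*nu*Dxx) u^{n+1} = u^n - dt * u^n * (u^n)_x."""
    n = u.size
    Dxx = diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
                [-1, 0, 1]).tolil()
    Dxx[0, -1] = Dxx[-1, 0] = 1.0                  # periodic wrap-around
    A = (identity(n) - (dt * nu / dx**2) * Dxx).tocsc()
    solve = splu(A).solve                          # factor once, reuse each step
    for _ in range(nsteps):
        conv = u * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        u = solve(u - dt * conv)
    return u

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = imex_burgers(np.sin(x), dt=1e-2, dx=x[1] - x[0], nu=0.1, nsteps=200)
```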
Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.
2016-01-01
Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers.
Wei, Kun; Zhong, Suchuan
2017-08-01
Phenomenologically inspired by dolphins' unihemispheric sleep, we introduce a minimal model for random walks with physiological memory. The physiological memory consists of long-term memory, which includes unconscious implicit memory and conscious explicit memory, and working memory, which serves as a multi-component system for integrating, manipulating and managing short-term storage. The model assumes that the sleeping state allows retrievals of episodic objects merely from the episodic buffer, where these memory objects are invoked corresponding to the ambient objects and are thus object-oriented, together with intermittent but increasing use of implicit memory, in which decisions are unconsciously picked up from historical time series. The process of memory decay and forgetting is constructed in the episodic buffer. The walker's risk attitude, as a product of physiological heuristics according to the performance of object-oriented decisions, is imposed on implicit memory. The analytical results of unihemispheric random walks with the mixture of object-oriented and time-oriented memory, as well as the long-time behavior which tends to the use of implicit memory, are provided, consistent with the common-sense expectation that a conservative risk attitude favors slow movement.
Liang, H; Shi, B C; Guo, Z L; Chai, Z H
2014-05-01
In this paper, a phase-field-based multiple-relaxation-time lattice Boltzmann (LB) model is proposed for incompressible multiphase flow systems. In this model, one distribution function is used to solve the Cahn-Hilliard equation and the other is adopted to solve the Navier-Stokes equations. Unlike previous phase-field-based LB models, a proper source term is incorporated in the interfacial evolution equation such that the Cahn-Hilliard equation can be derived exactly, and a pressure distribution is also designed to recover the correct hydrodynamic equations. Furthermore, the pressure and velocity fields can be calculated explicitly. A series of numerical tests, including Zalesak's disk rotation, a single vortex, a deformation field, and a static droplet, have been performed to test the accuracy and stability of the present model. The results show that, compared with the previous models, the present model is more stable and achieves an overall improvement in the accuracy of interface capturing. In addition, compared to the single-relaxation-time LB model, the present model can effectively reduce the spurious velocity and the fluctuation of the kinetic energy. Finally, as an application, the Rayleigh-Taylor instability at high Reynolds numbers is investigated.
EventThread: Visual Summarization and Stage Analysis of Event Sequence Data.
Guo, Shunan; Xu, Ke; Zhao, Rongwen; Gotz, David; Zha, Hongyuan; Cao, Nan
2018-01-01
Event sequence data such as electronic health records, a person's academic records, or car service records, are ordered series of events which have occurred over a period of time. Analyzing collections of event sequences can reveal common or semantically important sequential patterns. For example, event sequence analysis might reveal frequently used care plans for treating a disease, typical publishing patterns of professors, and the patterns of service that result in a well-maintained car. It is challenging, however, to visually explore large numbers of event sequences, or sequences with large numbers of event types. Existing methods focus on extracting explicitly matching patterns of events using statistical analysis to create stages of event progression over time. However, these methods fail to capture latent clusters of similar but not identical evolutions of event sequences. In this paper, we introduce a novel visualization system named EventThread which clusters event sequences into threads based on tensor analysis and visualizes the latent stage categories and evolution patterns by interactively grouping the threads by similarity into time-specific clusters. We demonstrate the effectiveness of EventThread through usage scenarios in three different application domains and via interviews with an expert user.
Intermediate energy proton-deuteron elastic scattering
NASA Technical Reports Server (NTRS)
Wilson, J. W.
1973-01-01
A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan based on model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
A solution to the surface intersection problem. [Boolean functions in geometric modeling
NASA Technical Reports Server (NTRS)
Timer, H. G.
1977-01-01
An application-independent geometric model within a data base framework should support the use of Boolean operators which allow the user to construct a complex model by appropriately combining a series of simple models. The use of these operators leads to the concept of implicitly and explicitly defined surfaces. With an explicitly defined model, the surface area may be computed by simply summing the surface areas of the bounding surfaces. For an implicitly defined model, the surface area computation must deal with active and inactive regions. Because the surface intersection problem involves four unknowns and its solution is a space curve, the parametric coordinates of each surface must be determined as a function of the arc length. Various subproblems involved in the general intersection problem are discussed, and the mathematical basis for their solution is presented along with a program written in FORTRAN IV for implementation on the IBM 370 TSO system.
The explicit form of the rate function for semi-Markov processes and its contractions
NASA Astrophysics Data System (ADS)
Sughiyama, Yuki; Kobayashi, Tetsuya J.
2018-03-01
We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by exploiting the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.
Full versus divided attention and implicit memory performance.
Wolters, G; Prinsen, A
1997-11-01
Effects of full and divided attention during study on explicit and implicit memory performance were investigated in two experiments. Study time was manipulated in a third experiment. Experiment 1 showed that both similar and dissociative effects can be found in the two kinds of memory test, depending on the difficulty of the concurrent tasks used in the divided-attention condition. In this experiment, however, standard implicit memory tests were used and contamination by explicit memory influences cannot be ruled out. Therefore, in Experiments 2 and 3 the process dissociation procedure was applied. Manipulations of attention during study and of study time clearly affected the controlled (explicit) memory component, but had no effect on the automatic (implicit) memory component. Theoretical implications of these findings are discussed.
NASA Astrophysics Data System (ADS)
Kang, S.; Muralikrishnan, S.; Bui-Thanh, T.
2017-12-01
We propose IMEX HDG-DG schemes for Euler systems on the cubed sphere. Of interest is subsonic flow, where the speed of the acoustic wave is faster than that of the nonlinear advection. In order to simulate these flows efficiently, we split the governing system into a stiff part describing the fast waves and a non-stiff part associated with nonlinear advection. The former is discretized implicitly with the HDG method, while an explicit Runge-Kutta DG discretization is employed for the latter. The proposed IMEX HDG-DG framework: 1) facilitates high-order solutions both in time and space; 2) avoids overly small time stepsizes; 3) requires only one linear system solve per time step; and 4) relative to DG, generates smaller and sparser linear systems while promoting further parallelism owing to the HDG discretization. Numerical results for various test cases demonstrate that our methods are comparable to explicit Runge-Kutta DG schemes in terms of accuracy, while allowing for much larger time stepsizes.
Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations (S_{N,M}) to a periodic function f which uses the ideas of Pade, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S_{N,M} is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_{N,M} agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Pade' approximations converge point-wise to (f(x^+) + f(x^-))/2 more rapidly (in some cases by a factor of 1/k^{2M}) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.
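One standard route to such approximations (a hedged sketch: the paper derives explicit coefficient formulas, whereas here the coefficients come from a generic linear solve) is to regard the Fourier series as a Taylor series in z = e^{ix} and apply ordinary Pade approximation, illustrated below for the square wave, whose coefficients in z are (4/pi)/k for odd k and 0 otherwise.

```python
import numpy as np

def pade(c, N, M):
    """[N/M] Pade coefficients (a, b) from Taylor coefficients c[0..N+M],
    normalizing b[0] = 1 and matching the series through degree N + M."""
    A = np.array([[c[N + i - j] if N + i - j >= 0 else 0.0
                   for j in range(1, M + 1)]
                  for i in range(1, M + 1)], dtype=complex)
    rhs = -np.array([c[N + i] for i in range(1, M + 1)], dtype=complex)
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(N + 1)])
    return a, b

N = M = 8
c = np.array([0.0 if k % 2 == 0 else 4.0 / (np.pi * k)
              for k in range(N + M + 1)], dtype=complex)
a, b = pade(c, N, M)
x = np.linspace(0.1, np.pi - 0.1, 5)
z = np.exp(1j * x)
approx = np.imag(np.polyval(a[::-1], z) / np.polyval(b[::-1], z))
# approx is close to the square wave value 1 away from the jumps at 0 and pi,
# converging much faster there than the N-term Fourier partial sum.
```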
Batterink, Laura; Neville, Helen
2011-11-01
The vast majority of word meanings are learned simply by extracting them from context rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M-). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M- words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M- words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time window compared with M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, whereas implicit representations may require more extensive exposure or more time to emerge.
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, allowing the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious with increasing mass ratio, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time step and large time delay.
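For reference, the CDM recursion mentioned above fits in a few lines. The Python sketch below (a textbook central-difference formulation with an invented two-DOF example, not the paper's substructure code) advances M·a + C·v + K·u = f(t) with one solve of (M/Δt² + C/2Δt) per step, which is trivially cheap when M and C are diagonal, the case where CDM wins.

```python
import numpy as np

def cdm(M, C, K, f, u0, v0, dt, nsteps):
    """Central difference method for M*a + C*v + K*u = f(t)."""
    M_dt2 = M / dt**2
    C_2dt = C / (2.0 * dt)
    lhs = M_dt2 + C_2dt                       # diagonal if M and C are diagonal
    a0 = np.linalg.solve(M, f(0.0) - C @ v0 - K @ u0)
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0  # fictitious step u_{-1}
    u, out = u0.copy(), [u0.copy()]
    for n in range(nsteps):
        rhs = f(n * dt) - (K - 2.0 * M_dt2) @ u - (M_dt2 - C_2dt) @ u_prev
        u_prev, u = u, np.linalg.solve(lhs, rhs)
        out.append(u.copy())
    return np.array(out)

M = np.diag([1.0, 1.0]); C = np.diag([0.2, 0.2])   # diagonal damping
K = np.array([[40.0, -20.0], [-20.0, 20.0]])
hist = cdm(M, C, K, lambda t: np.zeros(2),
           u0=np.array([0.01, 0.0]), v0=np.zeros(2), dt=0.005, nsteps=2000)
```

The conditional stability Δt ≤ 2/ω_max is respected here (ω_max ≈ 7.2 rad/s for this K).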
Time irreversibility in reversible shell models of turbulence.
De Pietro, Massimo; Biferale, Luca; Boffetta, Guido; Cencini, Massimo
2018-04-06
Turbulent flows governed by the Navier-Stokes equations (NSE) generate an out-of-equilibrium time-irreversible energy cascade from large to small scales. In the NSE, the energy transfer is due to the nonlinear terms that are formally symmetric under time reversal. As for the dissipative term: first, it explicitly breaks time reversibility; second, it produces a small-scale sink for the energy transfer that remains effective even in the limit of vanishing viscosity. As a result, it is not clear how to disentangle the time irreversibility originating from the non-equilibrium energy cascade from the explicit time-reversal symmetry breaking due to the viscous term. To this aim, in this paper we investigate the properties of the energy transfer in turbulent shell models by using a reversible viscous mechanism, avoiding any explicit breaking of the time-reversal symmetry. We probe time irreversibility by studying the statistics of Lagrangian power, which is found to be asymmetric under time reversal also in the time-reversible model. This suggests that the turbulent dynamics converges to a strange attractor where time reversibility is spontaneously broken and whose properties are robust as far as the purely inertial degrees of freedom are concerned, as verified by the anomalous scaling behavior of the velocity structure functions.
Mapping and Visualization of Storm-Surge Dynamics for Hurricane Katrina and Hurricane Rita
Gesch, Dean B.
2009-01-01
The damages caused by the storm surges from Hurricane Katrina and Hurricane Rita were significant and occurred over broad areas. Storm-surge maps are among the most useful geospatial datasets for hurricane recovery, impact assessments, and mitigation planning for future storms. Surveyed high-water marks were used to generate a maximum storm-surge surface for Hurricane Katrina extending from eastern Louisiana to Mobile Bay, Alabama. The interpolated surface was intersected with high-resolution lidar elevation data covering the study area to produce a highly detailed digital storm-surge inundation map. The storm-surge dataset and related data are available for display and query in a Web-based viewer application. A unique water-level dataset from a network of portable pressure sensors deployed in the days just prior to Hurricane Rita's landfall captured the hurricane's storm surge. The recorded sensor data provided water-level measurements with a very high temporal resolution at surveyed point locations. The resulting dataset was used to generate a time series of storm-surge surfaces that documents the surge dynamics in a new, spatially explicit way. The temporal information contained in the multiple storm-surge surfaces can be visualized in a number of ways to portray how the surge interacted with and was affected by land surface features. Spatially explicit storm-surge products can be useful for a variety of hurricane impact assessments, especially studies of wetland and land changes where knowledge of the extent and magnitude of storm-surge flooding is critical.
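The surge-surface step lends itself to a compact sketch. The Python fragment below (a schematic with synthetic data and made-up coordinates, not the USGS production workflow) interpolates surveyed high-water marks to a maximum-surge surface on the DEM grid and subtracts the ground elevation to obtain an inundation depth grid.

```python
import numpy as np
from scipy.interpolate import griddata

def inundation_depth(hwm_xy, hwm_z, dem_x, dem_y, dem_z):
    """Interpolate high-water marks to a surge surface on the DEM grid,
    then intersect with the terrain: depth = surge - ground, where > 0."""
    gx, gy = np.meshgrid(dem_x, dem_y)
    surge = griddata(hwm_xy, hwm_z, (gx, gy), method="linear")
    depth = surge - dem_z
    return np.where(np.isfinite(depth) & (depth > 0.0), depth, 0.0)

rng = np.random.default_rng(1)
hwm_xy = rng.uniform(0.0, 10.0, size=(30, 2))   # 30 surveyed mark locations
hwm_z = 3.0 + 0.1 * hwm_xy[:, 0]                # surge rising eastward (m)
x = y = np.linspace(0.0, 10.0, 201)
dem_z = 0.4 * np.add.outer(y, x)                # synthetic lidar terrain (m)
depth = inundation_depth(hwm_xy, hwm_z, x, y, dem_z)   # flooding depth grid
```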
Linking river management to species conservation using dynamic landscape scale models
Freeman, Mary C.; Buell, Gary R.; Hay, Lauren E.; Hughes, W. Brian; Jacobson, Robert B.; Jones, John W.; Jones, S.A.; LaFontaine, Jacob H.; Odom, Kenneth R.; Peterson, James T.; Riley, Jeffrey W.; Schindler, J. Stephen; Shea, C.; Weaver, J.D.
2013-01-01
Efforts to conserve stream and river biota could benefit from tools that allow managers to evaluate landscape-scale changes in species distributions in response to water management decisions. We present a framework and methods for integrating hydrology, geographic context and metapopulation processes to simulate effects of changes in streamflow on fish occupancy dynamics across a landscape of interconnected stream segments. We illustrate this approach using a 482 km2 catchment in the southeastern US supporting 50 or more stream fish species. A spatially distributed, deterministic and physically based hydrologic model is used to simulate daily streamflow for sub-basins composing the catchment. We use geographic data to characterize stream segments with respect to channel size, confinement, position and connectedness within the stream network. Simulated streamflow dynamics are then applied to model fish metapopulation dynamics in stream segments, using hypothesized effects of streamflow magnitude and variability on population processes, conditioned by channel characteristics. The resulting time series simulate spatially explicit, annual changes in species occurrences or assemblage metrics (e.g. species richness) across the catchment as outcomes of management scenarios. Sensitivity analyses using alternative, plausible links between streamflow components and metapopulation processes, or allowing for alternative modes of fish dispersal, demonstrate large effects of ecological uncertainty on model outcomes and highlight needed research and monitoring. Nonetheless, with uncertainties explicitly acknowledged, dynamic, landscape-scale simulations may prove useful for quantitatively comparing river management alternatives with respect to species conservation.
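The flow-to-occupancy coupling can be caricatured in a few lines. The Python sketch below is entirely hypothetical (invented rates, a linear chain of segments standing in for the stream network, and a lognormal flow index): each year, colonization requires an occupied neighbor and is eased by wetter conditions, while extinction risk rises in low-flow years, yielding an annual landscape-level occupancy series.

```python
import numpy as np

rng = np.random.default_rng(7)
n_seg, n_years = 50, 20
flow = rng.lognormal(mean=0.0, sigma=0.5, size=n_years)   # annual flow index
occ = rng.random(n_seg) < 0.5                              # initial occupancy

def annual_update(occ, q):
    """One year of colonization/extinction, modulated by flow index q
    (hypothetical functional forms, for illustration only)."""
    p_col = 0.2 * min(q, 2.0)                   # wetter years ease dispersal
    p_ext = 0.15 / max(q, 0.25)                 # low flow raises extinction risk
    neighbors = np.roll(occ, 1) | np.roll(occ, -1)   # linear stream network
    colonized = (~occ) & neighbors & (rng.random(occ.size) < p_col)
    persisted = occ & (rng.random(occ.size) >= p_ext)
    return colonized | persisted

occupied = [occ.sum()]
for year in range(n_years):
    occ = annual_update(occ, flow[year])
    occupied.append(occ.sum())                  # occupied segments per year
```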
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Toward real-time performance benchmarks for Ada
NASA Technical Reports Server (NTRS)
Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy
1986-01-01
The issue of real-time performance measurements for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques is developed. Then a set of Ada language features believed to be important for real-time performance are presented and specific measurement methods discussed. In addition, other important time-related features which are not explicitly part of the language but are part of the run-time system are also identified and measurement techniques developed. The measurement techniques are applied to the language and run-time system features and the results are presented.
Assessing REDD+ performance of countries with low monitoring capacities: the matrix approach
NASA Astrophysics Data System (ADS)
Bucki, M.; Cuypers, D.; Mayaux, P.; Achard, F.; Estreguil, C.; Grassi, G.
2012-03-01
Estimating emissions from deforestation and degradation of forests in many developing countries is so uncertain that the effects of changes in forest management could remain within error ranges (i.e. undetectable) for several years. Meanwhile, UNFCCC Parties need consistent time series of meaningful performance indicators to set credible benchmarks and allocate REDD+ incentives to the countries, programs and activities that actually reduce emissions, while providing social and environmental benefits. Introducing widespread measuring of carbon in forest land (which would be required to estimate more accurately changes in emissions from degradation and forest management) will take time and considerable resources. To ensure the overall credibility and effectiveness of REDD+, Parties must consider the design of cost-effective systems which can provide reliable and comparable data on anthropogenic forest emissions. Remote sensing can provide consistent time series of land cover maps for most non-Annex-I countries, retrospectively. These maps can be analyzed to identify the forests that are intact (i.e. beyond significant human influence), and whose fragmentation could be a proxy for degradation. This binary stratification of forest biomes (intact/non-intact), a transition matrix and the use of default carbon stock change factors can then be used to provide initial estimates of trends in emission changes. A proof-of-concept is provided for one biome of the Democratic Republic of the Congo over a virtual commitment period (2005-2010). This approach could allow assessment of the performance of the five REDD+ activities (deforestation, degradation, conservation, management and enhancement of forest carbon stocks) in a spatially explicit, verifiable manner. Incentives could then be tailored to prioritize activities depending on the national context and objectives.
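Once transition areas and default stock-change factors are fixed, the matrix approach is simple bookkeeping. The Python sketch below applies stock-difference accounting to a three-class (intact / non-intact / non-forest) transition matrix; all numbers are invented placeholders, not data for the Congo basin.

```python
import numpy as np

classes = ["intact", "non_intact", "non_forest"]
# Rows: class at start of period; columns: class at end (areas in kha).
area_kha = np.array([
    [900.0,  60.0, 15.0],     # intact -> intact / degraded / deforested
    [  0.0, 480.0, 40.0],     # non-intact -> unchanged / deforested
    [  0.0,   0.0, 505.0],
])
carbon_tC_ha = {"intact": 160.0, "non_intact": 90.0, "non_forest": 5.0}

emissions_ktC = 0.0
for i, src in enumerate(classes):
    for j, dst in enumerate(classes):
        if i != j:
            # Stock-difference method: area * (C_before - C_after).
            emissions_ktC += area_kha[i, j] * (carbon_tC_ha[src]
                                               - carbon_tC_ha[dst])

print(f"gross emissions: {emissions_ktC / 1e3:.1f} MtC over the period")
```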
Bock von Wülfingen, Bettina
2015-03-01
The article analyses the role of time in the visual culture of two phases in embryological research: at the end of the nineteenth century, and in the years around 2000. The first case study involves microscopical cytology, the second reproductive genetics. In the 1870s we observe the first of a series of abstractions in research methodology on conception and development, moving from a method propagated as the observation of the "real" living object to the production of stained and fixated objects that are then aligned in temporal order. This process of abstraction ultimately fosters a dissociation between space and time in the research phenomenon, which after 2000 is problematized and explicitly tackled in embryology. Mass data computing made it possible to partially re-include temporal complexity in certain, though not all, fields of reproductive genetics. Here research question, instrument and modelling interact in ways that produce very different temporal relationships. Specifically, this article suggests that the different techniques in the late nineteenth century and around 2000 were employed in order to align the time of the researcher with that of the phenomenon and to economize the researcher's work in interaction with the research material's own temporal challenges.
Three-dimensional inverse modelling of damped elastic wave propagation in the Fourier domain
NASA Astrophysics Data System (ADS)
Petrov, Petr V.; Newman, Gregory A.
2014-09-01
3-D full waveform inversion (FWI) of seismic wavefields is routinely implemented with explicit time-stepping simulators. A clear advantage of explicit time stepping is the avoidance of solving large-scale implicit linear systems that arise with frequency domain formulations. However, FWI using explicit time stepping may require a very fine time step and (as a consequence) significant computational resources and run times. If the computational challenges of wavefield simulation can be effectively handled, an FWI scheme implemented within the frequency domain utilizing only a few frequencies offers a cost-effective alternative to FWI in the time domain. We have therefore implemented a 3-D FWI scheme for elastic wave propagation in the Fourier domain. To overcome the computational bottleneck in wavefield simulation, we have exploited an efficient Krylov iterative solver for the elastic wave equations approximated with second- and fourth-order finite differences. The solver does not exploit multilevel preconditioning for wavefield simulation, but is coupled efficiently to the inversion iteration workflow to reduce computational cost. The workflow is best described as a series of sequential inversion experiments, where in the case of seismic reflection acquisition geometries, the data have been laddered such that we first image highly damped data, followed by data where damping is systematically reduced. The key to our modelling approach is its ability to take advantage of solver efficiency when the elastic wavefields are damped. As the inversion experiment progresses, damping is significantly reduced, effectively simulating non-damped wavefields in the Fourier domain. While the cost of the forward simulation increases as damping is reduced, this is counterbalanced by the cost of the outer inversion iteration, which is reduced because of a better starting model obtained from the more strongly damped wavefield used in the previous inversion experiment. For cross-well data, it is also possible to launch a successful inversion experiment without laddering the damping constants. With this type of acquisition geometry, the solver is still quite effective using a small fixed damping constant. To avoid cycle skipping, we also employ a multiscale imaging approach, in which the frequency content of the data is also laddered (with the data now including both reflection and cross-well data acquisition geometries). Thus the inversion process is launched using low-frequency data to first recover the long spatial wavelengths of the image. With this image as a new starting model, adding higher-frequency data refines and enhances the resolution of the image. FWI using laddered frequencies with an efficient damping scheme enables reconstructing elastic attributes of the subsurface at a resolution that approaches half the smallest wavelength utilized to image the subsurface. We show the possibility of effectively carrying out such reconstructions using two to six frequencies, depending upon the application. Using the proposed FWI scheme, massively parallel computing resources are essential for reasonable execution times.
NASA Astrophysics Data System (ADS)
Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries
2017-08-01
This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, like Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid have immediate practical applications.
2016-01-01
Moderate Resolution Imaging Spectroradiometer (MODIS) data forms the basis for numerous land use and land cover (LULC) mapping and analysis frameworks at regional scale. Compared to other satellite sensors, the spatial, temporal and spectral specifications of MODIS are considered as highly suitable for LULC classifications which support many different aspects of social, environmental and developmental research. The LULC mapping of this study was carried out in the context of the development of an evaluation approach for Zimbabwe’s land reform program. Within the discourse about the success of this program, a lack of spatially explicit methods to produce objective data, such as on the extent of agricultural area, is apparent. We therefore assessed the suitability of moderate spatial and high temporal resolution imagery and phenological parameters to retrieve regional figures about the extent of cropland area in former freehold tenure in a series of 13 years from 2001–2013. Time-series data was processed with TIMESAT and was stratified according to agro-ecological potential zoning of Zimbabwe. Random Forest (RF) classifications were used to produce annual binary crop/non-crop maps which were evaluated with high spatial resolution data from other satellite sensors. We assessed the cropland products in former freehold tenure in terms of classification accuracy, inter-annual comparability and heterogeneity. Although general LULC patterns were depicted in classification results and an overall accuracy of over 80% was achieved, user accuracies for rainfed agriculture were limited to below 65%. We conclude that phenological analysis has to be treated with caution when rainfed agriculture and grassland in semi-humid tropical regions have to be separated based on MODIS spectral data and phenological parameters. Because classification results significantly underestimate redistributed commercial farmland in Zimbabwe, we argue that the method cannot be used to produce spatial information on land use which could be linked to tenure change. Hence the capabilities of moderate resolution data to assess Zimbabwe’s land reform are limited. To make use of the unquestionable potential of MODIS time-series analysis, we propose an analysis of plant productivity which allows linking annual growth and production of vegetation to ownership after Zimbabwe’s land reform.
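The classification step in such studies follows a common pattern. The Python sketch below uses synthetic phenological features standing in for TIMESAT outputs (season start day, peak value, amplitude, season length; all distributions invented) and is not the study's actual pipeline; it simply shows a Random Forest crop/non-crop classifier on per-pixel seasonality parameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Columns: season start (doy), peak index, amplitude, season length (days).
crop  = rng.normal([120, 0.60, 0.40, 150], [10, 0.05, 0.05, 15], size=(n, 4))
grass = rng.normal([110, 0.50, 0.30, 170], [15, 0.08, 0.08, 25], size=(n, 4))
X = np.vstack([crop, grass])
y = np.array([1] * n + [0] * n)              # 1 = cropland, 0 = grassland

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"overall accuracy: {rf.score(X_te, y_te):.2f}")
# Heavily overlapping phenologies (as for rainfed crops vs grassland in
# semi-humid regions) drive this accuracy down, mirroring the limits above.
```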
Spectral resolution of SU(3)-invariant solutions of the Yang-Baxter equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alishauskas, S.I.; Kulish, P.P.
1986-11-20
The spectral resolution of invariant R-matrices is computed on the basis of the solution of the defining equation. Multiple representations in the Clebsch-Gordan series are considered by means of the classifying operator A: a linear combination of known operators of third and fourth degrees in the group generators. The matrix elements of A in a nonorthonormal basis are found. Explicit expressions are presented for the spectral resolutions for a number of representations.
A MULTIPLE GRID APPROACH FOR OPEN CHANNEL FLOWS WITH STRONG SHOCKS. (R825200)
Explicit finite difference schemes are being widely used for modeling open channel flows accompanied with shocks. A characteristic feature of explicit schemes is the small time step, which is limited by the CFL stability condition. To overcome this limitation,...
What do we know about implicit false-belief tracking?
Schneider, Dana; Slaughter, Virginia P; Dux, Paul E
2015-02-01
There is now considerable evidence that neurotypical individuals track the internal cognitions of others, even in the absence of instructions to do so. This finding has prompted the suggestion that humans possess an implicit mental state tracking system (implicit Theory of Mind, ToM) that exists alongside a system that allows the deliberate and explicit analysis of the mental states of others (explicit ToM). Here we evaluate the evidence for this hypothesis and assess the extent to which implicit and explicit ToM operations are distinct. We review evidence showing that adults can indeed engage in ToM processing even without being conscious of doing so. However, at the same time, there is evidence that explicit and implicit ToM operations share some functional features, including drawing on executive resources. Based on the available evidence, we propose that implicit and explicit ToM operations overlap and should only be considered partially distinct.
Higher-order hybrid implicit/explicit FDTD time-stepping
NASA Astrophysics Data System (ADS)
Tierens, W.
2016-12-01
Both partially implicit FDTD methods, and symplectic FDTD methods of high temporal accuracy (3rd or 4th order), are well documented in the literature. In this paper we combine them: we construct a conservative FDTD method which is fourth order accurate in time and is partially implicit. We show that the stability condition for this method depends exclusively on the explicit part, which makes it suitable for use in e.g. modelling wave propagation in plasmas.
Three-dimensional compact explicit finite-difference time-domain scheme with density variation
NASA Astrophysics Data System (ADS)
Tsuchiya, Takao; Maruta, Naoki
2018-07-01
In this paper, the density variation is implemented in the three-dimensional compact explicit finite-difference time-domain (CE-FDTD) method. The formulation is first developed based on the continuity equation and the equation of motion, which include the density. Some numerical demonstrations are performed for three-dimensional sound wave propagation in a two-density layered medium. The numerical results are compared with the theoretical results to verify the proposed formulation.
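Although the paper treats the three-dimensional CE-FDTD scheme, the way density enters the update equations is already visible in a plain one-dimensional staggered-grid sketch (below, in Python, with invented parameters; it is an ordinary FDTD scheme, not the compact-explicit variant): density appears as ρc² in the pressure update and, face-averaged, as 1/ρ in the velocity update.

```python
import numpy as np

def acoustic_fdtd_1d(rho, c, dx, dt, nsteps, src):
    """Staggered leapfrog FDTD for dp/dt = -rho*c**2 * dv/dx and
    dv/dt = -(1/rho) * dp/dx, with per-cell density rho and speed c."""
    n = rho.size
    p = np.zeros(n)                         # pressure at cell centers
    v = np.zeros(n + 1)                     # velocity at faces (rigid ends)
    rho_f = np.empty(n + 1)                 # density averaged onto faces
    rho_f[1:-1] = 0.5 * (rho[1:] + rho[:-1])
    rho_f[0], rho_f[-1] = rho[0], rho[-1]
    for it in range(nsteps):
        v[1:-1] -= dt / (rho_f[1:-1] * dx) * (p[1:] - p[:-1])
        p -= dt * rho * c**2 / dx * (v[1:] - v[:-1])
        p[n // 4] += src(it * dt)           # soft source injection
    return p

n, dx = 400, 1.0e-3
rho = np.where(np.arange(n) < n // 2, 1.2, 1000.0)   # air | water layers
c = np.where(np.arange(n) < n // 2, 343.0, 1500.0)
dt = 0.9 * dx / c.max()                              # CFL of fastest medium
p = acoustic_fdtd_1d(rho, c, dx, dt, nsteps=800,
                     src=lambda t: np.sin(2.0 * np.pi * 50.0e3 * t))
```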
Swain, Eric D.; Chin, David A.
2003-01-01
A predominant cause of dispersion in groundwater is advective mixing due to variability in seepage rates. Hydraulic conductivity variations have been extensively researched as a cause of this seepage variability. In this paper the effect of variations in surface recharge to a shallow surficial aquifer is investigated as an important additional effect. An analytical formulation has been developed that relates aquifer parameters and the statistics of recharge variability to increases in the dispersivity. This is accomplished by solving Fourier transforms of the small perturbation forms of the groundwater flow equations. Two field studies are presented in this paper to determine the statistics of recharge variability for input to the analytical formulation. A time series of water levels at a continuous groundwater recorder is used to investigate the temporal statistics of hydraulic head caused by recharge, and a series of infiltrometer measurements are used to define the spatial variability in the recharge parameters. With these field statistics representing head fluctuations due to recharge, the analytical formulation can be used to compute the dispersivity without an explicit representation of the recharge boundary. Results from a series of numerical experiments are used to define the limits of this analytical formulation and to provide some comparison. A sophisticated model has been developed using a particle‐tracking algorithm (modified to account for temporal variations) to estimate groundwater dispersion. Dispersivity increases of 9 percent are indicated by the analytical formulation for the aquifer at the field site. A comparison with numerical model results indicates that the analytical results are reasonable for shallow surficial aquifers in which two‐dimensional flow can be assumed.
Interprofessional Collaboration and Turf Wars How Prevalent Are Hidden Attitudes?*
Chung, Chadwick L. R.; Manga, Jasmin; McGregor, Marion; Michailidis, Christos; Stavros, Demetrios; Woodhouse, Linda J.
2012-01-01
Purpose: Interprofessional collaboration in health care is believed to enhance patient outcomes. However, where professions have overlapping scopes of practice (eg, chiropractors and physical therapists), "turf wars" can hinder effective collaboration. Deep-rooted beliefs, identified as implicit attitudes, provide a potential explanation. Even with positive explicit attitudes toward a social group, negative stereotypes may be influential. Previous studies on interprofessional attitudes have mostly used qualitative research methodologies. This study used quantitative methods to evaluate explicit and implicit attitudes of physical therapy students toward chiropractic. Methods: A paper-and-pencil instrument was developed and administered to 49 individuals (students and faculty) associated with a Canadian university master's entry-level physical therapy program after approval by the Research Ethics Board. The instrument evaluated explicit and implicit attitudes toward the chiropractic profession. Implicit attitudes were determined by comparing response times of chiropractic paired with positive versus negative descriptors. Results: Mean time to complete a word association task was significantly longer (t = 4.75, p = .00) when chiropractic was associated with positive rather than negative words. Explicit and implicit attitudes were not correlated (r = 0.13, p = .38). Conclusions: While little explicit bias existed, individuals associated with a master's entry-level physical therapy program appeared to have a significant negative implicit bias toward chiropractic. PMID: 22778528
Narcissistic Traits and Explicit Self-Esteem: The Moderating Role of Implicit Self-View
Di Pierro, Rossella; Mattavelli, Simone; Gallucci, Marcello
2016-01-01
Objective: Whilst the relationship between narcissism and self-esteem has been studied for a long time, findings are still controversial. The majority of studies investigated narcissistic grandiosity (NG), neglecting the existence of vulnerable manifestations of narcissism. Moreover, recent studies have shown that grandiosity traits are not always associated with inflated explicit self-esteem. The aim of the present study is to investigate the relationship between narcissistic traits and explicit self-esteem, distinguishing between grandiosity and vulnerability. Moreover, we consider the role of implicit self-esteem in qualifying these associations. Method: Narcissistic traits and explicit and implicit self-esteem measures were assessed among 120 university students (55.8% women, M age = 22.55, SD = 3.03). Results: Results showed different patterns of association between narcissistic traits and explicit self-esteem, depending on phenotypic manifestations of narcissism. Narcissistic vulnerability (NV) was linked to low explicit self-evaluations regardless of one's levels of implicit self-esteem. On the other hand, the link between NG and explicit self-esteem was qualified by levels of implicit self-views, such that grandiosity was significantly associated with inflated explicit self-evaluations only at either high or medium levels of implicit self-views. Discussion: These findings showed that the relationship between narcissistic traits and explicit self-esteem is not univocal, highlighting the importance of distinguishing between NG and NV. Finally, the study suggested that both researchers and clinicians should consider the relevant role of implicit self-views in conditioning self-esteem levels reported explicitly by individuals with grandiose narcissistic traits. PMID: 27920739
Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. A reference solution was computed with an explicit second-order Runge-Kutta scheme for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The nonlinear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. A quasi-Newton approach is adopted, and the linear systems that arise are solved approximately with a symmetric block Gauss-Seidel solver. As a guiding principle for setting numerical time integration parameters that yield an efficient, time-accurate capture of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. The focus is on the sensitivity of solution properties to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to the highly complex structure of the basins of attraction of the iterative method.
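A minimal sketch of the pseudo-time (dual time stepping) idea on a scalar model ODE du/dt = f(u): each physical Crank-Nicolson step is driven to convergence by marching an added pseudo-time derivative, rather than by a direct Newton solve. The step sizes and tolerance are illustrative; the paper applies quasi-Newton iterations with a block Gauss-Seidel solver to the full flow equations.

```python
# Sketch: Crank-Nicolson advanced via pseudo-time iteration on du/dt = f(u).
def f(u):
    return -u ** 2                    # model nonlinear right-hand side

def cn_step(u_old, dt, dtau=0.05, tol=1e-12, max_iter=10000):
    u = u_old
    for _ in range(max_iter):
        # Crank-Nicolson residual for the physical time step
        res = (u - u_old) / dt - 0.5 * (f(u) + f(u_old))
        u -= dtau * res               # explicit march in pseudo-time
        if abs(res) < tol:
            break
    return u

u, dt = 1.0, 0.5                      # dt far above any explicit limit
for _ in range(10):
    u = cn_step(u, dt)
print(u)                              # approaches exact 1 / (1 + t)
```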
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay Derivas, E.
1975-01-01
A new iterative scheme for solving boundary value problems is presented. It consists of introducing an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integration (false transient schemes). In particular, it can attain a steady-state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. Its simplicity makes it attractive for solving large systems of nonlinear equations.
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods, with different time steps, to be used simultaneously in various regions of the problem domain. The implicit-explicit approach has proved very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes with the capabilities mentioned above is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
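As a rough illustration of mixed implicit-explicit time integration for a transient thermal problem, the sketch below treats diffusion implicitly (backward Euler) and a nonlinear source explicitly. Note the paper partitions the mesh into implicit and explicit regions; this termwise split only conveys the general idea, and all parameters are placeholders.

```python
# Sketch: implicit-explicit split for a 1-D transient heat problem.
import numpy as np

nx, L, alpha, dt, nt = 50, 1.0, 1e-2, 0.01, 200
dx = L / (nx - 1)
T = np.zeros(nx); T[0] = 1.0                  # fixed boundary temperature

# backward-Euler diffusion matrix (I - dt*alpha*D2), Dirichlet boundaries
r = alpha * dt / dx ** 2
A = (np.diag((1 + 2 * r) * np.ones(nx))
     + np.diag(-r * np.ones(nx - 1), 1)
     + np.diag(-r * np.ones(nx - 1), -1))
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0

for _ in range(nt):
    source = 0.1 * T * (1 - T)                # explicit nonlinear source
    rhs = T + dt * source
    rhs[0], rhs[-1] = T[0], T[-1]             # hold boundary values
    T = np.linalg.solve(A, rhs)               # implicit diffusion solve
```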
Functional differences between statistical learning with and without explicit training
Reber, Paul J.; Paller, Ken A.
2015-01-01
Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and prepare for incoming input. In this study, we ask whether the function of statistical learning may be enhanced through supplementary explicit training, in which underlying regularities are explicitly taught rather than simply abstracted through exposure. Learners were randomly assigned either to an explicit group or an implicit group. All learners were exposed to a continuous stream of repeating nonsense words. Prior to this implicit training, learners in the explicit group received supplementary explicit training on the nonsense words. Statistical learning was assessed through a speeded reaction-time (RT) task, which measured the extent to which learners used acquired statistical knowledge to optimize online processing. Both RTs and brain potentials revealed significant differences in online processing as a function of training condition. RTs showed a crossover interaction; responses in the explicit group were faster to predictable targets and marginally slower to less predictable targets relative to responses in the implicit group. P300 potentials to predictable targets were larger in the explicit group than in the implicit group, suggesting greater recruitment of controlled, effortful processes. Taken together, these results suggest that information abstracted through passive exposure during statistical learning may be processed more automatically and with less effort than information that is acquired explicitly. PMID:26472644
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu
2014-01-21
The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
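A minimal sketch of the kind of reduced description referred to above: overdamped Langevin dynamics of a single collective solvent coordinate on a harmonic free-energy surface. The force constant, relaxation time, and temperature are placeholder values, not parameters from the paper, and the paper's full treatment couples such dynamics to surface hopping between electronic states.

```python
# Sketch: overdamped Langevin dynamics of one collective solvent coordinate.
import numpy as np

kT, k, tau, dt, nsteps = 0.592, 1.0, 1.0, 1e-3, 100_000  # kT in kcal/mol
gamma = k * tau                      # friction chosen so x relaxes on tau
rng = np.random.default_rng(0)

x = np.empty(nsteps)
x[0] = 2.0                           # start displaced from the minimum
for i in range(1, nsteps):
    force = -k * x[i - 1]
    noise = np.sqrt(2 * kT * dt / gamma) * rng.standard_normal()
    x[i] = x[i - 1] + dt * force / gamma + noise

print("long-time variance:", x[nsteps // 2:].var(), "expected:", kT / k)
```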
Explicit finite-difference simulation of optical integrated devices on massive parallel computers.
Sterkenburgh, T; Michels, R M; Dress, P; Franke, H
1997-02-20
An explicit method for the numerical simulation of optical integrated circuits by means of the finite-difference time-domain (FDTD) method is presented. This method, based on an explicit solution of Maxwell's equations, is well established in microwave technology. Although the simulation areas are small, we verified the behavior of three interesting problems, especially nonparaxial problems, with typical aspects of integrated optical devices. Because numerical losses are within acceptable limits, we suggest the use of the FDTD method to achieve promising quantitative simulation results.
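The explicit scheme family this abstract builds on is the Yee-style FDTD update. A minimal one-dimensional vacuum sketch follows (the paper treats two-dimensional integrated-optics structures); grid sizes and the source are illustrative.

```python
# Sketch: minimal 1-D Yee FDTD update for Maxwell's equations in vacuum.
import numpy as np

nz, nt = 400, 1000
c, dz = 3e8, 1e-6
dt = 0.99 * dz / c                    # satisfies the 1-D CFL limit
imp0 = 376.73                         # free-space impedance

Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)

for n in range(nt):
    Hy += (Ex[1:] - Ex[:-1]) * dt * c / (dz * imp0)
    Ex[1:-1] += (Hy[1:] - Hy[:-1]) * dt * c * imp0 / dz
    Ex[50] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
```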
Age differences in implicit memory: conceptual, perceptual, or methodological?
Mitchell, David B; Bruss, Peter J
2003-12-01
The authors examined age differences in conceptual and perceptual implicit memory via word-fragment completion, word-stem completion, category exemplar generation, picture-fragment identification, and picture naming. Young, middle-aged, and older participants (N = 60) named pictures and words at study. Limited test exposure minimized explicit memory contamination, yielding no reliable age differences and equivalent cross-format effects. In contrast, explicit memory and neuropsychological measures produced significant age differences. In a follow-up experiment, 24 young adults were informed a priori about implicit testing. Their priming was equivalent to the main experiment, showing that test trial time restrictions limit explicit memory strategies. The authors concluded that most implicit memory processes remain stable across adulthood and suggest that explicit contamination be rigorously monitored in aging studies.
NASA Astrophysics Data System (ADS)
Győrffy, Werner; Knizia, Gerald; Werner, Hans-Joachim
2017-12-01
We present the theory and algorithms for computing analytical energy gradients for explicitly correlated second-order Møller-Plesset perturbation theory (MP2-F12). The main difficulty in F12 gradient theory arises from the large number of two-electron integrals for which effective two-body density matrices and integral derivatives need to be calculated. For efficiency, the density fitting approximation is used for evaluating all two-electron integrals and their derivatives. The accuracies of various previously proposed MP2-F12 approximations [3C, 3C(HY1), 3*C(HY1), and 3*A] are demonstrated by computing equilibrium geometries for a set of molecules containing first- and second-row elements, using double-ζ to quintuple-ζ basis sets. Generally, the convergence of the bond lengths and angles with respect to the basis set size is strongly improved by the F12 treatment, and augmented triple-ζ basis sets are sufficient to closely approach the basis set limit. The results obtained with the different approximations differ only very slightly. This paper is the first step towards analytical gradients for coupled-cluster singles and doubles with perturbative treatment of triple excitations, which will be presented in the second part of this series.
NASA Astrophysics Data System (ADS)
Fang, Su-Chi; Hsu, Ying-Shao; Hsu, Wei Hsiu
2016-07-01
The study explored how best to use scaffolds for supporting students' inquiry practices in computer-supported learning environments. We designed a series of inquiry units assisted by three versions of written inquiry prompts (generic and context-specific), that is, three scaffold-fading conditions: implicit, explicit, and fading. We then examined how the three scaffold-fading conditions influenced students' conceptual understanding, understanding of scientific inquiry, and inquiry abilities. Three grade-10 classes (N = 105) participated in this study; they were randomly assigned to and taught in the three conditions. Data-collection procedures included a pretest-posttest approach and in-depth observations of target students. The findings showed that after these inquiry units, all of the students exhibited significant learning gains in conceptual knowledge and improved their inquiry abilities regardless of which condition was used. The explicit and fading conditions were more effective in enhancing students' understanding of scientific inquiry. The fading condition tended to better support the students' development of inquiry abilities and to help transfer these abilities to a new setting involving an independent socioscientific task about where to build a dam. The results suggest that fading plays an essential role in enhancing the effectiveness of scaffolds.
The Effectiveness of Gateway Communications in Anti-Marijuana Campaigns
YZER, MARCO C.; CAPPELLA, JOSEPH N.; FISHBEIN, MARTIN; HORNIK, ROBERT; AHERN, R. KIRKLAND
2014-01-01
Successful anti-marijuana messages can be hypothesized to have two types of effects, namely persuasion effects, that is, a change in people’s beliefs about using marijuana, and priming effects, that is, a strengthened correlation between beliefs and associated variables such as attitude and intention. This study examined different sets of anti-drug advertisements for persuasion and priming effects. The ads targeted the belief that marijuana is a gateway to stronger drugs, a belief that is often endorsed by campaign planning officials and health educators. A sample of 418 middle and high school students was randomly assigned to a control video or one of three series of ads, two of which included the gateway message in either an explicit or implicit way. Results did not support the use of the gateway belief in anti-marijuana interventions. Whereas no clear persuasion or priming effects were found for any of the ad sequences, there is some possibility that an explicit gateway argument may actually boomerang. In comparison to the control condition, adolescents in the explicit gateway condition tended to agree less with the gateway message and displayed weaker correlations between anti-marijuana beliefs and their attitude toward marijuana use. The results suggest that the gateway message should not be used in anti-drug interventions. PMID:12746037
EdgeMaps: visualizing explicit and implicit relations
NASA Astrophysics Data System (ADS)
Dörk, Marian; Carpendale, Sheelagh; Williamson, Carey
2011-01-01
In this work, we introduce EdgeMaps as a new method for integrating the visualization of explicit and implicit data relations. Explicit relations are specific connections between entities already present in a given dataset, while implicit relations are derived from multidimensional data based on shared properties and similarity measures. Many datasets include both types of relations, which are often difficult to represent together in information visualizations. Node-link diagrams typically focus on explicit data connections while not incorporating implicit similarities between entities. Multi-dimensional scaling considers similarities between items; however, explicit links between nodes are not displayed. In contrast, EdgeMaps visualize both implicit and explicit relations by combining and complementing spatialization and graph drawing techniques. As a case study for this approach we chose a dataset of philosophers, their interests, influences, and birthdates. By introducing the limitation of activating only one node at a time, interesting visual patterns emerge that resemble the aesthetics of fireworks and waves. We argue that the interactive exploration of these patterns may allow the viewer to grasp the structure of a graph better than complex node-link visualizations.
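The EdgeMaps idea reduces to two ingredients: place nodes by similarity (implicit relations, here via multidimensional scaling) and overlay explicit links. The sketch below uses toy data and scikit-learn's MDS; it is an illustration of the concept, not the authors' implementation.

```python
# Sketch: MDS spatialization (implicit relations) plus drawn edges
# (explicit relations), on random toy data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
features = rng.random((12, 5))               # multidimensional attributes
edges = [(0, 3), (1, 4), (2, 7), (5, 9)]     # explicit connections

# implicit relations: spatialize by pairwise dissimilarity
dist = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
pos = MDS(n_components=2, dissimilarity="precomputed",
          random_state=0).fit_transform(dist)

plt.scatter(pos[:, 0], pos[:, 1])
for i, j in edges:                           # overlay explicit relations
    plt.plot(pos[[i, j], 0], pos[[i, j], 1], "k-", alpha=0.5)
plt.show()
```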
Implicit and explicit motor learning: Application to children with Autism Spectrum Disorder (ASD).
Izadi-Najafabadi, Sara; Mirzakhani-Araghi, Navid; Miri-Lavasani, Negar; Nejati, Vahid; Pashazadeh-Azari, Zahra
2015-12-01
This study aims to determine whether children with Autism Spectrum Disorder (ASD) are capable of learning a motor skill both implicitly and explicitly. In the present study, 30 boys with ASD, aged 7-11 with an average IQ of 81.2, were compared with 32 IQ- and age-matched typical boys on their performance on a serial reaction time task (SRTT). Children in the ASD and typical groups were each divided into implicit and explicit learning groups for the SRTT. Implicit motor learning occurred in both children with ASD (p=.02) and typical children (p=.01), with no significant difference between groups (p=.39). However, explicit motor learning was observed only in typical children (p=.01), not in children with ASD (p=.40); this between-group difference was significant (p=.01). The results of our study showed that implicit motor learning is not affected in children with ASD. Implications for implicit and explicit learning are applied to the CO-OP approach of motor learning with children with ASD.
Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.
2013-01-01
Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537
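At the core of both the traditional and reservoir replica exchange variants discussed above is the Metropolis swap test between neighboring temperature replicas. A minimal sketch, with placeholder energies and temperatures:

```python
# Sketch: the temperature replica exchange acceptance criterion.
import numpy as np

def swap_accepted(E_i, E_j, T_i, T_j, kB=0.0019872,
                  rng=np.random.default_rng()):
    """Metropolis test for exchanging configurations between replicas
    at temperatures T_i and T_j (kB in kcal/mol/K)."""
    beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
    log_p = (beta_i - beta_j) * (E_i - E_j)
    return log_p >= 0 or rng.random() < np.exp(log_p)

# e.g., neighboring replicas at 300 K and 310 K (energies hypothetical)
print(swap_accepted(E_i=-1200.0, E_j=-1195.0, T_i=300.0, T_j=310.0))
```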
LES with and without explicit filtering: comparison and assessment of various models
NASA Astrophysics Data System (ADS)
Winckelmans, Gregoire S.; Jeanmart, Herve; Wray, Alan A.; Carati, Daniele
2000-11-01
The proper mathematical formalism for large eddy simulation (LES) of turbulent flows assumes that a regular "explicit" filter (i.e., a filter with a well-defined second moment, such as the Gaussian, the top hat, etc.) is applied to the equations of fluid motion. This filter is then responsible for a "filtered-scale" stress. Because of the discretization of the filtered equations, using the LES grid, there is also a "subgrid-scale" stress. The global effective stress is found to be the discretization of a filtered-scale stress plus a subgrid-scale stress. The former can be partially reconstructed from an exact, infinite series, the first term of which is the "tensor-diffusivity" model of Leonard and is found, in practice, to be sufficient for modeling. Alternatively, sufficient reconstruction can also be achieved using the "scale-similarity" model of Bardina. The latter corresponds to loss of information: it cannot be reconstructed; its effect (essentially dissipation) must be modeled using ad hoc modeling strategies (such as the dynamic version of the "effective viscosity" model of Smagorinsky). Practitioners also often assume LES without explicit filtering: the effective stress is then only a subgrid-scale stress. We here compare the performance of various LES models for both approaches (with and without explicit filtering), and for cases without solid boundaries: (1) decay of isotropic turbulence; (2) decay of aircraft wake vortices in a turbulent atmosphere. One main conclusion is that better subgrid-scale models are still needed, the effective viscosity models being too active at the large scales.
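To make the scale-similarity (Bardina) term concrete, here is a one-dimensional sketch evaluating it with a Gaussian explicit filter. The filter width and velocity field are arbitrary illustrative choices.

```python
# Sketch: Bardina scale-similarity stress with a Gaussian explicit filter,
# tau_ss = bar(bar(u) bar(u)) - bar(bar(u)) * bar(bar(u)), in 1-D.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def bardina_stress(u, sigma=2.0):
    ub = gaussian_filter1d(u, sigma, mode="wrap")        # bar(u)
    return (gaussian_filter1d(ub * ub, sigma, mode="wrap")
            - gaussian_filter1d(ub, sigma, mode="wrap") ** 2)

u = np.random.default_rng(0).standard_normal(256)
tau = bardina_stress(u)
```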
An explicit scheme for ohmic dissipation with smoothed particle magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Tsukamoto, Yusuke; Iwasaki, Kazunari; Inutsuka, Shu-ichiro
2013-09-01
In this paper, we present an explicit scheme for Ohmic dissipation with smoothed particle magnetohydrodynamics (SPMHD). We propose an SPH discretization of Ohmic dissipation and solve the Ohmic dissipation part of the induction equation with the super-time-stepping method (STS), which allows us to take a longer time step than the Courant-Friedrichs-Lewy stability condition permits. Our scheme is second-order accurate in space and first-order accurate in time. Our numerical experiments show that the optimal choice of the STS parameters for Ohmic dissipation in SPMHD is ν_sts ≈ 0.01 and N_sts ≈ 5.
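For reference, the super-time-stepping substep sequence of Alexiades, Amiez, and Gremaud, which the STS approach is built on: N substeps with damping parameter ν together cover a much longer interval than N explicit diffusion steps while remaining stable. A small sketch using the optimal values reported above:

```python
# Sketch: super-time-stepping (STS) substep sizes,
# tau_j = dt_expl / ((nu - 1) cos((2j-1) pi / (2N)) + 1 + nu), j = 1..N.
import numpy as np

def sts_substeps(dt_expl, N=5, nu=0.01):
    j = np.arange(1, N + 1)
    return dt_expl / ((nu - 1) * np.cos((2 * j - 1) * np.pi / (2 * N))
                      + 1 + nu)

tau = sts_substeps(dt_expl=1.0)
print(tau.sum())   # ~19 here, i.e. far more than N = 5 explicit steps
```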
An explicit mixed numerical method for mesoscale model
NASA Technical Reports Server (NTRS)
Hsu, H.-M.
1981-01-01
A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating either the shallow-water equations in one dimension or the primitive equations in three dimensions. Since the technique is explicit and two-time-level, it conserves computing and programming resources.
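The mixed differencing described above is easy to illustrate on a one-dimensional advection-diffusion equation: forward in time, upstream (upwind) for advection, central for diffusion. Grid and coefficients below are illustrative placeholders.

```python
# Sketch: forward-in-time, upstream-advection, central-diffusion update.
import numpy as np

nx, dx, dt = 200, 1.0, 0.2
a, kappa = 1.0, 0.5                  # advection speed (a > 0), diffusivity
u = np.exp(-0.5 * ((np.arange(nx) - 50.0) / 5.0) ** 2)

for _ in range(500):
    adv = -a * (u - np.roll(u, 1)) / dx                    # upstream
    dif = kappa * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central
    u = u + dt * (adv + dif)                               # forward in time
```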
NASA Technical Reports Server (NTRS)
Dey, C.; Dey, S. K.
1983-01-01
An explicit finite difference scheme consisting of a predictor and a corrector has been developed and applied to solve some hyperbolic partial differential equations (PDEs). The corrector is a convex-type function applied at each time level and at each mesh point. It contains a parameter that can be estimated so that, for larger time steps, the algorithm remains stable and converges quickly to the steady-state solution. Some examples are given.
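The abstract does not give the exact corrector, so the following is only one plausible reading of the predictor-corrector structure: an upwind predictor followed by a convex (weighted-average) corrector with a tunable parameter omega.

```python
# Sketch: a hypothetical convex corrector applied at every mesh point
# after an explicit upwind predictor. Illustration of the idea only.
import numpy as np

def step(u, courant, omega=0.3):
    pred = u - courant * (u - np.roll(u, 1))        # explicit predictor
    # convex blend of the predictor with its local average
    return ((1 - omega) * pred
            + omega * 0.5 * (np.roll(pred, 1) + np.roll(pred, -1)))
```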
Computing the Baker-Campbell-Hausdorff series and the Zassenhaus product
NASA Astrophysics Data System (ADS)
Weyrauch, Michael; Scholz, Daniel
2009-09-01
The Baker-Campbell-Hausdorff (BCH) series and the Zassenhaus product are of fundamental importance for the theory of Lie groups and their applications in physics and physical chemistry. Standard methods for the explicit construction of the BCH and Zassenhaus terms yield polynomial representations, which must be translated into the usually required commutator representation. We prove that a new translation proposed recently yields a correct representation of the BCH and Zassenhaus terms. This representation entails fewer terms than the well-known Dynkin-Specht-Wever representation, which is of relevance for practical applications. Furthermore, various methods for the computation of the BCH and Zassenhaus terms are compared, and a new efficient approach for the calculation of the Zassenhaus terms is proposed. Mathematica implementations for the most efficient algorithms are provided together with comparisons of efficiency.
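The low-order commutator representation the paper concerns can be checked numerically: log(exp(X) exp(Y)) should match the BCH series truncated at third order to O(t^4) as t shrinks. A small sketch with random matrices (the paper's own implementations are in Mathematica; this check is independent of them):

```python
# Sketch: numerical check of the BCH series through third order,
# Z = X + Y + [X,Y]/2 + ([X,[X,Y]] + [Y,[Y,X]])/12 + ...
import numpy as np
from scipy.linalg import expm, logm

def comm(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 4, 4))
for t in (0.1, 0.05):
    Xt, Yt = t * X, t * Y
    exact = logm(expm(Xt) @ expm(Yt))
    bch3 = (Xt + Yt + 0.5 * comm(Xt, Yt)
            + (comm(Xt, comm(Xt, Yt)) + comm(Yt, comm(Yt, Xt))) / 12.0)
    print(t, np.linalg.norm(exact - bch3))   # error shrinks like t**4
```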
Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan
2017-01-01
Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
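A minimal sketch of the kind of combined objective function described above, blending Nash-Sutcliffe efficiency on the full hydrograph with the relative error of one example SFC (a 7-day low flow). The specific SFC and the weighting are illustrative choices, not the paper's exact definitions.

```python
# Sketch: calibration objective = weighted NSE + SFC accuracy (maximize).
import numpy as np

def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def low_flow_7d(q):
    # minimum 7-day moving average, a common low-flow SFC
    return np.convolve(q, np.ones(7) / 7, mode="valid").min()

def objective(sim, obs, w=0.5):
    sfc_err = abs(low_flow_7d(sim) - low_flow_7d(obs)) / low_flow_7d(obs)
    return w * nse(sim, obs) + (1 - w) * (1 - sfc_err)
```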
NASA Astrophysics Data System (ADS)
Kennedy, R. E.; Hughes, J.; Neeti, N.; Yang, Z.; Gregory, M.; Roberts, H.; Kane, V. R.; Powell, S. L.; Ohmann, J.
2016-12-01
Because carbon pools and fluxes on wooded landscapes are constrained by their type, age, and health, understanding the causes and consequences of carbon change requires frequent observation of forest condition and of disturbance, mortality, and growth processes. As part of USDA- and NASA-funded efforts, we built an empirical monitoring system that integrates time-series Landsat imagery, Forest Inventory and Analysis (FIA) plot data, small-footprint lidar data, and aerial photos to characterize key carbon dynamics in forested ecosystems of Washington, Oregon, and California. Here we report yearly biomass estimates for every forested 30 by 30 m pixel in the states of Washington, Oregon, and California from 1990 to 2010, including spatially explicit estimates of uncertainty in our yearly predictions. Total biomass at the ecoregion scale agrees well with estimates from FIA plot data alone, currently the only method for reliable monitoring in the forests of the region. Comparisons with estimates of biomass modeled from four small-footprint lidar acquisitions in overlapping portions of our study area show general patterns of agreement between the two types of estimation, but also reveal some disparities in spatial pattern potentially attributable to age and vegetation condition. Using machine-learning techniques based on both Landsat image time series and high-resolution aerial photos, we then modeled the agent causing change in biomass for every change event in the region, and report the relative distribution of carbon loss attributable to natural disturbances (primarily fire and insect-related mortality) versus anthropogenic causes (forest management and development).
Batterink, Laura; Neville, Helen
2011-01-01
The vast majority of word meanings are learned simply by extracting them from context, rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M−). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M− words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M− words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time-window compared to M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component (LPC) is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, while implicit representations may require more extensive exposure or more time to emerge. PMID:21452941
Reduced Implicit and Explicit Sequence Learning in First-Episode Schizophrenia
ERIC Educational Resources Information Center
Pedersen, Anya; Siegmund, Ansgar; Ohrmann, Patricia; Rist, Fred; Rothermundt, Matthias; Suslow, Thomas; Arolt, Volker
2008-01-01
A high prevalence of deficits in explicit learning has been reported for schizophrenic patients, but it is less clear whether these patients are impaired in implicit learning. Deficits in implicit learning indicative of a fronto-striatal dysfunction have been reported using a serial reaction-time task (SRT), but the impact of typical neuroleptic…
A Conceptual Model for the Design and Delivery of Explicit Thinking Skills Instruction
ERIC Educational Resources Information Center
Kassem, Cherrie L.
2005-01-01
Developing student thinking skills is an important goal for most educators. However, due to time constraints and weighty content standards, thinking skills instruction is often embedded in subject matter, implicit and incidental. For best results, thinking skills instruction requires a systematic design and explicit teaching strategies. The…
Explicit and Implicit Verbal Response Inhibition in Preschool-Age Children Who Stutter.
Anderson, Julie D; Wagovich, Stacy A
2017-04-14
The purpose of this study was to examine (a) explicit and implicit verbal response inhibition in preschool children who do stutter (CWS) and do not stutter (CWNS) and (b) the relationship between response inhibition and language skills. Participants were 41 CWS and 41 CWNS between the ages of 3;1 and 6;1 (years;months). Explicit verbal response inhibition was measured using a computerized version of the grass-snow task (Carlson & Moses, 2001), and implicit verbal response inhibition was measured using the baa-meow task. Main dependent variables were reaction time and accuracy. The CWS were significantly less accurate than the CWNS on the implicit task, but not the explicit task. The CWS also exhibited slower reaction times than the CWNS on both tasks. Between-group differences in performance could not be attributed to working memory demands. Overall, children's performance on the inhibition tasks corresponded with parents' perceptions of their children's inhibition skills in daily life. CWS are less effective and efficient than CWNS in suppressing a dominant response while executing a conflicting response in the verbal domain.
Unlikely Fluctuations and Non-Equilibrium Work Theorems: A Simple Example.
Muzikar, Paul
2016-06-30
An exciting development in statistical mechanics has been the elucidation of a series of surprising equalities involving the work done during a nonequilibrium process. Astumian has presented an elegant example of such an equality, involving a colloidal particle undergoing Brownian motion in the presence of gravity. We analyze this example; its simplicity, and its link to geometric Brownian motion, allows us to clarify the inner workings of the equality. Our analysis explicitly shows the important role played by large, unlikely fluctuations.
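The equality analyzed above belongs to the family of nonequilibrium work theorems, of which Jarzynski's is the best known. A minimal numerical check, assuming a cousin of the gravity example (an overdamped Brownian particle dragged in a harmonic trap, all parameters hypothetical): translating the trap leaves the free energy unchanged, so the exponential work average should equal 1 even though the mean work is positive.

```python
# Sketch: Jarzynski check, <exp(-W/kT)> ~ 1 for a dragged harmonic trap.
import numpy as np

kT, k, gamma, dt, nsteps, ntraj = 1.0, 1.0, 1.0, 1e-3, 2000, 20000
v_trap = 1.0                                  # trap speed (hypothetical)
rng = np.random.default_rng(0)

x = rng.standard_normal(ntraj) * np.sqrt(kT / k)   # equilibrium start
W = np.zeros(ntraj)
for n in range(nsteps):
    c = v_trap * n * dt                       # trap center at this step
    W += -k * (x - c) * v_trap * dt           # protocol work dW = dU/dc dc
    x += (-k * (x - c) / gamma * dt
          + np.sqrt(2 * kT * dt / gamma) * rng.standard_normal(ntraj))

print(np.mean(np.exp(-W / kT)))               # ~ 1.0, while mean(W) > 0
```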
Supersonic flow past oscillating airfoils including nonlinear thickness effects
NASA Technical Reports Server (NTRS)
Van Dyke, Milton D
1954-01-01
A solution to second order in thickness is derived for harmonically oscillating two-dimensional airfoils in supersonic flow. For slow oscillations of an arbitrary profile, the result is found as a series including the third power of frequency. For arbitrary frequencies, the method of solution for any specific profile is indicated, and the explicit solution derived for a single wedge. Nonlinear thickness effects are found generally to reduce the torsional damping, and so enlarge the range of Mach numbers within which torsional instability is possible.
Invariant Tori in the Secular Motions of the Three-body Planetary Systems
NASA Astrophysics Data System (ADS)
Locatelli, Ugo; Giorgilli, Antonio
We consider the applicability of the KAM theorem to a realistic three-body problem. In the framework of the dynamics averaged over the fast angles for the Sun-Jupiter-Saturn system, we can prove the perpetual stability of the orbit. The proof is based on semi-numerical algorithms requiring both explicit algebraic manipulation of series and analytical estimates. The proof is made rigorous by using interval arithmetic to control the numerical errors.
Mapping Snow Grain Size over Greenland from MODIS
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Tedesco, Marco; Wang, Yujie; Kokhanovsky, Alexander
2008-01-01
This paper presents a new automatic algorithm to derive optical snow grain size (SGS) at 1 km resolution using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements. Differently from previous approaches, snow grains are not assumed to be spherical but a fractal approach is used to account for their irregular shape. The retrieval is conceptually based on an analytical asymptotic radiative transfer model which predicts spectral bidirectional snow reflectance as a function of the grain size and ice absorption. The analytical form of the solution leads to an explicit and fast retrieval algorithm. The time series analysis of derived SGS shows a good sensitivity to snow metamorphism, including melting and snow precipitation events. Preprocessing is performed by a Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, which includes gridding MODIS data to 1 km resolution, water vapor retrieval, cloud masking and an atmospheric correction. The MAIAC cloud mask (CM) is a new algorithm based on a time series of gridded MODIS measurements and an image-based rather than pixel-based processing. Extensive processing of MODIS TERRA data over Greenland shows a robust performance of the CM algorithm in discriminating clouds over bright snow and ice. As part of the validation analysis, SGS derived from MODIS over selected sites in 2004 was compared to the microwave brightness temperature measurements of the SSM/I radiometer, which is sensitive to the amount of liquid water in the snowpack. The comparison showed a good qualitative agreement, with both datasets detecting two main periods of snowmelt. Additionally, MODIS SGS was compared with predictions of the snow model CROCUS driven by measurements of the automatic weather stations of the Greenland Climate Network. We found that CROCUS grain size is on average a factor of two larger than MODIS-derived SGS. Overall, the agreement between CROCUS and MODIS results was satisfactory, in particular before and during the first melting period in mid-June. Following a detailed time series analysis of SGS for four permanent sites, the paper presents SGS maps over the Greenland ice sheet for the March-September period of 2004.
High Performance Programming Using Explicit Shared Memory Model on Cray T3D
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Saini, Subhash; Grassi, Charles
1994-01-01
The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance with standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times lower than with data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating data cache, aligning data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM-SP1 is presented.
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
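All three methods share an explicit multistage Runge-Kutta driver on the fine grid. A sketch in the standard Jameson form, assuming the classic four-stage coefficient set (the abstract does not list the exact coefficients used):

```python
# Sketch: multistage Runge-Kutta pseudo-time marching,
# u^(k) = u^(0) + alpha_k * dt * R(u^(k-1)), alpha = (1/4, 1/3, 1/2, 1).
import numpy as np

alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)

def multistage_step(u, dt, residual):
    u0 = u.copy()
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# e.g., march du/dt = R(u) toward a steady state (toy residual)
residual = lambda u: -(u - np.tanh(np.linspace(-3, 3, u.size)))
u = np.zeros(64)
for _ in range(200):
    u = multistage_step(u, dt=0.5, residual=residual)
```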
Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.
2015-01-01
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8 % of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the explicit/implicit thermodynamic cycle. PMID:26236174
A review of hybrid implicit explicit finite difference time domain method
NASA Astrophysics Data System (ADS)
Chen, Juan
2018-06-01
The finite-difference time-domain (FDTD) method has been extensively used to simulate a variety of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of this method is limited by the minimum cell size used in the computational domain, so the FDTD method is inefficient for simulating electromagnetic problems that have very fine structures. To deal with this problem, the hybrid implicit explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses a hybrid implicit explicit difference in the direction with fine structures to avoid the confinement of the fine spatial mesh on the time step size. This method therefore has much higher computational efficiency than the FDTD method and is extremely useful for problems which have fine structures in one direction. In this paper, the basic formulations, time stability condition, and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connecting boundary, absorbing boundary, and periodic boundary, are described; then some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.
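To see why the hybrid treatment pays off, compare time-step bounds for an illustrative 2-D mesh in which only the y direction is finely resolved. With the implicit difference applied along y, the stability limit reportedly depends only on the explicitly treated direction; the mesh sizes below are placeholders.

```python
# Sketch: time-step limits, full CFL vs. HIE-FDTD (implicit along y).
import numpy as np

c = 3e8
dx, dy = 1e-3, 1e-6          # fine structure resolved along y only

dt_fdtd = 1.0 / (c * np.sqrt(1 / dx**2 + 1 / dy**2))   # full CFL limit
dt_hie = dx / c              # limit set by the explicit (x) direction alone

print(f"FDTD dt limit: {dt_fdtd:.3e} s, HIE-FDTD dt limit: {dt_hie:.3e} s")
print(f"time-step gain: {dt_hie / dt_fdtd:.0f}x")
```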
Lindgren, Kristen P.; Ramirez, Jason J.; Olin, Cecilia C.; Neighbors, Clayton
2016-01-01
Drinking identity (how much individuals view themselves as drinkers) is a promising cognitive factor that predicts problem drinking. Implicit and explicit measures of drinking identity have been developed (the former assesses more reflexive/automatic cognitive processes; the latter more reflective/controlled cognitive processes): each predicts unique variance in alcohol consumption and problems. However, implicit and explicit identity's utility and uniqueness as predictors relative to cognitive factors important for problem drinking screening and intervention has not been evaluated. Thus, the current study evaluated implicit and explicit drinking identity as predictors of consumption and problems over time. Baseline measures of drinking identity, social norms, alcohol expectancies, and drinking motives were evaluated as predictors of consumption and problems (evaluated every three months over two academic years) in a sample of 506 students (57% female) in their first or second year of college. Results found that baseline identity measures predicted unique variance in consumption and problems over time. Further, when compared to each set of cognitive factors, the identity measures predicted unique variance in consumption and problems over time. Findings were more robust for explicit versus implicit identity and in models that did not control for baseline drinking. Drinking identity appears to be a unique predictor of problem drinking relative to social norms, alcohol expectancies, and drinking motives. Intervention and theory could benefit from including and considering drinking identity. PMID:27428756
NASA Technical Reports Server (NTRS)
Jaggers, R. F.
1977-01-01
A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous-burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, and target switch. The self-starting predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first-guess values, and converges rapidly if convergence is physically possible. A form of this algorithm has been chosen for onboard guidance, as well as for real-time and preflight ground targeting and trajectory shaping, for the NASA Space Shuttle Program.
Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests
NASA Astrophysics Data System (ADS)
Toth, G.; Keppens, R.; Botchev, M. A.
1998-04-01
We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock-capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus reducing the computational cost. The test problems cover one- and two-dimensional, steady-state and time-accurate computations, and the solutions contain discontinuities. For each test, we compare explicit and implicit solution strategies.
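The time-step trade-off described here can be seen in one line of algebra for the model problem u' = λu: forward Euler is stable only when |1 + λΔt| <= 1, while backward Euler is unconditionally stable for decaying modes. A minimal sketch (the stiff rate and step size are illustrative):

```python
def explicit_euler(lam, u0, dt, nsteps):
    # u_{n+1} = (1 + lam*dt) * u_n  -- stable only if |1 + lam*dt| <= 1
    u = u0
    for _ in range(nsteps):
        u = (1.0 + lam * dt) * u
    return u

def implicit_euler(lam, u0, dt, nsteps):
    # u_{n+1} = u_n / (1 - lam*dt)  -- unconditionally stable for lam < 0
    u = u0
    for _ in range(nsteps):
        u = u / (1.0 - lam * dt)
    return u

lam = -1.0e4                                 # stiff decay rate
print(explicit_euler(lam, 1.0, 1e-3, 100))   # blows up: dt far above 2/|lam|
print(implicit_euler(lam, 1.0, 1e-3, 100))   # decays toward 0 as expected
```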
The scaling behavior of hand motions reveals self-organization during an executive function task
NASA Astrophysics Data System (ADS)
Anastas, Jason R.; Stephen, Damian G.; Dixon, James A.
2011-05-01
Recent approaches to cognition explain cognitive phenomena in terms of interaction-dominant dynamics. In the current experiment, we extend this approach to executive function, a construct used to describe flexible, goal-oriented behavior. Participants were asked to perform a widely used executive function task, card sorting, under two conditions. In one condition, participants were given a rule with which to sort the cards. In the other condition, participants had to induce the rule from experimenter feedback. The motion of each participant’s hand was tracked during the sorting task. Detrended fluctuation analysis was performed on the inter-point time series using a windowing strategy to capture changes over each trial. For participants in the induction condition, the Hurst exponent sharply increased and then decreased. The Hurst exponents for the explicit condition did not show this pattern. Our results suggest that executive function may be understood in terms of changes in stability that arise from interaction-dominant dynamics.
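For readers unfamiliar with the method, detrended fluctuation analysis estimates the Hurst exponent as the log-log slope of the root-mean-square fluctuation of the integrated, locally detrended series against window size. A minimal first-order DFA sketch (window sizes are illustrative; the study's trial-by-trial windowing is not reproduced here):

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: the slope of log F(s) vs log s
    estimates the Hurst-like scaling exponent of series x."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)        # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(4096)))  # ~0.5 for white noise
```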
A gravity model for the spread of a pollinator-borne plant pathogen.
Ferrari, Matthew J; Bjørnstad, Ottar N; Partain, Jessica L; Antonovics, Janis
2006-09-01
Many pathogens of plants are transmitted by arthropod vectors whose movement between individual hosts is influenced by foraging behavior. Insect foraging has been shown to depend on both the quality of hosts and the distances between hosts. Given the spatial distribution of host plants and individual variation in quality, vector foraging patterns may therefore produce predictable variation in exposure to pathogens. We develop a "gravity" model to describe the spatial spread of a vector-borne plant pathogen from underlying models of insect foraging in response to host quality using the pollinator-borne smut fungus Microbotryum violaceum as a case study. We fit the model to spatially explicit time series of M. violaceum transmission in replicate experimental plots of the white campion Silene latifolia. The gravity model provides a better fit than a mean field model or a model with only distance-dependent transmission. The results highlight the importance of active vector foraging in generating spatial patterns of disease incidence and for pathogen-mediated selection for floral traits.
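As a sketch of the general idea (not the authors' fitted model), a gravity-type transmission kernel makes the force of infection on each host scale with both source and recipient quality and decay with distance; the exponential kernel and parameter values below are placeholders:

```python
import numpy as np

def gravity_foi(infected, quality, coords, beta=1.0, rho=1.0):
    """Force of infection on each host: contributions from infected hosts
    scale with source and recipient 'attractiveness' (quality) and decay
    with distance -- one common gravity-kernel form."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # no self-infection
    kernel = np.exp(-d / rho)
    return beta * quality * (kernel @ (infected * quality))

# Example: 4 hosts on a line, host 0 infected, host 3 twice as attractive.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(gravity_foi(np.array([1.0, 0, 0, 0]), np.array([1.0, 1, 1, 2]), coords))
```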
Nonlinear Acoustic Propagation into the Seafloor
NASA Astrophysics Data System (ADS)
McDonald, B. Edward
2006-05-01
Explosions near the seafloor result in shock waves entering a much more complicated medium than water or air. Nonlinearities may be increased by two processes inherent to granular media: (1) a poroelastic nonlinearity comparable to the addition of bubbles to water, and (2) the Hertz force resulting from elastic deformation of grains, proportional to the Young's modulus of the grains times the strain raised to the power 3/2. These two types of nonlinearity for shock propagation into the seafloor are investigated using a variant of the NPE model. The traditional Taylor series expansion of the equation of state (pressure as a function of density) is not appropriate to the Hertz force in the limit of small strain. We present a simple nonlinear wave equation model for compressional waves in marine sediments that retains the Hertz force explicitly with overdensity to the power 3/2. Numerical results for shock propagation are compared with similarity solutions for quadratic nonlinearity and for the fractional nonlinearity of the Hertz force.
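Schematically, an equation of state consistent with this description might read as follows (the coefficient κ is a placeholder for the Hertz term, not a value from the paper). The 3/2-power term is non-analytic at zero overdensity, its second derivative diverging there, which is why no Taylor expansion about the unperturbed state can capture it:

```latex
p(\rho) \;\approx\; p_0 \;+\; c_0^{2}\,\rho' \;+\; \kappa\,(\rho')^{3/2},
\qquad \rho' \equiv \rho - \rho_0 .
```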
An examination of stereotype threat effects on girls' mathematics performance.
Ganley, Colleen M; Mingle, Leigh A; Ryan, Allison M; Ryan, Katherine; Vasilyeva, Marina; Perry, Michelle
2013-10-01
Stereotype threat has been proposed as 1 potential explanation for the gender difference in standardized mathematics test performance among high-performing students. At present, it is not entirely clear how susceptibility to stereotype threat develops, as empirical evidence for stereotype threat effects across the school years is inconsistent. In a series of 3 studies, with a total sample of 931 students, we investigated stereotype threat effects during childhood and adolescence. Three activation methods were used, ranging from implicit to explicit. Across studies, we found no evidence that the mathematics performance of school-age girls was impacted by stereotype threat. In 2 of the studies, there were gender differences on the mathematics assessment regardless of whether stereotype threat was activated. Potential reasons for these findings are discussed, including the possibility that stereotype threat effects only occur in very specific circumstances or that they are in fact occurring all the time. We also address the possibility that the literature regarding stereotype threat in children is subject to publication bias.
Inferring HIV Escape Rates from Multi-Locus Genotype Data
Kessinger, Taylor A.; Perelson, Alan S.; Neher, Richard A.
2013-09-03
Cytotoxic T-lymphocytes (CTLs) recognize viral protein fragments displayed by major histocompatibility complex molecules on the surface of virally infected cells and generate an anti-viral response that can kill the infected cells. Virus variants whose protein fragments are not efficiently presented on infected cells or whose fragments are presented but not recognized by CTLs therefore have a competitive advantage and spread rapidly through the population. We present a method that allows a more robust estimation of these escape rates from serially sampled sequence data. The proposed method accounts for competition between multiple escapes by explicitly modeling the accumulation of escape mutations and the stochastic effects of rare multiple mutants. Applying our method to serially sampled HIV sequence data, we estimate rates of HIV escape that are substantially larger than those previously reported. The method can be extended to complex escapes that require compensatory mutations. We expect our method to be applicable in other contexts such as cancer evolution where time series data is also available.
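The basic quantity being estimated can be illustrated in the deterministic single-locus limit (the paper's method is multi-locus and accounts for stochastic effects, which this sketch does not): an escape variant under selection coefficient ε grows logistically, and ε is fit from serially sampled frequencies. The data below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def escape_freq(t, eps, f0):
    """Logistic growth of an escape variant under selection coefficient eps:
    the deterministic single-locus limit of escape-rate estimation."""
    x = f0 * np.exp(eps * t)
    return x / (1.0 - f0 + x)

t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])     # days post infection
f = np.array([0.02, 0.10, 0.45, 0.80, 0.95])     # illustrative frequencies
(eps, f0), _ = curve_fit(escape_freq, t, f, p0=(0.05, 0.01), bounds=(0, [1.0, 1.0]))
print(f"escape rate ~ {eps:.3f} per day")
```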
Levine, Matthew E; Albers, David J; Hripcsak, George
2016-01-01
Time series analysis methods have been shown to reveal clinical and biological associations in data collected in the electronic health record. We wish to develop reliable high-throughput methods for identifying adverse drug effects that are easy to implement and produce readily interpretable results. To move toward this goal, we used univariate and multivariate lagged regression models to investigate associations between twenty pairs of drug orders and laboratory measurements. Multivariate lagged regression models exhibited higher sensitivity and specificity than univariate lagged regression in the 20 examples, and incorporating autoregressive terms for labs and drugs produced more robust signals in cases of known associations among the 20 example pairings. Moreover, including inpatient admission terms in the model attenuated the signals for some cases of unlikely associations, demonstrating how multivariate lagged regression models' explicit handling of context-based variables can provide a simple way to probe for health-care processes that confound analyses of EHR data.
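A minimal sketch of the kind of multivariate lagged regression described here, with lagged drug-order terms, autoregressive lab terms, and inpatient-context terms in one design matrix; the variable names, lag depth, and simulated data are illustrative, not the paper's specification:

```python
import numpy as np

def lagged_design(drug, lab, inpatient, lags=3):
    """Design matrix with `lags` past values of drug orders, lab values,
    and inpatient status -- a sketch of a multivariate lagged regression
    with autoregressive lab terms."""
    n = len(lab)
    rows = []
    for t in range(lags, n):
        rows.append(np.concatenate((
            [1.0],                      # intercept
            drug[t - lags:t],           # lagged drug orders
            lab[t - lags:t],            # autoregressive lab terms
            inpatient[t - lags:t],      # admission/context terms
        )))
    return np.array(rows), lab[lags:]

rng = np.random.default_rng(1)
drug = rng.integers(0, 2, 200).astype(float)
inpt = rng.integers(0, 2, 200).astype(float)
lab = np.convolve(drug, [0.0, 0.6, 0.3], mode="same") + rng.normal(0, 0.1, 200)
X, y = lagged_design(drug, lab, inpt)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[1:4])   # drug-lag coefficients: the drug-effect "signal"
```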
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculations of the above, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes with the use of an average notebook computer.
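For concreteness, a compact sample entropy sketch in which every point serves as a template (the "full time series" variant mentioned above; standard SampEn aligns the template counts for m and m+1, which this simplified version does not). The m and r defaults are conventional choices, and the value quoted in the comment is approximate:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with every point used as a template. r is a
    fraction of the standard deviation of x."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(length):
        # templates of the given length; Chebyshev distance, all pairs
        templ = np.lib.stride_tricks.sliding_window_view(x, length)
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        n = len(templ)
        return (np.sum(d <= tol) - n) / 2       # exclude self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(2)
print(sample_entropy(rng.standard_normal(1000)))   # roughly 2.2 for white noise
```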
Keith, David A; Akçakaya, H Resit; Thuiller, Wilfried; Midgley, Guy F; Pearson, Richard G; Phillips, Steven J; Regan, Helen M; Araújo, Miguel B; Rebelo, Tony G
2008-10-23
Species responses to climate change may be influenced by changes in available habitat, as well as population processes, species interactions and interactions between demographic and landscape dynamics. Current methods for assessing these responses fail to provide an integrated view of these influences because they deal with habitat change or population dynamics, but rarely both. In this study, we linked a time series of habitat suitability models with spatially explicit stochastic population models to explore factors that influence the viability of plant species populations under stable and changing climate scenarios in South African fynbos, a global biodiversity hot spot. Results indicate that complex interactions between life history, disturbance regime and distribution pattern mediate species extinction risks under climate change. Our novel mechanistic approach allows more complete and direct appraisal of future biotic responses than do static bioclimatic habitat modelling approaches, and will ultimately support development of more effective conservation strategies to mitigate biodiversity losses due to climate change.
Efficiency Study of Implicit and Explicit Time Integration Operators for Finite Element Applications
1977-07-01
efficiency, wherein Beta = 0 provides an explicit algorithm, while Beta > 0 provides an implicit algorithm. Both algorithms are used in the same...
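The Beta = 0 (explicit) versus Beta > 0 (implicit) split matches the Newmark-β family of structural dynamics integrators; that reading is an inference, since the OCR-damaged abstract does not name the method. Assuming it, a minimal single-degree-of-freedom sketch (parameter values are illustrative):

```python
import numpy as np

def newmark_sdof(m, k, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark time integration of m*u'' + k*u = 0 (single DOF).
    beta = 0 gives the explicit member of the family; beta > 0 requires
    solving for the new acceleration, i.e. an implicit update."""
    u, v = u0, v0
    a = -k * u / m
    out = [u]
    for _ in range(nsteps):
        u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
        a_new = -k * u_pred / (m + beta * dt**2 * k)  # implicit solve (trivial for 1 DOF)
        v += dt * ((1 - gamma) * a + gamma * a_new)
        u = u_pred + beta * dt**2 * a_new
        a = a_new
        out.append(u)
    return np.array(out)

# Undamped oscillator, omega = 2 rad/s: average acceleration (beta = 1/4)
# stays bounded at any dt; beta = 0 (central difference) needs dt < 2/omega.
print(newmark_sdof(1.0, 4.0, 1.0, 0.0, 0.5, 8))
```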
Fong, Kenneth N K; Howie, Dorothy R
2009-01-01
We investigated the effects of an explicit problem-solving skills training program using a metacomponential approach with 33 outpatients with moderate acquired brain injury, in the Hong Kong context. We compared an experimental training intervention based on this explicit problem-solving approach, which taught metacomponential strategies, with a conventional cognitive training approach that did not include this explicit metacognitive training. We found significant advantages for the experimental group on the Metacomponential Interview measure in association with the explicit metacomponential training, but transfer to the real-life problem-solving measures did not reach statistical significance. The small sample size, the limited duration of the intervention, and some limitations of these tools may have been contributing factors. The training program was demonstrated to have a significantly greater effect than the conventional training approach on metacomponential functioning and on the component of problem representation. However, these benefits were not transferable to real-life situations.
Explicit analytical expression for the condition number of polynomials in power form
NASA Astrophysics Data System (ADS)
Rack, Heinz-Joachim
2017-07-01
In his influential papers [1-3] W. Gautschi has defined and reshaped the condition number κ∞ of polynomials Pn of degree ≤ n which are represented in power form on a zero-symmetric interval [-ω, ω]. Basically, κ∞ is expressed as the product of two operator norms: an explicit factor times an implicit one (the l∞-norm of the coefficient vector of the n-th Chebyshev polynomial of the first kind relative to [-ω, ω]). We provide a new proof, economize the second factor and express it by an explicit analytical formula.
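In the standard operator-norm formulation the abstract builds on (stated here in generic form; Gautschi's explicit factors are not reproduced), write M_n for the map sending a coefficient vector a to the polynomial with those coefficients on [-ω, ω]:

```latex
\kappa_\infty \;=\; \|M_n\|_\infty \,\|M_n^{-1}\|_\infty ,
\qquad
\|M_n\|_\infty \;=\; \max_{\|a\|_\infty \le 1}\;\max_{x \in [-\omega,\omega]}
\Bigl|\sum_{k=0}^{n} a_k x^k\Bigr| \;=\; \sum_{k=0}^{n} \omega^k .
```

The second factor, the norm of the inverse map, is the implicit one that the paper expresses by an explicit analytical formula.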
High-Order/Low-Order methods for ocean modeling
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...
2015-06-01
In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
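A minimal implicit-explicit (IMEX) sketch of the pattern described: the stiff linear "fast" operator is advanced with backward Euler while the remaining "slow" terms stay explicit, loosely mirroring implicit treatment of the barotropic mode. The toy operators and step size are illustrative:

```python
import numpy as np

def imex_euler(u, dt, nsteps, slow, L):
    """IMEX Euler: backward Euler on the stiff linear operator L,
    forward Euler on the nonlinear slow terms."""
    n = len(u)
    M = np.linalg.inv(np.eye(n) - dt * L)   # factor once, reuse every step
    for _ in range(nsteps):
        u = M @ (u + dt * slow(u))
    return u

# Toy system: stiff linear coupling (eigenvalues -1 and -99) plus a mild
# quadratic nonlinearity. dt = 0.1 far exceeds the explicit limit ~2/99.
L = np.array([[-50.0, 49.0], [49.0, -50.0]])
print(imex_euler(np.array([1.0, 0.0]), 0.1, 100, lambda v: -0.1 * v**2, L))
```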
NASA Astrophysics Data System (ADS)
Chen, Guangsheng; Pan, Shufen; Hayes, Daniel J.; Tian, Hanqin
2017-08-01
The conterminous United States (CONUS) ranks second among the world's nations in the land area apportioned to plantation forest. Compared with naturally regenerated forests, plantation forests demonstrate significant differences in biophysical characteristics and in biogeochemical and hydrological cycles as a result of more intensive management practices. Inventory data have been reported for multiple time periods at plot, state, and regional scales across the CONUS, but the requisite annual, spatially explicit, long-term plantation data set for analyzing the role of plantation management at regional or national scales has been lacking. Through synthesis of multiple inventory data sources, this study developed methods to spatialize time series of plantation forest and tree species distribution data for the CONUS over the 1928-2012 period. According to this new data set, plantation forest area increased from near zero in the 1930s to 268.27 thousand km2 in 2012, accounting for 8.65 % of the total forestland area in the CONUS. Regionally, the South contained the highest proportion of plantation forests, accounting for about 19.34 % of total forestland area in 2012. The time series and gridded data set developed here can be readily applied in regional Earth system modeling frameworks for assessing the impacts of plantation management practices on forest productivity, carbon and nitrogen stocks, and greenhouse gas (e.g., CO2, CH4, and N2O) and water fluxes at regional or national scales. The gridded plantation distribution and tree species maps, and the interpolated state-level annual tree planting area and plantation area during 1928-2012, are available from https://doi.org/10.1594/PANGAEA.873558.
NASA Astrophysics Data System (ADS)
Okhovat, Reza; Boström, Anders
2017-04-01
Dynamic equations for an isotropic spherical shell are derived by using a series expansion technique. The displacement field is split into a scalar (radial) part and a vector (tangential) part. Surface differential operators are introduced to decrease the length of all equations. The starting point is a power series expansion of the displacement components in the thickness coordinate relative to the mid-surface of the shell. By using the expansions of the displacement components, the three-dimensional elastodynamic equations yield a set of recursion relations among the expansion functions that can be used to eliminate all but the four of lowest order and to express higher-order expansion functions in terms of those of lowest orders. Applying the boundary conditions on the surfaces of the spherical shell and eliminating all but the four lowest-order expansion functions give the shell equations as a power series in the shell thickness. After lengthy manipulations, the final four shell equations are obtained in a relatively compact form and are given explicitly to second order in the shell thickness. The eigenfrequencies are compared with exact three-dimensional theory, with excellent agreement, and with membrane theory.
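The starting ansatz has the generic form below (notation assumed here, with R the mid-surface radius and h the shell thickness; the paper's surface operators are not reproduced). The recursion relations then express all expansion functions with k >= 2 in terms of the four lowest-order ones:

```latex
u_i(r,\theta,\varphi,t) \;=\; \sum_{k=0}^{\infty} (r-R)^{k}\, u_{i,k}(\theta,\varphi,t),
\qquad |r-R| \le h/2 .
```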
Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. The governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models are described in detail.
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D has been developed to solve the three dimensional, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort has been to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation have been emphasized. The governing equations are solved in generalized non-orthogonal body-fitted coordinates by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. It describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang
2018-03-01
This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for each sensor node such that only vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. Using stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters, whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.
Dynamic symmetries and quantum nonadiabatic transitions
Li, Fuxiang; Sinitsyn, Nikolai A.
2016-05-30
The Kramers degeneracy theorem is one of the basic results in quantum mechanics. According to it, time-reversal symmetry makes each energy level of a half-integer spin system at least doubly degenerate, implying the absence of transitions or scatterings between degenerate states if the Hamiltonian does not depend on time explicitly. Here we generalize this result to the case of explicitly time-dependent spin Hamiltonians. We prove that for a spin system with the total spin being a half integer, if its Hamiltonian and the evolution time interval are symmetric under a specifically defined time-reversal operation, the scattering amplitude between an arbitrary initial state and its time-reversed counterpart is exactly zero. Finally, we discuss applications of this result to the multistate Landau–Zener (LZ) theory.
NASA Technical Reports Server (NTRS)
Castelli, Michael G.; Arnold, Steven M.
2000-01-01
Structural materials for the design of advanced aeropropulsion components are usually subject to loading under elevated temperatures, where a material's viscosity (resistance to flow) is greatly reduced in comparison to its viscosity under low-temperature conditions. As a result, the propensity for the material to exhibit time-dependent deformation is significantly enhanced, even when loading is limited to a quasi-linear stress-strain regime in an effort to avoid permanent (irreversible) nonlinear deformation. An understanding and assessment of such time-dependent effects in the context of combined reversible and irreversible deformation is critical to the development of constitutive models that can accurately predict the general hereditary behavior of material deformation. To this end, researchers at the NASA Glenn Research Center at Lewis Field developed a unique experimental technique that identifies the existence of and explicitly determines a threshold stress k, below which the time-dependent material deformation is wholly reversible, and above which irreversible deformation is incurred. This technique is unique in the sense that it allows, for the first time, an objective, explicit, experimental measurement of k. The underlying concept for the experiment is based on the assumption that the material's time-dependent reversible response is invariable, even in the presence of irreversible deformation.
Physician-assisted deaths under the euthanasia law in Belgium: a population-based survey.
Chambaere, Kenneth; Bilsen, Johan; Cohen, Joachim; Onwuteaka-Philipsen, Bregje D; Mortier, Freddy; Deliens, Luc
2010-06-15
Legalization of euthanasia and physician-assisted suicide has been heavily debated in many countries. To help inform this debate, we describe the practices of euthanasia and assisted suicide, and the use of life-ending drugs without an explicit request from the patient, in Flanders, Belgium, where euthanasia is legal. We mailed a questionnaire regarding the use of life-ending drugs with or without explicit patient request to physicians who certified a representative sample (n = 6927) of death certificates of patients who died in Flanders between June and November 2007. The response rate was 58.4%. Overall, 208 deaths involving the use of life-ending drugs were reported: 142 (weighted prevalence 2.0%) were with an explicit patient request (euthanasia or assisted suicide) and 66 (weighted prevalence 1.8%) were without an explicit request. Euthanasia and assisted suicide mostly involved patients less than 80 years of age, those with cancer and those dying at home. Use of life-ending drugs without an explicit request mostly involved patients 80 years or older, those with a disease other than cancer and those in hospital. Of the deaths without an explicit request, the decision was not discussed with the patient in 77.9% of cases. Compared with assisted deaths with the patient's explicit request, those without an explicit request were more likely to have a shorter length of treatment of the terminal illness, to have cure as a goal of treatment in the last week, to have a shorter estimated time by which life was shortened and to involve the administration of opioids. Physician-assisted deaths with an explicit patient request (euthanasia and assisted suicide) and without an explicit request occurred in different patient groups and under different circumstances. Cases without an explicit request often involved patients whose diseases had unpredictable end-of-life trajectories. Although opioids were used in most of these cases, misconceptions seem to persist about their actual life-shortening effects.
Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1985-01-01
An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are successively solved until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparisons of the results obtained with experimental data on three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.
A comparative analysis of massed vs. distributed practice on basic math fact fluency growth rates.
Schutte, Greg M; Duhon, Gary J; Solomon, Benjamin G; Poncy, Brian C; Moore, Kathryn; Story, Bailey
2015-04-01
To best remediate academic deficiencies, educators need to not only identify empirically validated interventions but also be able to apply instructional modifications that result in more efficient student learning. The current study compared the effect of massed and distributed practice with an explicit timing intervention to evaluate the extent to which these modifications lead to increased math fact fluency on basic addition problems. Forty-eight third-grade students were placed into one of three groups, with each group completing four 1-min math explicit timing procedures each day across 19 days. Group one completed all four 1-min timings consecutively; group two completed two back-to-back 1-min timings in the morning and two back-to-back 1-min timings in the afternoon; and group three completed one 1-min independent timing four times distributed across the day. Growth curve modeling was used to examine progress throughout the course of the study. Results suggested that students in the distributed practice conditions, both four times per day and two times per day, showed significantly higher fluency growth rates than those practicing only once per day in a massed format. These results indicate that combining distributed practice with explicit timing procedures is a useful modification that enhances student learning without the addition of extra instructional time when targeting math fact fluency.
Krieger, Nancy; Waterman, Pamela D.; Kosheleva, Anna; Chen, Jarvis T.; Carney, Dana R.; Smith, Kevin W.; Bennett, Gary G.; Williams, David R.; Freeman, Elmer; Russell, Beverley; Thornhill, Gisele; Mikolowsky, Kristin; Rifkin, Rachel; Samuel, Latrice
2011-01-01
Background: To date, research on racial discrimination and health typically has employed explicit self-report measures, despite their potentially being affected by what people are able and willing to say. We accordingly employed an Implicit Association Test (IAT) for racial discrimination, first developed and used in two recent published studies, and measured associations of the explicit and implicit discrimination measures with each other, socioeconomic and psychosocial variables, and smoking. Methodology/Principal Findings: Among the 504 black and 501 white US-born participants, age 35–64, randomly recruited in 2008–2010 from 4 community health centers in Boston, MA, black participants were over 1.5 times more likely (p<0.05) to be worse off economically (e.g., for poverty and low education) and have higher social desirability scores (43.8 vs. 28.2); their explicit discrimination exposure was also 2.5 to 3.7 times higher (p<0.05) depending on the measure used, with over 60% reporting exposure in 3 or more domains and within the last year. Higher IAT scores for target vs. perpetrator of discrimination occurred for the black versus white participants: for "black person vs. white person": 0.26 vs. 0.13; and for "me vs. them": 0.24 vs. 0.19. In both groups, only low non-significant correlations existed between the implicit and explicit discrimination measures; social desirability was significantly associated with the explicit but not implicit measures. Although neither the explicit nor implicit discrimination measures were associated with odds of being a current smoker, the excess risk for black participants (controlling for age and gender) rose in models that also controlled for the racial discrimination and psychosocial variables; additional control for socioeconomic position sharply reduced and rendered the association null. Conclusions: Implicit and explicit measures of racial discrimination are not equivalent and both warrant use in research on racial discrimination and health, along with data on socioeconomic position and social desirability. PMID:22125618
NASA Astrophysics Data System (ADS)
Hanson, David E.
2011-08-01
Based on recent molecular dynamics and ab initio simulations of small isoprene molecules, we propose a new ansatz for rubber elasticity. We envision a network chain as a series of independent molecular kinks, each comprised of a small number of backbone units, and the strain as being imposed along the contour of the chain. We treat chain extension in three distinct force regimes: (Ia) near zero strain, where we assume that the chain is extended within a well defined tube, with all of the kinks participating simultaneously as entropic elastic springs, (II) when the chain becomes sensibly straight, giving rise to a purely enthalpic stretching force (until bond rupture occurs) and, (Ib) a linear entropic regime, between regimes Ia and II, in which a force limit is imposed by tube deformation. In this intermediate regime, the molecular kinks are assumed to be gradually straightened until the chain becomes a series of straight segments between entanglements. We assume that there exists a tube deformation tension limit that is inversely proportional to the chain path tortuosity. Here we report the results of numerical simulations of explicit three-dimensional, periodic, polyisoprene networks, using these extension-only force models. At low strain, crosslink nodes are moved affinely, up to an arbitrary node force limit. Above this limit, non-affine motion of the nodes is allowed to relax unbalanced chain forces. Our simulation results are in good agreement with tensile stress vs. strain experiments.
String Mining in Bioinformatics
NASA Astrophysics Data System (ADS)
Abouelhoda, Mohamed; Ghanem, Moustafa
Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying biological sequences, DNA, RNA, and proteins, on the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view has not been explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the word "data mining" is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis has introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam mails [50], detecting plagiarism [22], and spotting duplications in software systems [14].
Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 3: Programmer's reference
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. The Programmer's Reference contains detailed information useful when modifying the program. The program structure, the Fortran variables stored in common blocks, and the details of each subprogram are described.
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. The Programmer's Reference contains detailed information useful when modifying the program. The program structure, the Fortran variables stored in common blocks, and the details of each subprogram are described.
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This User's Guide describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
NASA Astrophysics Data System (ADS)
Camp, H. A.; Moyer, Steven; Moore, Richard K.
2010-04-01
The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits describing human performance for a given time of day, spectrum, and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pairwise comparisons. MTTV metric values were calculated and then compared to the SMEs' rankings. The MTTV metric ranked the clutter in a manner similar to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring agreement with subjective human evaluation.
Investigating European genetic history through computer simulations.
Currat, Mathias; Silva, Nuno M
2013-01-01
The genetic diversity of Europeans has been shaped by various evolutionary forces including their demographic history. Genetic data can thus be used to draw inferences on the population history of Europe using appropriate statistical methods such as computer simulation, which constitutes a powerful tool to study complex models. Here, we focus on spatially explicit simulation, a method which takes population movements over space and time into account. We present its main principles and then describe a series of studies using this approach that we consider particularly significant in the context of European prehistory. All simulation studies agree that ancient demographic events played a significant role in the establishment of the European gene pool; but while earlier works support a major genetic input from the Near East during the Neolithic transition, the most recent ones positively reassess the contribution of pre-Neolithic hunter-gatherers and suggest a possible impact of very ancient demographic events. This finding of substantial genetic continuity from pre-Neolithic times to the present challenges some recent studies analyzing ancient DNA. We discuss the possible reasons for this discrepancy and identify future lines of investigation aimed at a better understanding of European evolution.
Chimpanzees and the mathematics of battle.
Wilson, Michael L; Britton, Nicholas F; Franks, Nigel R
2002-06-07
Recent experiments have demonstrated the importance of numerical assessment in animal contests. Nevertheless, few attempts have been made to model explicitly the relationship between the relative number of combatants on each side and the costs and benefits of entering a contest. One framework that may be especially suitable for making such explicit predictions is Lanchester's theory of combat, which has proved useful for understanding combat strategies in humans and several species of ants. We show, with data from a recent series of playback experiments, that a model derived from Lanchester's 'square law' predicts willingness to enter intergroup contests in wild chimpanzees (Pan troglodytes). Furthermore, the model predicts that, in contests with multiple individuals on each side, chimpanzees in this population should be willing to enter a contest only if they outnumber the opposing side by a factor of 1.5. We evaluate these results for intergroup encounters in chimpanzees and also discuss potential applications of Lanchester's square and linear laws for understanding combat strategies in other species.
Memory and consciousness: trace distinctiveness in memory retrievals.
Brunel, Lionel; Oker, Ali; Riou, Benoit; Versace, Rémy
2010-12-01
The aim of this article was to provide experimental evidence that the classical dissociation between levels of consciousness associated with memory retrieval (i.e., implicit or explicit) can be explained in terms of task dependency and distinctiveness of traces. In the study phase, we manipulated the level of isolation (partial vs. global) of the memory trace by means of an isolation paradigm (isolated words among non-isolated words). We then tested these two types of isolation in a series of tasks of increasing complexity: a lexical decision task, a recognition task, and a free recall task. The main result of this study was that distinctiveness effects were observed as a function of the type of isolation (level of isolation) and the nature of the task. We concluded that trace distinctiveness improves subsequent access to the trace, while the level of trace distinctiveness also appears to determine the possibility of conscious or explicit retrieval.
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements, assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypotheses for the source distribution and attenuation map, and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows us to prove local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
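The Neumann-series ingredient can be sketched in a few lines: when the linearized operator is a small perturbation of the identity (spectral radius of I - A below 1, matching the small-attenuation regime), A^{-1}b = Σ_k (I - A)^k b. A toy dense-matrix illustration (the actual operator acts on SPECT data, not on a small vector):

```python
import numpy as np

def neumann_solve(A, b, terms=50):
    """Approximate A^{-1} b via the Neumann series sum_k (I - A)^k b,
    valid when the spectral radius of I - A is below 1."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(terms):
        x += term
        term = term - A @ term        # apply (I - A) repeatedly
    return x

A = np.eye(3) + 0.1 * np.random.default_rng(3).standard_normal((3, 3))
b = np.array([1.0, 2.0, 3.0])
print(np.max(np.abs(neumann_solve(A, b) - np.linalg.solve(A, b))))  # ~0
```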
Neural correlates of HIV risk feelings.
Häcker, Frank E K; Schmälzle, Ralf; Renner, Britta; Schupp, Harald T
2015-04-01
Field studies on HIV risk perception suggest that people rely on impressions they have about the safety of their partner. The present fMRI study investigated the neural correlates of the intuitive perception of risk. First, during an implicit condition, participants viewed a series of unacquainted persons and performed a task unrelated to HIV risk. In the following explicit condition, participants evaluated the HIV risk for each presented person. Contrasting responses for high and low HIV risk revealed that risky stimuli evoked enhanced activity in the anterior insula and medial prefrontal regions, which are involved in salience processing and frequently activated by threatening and negative affect-related stimuli. Importantly, neural regions responding to explicit HIV risk judgments were also enhanced in the implicit condition, suggesting a neural mechanism for intuitive impressions of riskiness. Overall, these findings suggest the saliency network as a neural correlate for the intuitive sensing of risk.
Fiacconi, Chris M; Milliken, Bruce
2011-12-01
In a series of four experiments, we examine the hypothesis that selective attention is crucial for the generation of conscious knowledge of contingency information. We investigated this question using a spatial priming task in which participants were required to localize a target letter in a probe display. In Experiment 1, participants kept track of the frequency with which the predictive letter in the prime appeared in various locations. This manipulation had a negligible impact on contingency awareness. Subsequent experiments requiring participants to attend to features (color, location) of the predictive letter increased contingency awareness somewhat, but a large proportion of individuals remained unaware of the strong contingency. Together, the results of our experiments suggest that the construct of attention does not fully capture the processes that lead to contingency awareness, and point to a critical role for bottom-up feature integration in explicit contingency learning.
NASA Astrophysics Data System (ADS)
Doha, E. H.
2003-05-01
A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
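As a concrete first-order instance of such expressions (a standard identity, stated here for orientation rather than quoted from the paper), the derivative of a Laguerre polynomial is itself a linear combination of lower-degree Laguerre polynomials:

```latex
\frac{d}{dx} L_n(x) \;=\; -\sum_{k=0}^{n-1} L_k(x), \qquad n \ge 1 .
```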
NASA Astrophysics Data System (ADS)
Doha, E. H.
2004-01-01
Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expanded coefficients is also given. A simple approach in order to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.
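Again for orientation, the simplest member of this family of derivative formulae is the standard first-derivative identity (not quoted from the paper), which shifts both Jacobi parameters up by one:

```latex
\frac{d}{dx} P_n^{(\alpha,\beta)}(x) \;=\; \frac{n+\alpha+\beta+1}{2}\, P_{n-1}^{(\alpha+1,\beta+1)}(x).
```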
NASA Astrophysics Data System (ADS)
Speck, Jared
2013-07-01
In this article, we study the 1+3-dimensional relativistic Euler equations on a pre-specified conformally flat expanding spacetime background with spatial slices that are diffeomorphic to R^3. We assume that the fluid verifies the equation of state p = c_s^2 ρ, where 0 ≤ c_s ≤ √(1/3) is the speed of sound. We also assume that the reciprocal of the scale factor associated with the expanding spacetime metric verifies a c_s-dependent time-integrability condition. Under these assumptions, we use the vector field energy method to prove that an explicit family of physically motivated, spatially homogeneous, and spatially isotropic fluid solutions are globally future-stable under small perturbations of their initial conditions. The explicit solutions corresponding to each scale factor are analogs of the well-known spatially flat Friedmann-Lemaître-Robertson-Walker family. Our nonlinear analysis, which exploits dissipative terms generated by the expansion, shows that the perturbed solutions exist for all future times and remain close to the explicit solutions. This work is an extension of previous results, which showed that an analogous stability result holds when the spacetime is exponentially expanding. In the case of the radiation equation of state p = (1/3)ρ, we also show that if the time-integrability condition for the reciprocal of the scale factor fails to hold, then the explicit fluid solutions are unstable. More precisely, we show the existence of an open family of initial data such that (i) it contains arbitrarily small smooth perturbations of the explicit solutions' data and (ii) the corresponding perturbed solutions necessarily form shocks in finite time. The shock formation proof is based on the conformal invariance of the relativistic Euler equations when c_s^2 = 1/3, which allows for a reduction to a well-known result of Christodoulou.
NASA Astrophysics Data System (ADS)
Tsai, Meng-Jung; Hsu, Chung-Yuan; Tsai, Chin-Chung
2012-04-01
Due to a growing trend of exploring scientific knowledge on the Web, a number of studies have been conducted to examine students' online searching strategies. The investigation of online searching generally employs methods including surveys, interviews, screen capturing, or transactional logs. The present study first utilized a survey, the Online Information Searching Strategies Inventory (OISSI), to examine users' searching strategies in terms of control, orientation, trial and error, problem solving, purposeful thinking, selecting main ideas, and evaluation, defined here as implicit strategies. Second, this study used screen capturing to investigate the students' searching behaviors regarding the number of keywords, the quantity and depth of Web page exploration, and time attributes, defined here as explicit strategies. Finally, this study explored the role that these two types of strategies played in predicting the students' online science information searching outcomes. A total of 103 Grade 10 students were recruited from a high school in northern Taiwan. Through Pearson correlation and multiple regression analyses, the results showed that the students' explicit strategies, particularly the time attributes proposed in the present study, were more successful than their implicit strategies in predicting their science information searching outcomes. The participants who spent more time on detailed reading (explicit strategies) and had better skills in evaluating Web information (implicit strategies) tended to have superior searching performance.
Why didn't Box-Jenkins win (again)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pack, D.J.; Downing, D.J.
This paper focuses on the forecasting performance of the Box-Jenkins methodology applied to the 111 time series of the Makridakis competition. It considers the influence of the following factors: (1) time series length, (2) time-series information (autocorrelation) content, (3) time-series outliers or structural changes, (4) averaging results over time series, and (5) forecast time origin choice. It is found that the 111 time series contain substantial numbers of very short series, series with obvious structural change, and series whose histories are relatively uninformative. If these series are typical of those that one must face in practice, the real message of the competition is that univariate time series extrapolations will frequently fail regardless of the methodology employed to produce them.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series formed by the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility-graph procedure, which, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in the input time series.
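For reference, the connectivity series itself is easy to generate. The Python sketch below implements the natural visibility criterion (samples i and j are linked when the straight line between them clears every intermediate sample) and returns the degree sequence; plain Gaussian noise stands in for the Ito-generated series of the study.

```python
import numpy as np

def visibility_degrees(y):
    """Natural visibility graph of a time series: samples i < j are linked
    if every intermediate sample k lies strictly below the line joining
    (i, y[i]) and (j, y[j]). Returns the degree (connectivity) series."""
    n = len(y)
    degree = np.zeros(n, dtype=int)
    for i in range(n - 1):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            line = y[j] + (y[i] - y[j]) * (j - ks) / (j - i)
            if ks.size == 0 or np.all(y[ks] < line):
                degree[i] += 1
                degree[j] += 1
    return degree

rng = np.random.default_rng(0)
k_t = visibility_degrees(rng.normal(size=500))   # connectivity time series
print(k_t[:10])
```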
NASA Astrophysics Data System (ADS)
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2005-12-01
In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts and obfuscations from atmospheric and other effects. We describe the underpinning concepts of a procedure for a robust interpolation of NDVI data that does not have the limitations of other mathematical approaches which require orthonormal basis functions (e.g. Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter procedure is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscations because of clouds and snow cover. An initial "starting model" for the average annual cycle and long term (7 year) trend is determined by jointly fitting a common set of complex annual harmonics and a low order polynomial to an entire multi-year time series in one step. This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1.5 months. Instabilities during large time gaps in the data are suppressed by introducing an expectation of minimum roughness on the fitted time series. Our next significant computational step involves a constrained least squares fit to the observed NDVI data. Residuals between the observed NDVI value and the predicted starting model are computed, and the inverse of these residuals provides the weights for a weighted least squares analysis whereby a set of annual eighth-order splines are fit to the 7 years of NDVI data. Although a series of independent eighth-order annual functionals over a period of 7 years is intrinsically unstable when there are significant data gaps, the splined versions for this specific application are quite stable due to explicit continuity conditions on the values and derivatives of the functionals across contiguous years, as well as a priori constraints on the predicted values vis-a-vis the assumed initial model. Our procedure allows us to robustly interpolate original unequally-spaced NDVI data with a new time series having the most-appropriate, user-defined time base. We apply this approach to the temporal behavior of vegetation in our 150 x 150 km study area. Such a small area, being so rich in vegetation diversity, is particularly useful to view in map form and by animated annual and multi-year time sequences, since the interrelation between phenology, topography and specific usage patterns becomes clear.
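To make the starting-model step concrete, here is a minimal Python sketch under stated assumptions: a quadratic trend plus harmonics with periods of 12, 6, 3, and 1.5 months is fitted by least squares to irregularly sampled synthetic NDVI, then re-fitted with an improvised asymmetric robust weighting (negative residuals, the signature of clouds and snow, are penalized hardest) standing in for the residual-based weights described above; the fitted model then resamples the series on a uniform time base.

```python
import numpy as np

def harmonic_design(t_months, periods=(12.0, 6.0, 3.0, 1.5), poly_order=2):
    """Low-order polynomial trend plus 4 cosine / 4 sine harmonics, mirroring
    the starting model in the abstract. No even sampling or orthogonality is
    required, so irregular times and gaps are handled directly."""
    cols = [t_months**p for p in range(poly_order + 1)]
    for P in periods:
        w = 2.0 * np.pi / P
        cols += [np.cos(w * t_months), np.sin(w * t_months)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 84.0, 160))       # ~7 years, irregular sampling
ndvi = 0.3 + 0.2 * np.cos(2 * np.pi * t / 12 - 1.0) + rng.normal(0, 0.03, t.size)
ndvi[rng.random(t.size) < 0.1] -= 0.25         # cloud-like dropouts bias low

A = harmonic_design(t)
coef, *_ = np.linalg.lstsq(A, ndvi, rcond=None)
for _ in range(5):                             # robust reweighting; negative
    resid = ndvi - A @ coef                    # residuals (clouds, snow) are
    w = 1.0 / (1.0 + np.abs(resid) / np.std(resid))   # penalized hardest
    w[resid < 0] *= 0.2
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], ndvi * sw, rcond=None)

t_even = np.linspace(0.0, 84.0, 85)            # resample on a uniform grid
ndvi_fit = harmonic_design(t_even) @ coef
```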
Community detection using Kernel Spectral Clustering with memory
NASA Astrophysics Data System (ADS)
Langone, Rocco; Suykens, Johan A. K.
2013-02-01
This work is related to the problem of community detection in dynamic scenarios, which arises, for instance, in the segmentation of moving objects, the clustering of telephone traffic data, and time-series micro-array data. A desirable feature of a clustering model which has to capture the evolution of communities over time is temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend and at the same time smooth out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows cluster memberships of new nodes to be predicted via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as valid prior knowledge. The latter, in fact, allows the model to cluster the current data well while remaining consistent with the recent history. Here we propose a generalization of the MKSC model with an arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother over time the clustering results are. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, a state-of-the-art method, and obtain comparable or better results.
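The memory idea can be illustrated without the LS-SVM machinery. The Python toy below is not the MKSC formulation; it simply blends each snapshot's affinity matrix with a geometrically decaying window of past snapshots before running ordinary spectral clustering, so that partitions evolve smoothly, and lengthening the window plays the role of adding memory.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def smoothed_labels(affinities, n_clusters=2, memory=3, decay=0.5):
    """Blend each snapshot's affinity matrix with up to `memory` past
    snapshots (geometrically decaying weights) before ordinary spectral
    clustering, so short-term noise is smoothed while drift is tracked."""
    labels = []
    for t in range(len(affinities)):
        past = affinities[max(0, t - memory):t + 1]     # oldest ... newest
        wts = np.array([decay**k for k in range(len(past) - 1, -1, -1)])
        blended = sum(w * A for w, A in zip(wts, past)) / wts.sum()
        sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                random_state=0)
        labels.append(sc.fit_predict(blended))
    return labels

rng = np.random.default_rng(2)
base = np.kron(np.eye(2), np.ones((10, 10)))    # two clean communities
snaps = [np.clip(base + 0.3 * rng.random((20, 20)), 0.0, 1.0) for _ in range(5)]
snaps = [(S + S.T) / 2.0 for S in snaps]        # symmetric noisy affinities
print(smoothed_labels(snaps)[-1])               # stable two-block partition
```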
NASA Technical Reports Server (NTRS)
Geddes, K. O.
1977-01-01
If a linear ordinary differential equation with polynomial coefficients is converted into integrated form then the formal substitution of a Chebyshev series leads to recurrence equations defining the Chebyshev coefficients of the solution function. An explicit formula is presented for the polynomial coefficients of the integrated form in terms of the polynomial coefficients of the differential form. The symmetries arising from multiplication and integration of Chebyshev polynomials are exploited in deriving a general recurrence equation from which can be derived all of the linear equations defining the Chebyshev coefficients. Procedures for deriving the general recurrence equation are specified in a precise algorithmic notation suitable for translation into any of the languages for symbolic computation. The method is algebraic and it can therefore be applied to differential equations containing indeterminates.
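The integration step is where the sparsity comes from: integrating a Chebyshev series couples only neighbouring coefficients. A short numpy check of that relation, b_n = (a_{n-1} - a_{n+1})/(2n) for n >= 2 (the n = 1 term picks up the exact integral of T_0), against the library routine chebint:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# If f = sum a_n T_n, its antiderivative sum b_n T_n satisfies the short
# recurrence below; this sparsity is what turns the integrated form of an
# ODE into banded recurrence equations for the Chebyshev coefficients.
a = np.array([1.0, -0.5, 0.25, 0.75, -0.1])
a_pad = np.concatenate([a, [0.0, 0.0]])

b = np.zeros(a.size + 1)
b[1] = a_pad[0] - a_pad[2] / 2.0        # special case: integral of T_0 is T_1
for n in range(2, b.size):
    b[n] = (a_pad[n - 1] - a_pad[n + 1]) / (2 * n)

assert np.allclose(b[1:], C.chebint(a)[1:])   # b_0 is just the free constant
print(np.round(b, 4))
```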
YORP torque as the function of shape harmonics
NASA Astrophysics Data System (ADS)
Breiter, Sławomir; Michalska, Hanna
2008-08-01
The second-order analytical approximation of the mean Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) torque components is given as an explicit function of the shape spherical harmonics coefficients for a sufficiently regular minor body. The results are based upon a new expression for the insolation function, significantly simpler than in previous works. A linearized plane-parallel model of the temperature distribution derived from the insolation function allows us to take into account a non-zero conductivity. Final expressions for the three average components of the YORP torque related to the rotation period, obliquity, and precession are given in the form of Legendre series in the cosine of obliquity. The series have good numerical properties and can be easily truncated according to the degree of the Legendre polynomials or associated functions, with the first two terms playing the principal role.
Modeling volatility using state space models.
Timmer, J; Weigend, A S
1997-08-01
In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
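For a one-dimensional analog of such a model, a scalar Kalman filter suffices. The Python sketch below (a simplified stand-in, not the authors' two-dimensional specification) filters a hidden AR(1) log-volatility observed through additive noise, treating log squared returns as the observations; the observation variance pi^2/2 is the variance of log chi-squared with one degree of freedom.

```python
import numpy as np

def kalman_ar1(y, phi, q, r):
    """Scalar Kalman filter for a hidden AR(1) state observed with additive
    noise: x_t = phi*x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    x, p = 0.0, q / max(1e-12, 1.0 - phi**2)    # stationary initialization
    xs = np.empty(len(y))
    for t, obs in enumerate(y):
        x, p = phi * x, phi**2 * p + q          # predict
        k = p / (p + r)                          # update
        x, p = x + k * (obs - x), (1.0 - k) * p
        xs[t] = x
    return xs

rng = np.random.default_rng(3)
T, phi = 2000, 0.98                  # relaxation time ~ -1/ln(phi) ~ 50 steps
logvol = np.zeros(T)
for t in range(1, T):
    logvol[t] = phi * logvol[t - 1] + 0.1 * rng.normal()
returns = np.exp(logvol / 2.0) * rng.normal(size=T)

y = np.log(returns**2 + 1e-12)       # observed log-volatility proxy
y -= y.mean()                         # absorb the E[log chi^2_1] offset
est = kalman_ar1(y, phi=0.98, q=0.01, r=np.pi**2 / 2)  # var(log chi^2_1)
print(np.corrcoef(est, logvol)[0, 1])   # filtered state tracks true log-vol
```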
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Chacon, Luis; Knoll, Dana Alan
2015-07-31
A multi-rate PIC formulation was developed that employs large timesteps for slow field evolution, and small (adaptive) timesteps for particle orbit integrations. Implementation is based on a JFNK solver with nonlinear elimination and moment preconditioning. The approach is free of numerical instabilities (ω_pe Δt >> 1, and Δx >> λ_D), and requires many fewer dofs (vs. explicit PIC) for comparable accuracy in challenging problems. Significant gains (vs. conventional explicit PIC) may be possible for large scale simulations. The paper is organized as follows: Vlasov-Maxwell Particle-in-cell (PIC) methods for plasmas; Explicit, semi-implicit, and implicit time integrations; Implicit PIC formulation (Jacobian-Free Newton-Krylov (JFNK) with nonlinear elimination allows different treatments of disparate scales, discrete conservation properties (energy, charge, canonical momentum, etc.)); Some numerical examples; and Summary.
ERIC Educational Resources Information Center
Quixal, Martí; Meurers, Detmar
2016-01-01
The paper tackles a central question in the field of Intelligent Computer-Assisted Language Learning (ICALL): How can language learning tasks be conceptualized and made explicit in a way that supports the pedagogical goals of current Foreign Language Teaching and Learning and at the same time provides an explicit characterization of the Natural…
ERIC Educational Resources Information Center
Doornwaard, Suzan M.; Bickham, David S.; Rich, Michael; ter Bogt, Tom F. M.; van den Eijnden, Regina J. J. M.
2015-01-01
Although research has repeatedly demonstrated that adolescents' use of sexually explicit Internet material (SEIM) is related to their endorsement of permissive sexual attitudes and their experience with sexual behavior, it is not clear how linkages between these constructs unfold over time. This study combined 2 types of longitudinal modeling,…
Regenerating time series from ordinal networks.
McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael
2017-03-01
Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
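A compact sketch of the two core steps, building the ordinal network and regenerating a surrogate by a random walk, is given below in Python. It operates at the level of ordinal symbols (windows mapped to their sorting permutation) and uses a logistic-map orbit as a stand-in for the sampled chaotic flows; the further step of mapping walks back to amplitude values is omitted here.

```python
import numpy as np
from collections import defaultdict

def ordinal_network(x, w=4):
    """Map each length-w window to the permutation that sorts it; nodes are
    permutations, directed edges count observed transitions."""
    symbols = [tuple(np.argsort(x[i:i + w])) for i in range(len(x) - w + 1)]
    edges = defaultdict(lambda: defaultdict(int))
    for a, b in zip(symbols, symbols[1:]):
        edges[a][b] += 1
    return symbols, edges

def random_walk(edges, start, n, rng):
    """Surrogate symbol sequence from a random walk following the empirical
    transition frequencies; restarts if a sink pattern is reached."""
    seq, node = [start], start
    for _ in range(n - 1):
        nbrs = list(edges[node])
        if not nbrs:
            node = start
        else:
            counts = np.array([edges[node][b] for b in nbrs], dtype=float)
            node = nbrs[rng.choice(len(nbrs), p=counts / counts.sum())]
        seq.append(node)
    return seq

x = np.empty(5000)                    # chaotic input: logistic-map orbit
x[0] = 0.1
for i in range(1, x.size):
    x[i] = 3.99 * x[i - 1] * (1.0 - x[i - 1])
symbols, edges = ordinal_network(x)
surrogate = random_walk(edges, symbols[0], 1000, np.random.default_rng(4))
```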
Neill, Erica; Rossell, Susan Lee
2013-02-28
Semantic memory deficits in schizophrenia (SZ) are profound, yet there is no research comparing implicit and explicit semantic processing in the same participant sample. In the current study, both implicit and explicit priming are investigated using direct (LION-TIGER) and indirect (LION-STRIPES; where tiger is not displayed) stimuli comparing SZ to healthy controls. Based on a substantive review (Rossell and Stefanovic, 2007) and meta-analysis (Pomarol-Clotet et al., 2008), it was predicted that SZ would be associated with increased indirect priming implicitly. Further, it was predicted that SZ would be associated with abnormal indirect priming explicitly, replicating earlier work (Assaf et al., 2006). No specific hypotheses were made for implicit direct priming due to the heterogeneity of the literature. It was hypothesised that explicit direct priming would be intact based on the structured nature of this task. The pattern of results suggests (1) intact reaction time (RT) and error performance implicitly in the face of abnormal direct priming and (2) impaired RT and error performance explicitly. This pattern confirms general findings regarding implicit/explicit memory impairments in SZ whilst highlighting the unique pattern of performance specific to semantic priming. Finally, priming performance is discussed in relation to thought disorder and length of illness.
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis at JPL: all use the same GPS Precise Point Positioning Analysis raw time series, with variations in time series analysis/post-processing driven by different users.
- JPL Global Time Series/Velocities: researchers studying the reference frame, combining with VLBI/SLR/DORIS.
- JPL/SOPAC Combined Time Series/Velocities: crustal deformation for tectonic, volcanic, and ground water studies.
- ARIA Time Series/Coseismic Data Products: hazard monitoring and response focused.
- ARIA data system designed to integrate GPS and InSAR: GPS tropospheric delay used for correcting InSAR; Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR.
Zhen Liu is talking tomorrow on InSAR time series analysis.
Synchronization of spontaneous eyeblinks while viewing video stories
Nakano, Tamami; Yamamoto, Yoshiharu; Kitajo, Keiichi; Takahashi, Toshimitsu; Kitazawa, Shigeru
2009-01-01
Blinks are generally suppressed during a task that requires visual attention and tend to occur immediately before or after the task when the timing of its onset and offset is explicitly given. During the viewing of video stories, blinks are expected to occur at explicit breaks such as scene changes. However, given that the scene length is unpredictable, there should also be appropriate timing for blinking within a scene to prevent temporal loss of critical visual information. Here, we show that spontaneous blinks were highly synchronized between and within subjects when they viewed the same short video stories, but were not explicitly tied to the scene breaks. Synchronized blinks occurred during scenes that required less attention, such as at the conclusion of an action, during the absence of the main character, during a long shot, and during repeated presentations of a similar scene. In contrast, blink synchronization was not observed when subjects viewed a background video or when they listened to a story read aloud. The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events. PMID:19640888
NASA Astrophysics Data System (ADS)
Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.
2018-03-01
In this paper, the numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as the spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to the analysis of the numerically modelled results was developed with the aim of presenting an analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validity of the explicit FE modelling of the bearing.
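As an illustration of the standard signal processing techniques mentioned above, the Python sketch below builds a synthetic impact train (every parameter is invented) and computes a spectrogram for the time-frequency view plus an envelope spectrum, in which the defect repetition frequency and its harmonics should stand out.

```python
import numpy as np
from scipy.signal import hilbert, spectrogram

fs = 20000.0                                   # all parameters are invented
t = np.arange(0.0, 2.0, 1.0 / fs)
bpfo = 87.0                                    # hypothetical defect frequency
impacts = (np.sin(2 * np.pi * bpfo * t) > 0.999).astype(float)
ring = np.exp(-t * 800.0) * np.sin(2 * np.pi * 3000.0 * t)
signal = np.convolve(impacts, ring[:400], mode="same")  # decaying resonances
signal += 0.05 * np.random.default_rng(5).normal(size=t.size)

f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=1024)   # time-frequency view
envelope = np.abs(hilbert(signal))                       # demodulate impacts
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1.0 / fs)
print(freqs[np.argmax(env_spec)])    # peak near bpfo or one of its harmonics
```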
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by explicitly preventing contact. Introducing contact constraints results in a significant increase in the stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
Explicit integration of Friedmann's equation with nonlinear equations of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Shouxin; Gibbons, Gary W.; Yang, Yisong, E-mail: chensx@henu.edu.cn, E-mail: gwg1@damtp.cam.ac.uk, E-mail: yisongyang@nyu.edu
2015-05-01
In this paper we study the integrability of the Friedmann equations, when the equation of state for the perfect-fluid universe is nonlinear, in the light of the Chebyshev theorem. A series of important, yet previously untouched, problems is worked out, including the generalized Chaplygin gas, two-term energy density, trinomial Friedmann, Born-Infeld, two-fluid models, and Chern-Simons modified gravity theory models. With the explicit integration, we are able to understand exactly the roles that the physical parameters of the various models play in the cosmological evolution, which may also offer clues to a profound understanding of the problems in general settings. For example, in the Chaplygin gas universe, a few integrable cases lead us to derive a universal formula, of explicit form, for the asymptotic exponential growth rate of the scale factor, whether the Friedmann equation is integrable or not; this reveals the coupled roles played by the various physical sectors, and it is seen that, as long as there is a tiny presence of nonlinear matter, conventional linear matter contributes to the dark matter, which becomes significant near the phantom divide line. The Friedmann equations also arise in areas of physics not directly related to cosmology. We provide some examples, ranging from geometric optics and central orbits to soap films and the shape of glaciated valleys, to which our results may be applied.
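A numerical companion for the generalized Chaplygin case (a sketch with arbitrary constants, in units with 8*pi*G/3 = 1, not the paper's closed-form integrations) integrates the flat Friedmann equation and checks the late-time growth rate against the asymptotic de Sitter value implied by rho(a):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generalized Chaplygin gas p = -A/rho**alpha: the continuity equation gives
# the closed form rho(a) = (A + B*a**(-3*(1+alpha)))**(1/(1+alpha)), and the
# flat Friedmann equation (units with 8*pi*G/3 = 1) reads (a'/a)**2 = rho.
# Constants are arbitrary illustration values, not taken from the paper.
A, B, alpha = 2.0, 1.0, 0.5

def rho(a):
    return (A + B * a**(-3.0 * (1.0 + alpha)))**(1.0 / (1.0 + alpha))

def friedmann(t, y):
    a = y[0]
    return [a * np.sqrt(rho(a))]        # da/dt = a*H with H = sqrt(rho(a))

sol = solve_ivp(friedmann, (0.0, 10.0), [0.1], dense_output=True, rtol=1e-8)

# Late times approach de Sitter expansion; the asymptotic rate follows from
# rho -> A**(1/(1+alpha)), i.e. H_inf = A**(1/(2*(1+alpha))).
H_inf = A**(1.0 / (2.0 * (1.0 + alpha)))
t1, t2 = 8.0, 10.0
measured = np.log(sol.sol(t2)[0] / sol.sol(t1)[0]) / (t2 - t1)
print(measured, H_inf)                  # the two rates agree closely
```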
The numerology of gender: gendered perceptions of even and odd numbers
Wilkie, James E. B.; Bodenhausen, Galen V.
2015-01-01
Do numbers have gender? Wilkie and Bodenhausen (2012) examined this issue in a series of experiments on perceived gender. They examined the perceived gender of baby faces and foreign names. Arbitrary numbers presented with these faces and names influenced their perceived gender. Specifically, odd numbers connoted masculinity, while even numbers connoted femininity. In two new studies (total N = 315), we further examined the gendering of numbers. The first study examined explicit ratings of 1-digit numbers. We confirmed that odd numbers seemed masculine while even numbers seemed feminine. Although both men and women showed this pattern, it was more pronounced among women. We also examined whether this pattern holds for automatic as well as deliberated reactions. Results of an Implicit Association Test showed that it did, but only among the women. The implicit and explicit patterns of numerical gender ascription were moderately correlated. The second study examined explicit perceptions of 2-digit numbers. Again, women viewed odd numbers as more masculine and less feminine than even numbers. However, men viewed 2-digit numbers as relatively masculine, regardless of whether they were even or odd. These results indicate that women and men impute gender to numbers in different ways and to different extents. We discuss possible implications for understanding how people relate to and are influenced by numbers in a variety of real-life contexts. PMID:26113839
Lara, Juan A; Lizcano, David; Pérez, Aurora; Valente, Juan P
2014-10-01
There are now domains where information is recorded over a period of time, leading to sequences of data known as time series. In many domains, like medicine, time series analysis requires focusing on certain regions of interest, known as events, rather than analyzing the whole time series. In this paper, we propose a framework for knowledge discovery in both one-dimensional and multidimensional time series containing events. We show how our approach can be used to classify medical time series by means of a process that identifies events in time series, generates time series reference models of representative events, and compares two time series by analyzing the events they have in common. We have applied our framework to time series generated in the areas of electroencephalography (EEG) and stabilometry. Framework performance was evaluated in terms of classification accuracy, and the results confirmed that the proposed schema has potential for classifying EEG and stabilometric signals. The proposed framework is useful for discovering knowledge from medical time series containing events, such as stabilometric and electroencephalographic time series. These results would be equally applicable to other medical domains generating iconographic time series, such as, for example, electrocardiography (ECG).
Spatially explicit rangeland erosion monitoring using high-resolution digital aerial imagery
Gillan, Jeffrey K.; Karl, Jason W.; Barger, Nichole N.; Elaksher, Ahmed; Duniway, Michael C.
2016-01-01
Nearly all of the ecosystem services supported by rangelands, including production of livestock forage, carbon sequestration, and provisioning of clean water, are negatively impacted by soil erosion. Accordingly, monitoring the severity, spatial extent, and rate of soil erosion is essential for long-term sustainable management. Traditional field-based methods of monitoring erosion (sediment traps, erosion pins, and bridges) can be labor intensive and therefore are generally limited in spatial intensity and/or extent. There is a growing effort to monitor natural resources at broad scales, which is driving the need for new soil erosion monitoring tools. One remote-sensing technique that can be used to monitor soil movement is a time series of digital elevation models (DEMs) created using aerial photogrammetry methods. By geographically coregistering the DEMs and subtracting one surface from the other, an estimate of soil elevation change can be created. Such analysis enables spatially explicit quantification and visualization of net soil movement including erosion, deposition, and redistribution. We constructed DEMs (12-cm ground sampling distance) on the basis of aerial photography immediately before and 1 year after a vegetation removal treatment on a 31-ha Piñon-Juniper woodland in southeastern Utah to evaluate the use of aerial photography in detecting soil surface change. On average, we were able to detect surface elevation change of ±8-9 cm and greater, which was sufficient for the large amount of soil movement exhibited on the study area. Detecting more subtle soil erosion could be achieved using the same technique with higher-resolution imagery from lower-flying aircraft such as unmanned aerial vehicles. DEM differencing and process-focused field methods provided complementary information and a more complete assessment of soil loss and movement than any single technique alone. Photogrammetric DEM differencing could be used as a technique to quantitatively monitor surface change over time relative to management activities.
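The arithmetic of DEM differencing is simple enough to sketch. The Python fragment below, on synthetic stand-in surfaces, subtracts coregistered pre- and post-treatment DEMs, masks changes below a ~9 cm detection limit as in the study, and converts the remainder to a net volume at the 12-cm ground sampling distance:

```python
import numpy as np

rng = np.random.default_rng(6)
dem_before = rng.normal(1500.0, 2.0, (200, 200))          # elevations, m
erosion = np.zeros((200, 200))
erosion[60:90, 40:120] = -0.25                            # a 25 cm deep gully
dem_after = dem_before + erosion + rng.normal(0.0, 0.03, (200, 200))

dod = dem_after - dem_before          # DEM of difference
limit = 0.09                          # minimum detectable change, m
significant = np.where(np.abs(dod) > limit, dod, np.nan)

cell_area = 0.12**2                   # 12 cm ground sampling distance
net_volume = np.nansum(significant) * cell_area   # m^3; negative = net loss
print(round(float(net_volume), 1))
```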
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing the model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximations to the GA model for solving a variety of hydrological problems.
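For orientation, the implicit relation and one crude explicit substitute can be set side by side. In the Python sketch below, the implicit GA equation is solved by bracketed root finding, while the explicit stand-in (two successive substitutions started from the small-time sorptivity limit) is a generic device, deliberately not one of the nine published approximations ranked in the study; parameters are illustrative, not the paper's data.

```python
import numpy as np
from scipy.optimize import brentq

# Green-Ampt cumulative infiltration F(t) solves the implicit relation
#   F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)).
# Illustrative loam-like parameters (not the paper's data sets):
K, psi, dtheta = 1.04, 8.89, 0.35     # cm/h, cm, dimensionless

def F_implicit(t):
    g = lambda F: F - K * t - psi * dtheta * np.log(1.0 + F / (psi * dtheta))
    return brentq(g, 1e-9, K * t + 50.0 * psi * dtheta)   # bracketed root

def F_explicit(t):
    F = np.sqrt(2.0 * K * psi * dtheta * t)   # small-time (sorptivity) limit
    for _ in range(2):                         # two successive substitutions
        F = K * t + psi * dtheta * np.log(1.0 + F / (psi * dtheta))
    return F

for t in (0.1, 1.0, 10.0):
    Fi, Fe = F_implicit(t), F_explicit(t)
    print(t, round(Fi, 3), round(Fe, 3), f"{abs(Fe - Fi) / Fi:.2%}")
```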
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
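The idea is easy to demonstrate on a first-order example. In the Python sketch below (which, for brevity, reads the initial condition off the data rather than eliminating it as the paper's formulation does), integrating the equation error once removes the derivative, leaving a problem that is linear in the unknown parameters:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(7)
a_true, b_true = 2.0, 3.0
t = np.linspace(0.0, 5.0, 500)
u = np.sin(1.7 * t)
y = np.zeros_like(t)
for k in range(1, t.size):                    # simulate y' = -a*y + b*u
    dt = t[k] - t[k - 1]
    y[k] = y[k - 1] + dt * (-a_true * y[k - 1] + b_true * u[k - 1])
y_obs = y + 0.01 * rng.normal(size=t.size)

# Integrated equation error: y(t) - y(0) + a*Y(t) - b*U(t) = 0, linear in a, b.
Y = cumulative_trapezoid(y_obs, t, initial=0.0)
U = cumulative_trapezoid(u, t, initial=0.0)
lhs = y_obs - y_obs[0]
(a_hat, b_hat), *_ = np.linalg.lstsq(np.column_stack([-Y, U]), lhs, rcond=None)
print(a_hat, b_hat)                           # close to (2.0, 3.0)
```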
The CFL condition for spectral approximations to hyperbolic initial-boundary value problems
NASA Technical Reports Server (NTRS)
Gottlieb, David; Tadmor, Eitan
1991-01-01
The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step, delta t, is restricted by the CFL-like condition, delta t less than Const. N(exp-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L(exp 2)-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting for its own sake. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully-explicit spectral approximations in the nonperiodic case.
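The N^2 scaling is easy to observe numerically. The sketch below builds the Chebyshev collocation differentiation matrix (after Trefethen's cheb program), imposes an inflow boundary condition by deleting the boundary row and column, and shows that the spectral radius grows like N^2, which is what forces delta t of order N^-2 for explicit time-stepping:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation differentiation matrix on N+1 Gauss-Lobatto
    points (after Trefethen's `cheb` program)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0)**np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# For u_t = u_x with inflow at x = 1 (node 0 removed), the spectral radius
# of the restricted differentiation matrix grows like N^2, so an explicit
# scheme needs dt <= Const * N**-2, the CFL-type restriction of the paper.
for N in (16, 32, 64, 128):
    D, x = cheb(N)
    r = np.abs(np.linalg.eigvals(D[1:, 1:])).max()
    print(N, r / N**2)    # ratio is roughly constant in N
```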
Decision or no decision: how do patient-physician interactions end and what matters?
Tai-Seale, Ming; Bramson, Rachel; Bao, Xiaoming
2007-03-01
A clearly stated clinical decision can induce cognitive closure in patients and is an important investment in the ending of patient-physician communication. Little is known about how often explicit decisions are made in primary care visits. Our aim was to use an innovative videotape analysis approach to assess physicians' propensity to state decisions explicitly, and to examine the factors influencing decision patterns. We coded topics discussed in 395 videotapes of primary care visits, noting the number of instances and the length of discussions on each topic, and how discussions ended. A regression analysis tested the relationship between explicit decisions and visit factors such as the nature of topics under discussion, instances of discussion, the amount of time the patient spoke, and competing demands from other topics. About 77% of topics ended with explicit decisions. Patients spoke for an average of 58 seconds total per topic. Patients spoke more during topics that ended with an explicit decision (67 seconds, compared with 36 seconds otherwise). The number of instances of a topic was associated with higher odds of an explicit decision (OR = 1.73, p < 0.01). Increases in the number of topics discussed in a visit (OR = 0.95, p < .05) and topics on lifestyle and habits (OR = 0.60, p < .01) were associated with lower odds of explicit decisions. Although discussions often ended with explicit decisions, there were variations related to the content and dynamics of the interactions. We recommend strengthening patients' voice and developing clinical tools, e.g., an "exit prescription," to improve decision making.
Kato expansion in quantum canonical perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikolaev, Andrey, E-mail: Andrey.Nikolaev@rdtex.ru
2016-06-15
This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson’s ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. We compare the efficiency of the corresponding computational algorithm with the efficiencies of the Van Vleck and Magnus methods for high perturbative orders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, James B.
We report the third in a series of 'exact' quantum Monte Carlo calculations for the potential energy of the saddle point of the barrier for the reaction H + H{sub 2} → H{sub 2} + H. The barrier heights determined are 9.61 ± 0.01 in 1992/94, 9.608 ± 0.001 in 2003, and 9.6089 ± 0.0001 in 2016 (this work), all in kcal/mole and successively a factor of ten more accurate. The new value is below the lowest value from explicitly correlated Gaussian calculations and within the estimated limits of extrapolated multireference configuration calculations.
Effective core potential calculations on small molecules containing transition metal atoms
NASA Astrophysics Data System (ADS)
Gropen, O.; Wahlgren, U.; Pettersson, L.
1982-04-01
A series of test calculations on diatomic oxides and hydrides of Sc, Ti, Cr, Ni and Zn have been carried out in order to test the reliability of some pseudopotential methods. Several different forms of some pseudopotential operators were used. Only the highest valence orbitals of each atomic symmetry were explicitly included in the calculations. The results indicate that there are problems associated with all the investigated operators, particularly for the lighter transition elements. It is suggested that more reliable results may be obtained with pseudopotential methods using smaller cores.
Polarized 3-folds in a codimension 10 weighted homogeneous F4 variety
NASA Astrophysics Data System (ADS)
Qureshi, Muhammad Imran
2017-10-01
We describe the construction of a codimension 10 weighted homogeneous variety wΣF4(μ, u) corresponding to the exceptional Lie group F4 by explicit computation of its graded ring structure. We give a formula for the Hilbert series of the generic weighted wΣF4(μ, u) in terms of representation theoretic data of F4. We also construct some families of polarized 3-folds in codimension 10 whose general member is a weighted complete intersection of some wΣF4(μ, u).
NASA Astrophysics Data System (ADS)
Vance, Colin James
This dissertation develops spatially explicit econometric models by linking Thematic Mapper (TM) satellite imagery with household survey data to test behavioral propositions of semi-subsistence farmers in the Southern Yucatan Peninsular Region (SYPR) of Mexico. Covering 22,000 km2, this agricultural frontier contains one of the largest and oldest expanses of tropical forests in the Americas outside of Amazonia. Over the past 30 years, the SYPR has undergone significant land-use change largely owing to the construction of a highway through the region's center in 1967. These landscape dynamics are modeled by exploiting a spatial database linking a time series of TM imagery with socio-economic and geo-referenced land-use data collected from a random sample of 188 farm households. The dissertation moves beyond the existing literature on deforestation in three principal respects. Theoretically, the study develops a non-separable model of land-use that relaxes the assumption of profit maximization almost exclusively invoked in studies of the deforestation issue. The model is derived from a utility-maximizing framework that explicitly incorporates the interdependency of the household's production and consumption choices as these affect the allocation of resources. Methodologically, the study assembles a spatial database that couples satellite imagery with household-level socio-economic data. The field survey protocol recorded geo-referenced land-use data through the use of a geographic positioning system and the creation of sketch maps detailing the location of different uses observed within individual plots. Empirically, the study estimates spatially explicit econometric models of land-use change using switching regressions and duration analysis. A distinguishing feature of these models is that they link the dependent and independent variables at the level of the decision unit, the land manager, thereby capturing spatial and temporal heterogeneity that is otherwise obscured in studies using data aggregated to higher scales of analysis. The empirical findings suggest the potential of various policy initiatives to impede or otherwise alter the pattern of land-cover conversions. In this regard, the study reveals that consideration of missing or thin markets is critical to understanding how farmers in the SYPR reach subsistence and commercial cropping decisions.
Detection of a sudden change of the field time series based on the Lorenz system.
Da, ChaoJiu; Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan
2017-01-01
We conducted an exploratory study of the detection of a sudden change of the field time series based on the numerical solution of the Lorenz system. First, the time when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system was quantitatively marked and the sudden change time of the Lorenz system was obtained. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered as a vector time series. We transformed the vector time series into a time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden change of the resulting time series was detected using the sliding t-test method. Comparing the test results with the quantitatively marked time indicated that the method could detect every sudden change of the Lorenz path, thus the method is effective. Finally, we used the method to detect the sudden change of the pressure field time series and temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between the field time series and vector time series; thus, we provide a new method for the detection of the sudden change of the field time series.
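A condensed Python version of the experiment (using the x-component directly as the scalar series, a simplification of the inner-product construction described above) integrates the Lorenz system, marks wing-to-wing jumps as sign changes, and flags them with a sliding t-test:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import ttest_ind

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 60.0), [1.0, 1.0, 1.0], max_step=0.01,
                dense_output=True)
t = np.arange(0.0, 60.0, 0.01)
x = sol.sol(t)[0]
jumps = t[1:][np.sign(x[1:]) != np.sign(x[:-1])]   # marked regime changes

def sliding_t_test(series, w=50, alpha=1e-6):
    """Flag index i when the means of the w samples before and after i
    differ significantly: a simple abrupt-change detector."""
    hits = [i for i in range(w, len(series) - w)
            if ttest_ind(series[i - w:i], series[i:i + w]).pvalue < alpha]
    return np.array(hits, dtype=int)

flags = sliding_t_test(x)
print(jumps[:5])
print(t[flags[:5]])   # detections cluster around the marked jump times
```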
Waldner, François; Hansen, Matthew C; Potapov, Peter V; Löw, Fabian; Newby, Terence; Ferreira, Stefanus; Defourny, Pierre
2017-01-01
The lack of sufficient ground truth data has always constrained supervised learning, thereby hindering the generation of up-to-date satellite-derived thematic maps. This is all the more true for those applications requiring frequent updates over large areas such as cropland mapping. Therefore, we present a method enabling the automated production of spatially consistent cropland maps at the national scale, based on spectral-temporal features and outdated land cover information. Following an unsupervised approach, this method extracts reliable calibration pixels based on their labels in the outdated map and their spectral signatures. To ensure spatial consistency and coherence in the map, we first propose to generate seamless input images by normalizing the time series and deriving spectral-temporal features that target salient cropland characteristics. Second, we reduce the spatial variability of the class signatures by stratifying the country and by classifying each stratum independently. Finally, we remove speckle with a weighted majority filter accounting for per-pixel classification confidence. Capitalizing on a wall-to-wall validation data set, the method was tested in South Africa using a 16-year old land cover map and multi-sensor Landsat time series. The overall accuracy of the resulting cropland map reached 92%. A spatially explicit validation revealed large variations across the country and suggests that intensive grain-growing areas were better characterized than smallholder farming systems. Informative features in the classification process vary from one stratum to another but features targeting the minimum of vegetation as well as short-wave infrared features were consistently important throughout the country. Overall, the approach showed potential for routinely delivering consistent cropland maps over large areas as required for operational crop monitoring.
Bayesian correlated clustering to integrate multiple datasets
Kirk, Paul; Griffin, Jim E.; Savage, Richard S.; Ghahramani, Zoubin; Wild, David L.
2012-01-01
Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods. Availability: A Matlab implementation of MDI is available from http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. Contact: D.L.Wild@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047558
Local Difference Measures between Complex Networks for Dynamical System Model Evaluation
Lange, Stefan; Donges, Jonathan F.; Volkholz, Jan; Kurths, Jürgen
2015-01-01
A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [1] we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, and such based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required. Generalizations to directed as well as edge- and node-weighted graphs are discussed. PMID:25856374
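In the same spirit, a toy version of the pipeline fits in a few lines of Python. The network construction and the per-node difference measure below (a local Hamming distance) are illustrative simplifications, not the paper's three measures, and the synthetic "observed" and "modeled" series are placeholders:

```python
import numpy as np

def rank_correlation_network(data, threshold=0.5):
    """Toy climate-network construction: link two grid points when the
    Spearman (rank) correlation of their anomaly time series exceeds a
    threshold. The paper additionally builds anticorrelation and
    event-synchronization networks; those are omitted here."""
    ranks = np.argsort(np.argsort(data, axis=1), axis=1)
    corr = np.corrcoef(ranks)
    A = (corr > threshold).astype(int)
    np.fill_diagonal(A, 0)
    return A

def local_hamming(A_model, A_obs):
    """A simple per-node difference measure (fraction of mismatched links):
    large values flag locations whose connectivity the model misses."""
    return np.abs(A_model - A_obs).sum(axis=1) / (len(A_obs) - 1)

rng = np.random.default_rng(8)
common = rng.normal(size=400)                 # shared "monsoon" rainfall signal
obs = rng.normal(size=(30, 400))
obs[:15] += 1.5 * common                      # observed coherent region
model = rng.normal(size=(30, 400))
model[:10] += 1.5 * common                    # model makes the region too small

d = local_hamming(rank_correlation_network(model), rank_correlation_network(obs))
print(np.round(d, 2))   # nodes 10-14 stand out: links in obs missing in model
```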
NASA Astrophysics Data System (ADS)
Gichenje, Helene; Godinho, Sergio
2017-04-01
Land degradation is a key global environment and development problem that is recognized as a priority by the international development community. The Sustainable Development Goals (SDGs) were adopted by the global community in 2015, and include a goal related to land degradation and the accompanying target to achieve a land degradation-neutral (LDN) world by 2030. The LDN concept encompasses two joint actions of reducing the rate of degradation and increasing the rate of restoration. Using Kenya as the study area, this study aims to develop and test a spatially explicit methodology for assessing and monitoring the operationalization of a land degradation neutrality scheme at the national level. Time series analysis is applied to Normalized Difference Vegetation Index (NDVI) satellite data records, based on the hypothesis that the resulting NDVI residual trend would enable successful detection of changes in vegetation photosynthetic capacity and thus serve as a proxy for land degradation and regeneration processes. Two NDVI data sets are used to identify the spatial and temporal distribution of degraded and regenerated areas: the long term coarse resolution (8km, 1982-2015) third generation Global Inventory Modeling and Mapping Studies (GIMMS) NDVI3g data record; and the shorter-term finer resolution (250m, 2001-2015) Moderate Resolution Imaging Spectroradiometer (MODIS) derived NDVI data record. Climate data (rainfall, temperature and soil moisture) are used to separate areas of human-induced vegetation productivity decline from those driven by climate dynamics. Further, weekly vegetation health (VH) indexes (4km, 1982-2015) developed by the National Oceanic and Atmospheric Administration (NOAA) are assessed as indicators for early detection and monitoring of land degradation by estimating vegetation stress (moisture, thermal and combined conditions).
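A common way to implement the climate-separation step is a residual-trend (RESTREND-style) analysis, sketched below in Python with placeholder numbers standing in for GIMMS/MODIS pixel records: NDVI is first regressed on rainfall, and the residuals are then tested for a significant monotonic trend, negative trends suggesting human-induced degradation.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(9)
years = np.arange(1982, 2016)                       # annual record, GIMMS-like
rain = rng.normal(500.0, 80.0, years.size)          # mm/yr, placeholder
ndvi = 0.0004 * rain + 0.25 - 0.002 * (years - 1982)   # human-driven decline
ndvi += rng.normal(0.0, 0.01, years.size)

climate_fit = linregress(rain, ndvi)                # climate-explained part
residuals = ndvi - (climate_fit.intercept + climate_fit.slope * rain)
trend = linregress(years, residuals)                # residual (RESTREND) trend
degrading = (trend.slope < 0) and (trend.pvalue < 0.05)
print(trend.slope, trend.pvalue, degrading)
```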
Hansen, Matthew C.; Potapov, Peter V.; Löw, Fabian; Newby, Terence; Ferreira, Stefanus; Defourny, Pierre
2017-01-01
The lack of sufficient ground truth data has always constrained supervised learning, thereby hindering the generation of up-to-date satellite-derived thematic maps. This is all the more true for applications requiring frequent updates over large areas, such as cropland mapping. Therefore, we present a method enabling the automated production of spatially consistent cropland maps at the national scale, based on spectral-temporal features and outdated land cover information. Following an unsupervised approach, this method extracts reliable calibration pixels based on their labels in the outdated map and their spectral signatures. To ensure spatial consistency and coherence in the map, we first propose to generate seamless input images by normalizing the time series and deriving spectral-temporal features that target salient cropland characteristics. Second, we reduce the spatial variability of the class signatures by stratifying the country and classifying each stratum independently. Finally, we remove speckle with a weighted majority filter accounting for per-pixel classification confidence. Capitalizing on a wall-to-wall validation data set, the method was tested in South Africa using a 16-year-old land cover map and multi-sensor Landsat time series. The overall accuracy of the resulting cropland map reached 92%. A spatially explicit validation revealed large variations across the country and suggests that intensive grain-growing areas were better characterized than smallholder farming systems. Informative features in the classification process vary from one stratum to another, but features targeting the minimum of vegetation as well as short-wave infrared features were consistently important throughout the country. Overall, the approach showed potential for routinely delivering consistent cropland maps over large areas as required for operational crop monitoring. PMID:28817618
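The speckle-removal step lends itself to a compact sketch: a majority filter in which each neighbor's vote is weighted by its per-pixel classification confidence. This is a generic illustration with hypothetical label and confidence arrays, not the authors' exact implementation.

```python
import numpy as np

def weighted_majority_filter(labels, confidence, radius=1):
    """Relabel each pixel by the confidence-weighted majority of its window."""
    h, w = labels.shape
    n_classes = labels.max() + 1
    out = labels.copy()
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            win_lab = labels[i-radius:i+radius+1, j-radius:j+radius+1].ravel()
            win_conf = confidence[i-radius:i+radius+1, j-radius:j+radius+1].ravel()
            votes = np.bincount(win_lab, weights=win_conf, minlength=n_classes)
            out[i, j] = votes.argmax()
    return out

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, (50, 50))          # hypothetical 0/1 cropland map
confidence = rng.uniform(0.5, 1.0, (50, 50))   # hypothetical per-pixel confidence
smoothed = weighted_majority_filter(labels, confidence)
```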
Recursive-operator method in vibration problems for rod systems
NASA Astrophysics Data System (ADS)
Rozhkova, E. V.
2009-12-01
Using linear differential equations with constant coefficients describing one-dimensional dynamical processes as an example, we show that the solutions of these equations and systems are related to the solutions of the corresponding numerical recursion relations, and one does not have to compute the roots of the corresponding characteristic equations. The arbitrary functions occurring in the general solution of the homogeneous equations are determined by the initial and boundary conditions or are chosen from various classes of analytic functions. The solutions of the inhomogeneous equations are constructed in the form of integro-differential series acting on the right-hand side of the equation, and the coefficients of the series are determined from the same recursion relations. The convergence of formal solutions as series of a more general recursive-operator construction was proved in [1]. In the special case where the solutions of the equation can be represented in separated variables, the power series can be effectively summed, i.e., expressed in terms of elementary functions, and coincide with the known solutions. In this case, to determine the natural vibration frequencies, one obtains algebraic rather than transcendental equations, which permits determining the imaginary and complex roots of these equations exactly without using the graphical method [2, pp. 448-449]. The correctness of the obtained formulas (differentiation formulas, explicit expressions for the series coefficients, etc.) can be verified directly by appropriate substitutions; therefore, we do not prove them here.
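The flavor of the recursion-relation idea can be conveyed with a familiar special case (not the paper's recursive-operator construction itself): for y'' + ω²y = 0, the power-series coefficients obey a two-term recursion, no characteristic roots are needed, and the series sums to the known elementary solution.

```python
import numpy as np

def series_solution(y0, dy0, omega, n_terms=30):
    """Power-series coefficients of y'' + omega^2 y = 0 via the recursion
    a[n+2] = -omega^2 * a[n] / ((n+1)(n+2)); no characteristic roots needed."""
    a = np.zeros(n_terms)
    a[0], a[1] = y0, dy0
    for n in range(n_terms - 2):
        a[n + 2] = -omega**2 * a[n] / ((n + 1) * (n + 2))
    return a

a = series_solution(y0=1.0, dy0=0.0, omega=2.0)
t = 0.7
approx = sum(c * t**k for k, c in enumerate(a))
print(approx, np.cos(2.0 * t))   # the series sums to cos(omega * t)
```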
Duality between Time Series and Networks
Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes
2011-01-01
Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093
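One concrete map with an approximate inverse, in the spirit of the approach described (a simplified sketch, not necessarily the authors' exact construction): bin the series into quantiles, let the quantiles be nodes with transition-frequency edge weights, and invert by a random walk on the resulting Markov chain.

```python
import numpy as np

rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=2000))          # example time series

# Forward map: quantile bins -> nodes, observed transitions -> weighted edges.
n_nodes = 8
edges = np.quantile(series, np.linspace(0, 1, n_nodes + 1))
nodes = np.clip(np.searchsorted(edges, series, side="right") - 1, 0, n_nodes - 1)
W = np.zeros((n_nodes, n_nodes))
for a, b in zip(nodes[:-1], nodes[1:]):
    W[a, b] += 1
P = W / W.sum(axis=1, keepdims=True)               # row-stochastic transitions

# Approximate inverse: a random walk on the network reproduces a series with
# statistics close to the original (e.g., a similar autocorrelation).
state, walk = nodes[0], []
centers = 0.5 * (edges[:-1] + edges[1:])           # represent a node by its bin center
for _ in range(2000):
    walk.append(centers[state])
    state = rng.choice(n_nodes, p=P[state])
```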
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is commonly combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models, a consequence of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
NASA Technical Reports Server (NTRS)
Gilbertsen, Noreen D.; Belytschko, Ted
1990-01-01
The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.
Characterization of echoes: A Dyson-series representation of individual pulses
NASA Astrophysics Data System (ADS)
Correia, Miguel R.; Cardoso, Vitor
2018-04-01
The ability to detect and scrutinize gravitational waves from the merger and coalescence of compact binaries opens up the possibility to perform tests of fundamental physics. One such test concerns the dark nature of compact objects: are they really black holes? It was recently pointed out that the absence of horizons—while keeping the external geometry very close to that of General Relativity—would manifest itself in a series of echoes in gravitational wave signals. The observation of echoes by LIGO/Virgo or upcoming facilities would likely inform us on quantum gravity effects or unseen types of matter. Detection of such signals is in principle feasible with relatively simple tools but would benefit enormously from accurate templates. Here we analytically individualize each echo waveform and show that it can be written as a Dyson series, for arbitrary effective potential and boundary conditions. We further apply the formalism to explicitly determine the echoes of a simple toy model: the Dirac delta potential. Our results allow one to read off a few known features of echoes and may find application in modeling for data analysis.
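Schematically, the Dyson-series structure invoked above is the operator expansion below, assuming the deviation from the black-hole background is encoded in an effective potential V acting on top of a background with Green's operator G_0; this is a generic illustration, not the paper's specific echo transfer functions.

```latex
% Generic Dyson expansion: the full response as iterated scatterings off V.
\[
  G \;=\; G_0 + G_0 V G_0 + G_0 V G_0 V G_0 + \cdots
    \;=\; \sum_{n=0}^{\infty} G_0\,(V G_0)^{n} .
\]
```

In echo problems, successive reflections off the near-horizon structure build up the echo train, and, per the abstract, each individual echo waveform can be organized as such an expansion.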
Determination of Orbital Parameters for Visual Binary Stars Using a Fourier-Series Approach
NASA Astrophysics Data System (ADS)
Brown, D. E.; Prager, J. R.; DeLeo, G. G.; McCluskey, G. E., Jr.
2001-12-01
We expand on the Fourier transform method of Monet (ApJ 234, 275, 1979) to infer the orbital parameters of visual binary stars, and we present results for several systems, both simulated and real. Although originally developed to address binary systems observed through at least one complete period, we have extended the method to deal explicitly with cases where the orbital data are less complete. This is especially useful in cases where the period is so long that only a fragment of the orbit has been recorded. We utilize Fourier-series fitting methods appropriate to data sets covering less than one period and containing random measurement errors. In so doing, we address issues of over-determination in fitting the data and the reduction of other deleterious Fourier-series artifacts. We developed our algorithm using the MAPLE mathematical software package and tested it on numerous synthetic systems as well as several real binaries, including Xi Boo, 24 Aqr, and Bu 738. This work was supported at Lehigh University by the Delaware Valley Space Grant Consortium and by NSF-REU grant PHY-9820301.
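The core fitting step, a truncated Fourier series fitted to positional data covering less than one period, is a linear least-squares problem; the sketch below uses synthetic data, a hypothetical assumed period P, and ridge regularization as the simplest guard against the over-determination issue mentioned.

```python
import numpy as np

rng = np.random.default_rng(4)
P = 200.0                                  # assumed orbital period (hypothetical)
t = np.linspace(0.0, 60.0, 40)             # observations cover < 1/3 of a period
x = 3.0 * np.cos(2*np.pi*t/P) + 1.2 * np.sin(2*np.pi*t/P) \
    + rng.normal(0.0, 0.05, t.size)        # synthetic separation coordinate

def fourier_design(t, period, n_harmonics):
    """Design matrix of a truncated Fourier series: 1, cos(k..), sin(k..)."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2*np.pi*k*t/period), np.sin(2*np.pi*k*t/period)]
    return np.column_stack(cols)

A = fourier_design(t, P, n_harmonics=3)
lam = 1e-2                                 # ridge term tames over-determination
coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ x)
print(coef[:3])                            # mean term and first-harmonic pair
```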
Explicit and implicit assessment of gender roles.
Fernández, Juan; Quiroga, M Ángeles; Escorial, Sergio; Privado, Jesús
2014-05-01
Gender roles have been assessed by explicit measures and, recently, by implicit measures. In the former case, the theoretical assumptions have been questioned by empirical results. To solve this contradiction, we carried out two concatenated studies based on a relatively well-founded theoretical and empirical approach. The first study was designed to obtain a sample of genderized activities of the domestic sphere by means of an explicit assessment. Forty-two adults (22 women and 20 men, balanced on age, sex, and level of education) took part as raters. In the second study, an implicit assessment of gender roles was carried out, focusing on the response time given to the sample activities obtained from the first study. A total of 164 adults (90 women and 74 men, mean age = 43), with experience in living with a partner and balanced on age, sex, and level of education, participated. Taken together, results show that explicit and implicit assessment converge. The current social reality shows that there is still no equity in some gender roles in the domestic sphere. These consistent results show considerable theoretical and empirical robustness, due to the double implicit and explicit assessment.
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields and motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (open source AQWA/WAMIT/NEMOH coefficient parser).
Consider the source: persuasion of implicit evaluations is moderated by source credibility.
Smith, Colin Tucker; De Houwer, Jan; Nosek, Brian A
2013-02-01
The long history of persuasion research shows how to change explicit, self-reported evaluations through direct appeals. At the same time, research on how to change implicit evaluations has focused almost entirely on techniques of retraining existing evaluations or manipulating contexts. In five studies, we examined whether direct appeals can change implicit evaluations in the same way as they do explicit evaluations. Across these studies, both explicit and implicit evaluations showed greater evidence of persuasion following information presented by a highly credible source than by a source low in credibility. Whereas cognitive load did not alter the effect of source credibility on explicit evaluations, source credibility had an effect on the persuasion of implicit evaluations only when participants were encouraged and able to consider information about the source. Our findings reveal the relevance of persuasion research for changing implicit evaluations and provide new ideas about the processes underlying both types of evaluation.
O'Boyle, D J; Moore, C E; Poliakoff, E; Butterworth, R; Sutton, A; Cody, F W
2001-06-01
In Experiment 1, normal subjects' ability to localize tactile stimuli (locognosia) delivered to the upper arm was significantly higher when they were instructed explicitly to direct their attention selectively to that segment than when they were instructed explicitly to distribute their attention across the whole arm. This elevation of acuity was eliminated when subjects' attentional resources were divided by superimposition of an effortful, secondary task during stimulation. In Experiment 2, in the absence of explicit attentional instruction, subjects' locognosic acuity on one of three arm segments was significantly higher when stimulation of that segment was 2.5 times more probable than stimulation of either of the other two segments. We surmise that the attentional mechanisms responsible for such modulations of locognosic acuity in normal subjects may contribute to the elevated sensory acuity observed on the stumps of amputees.
Single molecule force spectroscopy at high data acquisition: A Bayesian nonparametric analysis
NASA Astrophysics Data System (ADS)
Sgouralis, Ioannis; Whitmore, Miles; Lapidus, Lisa; Comstock, Matthew J.; Pressé, Steve
2018-03-01
Bayesian nonparametrics (BNPs) are poised to have a deep impact in the analysis of single molecule data as they provide posterior probabilities over entire models consistent with the supplied data, not just model parameters of one preferred model. Thus they provide an elegant and rigorous solution to the difficult problem encountered when selecting an appropriate candidate model. Nevertheless, BNPs' flexibility to learn models and their associated parameters from experimental data is a double-edged sword. Most importantly, BNPs are prone to increasing the complexity of the estimated models due to artifactual features present in time traces. Thus, because of experimental challenges unique to single molecule methods, naive application of available BNP tools is not possible. Here we consider traces with time correlations and, as a specific example, we deal with force spectroscopy traces collected at high acquisition rates. While high acquisition rates are required in order to capture dwells in short-lived molecular states, in this setup, a slow response of the optical trap instrumentation (i.e., trapped beads, ambient fluid, and tethering handles) distorts the molecular signals introducing time correlations into the data that may be misinterpreted as true states by naive BNPs. Our adaptation of BNP tools explicitly takes into consideration these response dynamics, in addition to drift and noise, and makes unsupervised time series analysis of correlated single molecule force spectroscopy measurements possible, even at acquisition rates similar to or below the trap's response times.
NASA Technical Reports Server (NTRS)
Yefet, Amir; Petropoulos, Peter G.
1999-01-01
We consider a divergence-free non-dissipative fourth-order explicit staggered finite difference scheme for the hyperbolic Maxwell's equations. Special one-sided difference operators are derived in order to implement the scheme near metal boundaries and dielectric interfaces. Numerical results show the scheme is long-time stable, and is fourth-order convergent over complex domains that include dielectric interfaces and perfectly conducting surfaces. We also examine the scheme's behavior near metal surfaces that are not aligned with the grid axes, and compare its accuracy to that obtained by the Yee scheme.
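The interior stencil of such a scheme is easy to state: the fourth-order staggered first derivative weights the nearest and next-nearest half-grid neighbors by 9/8 and -1/24. The 1D sketch below evolves a wave system standing in for the normalized Maxwell curl equations, using periodic wrapping via np.roll, which deliberately sidesteps the one-sided boundary closures that are the paper's actual contribution.

```python
import numpy as np

n = 200
dx = 1.0 / n
dt = 0.5 * dx                     # comfortably inside the stability limit
c1, c2 = 9.0 / 8.0, -1.0 / 24.0   # fourth-order staggered-difference weights

def d_staggered(f, sign):
    """Fourth-order staggered derivative; sign selects the half-cell offset."""
    if sign > 0:   # derivative located half a cell to the right
        return (c1 * (np.roll(f, -1) - f) + c2 * (np.roll(f, -2) - np.roll(f, 1))) / dx
    else:          # derivative located half a cell to the left
        return (c1 * (f - np.roll(f, 1)) + c2 * (np.roll(f, -1) - np.roll(f, 2))) / dx

x = np.arange(n) * dx
ez = np.exp(-200.0 * (x - 0.5) ** 2)   # initial pulse (normalized units)
hy = np.zeros(n)

for _ in range(400):                   # leapfrog in time, staggered in space
    hy += dt * d_staggered(ez, +1)
    ez += dt * d_staggered(hy, -1)
```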
Ballesteros, Soledad; Reales, José M; García, Eulalio; Carrasco, Marisa
2006-02-01
Three experiments investigated the effects of two variables, selective attention during encoding and delay between study and test, on implicit (picture fragment completion and object naming) and explicit (free recall and recognition) memory tests. Experiments 1 and 2 consistently indicated that (a) at all delays (immediate to 1 month), picture-fragment identification threshold was lower for the attended than the unattended pictures; (b) the attended pictures were recalled and recognized better than the unattended; and (c) attention and delay interacted in both memory tests. For implicit memory, performance decreased as delay increased for both attended and unattended pictures, but priming was more pronounced and lasted longer for the attended pictures; it was still present after a 1-month delay. For explicit memory, performance decreased as delay increased for attended pictures, but for unattended pictures performance remained stable across delays. By using a perceptual object naming task, Experiment 3 showed reliable implicit and explicit memory for attended but not for unattended pictures. This study indicates that picture repetition priming requires attention at the time of study and that neither delay nor attention dissociate performance in explicit and implicit memory tests; both types of memory require attention, but explicit memory does so to a larger degree.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkel, M. van
In this paper, a number of new explicit approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on the heat equation in cylindrical geometry using the symmetry (Neumann) boundary condition at the plasma center. This means that the approximations derived here should be used only to estimate transport coefficients between the plasma center and the off-axis perturbative source. If the effect of cylindrical geometry is small, it is also possible to use the semi-infinite domain approximations presented in Part I and Part II of this series. A number of new approximations are derived in this part, Part III, based upon continued fractions of the modified Bessel function of the first kind and the confluent hypergeometric function of the first kind. These approximations, together with the approximations based on semi-infinite domains, are compared for heat waves traveling towards the center. The relative error for the different derived approximations is presented for different values of the frequency, transport coefficients, and dimensionless radius. Moreover, it is shown how combinations of different explicit formulas can be used to estimate the transport coefficients over a large parameter range for cases without convection and damping, cases with damping only, and cases with convection and damping. The relative error between the approximation and its underlying model is below 2% for the case where only diffusivity and damping are considered. If convectivity is also considered, the diffusivity can be estimated well in a large region, but there is also a large region in which no suitable approximation is found. This paper is the third part (Part III) of a series of three papers. In Part I, the semi-infinite slab approximations have been treated. In Part II, cylindrical approximations are treated for heat waves traveling towards the plasma edge assuming a semi-infinite domain.
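A minimal sketch of the measurement idea, in the textbook semi-infinite-slab limit without convection or damping (the Part I regime mentioned above, not this part's cylindrical approximations): extract the phase of the harmonic at the modulation frequency on each radial channel and estimate χ from the phase slope via χ ≈ ω / (2 (dφ/dr)²). All signals here are synthetic.

```python
import numpy as np

# Synthetic heat-wave data: T(r, t) = A(r) cos(w t - phi(r)) + noise, with
# slab-limit amplitude decay and phase delay for a chosen diffusivity chi.
chi_true, w = 2.0, 2 * np.pi * 25.0          # m^2/s, rad/s (hypothetical)
r = np.linspace(0.05, 0.25, 8)               # channel radii outside the source (m)
t = np.arange(0, 1.0, 1e-3)
k = np.sqrt(w / (2 * chi_true))              # slab result: phi = k r, ln A = -k r
rng = np.random.default_rng(5)
T = np.exp(-k * r[:, None]) * np.cos(w * t[None, :] - k * r[:, None]) \
    + rng.normal(0, 0.01, (r.size, t.size))

# FFT each channel and read the complex amplitude at the modulation frequency.
freqs = np.fft.rfftfreq(t.size, d=1e-3)
idx = np.argmin(np.abs(freqs - w / (2 * np.pi)))
harm = np.fft.rfft(T, axis=1)[:, idx]
phase = np.unwrap(-np.angle(harm))           # phase delay grows with radius

# Slab estimate: chi = w / (2 (dphi/dr)^2), from a linear fit of phase vs radius.
dphi_dr = np.polyfit(r, phase, 1)[0]
print("chi estimate:", w / (2 * dphi_dr**2), "true:", chi_true)
```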
Detection of a sudden change of the field time series based on the Lorenz system
Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan
2017-01-01
We conducted an exploratory study of the detection of a sudden change of the field time series based on the numerical solution of the Lorenz system. First, the times when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system were quantitatively marked, giving the sudden change times of the Lorenz system. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered a vector time series. We transformed the vector time series into a scalar time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, sudden changes in the resulting time series were detected using the sliding t-test method. Comparing the test results with the quantitatively marked times showed that the method could detect every sudden change of the Lorenz path; the method is therefore effective. Finally, we used the method to detect sudden changes in the pressure field time series and temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between field time series and vector time series; thus, we provide a new method for the detection of sudden changes in field time series. PMID:28141832
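A compact version of the pipeline, as a simplified sketch (the inner-product reduction here uses consecutive states, one plausible choice; the window size and threshold are illustrative): integrate the Lorenz system, reduce the vector series to a scalar series, and scan it with a sliding t-test.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import ttest_ind

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.arange(0.0, 50.0, 0.01)
sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-8)
traj = sol.y.T                                   # vector time series, shape (N, 3)

# "True" sudden changes: the path jumping between the two wings,
# marked here by sign changes of the x coordinate.
true_jumps = np.flatnonzero(np.diff(np.sign(traj[:, 0])) != 0)

# Reduce the vector series to a scalar series via inner products of
# consecutive states (one simple choice of inner-product reduction).
scalar = np.einsum("ij,ij->i", traj[:-1], traj[1:])

# Sliding t-test: flag points where adjacent windows differ significantly.
win = 50
stats = np.array([ttest_ind(scalar[i - win:i], scalar[i:i + win]).statistic
                  for i in range(win, scalar.size - win)])
detected = np.flatnonzero(np.abs(stats) > 4.0) + win
print(len(true_jumps), "jumps;", len(detected), "flagged points")
```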
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach, based on the classical conjugate gradient method and known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arise in unsteady Navier-Stokes solvers at each time step.
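The role GMRES plays in such solvers can be shown on a linearized implicit step: advancing u by solving (I - Δt·J)Δu = Δt·R(u) with a Krylov method, which needs only matrix-vector products, never a factorization. The sketch below uses a generic 1D diffusion stencil as a stand-in Jacobian, not a Navier-Stokes operator.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres

n, dt = 200, 1e-4
# Stand-in Jacobian: a 1D diffusion stencil (placeholder for a flow Jacobian).
J = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2
A = identity(n) - dt * J                      # implicit-step system matrix
u = np.sin(np.pi * np.linspace(0.0, 1.0, n))  # current state
rhs = dt * (J @ u)                            # delta-form right-hand side

du, info = gmres(A, rhs)                      # Krylov solve; matrix-free capable
assert info == 0
u_next = u + du                               # state after one implicit time step
```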
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the κ-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C⁰ continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of performance, as well as demonstration of flexibility.
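The multistage time stepping at the heart of the algorithm has a simple generic form (a Jameson-style m-stage scheme; the coefficients below are the classical illustrative ones, not the optimized upwind coefficients the authors derived): each stage restarts from the beginning-of-step state with a scaled residual.

```python
import numpy as np

def multistage_step(u, residual, dt, alphas=(0.25, 1.0/3.0, 0.5, 1.0)):
    """One explicit multistage step: u_k = u0 - alpha_k * dt * R(u_{k-1})."""
    u0, uk = u.copy(), u
    for a in alphas:
        uk = u0 - a * dt * residual(uk)
    return uk

# Example: linear advection residual R(u) = c du/dx via first-order upwinding.
n, dx, c = 100, 1.0 / 100, 1.0
def residual(u):
    return c * (u - np.roll(u, 1)) / dx     # periodic upwind difference

u = np.exp(-200 * (np.linspace(0, 1, n) - 0.3) ** 2)
for _ in range(50):
    u = multistage_step(u, residual, dt=0.5 * dx)
```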
Hard Real-Time: C++ Versus RTSJ
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Reinholtz, William K.
2004-01-01
In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that is not a deterrent for veteran real-time programmers, for whom explicit memory management is common, the lack of certain language features in RTSJ (and Java) makes that manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggests that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.
Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
2016-06-23
This paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
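The partitioning idea is visible on a linear model problem u' = λ_f·u + λ_s·u, with λ_f in the acoustic (stiff) role and λ_s in the advective role: integrate λ_f implicitly, λ_s explicitly, and let the advective scale set Δt. A first-order IMEX Euler sketch follows; the paper itself uses high-order additive Runge-Kutta methods, of which this is only the simplest relative.

```python
lam_fast, lam_slow = -1000.0, -1.0     # acoustic-like vs advective-like scales
dt, n_steps = 0.05, 200                # dt set by the slow (advective) scale
u = 1.0

for _ in range(n_steps):
    # IMEX Euler: explicit on the slow flux, implicit on the fast flux:
    #   u_next = u + dt*(lam_slow*u) + dt*(lam_fast*u_next)
    u = (u + dt * lam_slow * u) / (1.0 - dt * lam_fast)

print(u)   # stable despite dt*|lam_fast| >> 1; fully explicit Euler would blow up
```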
Multiple Indicator Stationary Time Series Models.
ERIC Educational Resources Information Center
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
Becoming a vampire without being bitten: the narrative collective-assimilation hypothesis.
Gabriel, Shira; Young, Ariana F
2011-08-01
We propose the narrative collective-assimilation hypothesis--that experiencing a narrative leads one to psychologically become a part of the collective described within the narrative. In a test of this hypothesis, participants read passages from either a book about wizards (from the Harry Potter series) or a book about vampires (from the Twilight series). Both implicit and explicit measures revealed that participants who read about wizards psychologically became wizards, whereas those who read about vampires psychologically became vampires. The results also suggested that narrative collective assimilation is psychologically meaningful and relates to the basic human need for connection. Specifically, the tendency to fulfill belongingness needs through group affiliation moderated the extent to which narrative collective assimilation occurred, and narrative collective assimilation led to increases in life satisfaction and positive mood, two primary outcomes of belonging. The implications for the importance of narratives, the need to belong to groups, and social surrogacy are discussed.
Löw, Fabian; Waldner, François; Latchininsky, Alexandre; Biradar, Chandrashekhar; Bolkart, Maximilian; Colditz, René R
2016-12-01
The Asian migratory locust (Locusta migratoria migratoria L.) is a pest that continuously threatens crops in the Amudarya River delta near the Aral Sea in Uzbekistan, Central Asia. Its development coincides with the growing period of its main food plant, a tall reed grass (Phragmites australis), which represents the predominant vegetation in the delta and covers vast areas of the former Aral Sea, which has been desiccating since the 1960s. Current locust survey methods and control practices would benefit tremendously from accurate and timely spatially explicit information on the potential locust habitat distribution. To that aim, satellite observations from the MODIS Terra/Aqua satellites and in-situ observations were combined to monitor potential locust habitats according to their corresponding risk of infestation along the growing season. A Random Forest (RF) algorithm was applied to classify time series of the MODIS enhanced vegetation index (EVI) from 2003 to 2014 at an 8-day interval. Based on an independent ground truth data set, classification accuracies for reeds posing a medium or high risk of locust infestation exceeded 89% on average. For the 12-year period covered in this study, an average of 7504 km² (28% of the observed area) was flagged as potential locust habitat, and 5% represented a permanent high risk of locust infestation. The results are instrumental for predicting potential locust outbreaks and developing well-targeted management plans. The method offers positive perspectives for locust management and treatment of infested sites because it can deliver risk maps in near real time, with an accuracy of 80% in April-May, which coincides with both locust hatching and the first control surveys. Such maps could support rapid decision-making regarding control interventions against the initial locust congregations; the efficiency of survey teams and chemical treatments could thus be increased, potentially reducing environmental pollution while avoiding areas where treatments are most likely to cause environmental degradation. Copyright © 2016 Elsevier Ltd. All rights reserved.
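The classification core reduces to a few lines with scikit-learn, with the 8-day EVI values of a season serving directly as features; this is a generic sketch on synthetic data, standing in for the stratified, ground-truthed workflow described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_pixels, n_dates = 3000, 46                    # one year of 8-day EVI composites
evi = rng.uniform(0.05, 0.35, (n_pixels, n_dates))
risk = (evi[:, 10:25].mean(axis=1) > 0.22).astype(int)  # synthetic risk label

X_train, X_test, y_train, y_test = train_test_split(
    evi, risk, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Per-date importances indicate which part of the season drives the mapping,
# mirroring the early-season (April-May) skill reported in the study.
print("most informative composite:", clf.feature_importances_.argmax())
```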
Adarkwah, Charles Christian; Sadoghi, Amirhossein; Gandjour, Afschin
2016-02-01
There has been a debate on whether cost-effectiveness analysis should consider the cost of consumption and leisure time activities when using the quality-adjusted life year as a measure of health outcome under a societal perspective. The purpose of this study was to investigate whether the effects of ill health on consumptive activities are spontaneously considered in a health state valuation exercise and how much this matters. The survey enrolled patients with inflammatory bowel disease in Germany (n = 104). Patients were randomized to explicit and no explicit instruction for the consideration of consumption and leisure effects in a time trade-off (TTO) exercise. Explicit instruction to consider non-health-related utility in TTO exercises did not influence TTO scores. However, spontaneous consideration of non-health-related utility in patients without explicit instruction (60% of respondents) led to significantly lower TTO scores. Results suggest an inclusion of consumption costs in the numerator of the cost-effectiveness ratio, at least for those respondents who spontaneously consider non-health-related utility from treatment. Results also suggest that exercises eliciting health valuations from the general public may include a description of the impact of disease on consumptive activities. Copyright © 2015 John Wiley & Sons, Ltd.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
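The scheme outlined above fits a short sketch: pick a measure function (here, items present in at least half of a segment's time points, one plausible choice rather than the paper's full set of measure functions), define the segment difference as the summed symmetric-difference size, and minimize the total difference over all k-segmentations by dynamic programming.

```python
def segment_itemset(points):
    """Measure function: items present in at least half of the time points."""
    counts = {}
    for s in points:
        for item in s:
            counts[item] = counts.get(item, 0) + 1
    return {i for i, c in counts.items() if 2 * c >= len(points)}

def segment_difference(points):
    rep = segment_itemset(points)
    return sum(len(rep ^ s) for s in points)   # summed symmetric differences

def optimal_segmentation(series, k):
    """DP over prefixes: best[j][m] = min cost of the first j points in m segments."""
    n = len(series)
    cost = [[segment_difference(series[i:j]) for j in range(n + 1)] for i in range(n)]
    best = [[float("inf")] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for m in range(1, k + 1):
            for i in range(m - 1, j):
                c = best[i][m - 1] + cost[i][j]
                if c < best[j][m]:
                    best[j][m], cut[j][m] = c, i
    # Recover segment boundaries by walking the cut table backwards.
    bounds, j = [], n
    for m in range(k, 0, -1):
        bounds.append((cut[j][m], j))
        j = cut[j][m]
    return best[n][k], bounds[::-1]

series = [{"a"}, {"a", "b"}, {"a"}, {"c"}, {"c", "d"}, {"d"}]
print(optimal_segmentation(series, k=2))
```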
Ricca, Mark A.; Van Vuren, Dirk H.; Weckerly, Floyd W.; Williams, Jeffrey C.; Miles, A. Keith
2014-01-01
Large mammalian herbivores introduced to islands without predators are predicted to undergo irruptive population and spatial dynamics, but only a few well-documented case studies support this paradigm. We used the Riney-Caughley model as a framework to test predictions of irruptive population growth and spatial expansion of caribou (Rangifer tarandus granti) introduced to Adak Island in the Aleutian archipelago of Alaska in 1958 and 1959. We utilized a time series of spatially explicit counts conducted on this population intermittently over a 54-year period. Population size increased from 23 released animals to approximately 2900 animals in 2012. Population dynamics were characterized by two distinct periods of irruptive growth separated by a long time period of relative stability, and the catalyst for the initial irruption was more likely related to annual variation in hunting pressure than weather conditions. An unexpected pattern resembling logistic population growth occurred between the peak of the second irruption in 2005 and the next survey conducted seven years later in 2012. Model simulations indicated that an increase in reported harvest alone could not explain the deceleration in population growth, yet high levels of unreported harvest combined with increasing density-dependent feedbacks on fecundity and survival were the most plausible explanation for the observed population trend. No studies of introduced island Rangifer have measured a time series of spatial use to the extent described in this study. Spatial use patterns during the post-calving season strongly supported Riney-Caughley model predictions, whereby high-density core areas expanded outwardly as population size increased. During the calving season, caribou displayed marked site fidelity across the full range of population densities despite availability of other suitable habitats for calving. Finally, dispersal and reproduction on neighboring Kagalaska Island represented a new dispersal front for irruptive dynamics and a new challenge for resource managers. The future demography of caribou on both islands is far from certain, yet sustained and significant hunting pressure should be a vital management tool.
Katriel, G.; Yaari, R.; Huppert, A.; Roll, U.; Stone, L.
2011-01-01
This paper presents new computational and modelling tools for studying the dynamics of an epidemic in its initial stages that use both available incidence time series and data describing the population's infection network structure. The work is motivated by data collected at the beginning of the H1N1 pandemic outbreak in Israel in the summer of 2009. We formulated a new discrete-time stochastic epidemic SIR (susceptible-infected-recovered) model that explicitly takes into account the disease's specific generation-time distribution and the intrinsic demographic stochasticity inherent to the infection process. Moreover, in contrast with many other modelling approaches, the model allows direct analytical derivation of estimates for the effective reproductive number (Re) and of their credible intervals, by maximum likelihood and Bayesian methods. The basic model can be extended to include age–class structure, and a maximum likelihood methodology allows us to estimate the model's next-generation matrix by combining two types of data: (i) the incidence series of each age group, and (ii) infection network data that provide partial information of ‘who-infected-who’. Unlike other approaches for estimating the next-generation matrix, the method developed here does not require making a priori assumptions about the structure of the next-generation matrix. We show, using a simulation study, that even a relatively small amount of information about the infection network greatly improves the accuracy of estimation of the next-generation matrix. The method is applied in practice to estimate the next-generation matrix from the Israeli H1N1 pandemic data. The tools developed here should be of practical importance for future investigations of epidemics during their initial stages. However, they require the availability of data which represent a random sample of the real epidemic process. We discuss the conditions under which reporting rates may or may not influence our estimated quantities and the effects of bias. PMID:21247949
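Under a Poisson renewal approximation, the effective reproductive number has a closed-form ML estimate (a simplified, non-age-structured sketch of the general idea, not the paper's full likelihood): with generation-time weights g_s, expected incidence is Re·Λ_t where Λ_t = Σ_s g_s·I_{t-s}, giving the estimator Σ I_t / Σ Λ_t.

```python
import numpy as np

incidence = np.array([2, 3, 5, 8, 11, 18, 26, 40, 55, 80])  # hypothetical daily cases
g = np.array([0.1, 0.4, 0.3, 0.2])        # generation-time distribution (sums to 1)

# Lambda_t = sum_s g_s * I_{t-s}; valid once a full generation interval is observed.
lam = np.array([np.dot(g, incidence[t - len(g):t][::-1])
                for t in range(len(g), len(incidence))])
obs = incidence[len(g):]

# Poisson ML estimate: I_t ~ Poisson(Re * Lambda_t)  =>  Re_hat = sum I / sum Lambda.
re_hat = obs.sum() / lam.sum()
# A simple Wald interval from the Poisson information: se = Re / sqrt(sum I).
se = re_hat / np.sqrt(obs.sum())
print(f"Re = {re_hat:.2f} +/- {1.96 * se:.2f}")
```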
A Spatially Distinct History of the Development of California Groundfish Fisheries
Miller, Rebecca R.; Field, John C.; Santora, Jarrod A.; Schroeder, Isaac D.; Huff, David D.; Key, Meisha; Pearson, Don E.; MacCall, Alec D.
2014-01-01
During the past century, commercial fisheries have expanded from small vessels fishing in shallow, coastal habitats to a broad suite of vessels and gears that fish virtually every marine habitat on the globe. Understanding how fisheries have developed in space and time is critical for interpreting and managing the response of ecosystems to the effects of fishing; however, time series of spatially explicit data are typically rare. Recently, the 1933–1968 portion of the commercial catch dataset from the California Department of Fish and Wildlife was recovered and digitized, completing the full historical series for both commercial and recreational datasets from 1933–2010. These unique datasets include landing estimates at a coarse 10 by 10 minute "grid-block" spatial resolution and extend the entire length of coastal California up to 180 kilometers from shore. In this study, we focus on the catch history of groundfish, which were mapped for each grid-block using the year at 50% cumulative catch and total historical catch per habitat area. We then constructed generalized linear models to quantify the relationship between spatiotemporal trends in groundfish catches, distance from ports, depth, percentage of days with wind speed over 15 knots, SST and ocean productivity. Our results indicate that over the history of these fisheries, catches have taken place in progressively deeper habitat, at a greater distance from ports, and in increasingly inclement weather conditions. Understanding the spatial development of groundfish fisheries and catches in California is critical for improving population models and for evaluating whether implicit stock assessment model assumptions of relative homogeneity of fisheries removals over time and space are reasonable. This newly reconstructed catch dataset and analysis provides a comprehensive appreciation for the development of groundfish fisheries with respect to commonly assumed trends of global fisheries patterns that are typically constrained by a lack of long-term spatial datasets. PMID:24967973
NASA Astrophysics Data System (ADS)
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the approach. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method for solving nonlinear ordinary differential equations. MCPI uses Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are as follows: 1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; 2) it can readily handle general gravity perturbations as well as non-conservative forces; 3) parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
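The Picard-Chebyshev iteration itself fits in a short sketch (a bare-bones scalar version on Chebyshev-Gauss-Lobatto nodes, cold-started rather than warm-started, which is precisely what the continuation-series warm start above is meant to improve): each sweep evaluates the force at the nodes, fits a Chebyshev series, integrates it term by term, and updates the path.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev(f, x0, t0, tf, n_nodes=32, n_iter=30):
    """Solve x' = f(t, x) by Picard iteration on Chebyshev-Gauss-Lobatto nodes."""
    tau = np.cos(np.pi * np.arange(n_nodes + 1) / n_nodes)   # nodes on [-1, 1]
    t = 0.5 * (tf - t0) * (tau + 1.0) + t0
    x = np.full(t.size, x0, dtype=float)                     # cold-start guess
    for _ in range(n_iter):
        coeffs = C.chebfit(tau, f(t, x), deg=n_nodes)        # fit force values
        integ = C.chebint(coeffs)                            # antiderivative series
        x = x0 + 0.5 * (tf - t0) * (C.chebval(tau, integ) - C.chebval(-1.0, integ))
    return t, x

# Example: x' = -x, x(0) = 1; converges to exp(-t) on the whole segment at once.
t, x = picard_chebyshev(lambda t, x: -x, 1.0, 0.0, 2.0, n_nodes=24)
print(np.max(np.abs(x - np.exp(-t))))
```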
NASA Astrophysics Data System (ADS)
Clark, E.; Wood, A.; Nijssen, B.; Newman, A. J.; Mendoza, P. A.
2016-12-01
The System for Hydrometeorological Applications, Research and Prediction (SHARP), developed at the National Center for Atmospheric Research (NCAR), University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation, is a fully automated ensemble prediction system for short-term to seasonal applications. It incorporates uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 plausible temperature and precipitation time series through the Sacramento/Snow-17 model. The forcing ensemble explicitly accounts for measurement and interpolation uncertainties in the development of gridded meteorological forcing time series. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. To select the IHCs that are most consistent with the observations, we employ a particle filter (PF) that weights IHC ensemble members based on observations of streamflow and SWE. These particles are then used to initialize model runs forced by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS), generating a streamflow forecast ensemble. We test this method in two basins in the Pacific Northwest that are important for water resources management: 1) the Green River upstream of Howard Hanson Dam, and 2) the South Fork Flathead River upstream of Hungry Horse Dam. The first of these is characterized by mixed snow and rain, while the second is snow-dominated. The PF-based forecasts are compared to 1) forecasts based on a single IHC (corresponding to median streamflow) paired with the full GEFS ensemble, and 2) forecasts based on the full IHC ensemble, without filtering, paired with the full GEFS ensemble. In addition to assessing improvements in the spread of IHCs, we perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecasts at 1- to 7-day lead times.
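The particle-filter step described, weighting IHC ensemble members by their fit to observed streamflow and SWE and then resampling, is compact in code. The sketch assumes independent Gaussian observation errors; sim_flow and sim_swe are hypothetical per-member simulated values at the initialization time.

```python
import numpy as np

rng = np.random.default_rng(7)
n_particles = 100
# Hypothetical per-member simulated states at forecast initialization.
sim_flow = rng.lognormal(mean=3.0, sigma=0.3, size=n_particles)   # m^3/s
sim_swe = rng.normal(loc=250.0, scale=40.0, size=n_particles)     # mm

obs_flow, sigma_flow = 22.0, 3.0     # observed streamflow and its error
obs_swe, sigma_swe = 240.0, 25.0     # observed SWE and its error

# Gaussian likelihood weights combining both observation types.
log_w = (-0.5 * ((sim_flow - obs_flow) / sigma_flow) ** 2
         - 0.5 * ((sim_swe - obs_swe) / sigma_swe) ** 2)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Resample member indices; the selected IHCs initialize the GEFS-forced runs.
idx = rng.choice(n_particles, size=n_particles, replace=True, p=w)
print("effective sample size:", 1.0 / np.sum(w ** 2))
```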