NASA Astrophysics Data System (ADS)
Wohlmuth, Johannes; Andersen, Jørgen Vitting
2006-05-01
We use agent-based models to study the competition among investors who use trading strategies with different amounts of information and with different time scales. We find that mixing agents that trade on the same time scale but with different amounts of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in their decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness of the time scales used in decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information, the more the traders in a mixed system with different time scales are found to profit from the presence of traders acting on a time scale other than their own.
Process connectivity in a naturally prograding river delta
NASA Astrophysics Data System (ADS)
Sendrowski, Alicia; Passalacqua, Paola
2017-03-01
River deltas are lowland systems that can display high hydrological connectivity. This connectivity can be structural (morphological connections), functional (control of fluxes), or process-based (information flow from system drivers to sinks). In this work, we quantify hydrological process connectivity in Wax Lake Delta, coastal Louisiana, by analyzing couplings among external drivers (discharge, tides, and wind) and water levels recorded at five islands and one channel over summer 2014. We quantify process connections with information theory, a branch of mathematics concerned with the communication of information. We represent process connections as a network; variables serve as network nodes and couplings as network links describing the strength, direction, and time scale of information flow. Comparing process connections at long (105 days) and short (10 days) time scales, we show that tides exhibit daily synchronization with water level, with decreasing strength from downstream to upstream, and that tidal information transfer changes as tides transition from spring to neap. Discharge synchronizes with water level, and the time scale of its information transfer compares well to physical travel times through the system, computed with a hydrodynamic model. Information transfer and physical transport show similar spatial patterns, although information transfer time scales are larger than physical travel times. Wind events associated with water level setup lead to increased process connectivity with highly variable information transfer time scales. We discuss the information theory results in the context of the hydrologic behavior of the delta, the role of vegetation as a connector/disconnector on islands, and the applicability of process networks as tools for interpreting delta modeling results.
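The lagged driver-to-water-level couplings described above are the kind of relationship a transfer entropy analysis captures. As a rough illustration of the idea (not the authors' exact estimator), the sketch below computes a binned transfer entropy from a synthetic "discharge" series to a delayed "water level" series and scans source lags to locate the time scale of maximal information transfer; all series, parameters, and bin counts are illustrative assumptions.

```python
import numpy as np

def transfer_entropy(x, y, lag, bins=8):
    """Binned transfer entropy TE_{X->Y} (bits) with source lag `lag`:
    TE = I(y_{t+1}; x_{t-lag} | y_t), estimated from a 3-D histogram."""
    yf, yp, xp = y[lag + 1:], y[lag:-1], x[:len(y) - lag - 1]
    p, _ = np.histogramdd(np.column_stack([yf, yp, xp]), bins=bins)
    p /= p.sum()
    p_yp = p.sum(axis=(0, 2))        # p(y_t)
    p_yfyp = p.sum(axis=2)           # p(y_{t+1}, y_t)
    p_ypxp = p.sum(axis=0)           # p(y_t, x_{t-lag})
    num = p * p_yp[None, :, None]
    den = p_yfyp[:, :, None] * p_ypxp[None, :, :]
    m = (p > 0) & (den > 0)
    return float(np.sum(p[m] * np.log2(num[m] / den[m])))

# synthetic example: water level responds to discharge ~5 steps later
# (np.roll wraps around at the edges; acceptable for a demo)
rng = np.random.default_rng(0)
n = 5000
discharge = np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same")
level = 0.8 * np.roll(discharge, 5) + 0.2 * rng.normal(size=n)

te = [transfer_entropy(discharge, level, lag) for lag in range(1, 15)]
print("lag of peak information transfer:", 1 + int(np.argmax(te)))
```

The lag at which the transfer entropy peaks plays the role of the "information transfer time scale" compared against physical travel times in the abstract.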
NASA Astrophysics Data System (ADS)
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for linear multi-agent systems with fixed communication topology in the presence of intermittent communication, using time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens on a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. Time-scale theory provides a powerful tool for combining the continuous-time and discrete-time cases and studying the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets, and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between the time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
Network features of sector indexes spillover effects in China: A multi-scale view
NASA Astrophysics Data System (ADS)
Feng, Sida; Huang, Shupei; Qi, Yabin; Liu, Xueyong; Sun, Qingru; Wen, Shaobo
2018-04-01
The spillover effects among sectors are of concern to distinct market participants, who have distinct investment horizons and are concerned with information at different time scales. In order to uncover the hidden spillover information at multiple time scales in the rapidly changing stock market, and thereby offer guidance from a system perspective to investors concerned with distinct time scales, this paper constructs directional spillover-effect networks for the economic sectors at distinct time scales. The results are as follows: (1) The "2-4 days" scale is the most risky scale, and the "8-16 days" scale is the least risky one. (2) The most influential and sensitive sectors are distinct at different time scales. (3) Although two sectors in the same community may not have direct spillover relations, the volatility of one sector can have a relatively strong influence on the other through indirect relations.
Information transfer across the scales of climate data variability
NASA Astrophysics Data System (ADS)
Palus, Milan; Jajcay, Nikola; Hartman, David; Hlinka, Jaroslav
2015-04-01
The multitude of scales characteristic of climate system variability requires innovative approaches to the analysis of instrumental time series. We present a methodology which starts with a wavelet decomposition of a multi-scale signal into quasi-oscillatory modes of limited bandwidth, described using their instantaneous phases and amplitudes. Their statistical associations are then tested in order to search for interactions across time scales. In particular, an information-theoretic formulation of the generalized, nonlinear Granger causality is applied together with surrogate data testing methods [1]. The method [2] uncovers causal influence (in the Granger sense) and information transfer from large-scale modes of climate variability, with characteristic time scales from years to almost a decade, to regional temperature variability on short time scales. In analyses of daily mean surface air temperature from various European locations, an information transfer from larger to smaller scales has been observed as the influence of the phase of slow oscillatory phenomena with periods around 7-8 years on the amplitudes of variability characterized by smaller temporal scales, from a few months to annual and quasi-biennial scales [3]. In sea surface temperature data from the tropical Pacific area, an influence of quasi-oscillatory phenomena with periods around 4-6 years on the variability on and near the annual scale has been observed. This study is supported by the Ministry of Education, Youth and Sports of the Czech Republic within the Program KONTAKT II, Project No. LH14001. [1] M. Palus, M. Vejmelka, Phys. Rev. E 75, 056211 (2007) [2] M. Palus, Entropy 16(10), 5263-5289 (2014) [3] M. Palus, Phys. Rev. Lett. 112, 078702 (2014)
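The core of the analysis is an association test between the phase of a slow mode and the amplitude of a fast mode. The sketch below is a minimal numerical rendition under simplifying assumptions: the modes are extracted with Butterworth band-pass filters and the Hilbert transform rather than a wavelet decomposition, and significance is assessed against circular-shift surrogates with a binned mutual information instead of the full conditional-Granger formulation. All frequencies and parameters are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(sig, lo, hi, fs, order=2):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def mutual_info(a, b, bins=8):
    pab, _, _ = np.histogram2d(a, b, bins=bins)
    pab /= pab.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    m = pab > 0
    return float(np.sum(pab[m] * np.log2(pab[m] / np.outer(pa, pb)[m])))

# toy daily temperature record: a slow ~8-year cycle modulates the
# amplitude of fast (sub-annual) variability
rng = np.random.default_rng(0)
fs = 1.0                                   # one sample per day
t = np.arange(int(60 * 365.25))            # 60-year record
slow = np.sin(2 * np.pi * t / (8 * 365.25))
sig = slow + (1.0 + 0.6 * slow) * rng.normal(size=t.size)

phase = np.angle(hilbert(bandpass(sig, 1 / (10 * 365), 1 / (6 * 365), fs)))
amp = np.abs(hilbert(bandpass(sig, 1 / 180, 1 / 30, fs)))  # 1-6 month band

mi_obs = mutual_info(phase, amp)
# circular shifts destroy cross-scale coupling, keep autocorrelation
mi_surr = [mutual_info(np.roll(phase, s), amp)
           for s in rng.integers(1000, t.size - 1000, 50)]
print(f"observed MI = {mi_obs:.4f}, "
      f"surrogate 95% = {np.quantile(mi_surr, 0.95):.4f}")
```

An observed mutual information above the surrogate quantile is the kind of evidence the abstract describes for phase-to-amplitude information transfer across scales.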
Coevolution of strategy-selection time scale and cooperation in spatial prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Rong, Zhihai; Wu, Zhi-Xi; Chen, Guanrong
2013-06-01
In this paper, we investigate a networked prisoner's dilemma game where individuals' strategy-selection time scale evolves based on their historical learning information. We show that the more times the current strategy of an individual is learnt by his neighbors, the longer he will stick to that successful behavior, by adaptively adjusting the lifetime of the adopted strategy. Through characterizing the extent of success of the individuals with normalized payoffs, we show that properly using the learned information can form a positive feedback mechanism between cooperative behavior and its lifetime, which can boost cooperation on square lattices and scale-free networks.
2017-11-01
magnitude, intensity, and seasonality of climate. For infrastructure projects, relevant design life often exceeds 30 years—a period of time of...uncertainty about future statistical properties of climate at time and spatial scales required for planning and design purposes. Information...about future statistical properties of climate at time and spatial scales required for planning and design, and for assessing future operational
The Role of Fractality in Perceptual Learning: Exploration in Dynamic Touch
ERIC Educational Resources Information Center
Stephen, Damian G.; Arzamarski, Ryan; Michaels, Claire F.
2010-01-01
Perceptual systems must learn to explore and to use the resulting information to hone performance. Optimal performance depends on using information available at many time scales, from the near-instantaneous values of variables underlying perception (i.e., detection), to longer-term information about appropriate scaling (i.e., calibration), to yet…
NASA Astrophysics Data System (ADS)
Velten, Andreas
2017-05-01
Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales when imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We attempt a study of scattering and methods of imaging through scattering across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.
Catchment dynamics and social response during flash floods
NASA Astrophysics Data System (ADS)
Creutin, J. D.; Lutoff, C.; Ruin, I.; Scolobig, A.; Créton-Cazanave, L.
2009-04-01
The objective of this study is to examine how current techniques for flash-flood monitoring and forecasting can meet the requirements of the population at risk to evaluate the severity of a flood and anticipate its danger. To this end, we identify the social response time for different social actions in the course of two well-studied flash flood events which occurred in France and Italy. We introduce a broad characterization of the event management activities into three types according to their main objective (information, organisation and protection). The activities are also classified into three other types according to the scale and nature of the human group involved (individuals, communities and institutions). The conclusions reached relate to i) the characterisation of the social responses according to watershed scale and to the information available, and ii) the appropriateness of the existing surveillance and forecasting tools to support the social responses. Our results suggest that representing the dynamics of the social response with just one number representing the average time for warning a population is an oversimplification. The social response time appears to parallel the hydrological response time, diminishing with decreasing size of the relevant watershed. A second result is that the human groups have different capabilities of anticipation, apparently based on the nature of the information they use. Comparing watershed response times and social response times shows clearly that at scales of less than 100 km2, a number of actions were taken with response times comparable to the catchment response time. The implications for adapting the warning processes to social scales (individual or organisational scales) are considerable. At small scales and for the implied anticipation times, a reliable and high-resolution description of the actual rainfall field becomes the major source of information for decision-making processes such as deciding between evacuation and advising residents to stay home. This points to the need to improve the accuracy and quality control of real-time radar rainfall data, especially for extreme flash-flood-generating storms.
Temporal coding of reward-guided choice in the posterior parietal cortex
Hawellek, David J.; Wong, Yan T.; Pesaran, Bijan
2016-01-01
Making a decision involves computations across distributed cortical and subcortical networks. How such distributed processing is performed remains unclear. We test how the encoding of choice in a key decision-making node, the posterior parietal cortex (PPC), depends on the temporal structure of the surrounding population activity. We recorded spiking and local field potential (LFP) activity in the PPC while two rhesus macaques performed a decision-making task. We quantified the mutual information that neurons carried about an upcoming choice and its dependence on LFP activity. The spiking of PPC neurons was correlated with LFP phases at three distinct time scales in the theta, beta, and gamma frequency bands. Importantly, activity at these time scales encoded upcoming decisions differently. Choice information contained in neural firing varied with the phase of beta and gamma activity. For gamma activity, maximum choice information occurred at the same phase as the maximum spike count. However, for beta activity, choice information and spike count were greatest at different phases. In contrast, theta activity did not modulate the encoding properties of PPC units directly but was correlated with beta and gamma activity through cross-frequency coupling. We propose that the relative timing of local spiking and choice information reveals temporal reference frames for computations in either local or large-scale decision networks. Differences between the timing of task information and activity patterns may be a general signature of distributed processing across large-scale networks. PMID:27821752
Young Children's Memory for the Times of Personal Past Events
Pathman, Thanujeni; Larkina, Marina; Burch, Melissa; Bauer, Patricia J.
2012-01-01
Remembering the temporal information associated with personal past events is critical for autobiographical memory, yet we know relatively little about the development of this capacity. In the present research, we investigated temporal memory for naturally occurring personal events in 4-, 6-, and 8-year-old children. Parents recorded unique events in which their children participated during a 4-month period. At test, children made relative recency judgments and estimated the time of each event using conventional time-scales (time of day, day of week, month of year, and season). Children also were asked to provide justifications for their time-scale judgments. Six- and 8-year-olds, but not 4-year-olds, accurately judged the order of two distinct events. There were age-related improvements in children's estimation of the time of events using conventional time-scales. Older children provided more justifications for their time-scale judgments compared to younger children. Relations between correct responding on the time-scale judgments and provision of meaningful justifications suggest that children may use that information to reconstruct the times associated with past events. The findings can be used to chart a developmental trajectory of performance in temporal memory for personal past events, and have implications for our understanding of autobiographical memory development. PMID:23687467
NASA Astrophysics Data System (ADS)
Palus, Milan; Jajcay, Nikola; Hlinka, Jaroslav; Kravtsov, Sergey; Tsonis, Anastasios
2016-04-01
Complexity of the climate system stems not only from the fact that it is variable over a huge range of spatial and temporal scales, but also from the nonlinear character of the climate system that leads to interactions of dynamics across scales. The dynamical processes on large time scales influence variability on shorter time scales. This nonlinear phenomenon of cross-scale causal interactions can be observed thanks to a recently introduced methodology [1] which starts with a wavelet decomposition of a multi-scale signal into quasi-oscillatory modes of limited bandwidth, described using their instantaneous phases and amplitudes. Their statistical associations are then tested in order to search for interactions across time scales. An information-theoretic formulation of the generalized, nonlinear Granger causality [2] uncovers causal influence and information transfer from large-scale modes of climate variability, with characteristic time scales from years to almost a decade, to regional temperature variability on short time scales. In analyses of air temperature records from various European locations, a quasi-oscillatory phenomenon with a period around 7-8 years has been identified as a factor influencing variability of surface air temperature (SAT) on shorter time scales. Its influence on the amplitude of the SAT annual cycle was estimated in the range 0.7-1.4 °C, and its effect on the overall variability of the SAT anomalies (SATA) leads to changes of 1.5-1.7 °C in the annual SATA means. The strongest effect of the 7-8 year cycle was observed in the winter SATA means, where it reaches 4-5 °C in central European station and reanalysis data [3]. This study is supported by the Ministry of Education, Youth and Sports of the Czech Republic within the Program KONTAKT II, Project No. LH14001. [1] M. Palus, Phys. Rev. Lett. 112, 078702 (2014) [2] M. Palus, M. Vejmelka, Phys. Rev. E 75, 056211 (2007) [3] N. Jajcay, J. Hlinka, S. Kravtsov, A. A. Tsonis, M. Palus, Time-scales of the European surface air temperature variability: The role of the 7-8 year cycle. Geophys. Res. Lett., in press, DOI: 10.1002/2015GL067325
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2011-01-01
Drought is fundamentally the result of an extended period of reduced precipitation lasting anywhere from a few weeks to decades and even longer. As such, addressing drought predictability and prediction in a changing climate requires foremost that we make progress on the ability to predict precipitation anomalies on subseasonal and longer time scales. From the perspective of the users of drought forecasts and information, however, drought is most directly viewed through its impacts (e.g., on soil moisture, streamflow, crop yields). As such, the question of the predictability of drought must extend to those quantities as well. In order to make progress on these issues, the WCRP drought information group (DIG), with the support of WCRP, the Catalan Institute of Climate Sciences, the La Caixa Foundation, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, and the National Science Foundation, has organized a workshop to focus on:
1. User requirements for drought prediction information on sub-seasonal to centennial time scales
2. Current understanding of the mechanisms and predictability of drought on sub-seasonal to centennial time scales
3. Current drought prediction/projection capabilities on sub-seasonal to centennial time scales
4. Advancing regional drought prediction capabilities for variables and scales most relevant to user needs on sub-seasonal to centennial time scales
This introductory talk provides an overview of these goals, and outlines the occurrence and mechanisms of drought worldwide.
Epidemic mitigation via awareness propagation in communication networks: the role of time scales
NASA Astrophysics Data System (ADS)
Wang, Huijuan; Chen, Chuyi; Qu, Bo; Li, Daqing; Havlin, Shlomo
2017-07-01
The participation of individuals in multi-layer networks allows for feedback between network layers, opening new possibilities to mitigate epidemic spreading. For instance, the spread of a biological disease such as Ebola in a physical contact network may trigger the propagation of information related to this disease in a communication network, e.g. an online social network. The information propagated in the communication network may increase the awareness of some individuals, resulting in them avoiding contact with their infected neighbors in the physical contact network, which might protect the population from the infection. In this work, we aim to understand how the time scale γ of the information propagation (the speed at which information is spread and forgotten) in the communication network, relative to that of the epidemic spread (the speed at which an epidemic spreads and is cured) in the physical contact network, influences such mitigation using awareness information. We begin by proposing a model of the interaction between information propagation and epidemic spread, taking into account the relative time scale γ. We analytically derive the average fraction of infected nodes in the meta-stable state for this model (i) by developing an individual-based mean-field approximation (IBMFA) method and (ii) by extending the microscopic Markov chain approach (MMCA). We show that when the time scale γ of the information spread relative to the epidemic spread is large, our IBMFA approximation is better than MMCA near the epidemic threshold, whereas MMCA performs better when the prevalence of the epidemic is high. Furthermore, we find that an optimal mitigation exists that leads to a minimal fraction of infected nodes. The optimal mitigation is achieved at a non-trivial relative time scale γ, which depends on the rate at which an infected individual becomes aware. Contrary to our intuition, information that spreads too fast in the communication network can reduce the mitigation effect. Finally, our finding has been validated on a real-world two-layer network obtained from the location-based social network Brightkite.
Extracting information in spike time patterns with wavelets and information theory.
Lopes-dos-Santos, Vítor; Panzeri, Stefano; Kayser, Christoph; Diamond, Mathew E; Quian Quiroga, Rodrigo
2015-02-01
We present a new method to assess the information carried by temporal patterns in spike trains. The method first performs a wavelet decomposition of the spike trains, then uses Shannon information to select a subset of coefficients carrying information, and finally assesses timing information in terms of decoding performance: the ability to identify the presented stimuli from spike train patterns. We show that the method allows: 1) a robust assessment of the information carried by spike time patterns even when this is distributed across multiple time scales and time points; 2) an effective denoising of the raster plots that improves the estimate of stimulus tuning of spike trains; and 3) an assessment of the information carried by temporally coordinated spikes across neurons. Using simulated data, we demonstrate that the Wavelet-Information (WI) method performs better and is more robust to spike time-jitter, background noise, and sample size than well-established approaches, such as principal component analysis, direct estimates of information from digitized spike trains, or a metric-based method. Furthermore, when applied to real spike trains from monkey auditory cortex and from rat barrel cortex, the WI method allows extracting larger amounts of spike timing information. Importantly, the fact that the WI method incorporates multiple time scales makes it robust to the choice of partly arbitrary parameters such as temporal resolution, response window length, number of response features considered, and the number of available trials. These results highlight the potential of the proposed method for accurate and objective assessments of how spike timing encodes information. Copyright © 2015 the American Physiological Society.
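The pipeline sketched below is a toy rendition of the Wavelet-Information idea under strong simplifications: Haar wavelet coefficients of simulated spike trains are scored by a binned mutual information with the stimulus label, the most informative coefficients are retained, and decoding uses a nearest-centroid classifier. The stimulus model, wavelet choice, and selection rule are illustrative assumptions, not the published method's exact choices; the sketch assumes the PyWavelets package is available.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(0)
n_trials, T = 200, 128

def spike_train(stim):
    """Toy trial: stimulus 0 fires early, stimulus 1 fires late."""
    r = (rng.random(T) < 0.02).astype(float)            # background spikes
    center = 32 if stim == 0 else 96
    idx = np.clip(center + rng.normal(0, 4, 6).astype(int), 0, T - 1)
    r[idx] = 1.0
    return r

labels = rng.integers(0, 2, n_trials)
X = np.array([np.concatenate(pywt.wavedec(spike_train(s), "haar"))
              for s in labels])

def mi_with_label(f, y, bins=4):
    pfy, _, _ = np.histogram2d(f, y, bins=[bins, 2])
    pfy /= pfy.sum()
    pf, py = pfy.sum(axis=1), pfy.sum(axis=0)
    m = pfy > 0
    return np.sum(pfy[m] * np.log2(pfy[m] / np.outer(pf, py)[m]))

train = np.arange(n_trials) % 2 == 0
test = ~train
scores = np.array([mi_with_label(X[train, j], labels[train])
                   for j in range(X.shape[1])])
keep = np.argsort(scores)[-10:]            # top-10 informative coefficients

# nearest-centroid decoding in the selected wavelet subspace
c0 = X[train & (labels == 0)][:, keep].mean(axis=0)
c1 = X[train & (labels == 1)][:, keep].mean(axis=0)
pred = (np.linalg.norm(X[test][:, keep] - c1, axis=1) <
        np.linalg.norm(X[test][:, keep] - c0, axis=1)).astype(int)
print("decoding accuracy:", np.mean(pred == labels[test]))
```

Because the wavelet basis spans multiple time scales, the selected coefficients can pick up timing structure whether it lives in fine or coarse temporal detail, which is the robustness property the abstract emphasizes.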
A wavelet based approach to measure and manage contagion at different time scales
NASA Astrophysics Data System (ADS)
Berger, Theo
2015-10-01
We decompose financial return series of US stocks into different time scales with respect to different market regimes. First, we examine the dependence structure of the decomposed financial return series and analyze the impact of the current financial crisis on contagion and changing interdependencies, as well as upper and lower tail dependence, for different time scales. Second, we demonstrate to what extent the information from different time scales can be used in the context of portfolio management. As a result, minimizing the variance of short-run noise outperforms a portfolio that minimizes the variance of the raw return series.
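A minimal sketch of the portfolio idea, under illustrative assumptions (synthetic correlated returns, a db4 wavelet, and the finest-scale detail standing in for "short-run noise"; this is the general recipe, not the paper's exact procedure). The minimum-variance weights are computed once from the covariance of the raw returns and once from the covariance of the reconstructed short-run component; the sketch assumes PyWavelets.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(0)
n_obs, n_assets = 1024, 5

# correlated synthetic daily returns
C = 0.4 + 0.6 * np.eye(n_assets)
R = rng.multivariate_normal(np.zeros(n_assets), 1e-4 * C, size=n_obs)

def finest_detail(x, wavelet="db4", level=4):
    """Reconstruct only the finest-scale (short-run) detail component."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    keep = [np.zeros_like(c) for c in coeffs]
    keep[-1] = coeffs[-1]                  # level-1 detail coefficients
    return pywt.waverec(keep, wavelet)[: len(x)]

def min_var_weights(returns):
    """Global minimum-variance weights w = S^{-1} 1 / (1' S^{-1} 1)."""
    S_inv = np.linalg.inv(np.cov(returns, rowvar=False))
    w = S_inv @ np.ones(returns.shape[1])
    return w / w.sum()

D1 = np.column_stack([finest_detail(R[:, i]) for i in range(n_assets)])
print("raw-return weights:     ", np.round(min_var_weights(R), 3))
print("short-run-noise weights:", np.round(min_var_weights(D1), 3))
```

Substituting the short-run detail covariance into the usual minimum-variance formula is what "minimizing the variance of short-run noise" amounts to operationally.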
NASA Astrophysics Data System (ADS)
Nogueira, M.
2017-10-01
Monthly-to-decadal variability of the regional precipitation over Intertropical Convergence Zone and north-Atlantic and north-Pacific storm tracks was investigated using ERA-20C reanalysis. Satellite-based precipitation (
Topographic mapping of a hierarchy of temporal receptive windows using a narrated story
Lerner, Y.; Honey, C.J.; Silbert, L.J.; Hasson, U.
2011-01-01
Real-life activities, such as watching a movie or engaging in conversation, unfold over many minutes. In the course of such activities the brain has to integrate information over multiple time scales. We recently proposed that the brain uses similar strategies for integrating information across space and over time. Drawing a parallel with spatial receptive fields (SRF), we defined the temporal receptive window (TRW) of a cortical microcircuit as the length of time prior to a response during which sensory information may affect that response. Our previous findings in the visual system are consistent with the hypothesis that TRWs become larger when moving from low-level sensory to high-level perceptual and cognitive areas. In this study, we mapped TRWs in auditory and language areas by measuring fMRI activity in subjects listening to a real-life story scrambled at the time scales of words, sentences, and paragraphs. Our results revealed a hierarchical topography of TRWs. In early auditory cortices (A1+), brain responses were driven mainly by the momentary incoming input and were similarly reliable across all scrambling conditions. In areas with an intermediate TRW, coherent information at the sentence time scale or longer was necessary to evoke reliable responses. At the apex of the TRW hierarchy, we found parietal and frontal areas which responded reliably only when intact paragraphs were heard in a meaningful sequence. These results suggest that the time scale of processing is a functional property that may provide a general organizing principle for the human cerebral cortex. PMID:21414912
NASA Astrophysics Data System (ADS)
Raghib, Michael; Levin, Simon; Kevrekidis, Ioannis
2010-05-01
Self-propelled particle models (SPPs) are a class of agent-based simulations that have been successfully used to explore questions related to various flavors of collective motion, including flocking, swarming, and milling. These models typically consist of particle configurations, where each particle moves with constant speed but changes its orientation in response to local averages of the positions and orientations of its neighbors found within some interaction region. These local averages are based on 'social interactions', which include avoidance of collisions, attraction, and polarization, and are designed to generate configurations that move as a single object. Errors made by the individuals in the estimates of the state of the local configuration are modeled as a random rotation of the updated orientation resulting from the social rules. More recently, SPPs have been introduced in the context of collective decision-making, where the main innovation consists of dividing the population into naïve and 'informed' individuals. Whereas naïve individuals follow the classical collective motion rules, members of the informed sub-population update their orientations according to a weighted average of the social rules and a fixed 'preferred' direction, shared by all the informed individuals. Collective decision-making is then understood in terms of the ability of the informed sub-population to steer the whole group along the preferred direction. Summary statistics of collective decision-making are defined in terms of the stochastic properties of the random walk followed by the centroid of the configuration as the particles move about, in particular the scaling behavior of the mean squared displacement (msd). For the region of parameters where the group remains coherent, we note that there are two characteristic time scales: first, there is an anomalous transient shared by both purely naïve and informed configurations, i.e. the scaling exponent lies between 1 and 2. The long-time behavior of the msd of the centroid walk scales linearly with time for naïve groups (diffusion), but shows a sharp transition to quadratic scaling (advection) for informed ones. These observations suggest that the mesoscopic variables of interest are the magnitude of the drift, the diffusion coefficient, and the time scales at which the anomalous and the asymptotic behavior respectively dominate transport, the latter being linked to the time scale at which the group reaches a decision. In order to estimate these summary statistics from the msd, we assumed that the configuration centroid follows an uncoupled Continuous Time Random Walk (CTRW) with smooth jump and waiting time pdfs. The mesoscopic transport equation for this type of random walk corresponds to an Advection-Diffusion Equation with Memory (ADEM). The introduction of the memory, and thus non-Markovian effects, is necessary in order to correctly account for the two time scales present. Although we were not able to calculate the memory directly from the individual-level rules, we show that it can be estimated from a single, relatively short simulation run using a Mittag-Leffler function as a template. With this function it is possible to accurately predict the behavior of the msd, as well as the full pdf for the position of the centroid.
The resulting ADEM is self-consistent in the sense that transport parameters estimated from the memory via a Kubo relationship coincide with those estimated from the moments of the jump size pdf of the associated CTRW for a large number of group sizes, proportions of informed individuals, and degrees of bias along the preferred direction. We also discuss the phase diagrams for the transport coefficients estimated from this method, in which we notice velocity-precision trade-offs, precision being a measure of the deviation of realized group orientations with respect to the informed direction. We also note that the time scale to collective decision is invariant with respect to group size, and depends only on the proportion of informed individuals and the strength of the coupling along the informed direction.
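To make the msd diagnostics concrete, the sketch below simulates a persistent random walk as a stand-in for a group centroid (not the SPP model itself) and fits log-log slopes over short and long lag windows; an exponent near 2 marks ballistic/advective transport and near 1 marks diffusion, the two regimes discussed above. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 20000, 200          # steps, orientation decorrelation time
# persistent random walk: heading diffuses slowly, so motion is
# ballistic for lags << tau and diffusive for lags >> tau
theta = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / tau), n))
pos = np.cumsum(np.column_stack([np.cos(theta), np.sin(theta)]), axis=0)

lags = np.unique(np.logspace(0, 3.5, 40).astype(int))
msd = np.array([np.mean(np.sum((pos[l:] - pos[:-l]) ** 2, axis=1))
                for l in lags])

def msd_exponent(lo, hi):
    """Log-log slope of the msd over the lag window [lo, hi]."""
    m = (lags >= lo) & (lags <= hi)
    return np.polyfit(np.log(lags[m]), np.log(msd[m]), 1)[0]

print("short-lag exponent (~2, ballistic):", round(msd_exponent(1, 40), 2))
print("long-lag exponent  (~1, diffusive):", round(msd_exponent(800, 3000), 2))
```

The crossover lag between the two fitted regimes is the empirical analogue of the decision time scale extracted from the ADEM memory in the abstract.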
Code of Federal Regulations, 2014 CFR
2014-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Motor control by precisely timed spike patterns
Srivastava, Kyle H.; Holmes, Caroline M.; Vellema, Michiel; Pack, Andrea R.; Elemans, Coen P. H.; Nemenman, Ilya; Sober, Samuel J.
2017-01-01
A fundamental problem in neuroscience is understanding how sequences of action potentials (“spikes”) encode information about sensory signals and motor outputs. Although traditional theories assume that this information is conveyed by the total number of spikes fired within a specified time interval (spike rate), recent studies have shown that additional information is carried by the millisecond-scale timing patterns of action potentials (spike timing). However, it is unknown whether or how subtle differences in spike timing drive differences in perception or behavior, leaving it unclear whether the information in spike timing actually plays a role in brain function. By examining the activity of individual motor units (the muscle fibers innervated by a single motor neuron) and manipulating patterns of activation of these neurons, we provide both correlative and causal evidence that the nervous system uses millisecond-scale variations in the timing of spikes within multispike patterns to control a vertebrate behavior—namely, respiration in the Bengalese finch, a songbird. These findings suggest that a fundamental assumption of current theories of motor coding requires revision. PMID:28100491
Application of Wavelet Filters in an Evaluation of Photochemical Model Performance
Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model pe...
Fast Coding of Orientation in Primary Visual Cortex
Shriki, Oren; Kohn, Adam; Shamir, Maoz
2012-01-01
Understanding how populations of neurons encode sensory information is a major goal of systems neuroscience. Attempts to answer this question have focused on responses measured over several hundred milliseconds, a duration much longer than that frequently used by animals to make decisions about the environment. How reliably sensory information is encoded on briefer time scales, and how best to extract this information, is unknown. Although it has been proposed that neuronal response latency provides a major cue for fast decisions in the visual system, this hypothesis has not been tested systematically and in a quantitative manner. Here we use a simple ‘race to threshold’ readout mechanism to quantify the information content of spike time latency of primary visual (V1) cortical cells to stimulus orientation. We find that many V1 cells show pronounced tuning of their spike latency to stimulus orientation and that almost as much information can be extracted from spike latencies as from firing rates measured over much longer durations. To extract this information, stimulus onset must be estimated accurately. We show that the responses of cells with weak tuning of spike latency can provide a reliable onset detector. We find that spike latency information can be pooled from a large neuronal population, provided that the decision threshold is scaled linearly with the population size, yielding a processing time of the order of a few tens of milliseconds. Our results provide a novel mechanism for extracting information from neuronal populations over the very brief time scales in which behavioral judgments must sometimes be made. PMID:22719237
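A toy version of a race-to-threshold latency readout, under assumptions chosen for brevity: two orientation channels whose first-spike latencies differ in mean, Gaussian latency noise, and a decision threshold that scales linearly with population size, which is the key scaling property reported above. It conveys only the mechanism, not the paper's quantitative analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def race_trial(n_per_channel, true_ch, frac=0.2,
               mu=(0.020, 0.030), sigma=0.005):
    """First-spike latencies for 2 channels; the channel matching the
    stimulus is faster. Returns (winning channel, decision time)."""
    thresh = max(1, int(frac * n_per_channel))  # threshold grows with N
    cross_times = []
    for ch in (0, 1):
        mean = mu[0] if ch == true_ch else mu[1]
        lat = np.sort(rng.normal(mean, sigma, n_per_channel))
        cross_times.append(lat[thresh - 1])     # time threshold is reached
    return int(np.argmin(cross_times)), min(cross_times)

for n in (10, 50, 250):
    trials = [race_trial(n, true_ch=0) for _ in range(2000)]
    acc = np.mean([w == 0 for w, _ in trials])
    t_ms = 1e3 * np.mean([t for _, t in trials])
    print(f"N={n:4d}  accuracy={acc:.3f}  mean decision time={t_ms:.1f} ms")
```

Because the threshold is a fixed fraction of the population, accuracy improves with population size while the decision time stays in the range of a few tens of milliseconds, mirroring the abstract's conclusion.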
NASA Astrophysics Data System (ADS)
Pan, Feng; Pachepsky, Yakov A.; Guber, Andrey K.; McPherson, Brian J.; Hill, Robert L.
2012-01-01
Understanding streamflow patterns in space and time is important for improving flood and drought forecasting, water resources management, and predictions of ecological changes. Objectives of this work include (a) to characterize the spatial and temporal patterns of streamflow using information theory-based measures at two thoroughly-monitored agricultural watersheds located in different hydroclimatic zones with similar land use, and (b) to elucidate and quantify temporal and spatial scale effects on those measures. We selected two USDA experimental watersheds to serve as case study examples: the Little River experimental watershed (LREW) in Tifton, Georgia, and the Sleepers River experimental watershed (SREW) in North Danville, Vermont. Both watersheds possess several nested sub-watersheds and more than 30 years of continuous data records of precipitation and streamflow. Information content measures (metric entropy and mean information gain) and complexity measures (effective measure complexity and fluctuation complexity) were computed based on the binary encoding of 5-year streamflow and precipitation time series data. We quantified patterns of streamflow using probabilities of joint or sequential appearances of the binary symbol sequences. Results of our analysis illustrate that information content measures of streamflow time series are much smaller than those for precipitation data, and the streamflow data also exhibit higher complexity, suggesting that the watersheds effectively act as filters of the precipitation information, which leads to the observed additional complexity in streamflow measures. Correlation coefficients between the information-theory-based measures and time intervals are close to 0.9, demonstrating the significance of temporal scale effects on streamflow patterns. Moderate spatial scale effects on streamflow patterns are observed, with absolute values of correlation coefficients between the measures and sub-watershed area varying from 0.2 to 0.6 in the two watersheds. We conclude that temporal effects must be evaluated and accounted for when information theory-based methods are used for performance evaluation and comparison of hydrological models.
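The two information content measures named above are straightforward to compute from a binary-encoded series. The sketch below uses a median-threshold encoding and word statistics; the toy "precipitation" and "streamflow" series (precipitation routed through a slow linear reservoir) are illustrative assumptions meant only to reproduce the qualitative contrast reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_entropy(sym, L):
    """Shannon entropy (bits) of overlapping words of length L."""
    words = np.array([sym[i:i + L] for i in range(len(sym) - L + 1)])
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def metric_entropy(sym, L=6):
    return block_entropy(sym, L) / L

def mean_information_gain(sym, L=6):
    # average new information per symbol, given the preceding L-1 symbols
    return block_entropy(sym, L) - block_entropy(sym, L - 1)

n = 5 * 365
precip = (rng.random(n) < 0.3) * rng.exponential(1.0, n)  # bursty rain
flow = np.zeros(n)
for t in range(1, n):                       # slow linear reservoir
    flow[t] = 0.9 * flow[t - 1] + 0.1 * precip[t]

for name, series in [("precip", precip), ("flow", flow)]:
    sym = (series > np.median(series)).astype(int)
    print(f"{name}: metric entropy = {metric_entropy(sym):.3f} bits/symbol, "
          f"mean information gain = {mean_information_gain(sym):.3f} bits")
```

The smoothed flow series yields markedly lower values than the nearly independent precipitation series, the filtering effect the study attributes to the watersheds.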
Multiscale analysis of information dynamics for linear multivariate processes.
Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele
2016-08-01
In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.
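The sketch below illustrates the phenomenon numerically rather than analytically: a bivariate VAR(1) with x driving y is coarse-grained by averaging, and a linear-Gaussian transfer entropy (log ratio of residual variances of the restricted versus full regression) is computed at each scale. The VAR coefficients, regression order, and scales are illustrative assumptions; the paper's exact state-space computation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):                   # bivariate VAR(1), x drives y
    x[t] = 0.7 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def coarse_grain(z, s):
    """Average non-overlapping windows of length s (scale-s rescaling)."""
    m = len(z) // s
    return z[: m * s].reshape(m, s).mean(axis=1)

def gaussian_te(x, y, p=2):
    """Linear-Gaussian transfer entropy x->y (nats), order-p regressions:
    TE = 0.5 * ln( var(resid | y past) / var(resid | y and x past) )."""
    n, yt = len(y), y[p:]
    own = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
    full = np.column_stack([own] +
                           [x[p - k: n - k] for k in range(1, p + 1)])
    def resid_var(Z):
        Z = np.column_stack([np.ones(len(yt)), Z])
        beta, *_ = np.linalg.lstsq(Z, yt, rcond=None)
        return np.var(yt - Z @ beta)
    return 0.5 * np.log(resid_var(own) / resid_var(full))

for s in (1, 2, 4, 8, 16):
    xs, ys = coarse_grain(x, s), coarse_grain(y, s)
    print(f"scale {s:2d}: TE(x->y) = {gaussian_te(xs, ys):.4f} nats")
```

Averaging acts as the filtering-plus-downsampling step that introduces the MA component the abstract describes, and the printed profile shows how the apparent transfer changes, and can mislead, as the scale grows.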
Temporal scaling in information propagation.
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-18
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
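A hedged sketch of the scaling-law estimation step: synthetic interaction events are generated so that the propagation probability decays as latency^(-0.5), and the exponent is recovered by logarithmic binning and a log-log least-squares fit. The decay exponent, latency distribution, and binning are illustrative assumptions, not values from the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# latency (hours) since the latest interaction, heavy-tailed
latency = rng.lognormal(mean=3.0, sigma=1.2, size=n)
# ground truth: P(propagate | latency) ~ latency^(-0.5), capped at 1
p_true = np.minimum(1.0, 2.0 * latency ** -0.5)
propagated = rng.random(n) < p_true

# empirical propagation probability in logarithmic latency bins
edges = np.logspace(np.log10(latency.min()), np.log10(latency.max()), 25)
centers, probs = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (latency >= lo) & (latency < hi)
    if m.sum() > 200:                       # skip sparse bins
        centers.append(np.sqrt(lo * hi))
        probs.append(propagated[m].mean())

centers, probs = np.array(centers), np.array(probs)
fit = centers > 10            # avoid the capped small-latency region
slope = np.polyfit(np.log(centers[fit]), np.log(probs[fit]), 1)[0]
print(f"fitted decay exponent: {slope:.2f}  (ground truth -0.5)")
```

An exponent fitted this way is exactly the quantity one would plug into a temporal propagation model to re-weight stale ties when predicting future spreads.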
ERIC Educational Resources Information Center
Gerritts, Mary
1975-01-01
Describes construction of a Geologic Time Scale on a 100 foot roll of paper and suggests activities concerning its use. Includes information about fossils and suggestions for conducting a fossil field trip with students. (BR)
Minimum entropy density method for the time series analysis
NASA Astrophysics Data System (ADS)
Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae
2009-01-01
The entropy density is an intuitive and powerful concept for studying the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, defined as the scale at which the uncertainty is minimized and hence the pattern is revealed most clearly. The MEDM is applied to the financial time series of the Standard and Poor's 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.
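A rough sketch of the idea, a simplified reading rather than the authors' exact estimator: returns are aggregated at each candidate scale, binarized by sign, and the entropy density of fixed-length binary words is computed; the structure scale is the aggregation scale minimizing this entropy density. The synthetic series alternates trend direction every 16 steps, so a minimum near that scale is expected.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_density(sym, L=4):
    """Block entropy per symbol (bits) for overlapping words of length L."""
    words = np.array([sym[i:i + L] for i in range(len(sym) - L + 1)])
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p))) / L

# synthetic returns: trend direction flips every 16 steps, plus noise
n, block = 2 ** 14, 16
trend = np.repeat(np.resize([0.05, -0.05], n // block), block)
returns = trend + 0.05 * rng.normal(size=n)

for s in (2, 4, 8, 16, 32, 64):
    agg = returns[: (n // s) * s].reshape(-1, s).sum(axis=1)  # scale-s returns
    sym = (agg > 0).astype(int)
    print(f"scale {s:3d}: entropy density = {entropy_density(sym):.3f}")
# the minimum marks the structure scale (expected near 16 here)
```

At the structure scale the sign sequence becomes nearly deterministic, so the entropy density drops; away from it, aggregation either leaves noise in place or averages the pattern away.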
Brédart, Anne; Kop, Jean-Luc; Fiszer, Chavie; Sigal-Zafrani, Brigitte; Dolbeault, Sylvie
2015-12-01
Information is a care priority for most breast cancer survivors (BCS). We assessed whether BCS information needs at 8 months after hospital cancer treatment could be related to their age, education level, perceived medical communication competence, satisfaction with care, attachment style, and self-esteem. Of 426 BCS approached during the last week of treatment (T1), 85% completed the Medical Communication Competence Scale, the European Organisation for Research and Treatment of Cancer Satisfaction with Care Questionnaire, Rosenberg's Self-Esteem Scale, and the Experiences in Close Relationships Scale. The Hospital Anxiety and Depression Scale and the Supportive Care Needs Survey were completed at T1 and again 8 months later (T2), with a 66% (n = 283) response rate. Baseline respondents' median (range) age was 56 years (23-86 years). Information needs decreased over time, although some persisted. Multivariate regression analyses evidenced overall higher information needs at T2 in younger BCS and in those dissatisfied with the information provided at T1. Specifically, in younger BCS, higher information needs were related to lower satisfaction with doctors' availability, and in older BCS, they were related to higher self-perceived competence in information giving, lower self-perceived competence in information seeking, and lower satisfaction with doctors' information provision. Psychological distress was strongly related to information needs. Education, BCS attachment style, and self-esteem were not associated with information needs. In order to enhance supportive care for BCS, younger BCS should be provided with more time to address all their concerns, and older BCS should be encouraged to express their specific desires for information. Copyright © 2015 John Wiley & Sons, Ltd.
Time of flight imaging through scattering environments (Conference Presentation)
NASA Astrophysics Data System (ADS)
Le, Toan H.; Breitbach, Eric C.; Jackson, Jonathan A.; Velten, Andreas
2017-02-01
Light scattering is a primary obstacle to imaging in many environments. On small scales, in biomedical microscopy and diffuse tomography scenarios, scattering is caused by tissue. On larger scales, scattering from dust and fog provides challenges to vision systems for self-driving cars and naval remote imaging systems. We are developing scale models for scattering environments and investigating methods for improved imaging, particularly using time-of-flight transient information. With the emergence of Single Photon Avalanche Diode detectors and fast semiconductor lasers, illumination and capture on picosecond timescales are becoming possible in inexpensive, compact, and robust devices. This opens up opportunities for new computational imaging techniques that make use of photon time of flight. Time-of-flight or range information is used in remote imaging scenarios in gated viewing, and in biomedical imaging in time-resolved diffuse tomography. In addition, spatial filtering is popular in biomedical scenarios with structured illumination and confocal microscopy. We present a combination of analytical, computational, and experimental models that allow us to develop and test imaging methods across scattering scenarios and scales. This framework will be used for proof-of-concept experiments to evaluate new computational imaging methods.
Silva, Luiz Eduardo Virgilio; Lataro, Renata Maria; Castania, Jaci Airton; Silva, Carlos Alberto Aguiar; Salgado, Helio Cesar; Fazan, Rubens; Porta, Alberto
2017-08-01
Heart rate variability (HRV) has been extensively explored by traditional linear approaches (e.g., spectral analysis); however, several studies have pointed to the presence of nonlinear features in HRV, suggesting that linear tools might fail to account for the complexity of the HRV dynamics. Even though the prevalent notion is that HRV is nonlinear, the actual presence of nonlinear features is rarely verified. In this study, the presence of nonlinear dynamics was checked as a function of time scales in three experimental models of rats with different impairment of the cardiac control: namely, rats with heart failure (HF), spontaneously hypertensive rats (SHRs), and sinoaortic denervated (SAD) rats. Multiscale entropy (MSE) and refined MSE (RMSE) were chosen as the discriminating statistic for the surrogate test utilized to detect nonlinearity. Nonlinear dynamics is less present in HF animals at both short and long time scales compared with controls. A similar finding was found in SHR only at short time scales. SAD increased the presence of nonlinear dynamics exclusively at short time scales. Those findings suggest that a working baroreflex contributes to linearize HRV and to reduce the likelihood to observe nonlinear components of the cardiac control at short time scales. In addition, an increased sympathetic modulation seems to be a source of nonlinear dynamics at long time scales. Testing nonlinear dynamics as a function of the time scales can provide a characterization of the cardiac control complementary to more traditional markers in time, frequency, and information domains. NEW & NOTEWORTHY Although heart rate variability (HRV) dynamics is widely assumed to be nonlinear, nonlinearity tests are rarely used to check this hypothesis. By adopting multiscale entropy (MSE) and refined MSE (RMSE) as the discriminating statistic for the nonlinearity test, we show that nonlinear dynamics varies with time scale and the type of cardiac dysfunction. Moreover, as complexity metrics and nonlinearities provide complementary information, we strongly recommend using the test for nonlinearity as an additional index to characterize HRV. Copyright © 2017 the American Physiological Society.
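A compact sketch of the testing scheme described here, with simplifying assumptions: sample entropy (m = 2, r = 0.2 SD) on coarse-grained series serves as the discriminating statistic, and Fourier phase-randomized surrogates stand in for the linear-Gaussian null; series length, scales, and surrogate count are kept small for speed, and the toy "tachogram" is a noisy logistic map rather than real HRV data.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_grain(x, s):
    m = len(x) // s
    return x[: m * s].reshape(m, s).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) with tolerance r in units of the series SD."""
    x = (x - x.mean()) / x.std()
    def matches(mm):
        T = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(T) - 1):
            c += np.sum(np.max(np.abs(T[i + 1:] - T[i]), axis=1) < r)
        return c
    return -np.log(matches(m + 1) / matches(m))

def phase_surrogate(x):
    """Fourier surrogate: same power spectrum, randomized phases."""
    f = np.fft.rfft(x)
    ph = rng.uniform(0, 2 * np.pi, f.size)
    ph[0] = 0.0
    return np.fft.irfft(np.abs(f) * np.exp(1j * ph), n=len(x))

z = np.empty(1024); z[0] = 0.4
for t in range(1, z.size):                 # chaotic logistic map
    z[t] = 3.8 * z[t - 1] * (1 - z[t - 1])
x = z + 0.1 * rng.normal(size=z.size)

for s in (1, 2, 4):
    xs = coarse_grain(x, s)
    obs = sample_entropy(xs)
    surr = [sample_entropy(phase_surrogate(xs)) for _ in range(19)]
    nonlinear = obs < min(surr) or obs > max(surr)
    print(f"scale {s}: SampEn={obs:.3f}, surrogate range="
          f"[{min(surr):.3f}, {max(surr):.3f}], nonlinear: {nonlinear}")
```

Flagging the scales at which the original statistic escapes the surrogate range is the per-scale nonlinearity test whose results the study reports for the HF, SHR, and SAD animals.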
50 CFR 680.23 - Equipment and operational requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (882 lb) of crab or an alternative material supplied by the scale manufacturer on the scale under test... bottom of the hopper unless an alternative testing method is approved by NMFS. The MPE for the daily at... delivery. The scale operator may write this information on the scale printout in ink at the time of landing...
In recent years the applications of regional air quality models are continuously being extended to address atmospheric pollution phenomenon from local to hemispheric spatial scales over time scales ranging from episodic to annual. The need to represent interactions between physic...
Earth History databases and visualization - the TimeScale Creator system
NASA Astrophysics Data System (ADS)
Ogg, James; Lugowski, Adam; Gradstein, Felix
2010-05-01
The "TimeScale Creator" team (www.tscreator.org) and the Subcommission on Stratigraphic Information (stratigraphy.science.purdue.edu) of the International Commission on Stratigraphy (www.stratigraphy.org) has worked with numerous geoscientists and geological surveys to prepare reference datasets for global and regional stratigraphy. All events are currently calibrated to Geologic Time Scale 2004 (Gradstein et al., 2004, Cambridge Univ. Press) and Concise Geologic Time Scale (Ogg et al., 2008, Cambridge Univ. Press); but the array of intercalibrations enable dynamic adjustment to future numerical age scales and interpolation methods. The main "global" database contains over 25,000 events/zones from paleontology, geomagnetics, sea-level and sequence stratigraphy, igneous provinces, bolide impacts, plus several stable isotope curves and image sets. Several regional datasets are provided in conjunction with geological surveys, with numerical ages interpolated using a similar flexible inter-calibration procedure. For example, a joint program with Geoscience Australia has compiled an extensive Australian regional biostratigraphy and a full array of basin lithologic columns with each formation linked to public lexicons of all Proterozoic through Phanerozoic basins - nearly 500 columns of over 9,000 data lines plus hot-curser links to oil-gas reference wells. Other datapacks include New Zealand biostratigraphy and basin transects (ca. 200 columns), Russian biostratigraphy, British Isles regional stratigraphy, Gulf of Mexico biostratigraphy and lithostratigraphy, high-resolution Neogene stable isotope curves and ice-core data, human cultural episodes, and Circum-Arctic stratigraphy sets. The growing library of datasets is designed for viewing and chart-making in the free "TimeScale Creator" JAVA package. This visualization system produces a screen display of the user-selected time-span and the selected columns of geologic time scale information. The user can change the vertical-scale, column widths, fonts, colors, titles, ordering, range chart options and many other features. Mouse-activated pop-ups provide additional information on columns and events; including links to external Internet sites. The graphics can be saved as SVG (scalable vector graphics) or PDF files for direct import into Adobe Illustrator or other common drafting software. Users can load additional regional datapacks, and create and upload their own datasets. The "Pro" version has additional dataset-creation tools, output options and the ability to edit and re-save merged datasets. The databases and visualization package are envisioned as a convenient reference tool, chart-production assistant, and a window into the geologic history of our planet.
Quantifying Stock Return Distributions in Financial Markets
Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias
2015-01-01
Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second by second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
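A hedged illustration of the tail analysis across aggregation scales: Student-t distributed "one-second" returns (a common heavy-tailed stand-in, not the Dow Jones data used in the study) are aggregated to longer horizons, and the tail index is estimated with a Hill estimator on the largest order statistics. The degrees of freedom, scales, and 1% tail fraction are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
r1 = rng.standard_t(df=3, size=2_000_000) * 1e-4   # "1-second" returns

def hill_tail_index(x, tail_frac=0.01):
    """Hill estimator of the power-law tail exponent from |returns|."""
    a = np.sort(np.abs(x))
    k = max(10, int(tail_frac * a.size))
    top, xmin = a[-k:], a[-k]
    return 1.0 / np.mean(np.log(top / xmin))

for scale in (1, 60, 600, 3600):                   # aggregation in "seconds"
    m = r1.size // scale
    r = r1[: m * scale].reshape(m, scale).sum(axis=1)
    print(f"scale {scale:5d}s: Hill tail exponent = {hill_tail_index(r):.2f}")
# t(3) returns keep an exponent near 3 at short horizons; at long horizons
# aggregation pushes the distribution toward Gaussian, and the apparent
# tail exponent drifts upward, i.e. the power-law tail fades
```

The drift of the fitted exponent with aggregation horizon is the numerical counterpart of the power-law-to-exponential transition reported in the abstract.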
Linear and Non-linear Information Flows In Rainfall Field
NASA Astrophysics Data System (ADS)
Molini, A.; La Barbera, P.; Lanza, L. G.
The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time, as well as the strong dependence of these properties on the scale of observation. Understanding and quantifying how the non-linearity of the generating process influences single rain events are relevant research issues in hydro-meteorology, especially in applications where timely and effective forecasting of heavy rain events can reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of this survey are the search for regular structures in the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between the different locations in space, the different instants in time and, unless the hypothesis of scale invariance is verified "a priori", the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods; then a survey of the information flows within the field is developed by means of techniques borrowed from Information Theory; and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both classic techniques of statistical inference and a few procedures for detecting non-linear and non-stationary features of the process from measured data.
Multiscale recurrence quantification analysis of order recurrence plots
NASA Astrophysics Data System (ADS)
Xu, Mengjia; Shang, Pengjian; Lin, Aijing
2017-03-01
In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), MSRQA reveals richer and more recognizable information on the local characteristics of diverse systems, successfully describing their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ considerably from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and for revealing some of their inherent differences.
Local active information storage as a tool to understand distributed neural information processing
Wibral, Michael; Lizier, Joseph T.; Vögler, Sebastian; Priesemann, Viola; Galuske, Ralf
2013-01-01
Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this is easily done for today's digital computers, the application of these concepts to neural information processing has been hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these information processing operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage (LAIS) was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt to measure the space-time dynamics of local active information storage in neural data has been made to date. Here we measure local active information storage on a local scale in time and space in voltage-sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and in area 18 reflects the abstract concept of an ongoing stimulus despite the locally random nature of this stimulus. We suggest that LAIS will be a useful quantity for testing theories of cortical function, such as predictive coding. PMID:24501593
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with application to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends while simultaneously tracking abrupt changes in the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach provides high time-dependent spectral resolution.
Comparison of detrending methods for fluctuation analysis in hydrology
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David
2011-03-01
Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA). Therefore, removal of trends is important in the study of scaling properties of the time series. In this study, three detrending methods, including the adaptive detrending algorithm (ADA), the Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA are similar in detrending practice and, given proper parameters, can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods in that the fluctuation information at larger time scales is retained well, an indication of relatively reliable detrending performance. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in the study of scaling properties of hydrometeorological series with relatively regular periodic trends using MF-DFA.
NASA Astrophysics Data System (ADS)
Wang, Jun; Zhao, Jianlin; Di, Jianglei; Jiang, Biqiang
2015-04-01
A scheme for recording fast processes at the nanosecond scale by digital holographic interferometry with a continuous-wave (CW) laser is described and demonstrated experimentally. The scheme employs delay-time fibers and an angular multiplexing technique, and can realize variable temporal resolution at the nanosecond scale and different measured depths of the object field at a given temporal resolution. The actual delay time is controlled by two delay-time fibers of different lengths. The object field information in two different states can be simultaneously recorded in a composite hologram. This scheme is also suitable for recording fast processes at the picosecond scale by using an electro-optic modulator.
The Development of the Marital Satisfaction Scale (MSS)
ERIC Educational Resources Information Center
Canel, Azize Nilgun
2013-01-01
In this study, the process of developing the Marital Satisfaction Scale (MSS), which aims to support studies in the field of marital satisfaction and to obtain information about couples in a short time in psychological counseling, is discussed. The scale, comprising 101 yes-no items intended to reveal couples' opinions about their marriages, was…
Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement
ERIC Educational Resources Information Center
Zheng, Xiaohui
2009-01-01
The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…
Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Kurtz, Nolan Scot
2014-09-01
The majority of current societal and economic needs world-wide are met by the existing networked civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive importance sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
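The importance sampling idea at the core of this methodology can be sketched for a scalar reliability problem; a minimal sketch with an invented limit-state function (not the report's bridge or pipeline models):

```python
import numpy as np
from scipy.stats import norm

# Failure probability P[g(X) < 0] for standard normal X, estimated by
# importance sampling centered near the failure region (illustrative g only).
g = lambda x: 3.0 - x                    # failure iff x > 3

rng = np.random.default_rng(8)
center = 3.0                             # sampling density shifted toward the "design point"
xs = rng.normal(center, 1.0, 100_000)
weights = norm.pdf(xs) / norm.pdf(xs, loc=center)
p_fail = np.mean((g(xs) < 0) * weights)
print(p_fail, "vs exact", norm.sf(3.0))  # both ~1.35e-3
```

Adaptive variants update the sampling density as new information (e.g. inspection data) arrives.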
Information transfer during the universal gravitational decoherence
NASA Astrophysics Data System (ADS)
Korbicz, J. K.; Tuziemski, J.
2017-12-01
Recently Pikovski et al. (Nat Phys 11:668, 2015) proposed an intriguing universal decoherence mechanism, suggesting that gravitation may play a conceptually important role in the quantum-to-classical transition, albeit a vanishingly small one in everyday situations. Here we analyze the information transfer induced by this mechanism. We show that, generically on short time scales, gravitational decoherence leads to a redundant information encoding, which results in a form of objectivization of the center-of-mass position in the gravitational field. We derive the relevant time scales of this process, given in terms of energy dispersion and quantum Fisher information. As an example we study thermal coherent states and show a certain robustness of the effect with temperature. Finally, we draw an analogy between our objectivization mechanism and the fundamental problem of point individuation in General Relativity, as emphasized by Einstein's Hole argument.
APPENDIX C - ADDITIONAL INFORMATION ON FLUSHING IN ESTUARIES
Water residence time is an important determinant of the sensitivity of the response of estuaries and other water bodies to nutrient loading. A variety of terms such as residence time, flushing time, transit time, turnover time, and age are used to describe time scales for transpo...
Using analogy to learn about phenomena at scales outside human perception.
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S; Shipley, Thomas F
2017-01-01
Understanding and reasoning about phenomena at scales outside human perception (for example, geologic time) is critical across science, technology, engineering, and mathematics. Thus, devising strong methods to support acquisition of reasoning at such scales is an important goal in science, technology, engineering, and mathematics education. In two experiments, we examine the use of analogical principles in learning about geologic time. Across both experiments we find that using a spatial analogy (for example, a time line) to make multiple alignments, while holding all unrelated components of the analogy constant (for example, keeping the time line the same length), leads to better understanding of the magnitude of geologic time. Effective approaches also include hierarchically and progressively aligning scale information (Experiment 1) and active prediction in making alignments paired with immediate feedback (Experiments 1 and 2).
Intramolecular stable isotope distributions detect plant metabolic responses on century time scales
NASA Astrophysics Data System (ADS)
Schleucher, Jürgen; Ehlers, Ina; Augusti, Angela; Betson, Tatiana
2014-05-01
Plants respond to environmental changes on a vast range of time scales, and plant gas exchanges constitute important feedback mechanisms in the global C cycle. Responses on time scales of decades to centuries are most important for climate models, for prediction of crop productivity, and for adaptation to climate change. Unfortunately, responses on these time scales are the least understood. We argue that the knowledge gap on intermediate time scales is due to a lack of adequate methods that can bridge between short-term manipulative experiments (e.g. FACE) and paleo research. Manipulative experiments in plant ecophysiology give information on metabolism on time scales up to years. However, this information cannot be linked to results from retrospective studies in paleo research, because little metabolic information can be derived from paleo archives. Stable isotopes are prominent tools in plant ecophysiology, biogeochemistry and paleo research, but in all applications to date, isotope ratios of whole molecules are measured. However, it is well established that stable isotope abundance varies among intramolecular groups of biochemical metabolites; that is, each so-called "isotopomer" has a distinct abundance. This intramolecular variation carries information on metabolic regulation, which can even be traced to individual enzymes (Schleucher et al., Plant, Cell Environ 1999). Here, we apply intramolecular isotope distributions to study the metabolic response of plants to increasing atmospheric [CO2] during the past century. Greenhouse experiments show that the deuterium abundance between the two positions of the C6H2 group of photosynthetic glucose depends on [CO2] during growth. This is observed for all plants using C3 photosynthesis, and reflects the metabolic flux ratio between photorespiration and photosynthesis. Photorespiration is a major C flux that limits assimilation in C3 plants, which account for the overwhelming fraction of terrestrial photosynthesis and the vast majority of crop species. To access century time scales, we traced this metabolic signal in historic material of two crop species from the past 100 years and find the same response as predicted from the greenhouse experiments. This allows estimating how much photorespiration has been reduced due to anthropogenic CO2 emissions during the 20th century, and shows that plants have not acclimated to increasing [CO2] over more than 100 generations. In summary, we demonstrate that metabolic responses of plants to environmental changes create intramolecular isotope signals. These signals can be identified in manipulation experiments and can be retrieved from plant archives. The isotope abundance of each intramolecular position is set by specific isotope fractionations, such as enzyme isotope effects or hydrogen exchange with xylem water (Augusti et al., Chem. Geol. 2008). Therefore it may be possible to simultaneously reconstruct several physiological or climate signals from an archive of a single molecule. The principles governing intramolecular isotope distributions are general for all metabolites and isotopes (D, 13C); therefore intramolecular isotope distributions can multiply the information content of paleo archives. In particular, they allow extraction of metabolic information on long time scales, thereby connecting plant physiology with paleo research.
Geometric structure and information change in phase transitions
NASA Astrophysics Data System (ADS)
Kim, Eun-jin; Hollerbach, Rainer
2017-06-01
We propose a toy model for a cyclic order-disorder transition and introduce a geometric methodology to understand stochastic processes involved in transitions. Specifically, our model consists of a pair of forward and backward processes (FPs and BPs) for the emergence and disappearance of a structure in a stochastic environment. We calculate time-dependent probability density functions (PDFs) and the information length L, which is the total number of different states that a system undergoes during the transition. Time-dependent PDFs during transient relaxation exhibit strikingly different behavior in FPs and BPs. In particular, FPs driven by instability undergo the broadening of the PDF with a large increase in fluctuations before the transition to the ordered state accompanied by narrowing the PDF width. During this stage, we identify an interesting geodesic solution accompanied by the self-regulation between the growth and nonlinear damping where the time scale τ of information change is constant in time, independent of the strength of the stochastic noise. In comparison, BPs are mainly driven by the macroscopic motion due to the movement of the PDF peak. The total information length L between initial and final states is much larger in BPs than in FPs, increasing linearly with the deviation γ of a control parameter from the critical state in BPs while increasing logarithmically with γ in FPs. L scales as |ln D| and D^{-1/2} in FPs and BPs, respectively, where D measures the strength of the stochastic forcing. These differing scalings with γ and D suggest a great utility of L in capturing different underlying processes, specifically, diffusion vs advection in phase transition by geometry. We discuss physical origins of these scalings and comment on implications of our results for bistable systems undergoing repeated order-disorder transitions (e.g., fitness).
Covariant information-density cutoff in curved space-time.
Kempf, Achim
2004-06-04
In information theory, the link between continuous information and discrete information is established through well-known sampling theorems. Sampling theory explains, for example, how frequency-filtered music signals are reconstructible perfectly from discrete samples. In this Letter, sampling theory is generalized to pseudo-Riemannian manifolds. This provides a new set of mathematical tools for the study of space-time at the Planck scale: theories formulated on a differentiable space-time manifold can be equivalent to lattice theories. There is a close connection to generalized uncertainty relations which have appeared in string theory and other studies of quantum gravity.
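The flat-space Shannon sampling theorem that this Letter generalizes is easy to verify numerically. Below is a minimal sketch, assuming a bandlimited toy signal and the Whittaker-Shannon sinc series (all parameters are illustrative):

```python
import numpy as np

# Bandlimited signal (max frequency 2.7 Hz < B = 3 Hz), sampled at the
# Nyquist rate fs = 2B; the sinc series then reconstructs it exactly.
B, fs = 3.0, 6.0
T = 1.0 / fs
n = np.arange(-200, 201)                    # finite chunk of the sample grid
t = np.linspace(-2, 2, 1001)                # dense evaluation grid

def signal(x):
    return np.cos(2 * np.pi * 1.5 * x) + 0.5 * np.sin(2 * np.pi * 2.7 * x)

samples = signal(n * T)

# Whittaker-Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT) / T)
recon = np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

# The small residual reflects only the truncation of the infinite sinc series.
print("max reconstruction error:", np.max(np.abs(recon - signal(t))))
```

The Letter's contribution is to replace the uniform grid and sinc kernel above with sampling kernels adapted to a pseudo-Riemannian manifold.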
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Ting, Yuan-Sen
2018-04-01
The distributions of a galaxy's gas and stars in chemical space encode a tremendous amount of information about that galaxy's physical properties and assembly history. However, present methods for extracting information from chemical distributions are based either on coarse averages measured over galactic scales (e.g. metallicity gradients) or on searching for clusters in chemical space that can be identified with individual star clusters or gas clouds on ˜1 pc scales. These approaches discard most of the information, because in galaxies gas and young stars are observed to be distributed fractally, with correlations on all scales, and the same is likely to be true of metals. In this paper we introduce a first theoretical model, based on stochastically forced diffusion, capable of predicting the multiscale statistics of metal fields. We derive the variance, correlation function, and power spectrum of the metal distribution from first principles, and determine how these quantities depend on elements' astrophysical origin sites and on the large-scale properties of galaxies. Among other results, we explain for the first time why the typical abundance scatter observed in the interstellar media of nearby galaxies is ≈0.1 dex, and we predict that this scatter will be correlated on spatial scales of ˜0.5-1 kpc, and over time-scales of ˜100-300 Myr. We discuss the implications of our results for future chemical tagging studies.
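The core ingredient, stochastically forced diffusion, can be sketched in one dimension; the injection rate, weak relaxation term and grid below are assumptions for illustration only (the paper works with galactic discs):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx, dt, D, decay, steps = 512, 1.0, 0.1, 1.0, 1e-3, 10000
Z = np.zeros(N)                      # metal field fluctuation

for _ in range(steps):
    lap = (np.roll(Z, 1) - 2 * Z + np.roll(Z, -1)) / dx**2
    # sparse random injection events ("enrichment sites")
    src = (rng.random(N) < 1e-3) * rng.exponential(1.0, N)
    # diffusion plus weak relaxation so the field reaches a statistical steady state
    Z += dt * (D * lap - decay * Z) + src

Z -= Z.mean()
power = np.abs(np.fft.rfft(Z))**2    # power spectrum of the metal field
print(power[1:6])
```

Statistics such as the variance and correlation function, which the paper derives analytically, can be estimated from many such realizations.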
The scientific targets of the SCOPE mission
NASA Astrophysics Data System (ADS)
Fujimoto, M.; Saito, Y.; Tsuda, Y.; Shinohara, I.; Kojima, H.
The future Japanese magnetospheric mission "SCOPE" is now under study (planned for launch in 2012). The main purpose of this mission is to investigate the dynamic behavior of plasmas in the Earth's magnetosphere from the viewpoint of cross-scale coupling. Dynamical collisionless space plasma phenomena, be they large scale as a whole, are characterized by coupling over various time and spatial scales. The best example would be the magnetic reconnection process, which is a large-scale energy conversion process but has a small key region at the heart of its engine. Inside the key region, electron-scale dynamics plays the key role in liberating the frozen-in constraint, by which reconnection is allowed to proceed. The SCOPE mission is composed of one large mother satellite and four small daughter satellites. The mother spacecraft will be equipped with an electron detector that has 10 ms time resolution, so that scales down to the electron's will be resolved. Three of the four daughter satellites surround the mother satellite three-dimensionally with mutual distances between several km and several thousand km, which are varied during the mission. Plasma measurements on these spacecraft will have 1 s resolution and will provide information on meso-scale plasma structure. The fourth daughter satellite stays near the mother satellite at a distance of less than 100 km. By correlating the two plasma wave instruments on the daughter and mother spacecraft, the propagation of waves and information on the electron-scale dynamics will be obtained. With this strategy, both meso- and micro-scale information on the dynamics is obtained, which will enable us to investigate the physics of space plasma from the cross-scale coupling point of view.
Imaging the Subsurface of the Thuringian Basin (Germany) on Different Spatial Scales
NASA Astrophysics Data System (ADS)
Goepel, A.; Krause, M.; Methe, P.; Kukowski, N.
2014-12-01
Understanding the coupled dynamics of near-surface and deep fluid flow patterns is essential to characterize the properties of sedimentary basins and to identify the processes of compaction, diagenesis, and transport of mass and energy. The multidisciplinary project INFLUINS (Integrated FLUid dynamics IN Sedimentary basins) aims to investigate the behavior of fluids in the Thuringian Basin, a small intra-continental sedimentary basin in Germany, at different spatial scales, ranging from the pore scale to the extent of the entire basin. As hydraulic properties often vary significantly with spatial scale, seismic data at different frequencies, for example, are required to gain information about the spatial variability of elastic and hydraulic subsurface properties. For the Thuringian Basin, we use seismic and borehole data acquired in the framework of INFLUINS. Basin-wide structural imaging data are available from 2D reflection seismic profiles as well as 2.5D and 3D seismic travel time tomography. Further, core material from a 1,179 m deep drill hole completed in 2013 is available for laboratory seismic experiments on the mm- to cm-scale. These data are complemented with logging data along the entire drill hole. This campaign yielded, among other logs, sonic and density logs allowing the estimation of in-situ P-velocity and acoustic impedance with a spatial resolution on the cm-scale, and provides improved information about petrologic and stratigraphic variability at different scales. Joint interpretation of basin-scale structural and elastic property data with laboratory-scale data from ultrasound experiments on core samples enables a detailed and realistic imaging of subsurface properties on different spatial scales. Combining seismic travel time tomography with stratigraphic interpretation provides useful information on variations in the elastic properties of certain geological units and therefore gives indications of changes in hydraulic properties.
Chen, Feng; Chen, Suren; Ma, Xiaoxiang
2016-01-01
Traffic and environmental conditions (e.g., weather conditions), which frequently change with time, have a significant impact on crash occurrence. Traditional crash frequency models with large temporal scales and aggregated variables are not sufficient to capture the time-varying nature of driving environmental factors, causing significant loss of critical information in crash frequency modeling. This paper aims to develop crash frequency models with refined temporal scales for complex driving environments, thereby providing more detailed and accurate crash risk information that can support more effective and proactive traffic management and law enforcement intervention. Zero-inflated negative binomial (ZINB) models with site-specific random effects are developed with unbalanced panel data to analyze hourly crash frequency on highway segments. The real-time driving environment information, including traffic, weather and road surface condition data, sourced primarily from the Road Weather Information System, is incorporated into the models along with site-specific road characteristics. The estimation results of the unbalanced panel data ZINB models suggest there are a number of factors influencing crash frequency, including time-varying factors (e.g., visibility and hourly traffic volume) and site-varying factors (e.g., speed limit). The study confirms the unique significance of real-time weather, road surface condition and traffic data for crash frequency modeling. PMID:27322306
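For illustration, the zero-inflated negative binomial data-generating process underlying such models can be simulated directly; the parameter values below are invented, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def zinb_sample(mu, alpha, pi, size):
    """ZINB draw: with probability pi a segment-hour is a structural zero;
    otherwise counts are negative binomial with mean mu and dispersion
    alpha (variance = mu + alpha * mu**2)."""
    r = 1.0 / alpha                  # NB "size" parameter
    p = r / (r + mu)
    counts = rng.negative_binomial(r, p, size)
    return np.where(rng.random(size) < pi, 0, counts)

y = zinb_sample(mu=0.3, alpha=1.5, pi=0.6, size=100_000)
print("share of zeros:", np.mean(y == 0))   # inflated well above NB alone
print("mean, variance:", y.mean(), y.var())
```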
King, Adam C; Newell, Karl M
2015-10-01
The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed-gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with error lowest in the intermediate scaling condition, followed by the high scaling and fixed-gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
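The 1/f structure referred to here can be made concrete by synthesizing a series whose power falls off as 1/f; a minimal spectral-shaping sketch, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2**14
white = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, d=1.0)
f[0] = f[1]                                 # avoid division by zero at DC
pink = np.fft.irfft(white / np.sqrt(f), n)  # power ~ 1/f => amplitude ~ 1/sqrt(f)

# Sanity check: log-log spectral slope should be close to -1
p = np.abs(np.fft.rfft(pink))**2
slope = np.polyfit(np.log(f[1:]), np.log(p[1:]), 1)[0]
print("spectral slope ~", slope)
```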
The Cost of Commonality: Assessing Value in Joint Programs
2015-12-01
this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data ...sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden...economies of scale in order to provide cheaper goods. When those economies of scale are not realized, as was the case with the U.S. auto market “ Big Three
NASA Astrophysics Data System (ADS)
Coppola, A.; Comegna, V.; de Simone, L.
2009-04-01
Non-point source (NPS) pollution in the vadose zone is a global environmental problem. The knowledge and information required to address the problem of NPS pollutants in the vadose zone cross several technological and sub-disciplinary lines: spatial statistics, geographic information systems (GIS), hydrology, soil science, and remote sensing. The main issues encountered in NPS groundwater vulnerability assessment, as discussed by Stewart [2001], are the large spatial scales, the complex processes that govern fluid flow and solute transport in the unsaturated zone, the absence of unsaturated-zone measurements of diffuse pesticide concentrations in 3-D regional-scale space (as these are difficult, time consuming, and prohibitively costly), and the computational effort required to solve the nonlinear equations for physically based modeling of regional-scale, heterogeneous applications. As an alternative solution, an approach is presented here, based on coupling transfer function and GIS modeling, that: a) is capable of estimating solute concentration at a depth of interest within a known error confidence class; b) uses available soil survey, climatic, and irrigation information, and requires minimal computational cost; and c) can dynamically support decision making through thematic mapping and 3D scenarios. This result was pursued through 1) the design and building of a spatial database containing environmental and physical information on the study area, 2) the development of the transfer function procedure for layered soils, and 3) the final representation of results through digital mapping and 3D visualization. On one side, GIS modeled environmental data in order to characterize, at the regional scale, soil profile texture and depth, land use, climatic data, water table depth, and potential evapotranspiration; on the other side, this information was implemented in the up-scaling procedure of Jury's TFM, resulting in a set of texture-based travel time probability density functions for layered soils, each describing a characteristic leaching behavior for soil profiles with similar hydraulic properties. This behavior, in terms of solute travel time to the water table, was then imported back into GIS, and finally the estimated groundwater vulnerability for each soil unit was represented in a map as well as visualized in 3D.
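A minimal sketch of the layered travel-time idea: if each layer's travel time follows a lognormal transfer-function pdf (as in Jury-type models) and layer travel times are treated as independent (a simplifying assumption), the profile-scale pdf is their convolution. All parameters below are illustrative, not calibrated values:

```python
import numpy as np

dt = 1.0                                   # time step, days
t = np.arange(1, 2000) * dt

def lognorm_pdf(t, mu, sigma):
    return np.exp(-(np.log(t) - mu)**2 / (2 * sigma**2)) / (t * sigma * np.sqrt(2 * np.pi))

f1 = lognorm_pdf(t, mu=4.0, sigma=0.6)     # hypothetical sandy layer
f2 = lognorm_pdf(t, mu=5.0, sigma=0.4)     # hypothetical loamy layer
f_profile = np.convolve(f1, f2)[:len(t)] * dt   # pdf of the summed travel time

cdf = np.cumsum(f_profile) * dt
print("median travel time ~", t[np.searchsorted(cdf, 0.5)], "days")
```

Repeating this per soil unit, with layer parameters drawn from the GIS database, yields the map-ready travel-time classes described above.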
Autocorrelation and cross-correlation in time series of homicide and attempted homicide
NASA Astrophysics Data System (ADS)
Machado Filho, A.; da Silva, M. F.; Zebende, G. F.
2014-04-01
We propose in this paper to establish the relationship between homicides and attempted homicides by a non-stationary time-series analysis. This analysis is carried out by Detrended Fluctuation Analysis (DFA), Detrended Cross-Correlation Analysis (DCCA), and the DCCA cross-correlation coefficient, ρ(n). Through this analysis we can identify a positive cross-correlation between homicides and attempted homicides. At the same time, from the point of view of autocorrelation (DFA), the analysis can be more informative depending on the time scale: for short scales (days), we cannot identify autocorrelations; on the scale of weeks, DFA presents anti-persistent behavior; and for long time scales (n > 90 days), DFA presents persistent behavior. Finally, the application of this new type of statistical analysis proved to be efficient and, in this sense, this paper can contribute to a more accurate descriptive statistics of crime.
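A compact sketch of the standard DFA estimator used in such analyses (textbook algorithm, not the authors' code):

```python
import numpy as np

def dfa(x, scales):
    """Return the DFA fluctuation function F(n) for each window size n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        # least-squares linear detrend within each window
        detrended = [s - np.polyval(np.polyfit(t, s, 1), t) for s in segs]
        F.append(np.sqrt(np.mean(np.square(detrended))))
    return np.array(F)

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)                # white noise: expect alpha ~ 0.5
scales = np.unique(np.logspace(1, 3, 12).astype(int))
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print("DFA exponent ~", alpha)  # ~0.5 uncorrelated, <0.5 anti-persistent, >0.5 persistent
```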
Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E
2016-07-25
Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
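The WA step itself can be sketched on a toy signal whose dominant time scale changes halfway through, assuming the PyWavelets package is available (the MD observables and the clustering step are omitted):

```python
import numpy as np
import pywt  # PyWavelets

t = np.linspace(0, 10, 2000)
# toy "trajectory observable": fast motion, then slow motion
sig = np.where(t < 5, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 1 * t))

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(sig, scales, 'morl', sampling_period=t[1] - t[0])

# Large |coeffs| at a given (scale, time) flags significant motion there;
# clustering this map localizes events in both time and time scale.
power = np.abs(coeffs)**2
print(power.shape)  # (n_scales, n_times)
```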
Camera, Stefano; Santos, Mário G; Ferreira, Pedro G; Ferramacho, Luís
2013-10-25
The large-scale structure of the Universe supplies crucial information about the physical processes at play at early times. Unresolved maps of the intensity of 21 cm emission from neutral hydrogen (HI) at redshifts z ≈ 1-5 are the best hope of accessing the ultralarge-scale information, directly related to the early Universe. A purpose-built HI intensity experiment may be used to detect the large-scale effects of primordial non-Gaussianity, placing stringent bounds on different models of inflation. We argue that it may be possible to place tight constraints on the non-Gaussianity parameter f_NL, with an error close to σ(f_NL) ≈ 1.
NASA Astrophysics Data System (ADS)
Abdelmonem, M. S.; Abdel-Hady, Afaf; Nasser, I.
2017-07-01
Scaling laws are given for entropies in information theory, including the Shannon entropy, its power, the Fisher information and the Fisher-Shannon product, using the exponential-cosine screened Coulomb potential. The scaling laws are specified, in r-space, as functions of |μ - μ_{c,nℓ}|, where μ is the screening parameter and μ_{c,nℓ} its critical value for the specific quantum numbers n and ℓ. Scaling laws for other physical quantities, such as energy eigenvalues, moments, static polarisability and transition probabilities, are also given. Some of these are reported for the first time. The outcomes are compared with results available in the literature.
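For reference, the position-space quantities named above are commonly defined as follows (constant factors and conventions vary between authors):

```latex
S_r = -\int \rho(\mathbf{r}) \ln \rho(\mathbf{r})\, d\mathbf{r}
      \quad \text{(Shannon entropy)}
N_r = \frac{1}{2\pi e}\, e^{2 S_r / 3}
      \quad \text{(entropy power, 3D)}
I_r = \int \frac{|\nabla \rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\, d\mathbf{r}
      \quad \text{(Fisher information)}
P_r = N_r\, I_r
      \quad \text{(Fisher-Shannon product)}
```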
NASA Astrophysics Data System (ADS)
Goodwell, Allison E.; Kumar, Praveen
2017-07-01
In an ecohydrologic system, components of atmospheric, vegetation, and root-soil subsystems participate in forcing and feedback interactions at varying time scales and intensities. The structure of this network of complex interactions varies in terms of connectivity, strength, and time scale due to perturbations or changing conditions such as rainfall, drought, or land use. However, characterization of these interactions is difficult due to multivariate and weak dependencies in the presence of noise, nonlinearities, and limited data. We introduce a framework for Temporal Information Partitioning Networks (TIPNets), in which time-series variables are viewed as nodes, and lagged multivariate mutual information measures are links. These links are partitioned into synergistic, unique, and redundant information components, where synergy is information provided only jointly, unique information is only provided by a single source, and redundancy is overlapping information. We construct TIPNets from 1 min weather station data over several hour time windows. From a comparison of dry, wet, and rainy conditions, we find that information strengths increase when solar radiation and surface moisture are present, and surface moisture and wind variability are redundant and synergistic influences, respectively. Over a growing season, network trends reveal patterns that vary with vegetation and rainfall patterns. The framework presented here enables us to interpret process connectivity in a multivariate context, which can lead to better inference of behavioral shifts due to perturbations in ecohydrologic systems. This work contributes to more holistic characterizations of system behavior, and can benefit a wide variety of studies of complex systems.
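The links of such a network are built from lagged mutual information; a minimal histogram-based sketch of one pairwise link follows (the synergy/unique/redundancy partition is more involved and omitted here):

```python
import numpy as np

def lagged_mi(source, target, lag, bins=8):
    """Histogram estimate of I(source(t - lag); target(t)) in bits."""
    x, y = source[:-lag], target[lag:]
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(4)
driver = rng.standard_normal(10_000)                    # e.g. a radiation proxy
response = np.roll(driver, 5) + 0.5 * rng.standard_normal(10_000)
print([round(lagged_mi(driver, response, k), 3) for k in (1, 5, 9)])  # peaks at lag 5
```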
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-01-01
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time-phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Results on real data show an average gain in entropy of up to 0.42 dB and a significant gain in enhancement-measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893
Exploring the History of Time in an Integrated System: the Ramifications for Water
NASA Astrophysics Data System (ADS)
Green, M. B.; Adams, L. E.; Allen, T. L.; Arrigo, J. S.; Bain, D. J.; Bray, E. N.; Duncan, J. M.; Hermans, C. M.; Pastore, C.; Schlosser, C. A.; Vorosmarty, C. J.; Witherell, B. B.; Wollheim, W. M.; Wreschnig, A. J.
2009-12-01
Characteristic time scales are useful and simple descriptors of geophysical and socio-economic system dynamics. Focusing on the integrative nature of the hydrologic cycle, new insights into system couplings can be gained by compiling characteristic time scales of the important processes driving these systems. There are many examples of changing characteristic time scales. Human life expectancy has increased over the recent history of medical advancement. The transport time of goods has decreased with the progression from horse to rail to car to plane. The transport time of information changed with the progression from letter to telegraph to telephone to networked computing. Soil residence time (pedogenesis to estuary deposition) has been influenced by changing agricultural technology, urbanization, and forest practices. Surface water residence times have varied as beaver dams have disappeared and been replaced with modern reservoirs, flood control works, and channelization. These dynamics raise the question of how such time scales interact with each other to form integrated Earth system dynamics. Here we explore the coupling of geophysical and socio-economic systems in the northeast United States over the 1600 to 2010 period by examining characteristic time scales. This visualization of many time scales serves as an exploratory analysis, producing new hypotheses about how the integrated system dynamics have evolved over the last 400 years. In particular, exponential population growth and the evolving strategies to maintain that population appear fundamental to many of the time scales.
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
In this study, we introduce an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference with conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analyse statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We do this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results show that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, since a given flow amount is accumulated in a shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
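A minimal sketch of the IAT sampling itself: accumulate flow over time, mark when each successive fixed amount is reached, and difference those crossing times (synthetic data, not the study's records):

```python
import numpy as np

def inter_amount_times(t, q, amount):
    """Times needed to accumulate each successive fixed flow `amount`;
    t: sample times, q: nonnegative flow rate at those times."""
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(t))))
    targets = np.arange(amount, cum[-1], amount)
    crossings = np.interp(targets, cum, t)        # cum is nondecreasing
    return np.diff(np.concatenate(([t[0]], crossings)))

rng = np.random.default_rng(5)
t = np.arange(0.0, 1000.0)                        # e.g. hourly stamps
q = np.maximum(0.1, 5 + 3 * np.sin(t / 50) + rng.standard_normal(t.size))
iats = inter_amount_times(t, q, amount=500.0)
print(iats.min(), iats.max())  # short IATs <-> high flow, long IATs <-> low flow
```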
Complete information acquisition in scanning probe microscopy
Belianinov, Alex; Kalinin, Sergei V.; Jesse, Stephen
2015-03-13
In the last three decades, scanning probe microscopy (SPM) has emerged as a primary tool for exploring and controlling the nanoworld. A critical part of SPM measurements is the information transfer from the tip-surface junction to a macroscopic measurement system. This process reduces the many degrees of freedom of a vibrating cantilever to relatively few parameters recorded as images. Similarly, the details of dynamic cantilever response at sub-microsecond time scales of transients, higher-order eigenmodes and harmonics are averaged out by transitioning to the millisecond time scale of pixel acquisition. Hence, the amount of information available to the external observer is severely limited, and its selection is biased by the chosen data processing method. Here, we report a fundamentally new approach for SPM imaging based on information theory-type analysis of the data stream from the detector. This approach allows full exploration of complex tip-surface interactions, spatial mapping of the multidimensional variability of materials' properties and their mutual interactions, and SPM imaging at the information channel capacity limit.
The Pacific Northwest Hydrologic Landscapes (PNW HL) at the assessment unit scale has provided a solid conceptual classification framework to relate and transfer hydrologically meaningful information between watersheds without access to streamflow time series. A collection of tec...
Most studies addressing relationships between salmonids and factors that affect their freshwater production have focused on small areas and short time frames. Limits of understanding gained at fine spatiotemporal scales have become obvious, and aggregating fine-scale information ...
Gething, Peter W; Noor, Abdisalan M; Gikandi, Priscilla W; Ogara, Esther A. A; Hay, Simon I; Nixon, Mark S; Snow, Robert W; Atkinson, Peter M
2006-01-01
Background: Reliable and timely information on disease-specific treatment burdens within a health system is critical for the planning and monitoring of service provision. Health management information systems (HMIS) exist to address this need at national scales across Africa but are failing to deliver adequate data because of widespread underreporting by health facilities. Faced with this inadequacy, vital public health decisions often rely on crudely adjusted regional and national estimates of treatment burdens. Methods and Findings: This study has taken the example of presumed malaria in outpatients within the largely incomplete Kenyan HMIS database and has defined a geostatistical modelling framework that can predict values for all data that are missing through space and time. The resulting complete set can then be used to define treatment burdens for presumed malaria at any level of spatial and temporal aggregation. Validation of the model has shown that these burdens are quantified to an acceptable level of accuracy at the district, provincial, and national scale. Conclusions: The modelling framework presented here provides, to our knowledge for the first time, reliable information from imperfect HMIS data to support evidence-based decision-making at national and sub-national levels. PMID:16719557
ERIC Educational Resources Information Center
Knudson, Joel
2014-01-01
This report documents the history and evolution of the Stuart Foundation California Leaders in Education (SCALE) Initiative through 2014. It tells the story of how the work began, what it entails, and how it has developed across time. The report also identifies lessons learned from the SCALE experience. These lessons can inform the participants of…
Application of Wavelet Filters in an Evaluation of ...
Air quality model evaluation can be enhanced with time-scale-specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time-scale information in observed ozone is not well captured by deterministic models, and its incorporation into model performance metrics leads one to devote resources to stochastic variations in model outputs. In this analysis, observations are compared with model outputs at seasonal, weekly, diurnal and intra-day time scales. Filters provide frequency-specific information that can be used to compare the strength (amplitude) and timing (phase) of observations and model estimates. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollu
NASA Astrophysics Data System (ADS)
Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele
2017-10-01
In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
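The single-scale GC quantity being extended is the log variance ratio between restricted and full autoregressions; a minimal sketch follows. Note the paper's central point is that the multiscale version should be computed exactly through the SS/ARMA route rather than by refitting AR models such as this one to filtered, downsampled data:

```python
import numpy as np

def granger_f(x, y, p=2):
    """Log variance-ratio Granger causality from x to y, AR order p."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    restricted = np.hstack([ones, lags_y])          # y's own past only
    full = np.hstack([ones, lags_y, lags_x])        # plus x's past
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0])**2)
    return np.log(rss(restricted) / rss(full))

rng = np.random.default_rng(6)
x = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):                            # y is driven by lagged x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()
print(granger_f(x, y), granger_f(y, x))             # first value clearly larger
```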
A scaling theory for linear systems
NASA Technical Reports Server (NTRS)
Brockett, R. W.; Krishnaprasad, P. S.
1980-01-01
A theory of scaling for rational (transfer) functions in terms of transformation groups is developed. Two different four-parameter scaling groups which play natural roles in studying linear systems are identified, and the effect of scaling on Fisher information and related statistical measures in system identification is studied. The scalings considered include change of time scale, feedback, exponential scaling, magnitude scaling, etc. The scaling action of the groups studied is tied to the geometry of transfer functions in a rather strong way, as becomes apparent in the examination of the invariants of scaling. As a result, the scaling process also provides new insight into the parameterization question for rational functions.
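As an illustration of such group actions, the elementary scalings listed act on a transfer function roughly as follows (the notation here is assumed for illustration, not the report's):

```latex
\text{time scaling } t \mapsto \alpha t:\qquad G(s) \;\mapsto\; G(s/\alpha)
\text{magnitude scaling}:\qquad G(s) \;\mapsto\; \beta\, G(s)
\text{exponential scaling } e^{\sigma t}:\qquad G(s) \;\mapsto\; G(s-\sigma)
\text{constant feedback } k:\qquad G(s) \;\mapsto\; \frac{G(s)}{1 + k\,G(s)}
```

Composing these gives the multi-parameter groups whose invariants the report ties to the geometry of rational functions.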
NASA Astrophysics Data System (ADS)
Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico
2017-06-01
Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 10^6, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage of the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency range of a silicon complementary metal-oxide-semiconductor quantum processor to be between 1 and 100 GHz. Such a constraint limits the feasibility of fault-tolerant quantum information processing with complementary metal-oxide-semiconductor technology only to the most advanced nodes. The compatibility with classical complementary metal-oxide-semiconductor control circuitry is discussed, focusing on the cryogenic complementary metal-oxide-semiconductor operation required to bring the classical controller as close as possible to the quantum processor and to enable interfacing thousands of qubits on the same chip via time-division, frequency-division, and space-division multiplexing. The operation time range prospected for cryogenic control electronics is found to be compatible with the operation time expected for qubits. By combining the forecast of the development of scaled technology nodes with operation time and classical circuitry constraints, we derive a maximum quantum information density for logical qubits of 2.8 and 4 Mqb/cm2 for the 10- and 7-nm technology nodes, respectively, for the Steane code. The density is one and two orders of magnitude less for surface codes and for concatenated codes, respectively. Such values provide a benchmark for the development of fault-tolerant quantum algorithms by circuital quantum information based on silicon platforms and a guideline for other technologies in general.
NASA Astrophysics Data System (ADS)
Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag
2017-02-01
Not only the static characteristics but also the dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. QPI has been widely used for RBC diagnosis, and recently several studies have sought to decrease the processing time of RBC information extraction from QPI by means of parallel computing algorithms; however, previous studies focused on static parameters such as the morphology of the cells, or on simple dynamic parameters such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, this method proved to be of limited use for clinical application because of its long computation time. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. The fractal scaling exponents obtained by this method for the surrounding medium and for normal RBCs were consistent with our previous research.
The progress on time & frequency during the past 5 decades
NASA Astrophysics Data System (ADS)
Wang, Zheng-Ming
2002-06-01
The number and variety of applications using precise timing are astounding and increasing along with new technology in communication, computer science, space science, and other fields. The world has evolved into the information age, and precise timing is at the heart of managing the flow of that information, which in turn drives rapid progress in precise timing itself. This paper describes the development of time scales, UT1 determination, frequency standards, time transfer, and time dissemination over the past half century, worldwide and in China. Expected future developments in this field are also discussed.
Estimating time-dependent connectivity in marine systems
Defne, Zafer; Ganju, Neil K.; Aretxabaleta, Alfredo
2016-01-01
Hydrodynamic connectivity describes the sources and destinations of water parcels within a domain over a given time. When combined with biological models, it can be a powerful concept to explain the patterns of constituent dispersal within marine ecosystems. However, providing connectivity metrics for a given domain is a three-dimensional problem: two dimensions in space to define the sources and destinations and a time dimension to evaluate connectivity at varying temporal scales. If the time scale of interest is not predefined, then a general approach is required to describe connectivity over different time scales. For this purpose, we have introduced the concept of a “retention clock” that highlights the change in connectivity through time. Using the example of connectivity between protected areas within Barnegat Bay, New Jersey, we show that a retention clock matrix is an informative tool for multitemporal analysis of connectivity.
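To make the retention clock concrete, here is a minimal sketch (Python with numpy; the particle-tracking input format, array names, and site encoding are hypothetical, not taken from the study). It tabulates C[T, i, j], the fraction of particles released in site i that are found in site j after T time steps; a slice C[:, i, j] is the retention clock for one source-destination pair.

    import numpy as np

    def retention_clock(site_of_particle, n_sites):
        # site_of_particle: (n_particles, n_times) integer array giving the
        # site index (0..n_sites-1, or -1 when outside all sites) of each
        # tracked particle at each time; column 0 is the release site.
        n_particles, n_times = site_of_particle.shape
        source = site_of_particle[:, 0]
        C = np.zeros((n_times, n_sites, n_sites))
        for T in range(n_times):
            for i in range(n_sites):
                released = site_of_particle[source == i, T]
                if released.size == 0:
                    continue
                for j in range(n_sites):
                    C[T, i, j] = np.mean(released == j)
        return C  # C[:, i, j] is the "retention clock" for pair (i, j)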
Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.
2016-10-20
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations are captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
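As a minimal, self-contained illustration of one widely used Information Theory Quantifier, the sketch below (Python with numpy) computes normalized permutation entropy from ordinal patterns. The embedding dimension d, lag tau, and the choice of permutation entropy as the representative ITQ are illustrative assumptions, not the authors' exact pipeline.

    import math
    import numpy as np

    def permutation_entropy(x, d=4, tau=1):
        # Normalized Shannon entropy of ordinal patterns of length d at lag
        # tau; 0 = fully regular dynamics, 1 = white noise.
        x = np.asarray(x, dtype=float)
        n = len(x) - (d - 1) * tau
        counts = {}
        for i in range(n):
            pattern = tuple(np.argsort(x[i:i + d * tau:tau]))
            counts[pattern] = counts.get(pattern, 0) + 1
        p = np.array(list(counts.values()), dtype=float) / n
        return -np.sum(p * np.log(p)) / math.log(math.factorial(d))

    # toy comparison: a noisy seasonal "GPP-like" cycle vs. pure noise
    rng = np.random.default_rng(0)
    t = np.arange(3650)
    gpp = np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(t.size)
    print(permutation_entropy(gpp), permutation_entropy(rng.standard_normal(t.size)))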
This paper explores the potential of time-frequency wavelet analysis in resolving beach bacteria concentration and possible explanatory variables across multiple time scales with temporal information still preserved. The wavelet scalograms of E. coli concentrations and the explan...
ERIC Educational Resources Information Center
Manzari, Laura
2013-01-01
This prestige study surveyed full-time faculty of American Library Association (ALA)-accredited programs in library and information studies regarding library and information science (LIS) journals. Faculty were asked to rate a list of eighty-nine LIS journals on a scale from 1 to 5 based on each journal's importance to their research and teaching.…
Time-Ordered Networks Reveal Limitations to Information Flow in Ant Colonies
Blonder, Benjamin; Dornhaus, Anna
2011-01-01
Background: An important function of many complex networks is to inhibit or promote the transmission of disease, resources, or information between individuals. However, little is known about how the temporal dynamics of individual-level interactions affect these networks and constrain their function. Ant colonies are a model comparative system for understanding general principles linking individual-level interactions to network-level functions because interactions among individuals enable integration of multiple sources of information to collectively make decisions, and allocate tasks and resources. Methodology/Findings: Here we show how the temporal and spatial dynamics of such individual interactions provide upper bounds to rates of colony-level information flow in the ant Temnothorax rugatulus. We develop a general framework for analyzing dynamic networks and a mathematical model that predicts how information flow scales with individual mobility and group size. Conclusions/Significance: Using thousands of time-stamped interactions between uniquely marked ants in four colonies of a range of sizes, we demonstrate that observed maximum rates of information flow are always slower than predicted, and are constrained by regulation of individual mobility and contact rate. By accounting for the ordering and timing of interactions, we can resolve important difficulties with network sampling frequency and duration, enabling a broader understanding of interaction network functioning across systems and scales. PMID:21625450
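A minimal sketch (plain Python; the contact-list format is hypothetical) of the core time-ordered computation: the earliest time information starting at one ant can reach every other ant along time-respecting interaction paths, which upper-bounds the rate of colony-level information flow.

    def earliest_arrival(contacts, source, t0=0.0):
        # contacts: iterable of (t, i, j) pairwise interaction events;
        # information can only move forward in time along interactions.
        arrival = {source: t0}
        inf = float("inf")
        for t, i, j in sorted(contacts):
            for a, b in ((i, j), (j, i)):
                if arrival.get(a, inf) <= t < arrival.get(b, inf):
                    arrival[b] = t
        return arrival  # ant -> earliest time it can be informed

    # toy usage: information from ant 0 reaches ant 2 only via ant 1;
    # ant 3's only contact happened too early to be informed
    print(earliest_arrival([(1.0, 0, 1), (2.5, 1, 2), (0.5, 2, 3)], source=0))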
Energy and time determine scaling in biological and computer designs
Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-01-01
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524
Satellite orbit and data sampling requirements
NASA Technical Reports Server (NTRS)
Rossow, William
1993-01-01
Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Niños occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.
Fluctuation scaling of quotation activities in the foreign exchange market
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro; Nishimura, Maiko; Hołyst, Janusz A.
2010-07-01
We study the scaling behavior of quotation activities for various currency pairs in the foreign exchange market. The components' centrality is estimated from multiple time series and visualized as a currency pair network. A power-law relationship between the mean of quotation activity and its standard deviation for each currency pair is found. The scaling exponent α and the ratio between common and specific fluctuations η increase with the length of the observation time window Δt. This means that while for Δt = 1 min the market dynamics are governed by specific processes, at longer time scales Δt > 100 min the common information flow becomes more important. We point out that quotation activities are not independently Poissonian for Δt = 1 min, and temporally or mutually correlated quotation activities can occur even at this time scale. A stochastic model for the foreign exchange market based on a bipartite graph representation is proposed.
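A minimal sketch (Python with numpy; synthetic counts stand in for real quotation records) of the fluctuation-scaling fit: quotation activity is counted per window of length Δt for each currency pair, and α is the slope of log σ versus log μ. Independent Poissonian activity recovers α ≈ 0.5; α drifting toward 1 with growing Δt signals the increasing weight of common fluctuations.

    import numpy as np

    def fluctuation_scaling_alpha(counts):
        # counts: (n_pairs, n_windows) quotation counts per time window.
        # Fits log sigma = log K + alpha * log mu across currency pairs.
        mu = counts.mean(axis=1)
        sigma = counts.std(axis=1)
        alpha, log_k = np.polyfit(np.log(mu), np.log(sigma), 1)
        return alpha

    # toy example: independent Poissonian activity should give alpha ~ 0.5
    rng = np.random.default_rng(0)
    rates = rng.uniform(1, 100, size=(50, 1))
    counts = rng.poisson(lam=rates, size=(50, 1000))
    print(fluctuation_scaling_alpha(counts))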
NASA Astrophysics Data System (ADS)
Liu, W.; Ning, T.; Shen, H.; Li, Z.
2017-12-01
Vegetation, climate seasonality, and topography are the main factors controlling the water and heat balance over a catchment, and they are usually empirically formulated into the controlling parameter of the Budyko model. However, their interactions on different time scales have not been fully addressed. Taking 30 catchments in China's Loess Plateau as an example, on the annual scale vegetation coverage was found to be poorly correlated with the climate seasonality index; therefore, both could be parameterized into the Budyko model. On the long-term scale, vegetation coverage tended to have close relationships with topographic conditions and climate seasonality, as confirmed by multi-collinearity tests; in that sense, vegetation information alone could fit the controlling parameter. By identifying the dominant controlling factors over different time scales, this study simplified the empirical parameterization of the Budyko formula. The above relationships, however, require further investigation over other regions and catchments.
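To make the parameterization concrete, the sketch below (Python with scipy) inverts a one-parameter Budyko curve for a catchment's controlling parameter; Fu's equation is used as a stand-in, since the abstract does not name the exact formulation. The fitted omega values can then be regressed against vegetation coverage or the climate seasonality index on the chosen time scale.

    from scipy.optimize import brentq

    def fu_curve(phi, omega):
        # Fu's form of the Budyko curve: evaporation ratio E/P as a function
        # of the aridity index phi = PET/P and the controlling parameter.
        return 1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega)

    def fit_omega(phi, evap_ratio):
        # Invert the curve for one catchment; a root exists whenever
        # 0 < evap_ratio < min(1, phi).
        return brentq(lambda w: fu_curve(phi, w) - evap_ratio, 1.01, 20.0)

    print(fit_omega(phi=1.5, evap_ratio=0.7))  # toy catchment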
NASA Astrophysics Data System (ADS)
Oświęcimka, Paweł; Livi, Lorenzo; Drożdż, Stanisław
2016-10-01
We investigate the scaling of the cross-correlations calculated for two-variable time series containing vertex properties in the context of complex networks. Time series of such observables are obtained by means of stationary, unbiased random walks. We consider three vertex properties that provide, respectively, short-, medium-, and long-range information regarding the topological role of vertices in a given network. In order to reveal the relation between these quantities, we applied the multifractal cross-correlation analysis technique, which provides information about the nonlinear effects in coupling of time series. We show that the considered network models are characterized by unique multifractal properties of the cross-correlation. In particular, it is possible to distinguish between Erdős-Rényi, Barabási-Albert, and Watts-Strogatz networks on the basis of fractal cross-correlation. Moreover, the analysis of protein contact networks reveals characteristics shared with both scale-free and small-world models.
Predicting Regional Drought on Sub-Seasonal to Decadal Time Scales
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Wang, Hailan; Suarez, Max; Koster, Randal
2011-01-01
Drought occurs on a wide range of time scales, and within a variety of different types of regional climates. It is driven foremost by an extended period of reduced precipitation, but it is the impacts on such quantities as soil moisture, streamflow and crop yields that are often most important from a user's perspective. While recognizing that different users have different needs for drought information, it is nevertheless important to understand that progress in predicting drought and satisfying such user needs largely hinges on our ability to improve predictions of precipitation. This talk reviews our current understanding of the physical mechanisms that drive precipitation variations on subseasonal to decadal time scales, and the implications for predictability and prediction skill. Examples are given highlighting the phenomena and mechanisms controlling precipitation on monthly (e.g., stationary Rossby waves, soil moisture), seasonal (ENSO) and decadal time scales (PDO and AMO).
Rapp, Thomas; Lacey, Loretto; Ousset, Pierre-Jean; Cowppli-Bony, Pascale; Vellas, Bruno; Orgogozo, Jean-Marc
2015-07-01
It is crucial to define health policies that target patients with the highest needs. In France, public financial support is provided to dependent patients: it can be used to finance informal care time and nonmedical care use. Eligibility for public subsidies and reimbursement of costs is associated with a specific tool: the autonomie gérontologie groupes iso-ressources (AGGIR) scale score. Our objective was to explore whether patients with Alzheimer's disease who are eligible for public financial support have greater needs than do noneligible patients. Using data from the Dépendance des patients atteints de la maladie d'Alzheimer en France study, we calculated nonmedical care expenditures (in €) using microcosting methods and informal care time demand (hours/month) using the Resource Use in Dementia questionnaire. We measured the burden associated with informal care provision with Zarit Burden Interview. We used a modified two-part model to explore the correlation between public financial support eligibility and these three variables. We find evidence of higher informal care use, higher informal caregivers' burden, and higher care expenditures when patients have an AGGIR scale score corresponding to public financial support eligibility. The AGGIR scale is useful to target patients with the highest costs and needs. Given our results, public subsidies could be used to further sustain informal caregivers networks by financing programs dedicated to lowering informal caregivers' burden. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Blume, T.; Zehe, E.; Bronstert, A.
2009-07-01
Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers while continuously monitoring soil moisture response allow for visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and binary indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved to be successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small scale heterogeneity in rainfall due to redistribution in the canopy and strong gradients in unsaturated conductivities leading to self-reinforcing flow paths.
Speed of Information Processing and Individual Differences in Intelligence.
1986-06-01
... years of age. As criteria, the students were given the Vocabulary and Block Design subtests of the Wechsler Adult Intelligence Scale--Revised (WAIS-R) ... Wechsler Adult Intelligence Scale (WAIS) and inspection time (Nettelbeck & Lally, 1976), most subsequent investigations found a less spectacular, but ... Design subtests of the Wechsler Adult Intelligence Scale--Revised (WAIS-R) and the Cognitive Laterality Battery (Gordon, 1983). Visual Processing Tasks ...
Vakorin, Vasily A.; Mišić, Bratislav; Krakovska, Olga; McIntosh, Anthony Randal
2011-01-01
Variability in source dynamics across the sources in an activated network may be indicative of how information is processed within the network. Information-theoretic tools allow one not only to characterize local brain dynamics but also to describe interactions between distributed brain activity. This study follows such a framework and explores the relations between signal variability and asymmetry in mutual interdependencies in a data-driven pipeline of non-linear analysis of neuromagnetic sources reconstructed from human magnetoencephalographic (MEG) data collected during a face recognition task. Asymmetry in non-linear interdependencies in the network was analyzed using transfer entropy, which quantifies predictive information transfer between the sources. Variability of the source activity was estimated using multi-scale entropy, quantifying the rate at which information is generated. The empirical results are supported by an analysis of synthetic data based on the dynamics of coupled systems with time delay in coupling. We found that the amount of information transferred from one source to another was correlated with the difference in variability between the dynamics of these two sources, with the directionality of net information transfer depending on the time scale at which the sample entropy was computed. The results based on synthetic data suggest that both time delay and strength of coupling can contribute to the relations between variability of brain signals and information transfer between them. Our findings support previous attempts to characterize functional organization of the activated brain based on a combination of non-linear dynamics and temporal features of brain connectivity, such as time delay. PMID:22131968
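The variability side of this analysis can be sketched compactly (Python with numpy): multi-scale entropy coarse-grains the signal by non-overlapping averaging and computes sample entropy at each scale. The parameters m and r and the scale list are conventional defaults, not the study's settings, and the O(N^2) distance matrix limits this sketch to short series.

    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        # Sample entropy with embedding dimension m and tolerance r (in units
        # of the series' standard deviation); assumes the series is long
        # enough for template matches to exist at dimension m + 1.
        x = np.asarray(x, dtype=float)
        tol = r * x.std()
        def matches(mm):
            emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            return np.sum(d <= tol) - len(emb)   # exclude self-matches
        return -np.log(matches(m + 1) / matches(m))

    def multiscale_entropy(x, scales=(1, 2, 4, 8)):
        # Coarse-grain by non-overlapping averaging, then sample entropy.
        out = []
        for s in scales:
            n = (len(x) // s) * s
            cg = np.asarray(x)[:n].reshape(-1, s).mean(axis=1)
            out.append(sample_entropy(cg))
        return out

    rng = np.random.default_rng(1)
    print(multiscale_entropy(rng.standard_normal(1000)))  # falls with scale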
Option pricing from wavelet-filtered financial series
NASA Astrophysics Data System (ADS)
de Almeida, V. T. X.; Moriconi, L.
2012-10-01
We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day.
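A minimal sketch of the low-pass filtering described here, using PyWavelets (pywt). With a series of length 4096 and level 8, only the 16 coarse approximation coefficients are kept, so about 99.6% of the coefficients are zeroed, in the spirit of the compression reported; the level and series length are illustrative.

    import numpy as np
    import pywt

    def lowpass_haar(prices, level=8):
        # Keep only the coarse (approximation) Haar coefficients and zero
        # out all detail coefficients, i.e. the small-scale component.
        coeffs = pywt.wavedec(prices, 'haar', level=level)
        filtered = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        return pywt.waverec(filtered, 'haar')[:len(prices)]

    # toy usage with a random-walk "index" series
    rng = np.random.default_rng(1)
    prices = np.cumsum(rng.standard_normal(4096)) + 5000.0
    smooth = lowpass_haar(prices)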
Information recall using relative spike timing in a spiking neural network.
Sterne, Philip
2012-08-01
We present a neural network that is capable of completing and correcting a spiking pattern given only a partial, noisy version. It operates in continuous time and represents information using the relative timing of individual spikes. The network is capable of correcting and recalling multiple patterns simultaneously. We analyze the network's performance in terms of information recall. We explore two measures of the capacity of the network: one that values the accurate recall of individual spike times and another that values only the presence or absence of complete patterns. Both measures of information are found to scale linearly in both the number of neurons and the period of the patterns, suggesting these are natural measures of network information. We show a smooth transition from encodings that provide precise spike times to flexible encodings that can encode many scenes. This makes it plausible that many diverse tasks could be learned with such an encoding.
A scale-invariant internal representation of time.
Shankar, Karthik H; Howard, Marc W
2012-01-01
We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
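A numerical sketch (Python with numpy) of the two-stage construction: a bank of leaky integrators computes a running Laplace transform of the stimulus, and Post's approximation of the inverse transform (a k-th derivative in s) yields time cells peaked at tau* = k/s in the past. The grid, k, and Euler integration are illustrative; the peak slightly underestimates the true lag at small k, by a factor k/(k+1).

    import math
    import numpy as np

    def timeline(f, dt=0.01, k=4, smin=0.2, smax=50.0, n_s=400):
        # Bank of leaky integrators dF/dt = -s*F + f(t): a running Laplace
        # transform of the stimulus history, one unit per decay rate s.
        s = np.geomspace(smin, smax, n_s)
        F = np.zeros(n_s)
        for ft in f:
            F += dt * (-s * F + ft)            # forward-Euler update
        # Approximate inverse Laplace transform (Post's formula): the k-th
        # derivative in s yields time cells peaked about tau* = k/s ago.
        dF = F.copy()
        for _ in range(k):
            dF = np.gradient(dF, s)
        T = ((-1.0) ** k / math.factorial(k)) * s ** (k + 1) * dF
        return k / s, T                         # (time in the past, activation)

    # toy usage: a unit-area impulse delivered 5 s in the past
    f = np.zeros(500)
    f[0] = 1.0 / 0.01
    tau_star, T = timeline(f)
    print(tau_star[np.argmax(T)])  # ~ 5 * k/(k+1) = 4 s; spread grows with age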
Investigation of aquifer-estuary interaction using wavelet analysis of fiber-optic temperature data
Henderson, R.D.; Day-Lewis, Frederick D.; Harvey, Charles F.
2009-01-01
Fiber-optic distributed temperature sensing (FODTS) provides sub-minute temporal and meter-scale spatial resolution over kilometer-long cables. Compared to conventional thermistor- or thermocouple-based technologies, which measure temperature at discrete (and commonly sparse) locations, FODTS offers nearly continuous spatial coverage, thus providing hydrologic information at spatiotemporal scales previously impossible. Large and information-rich FODTS datasets, however, pose challenges for data exploration and analysis. To date, FODTS analyses have focused on time-series variance as the means to discriminate between hydrologic phenomena. Here, we demonstrate the use of the continuous wavelet transform (CWT) and cross-wavelet transform (XWT) to analyze FODTS in the context of related hydrologic time series. We apply the CWT and XWT to data from Waquoit Bay, Massachusetts, to identify the location and timing of tidal pumping of submarine groundwater.
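A self-contained sketch (Python with numpy) of the CWT with an analytic Morlet mother wavelet, computed in the Fourier domain in the Torrence-and-Compo style; the synthetic one-minute temperature series with a semidiurnal (M2) tide stands in for a real FODTS channel. For w0 = 6 the Fourier period is about 1.03 times the scale, so the scales below read approximately as periods.

    import numpy as np

    def cwt_morlet(x, dt, scales, w0=6.0):
        # CWT with an analytic Morlet wavelet, one scale per row,
        # evaluated by multiplication in the Fourier domain.
        n = len(x)
        omega = 2.0 * np.pi * np.fft.fftfreq(n, dt)
        X = np.fft.fft(x - np.mean(x))
        W = np.empty((len(scales), n), dtype=complex)
        for k, s in enumerate(scales):
            psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2)
            psi_hat *= omega > 0               # analytic: positive freqs only
            W[k] = np.fft.ifft(X * np.sqrt(s) * np.conj(psi_hat))
        return W                                # |W|**2 is the scalogram

    # toy FODTS-like series: 1-min samples over 10 days with an M2 tide
    dt = 60.0
    t = np.arange(0, 10 * 86400, dt)
    rng = np.random.default_rng(0)
    temp = np.sin(2 * np.pi * t / (12.42 * 3600)) + 0.2 * rng.standard_normal(t.size)
    scales = np.geomspace(2 * 3600, 48 * 3600, 64)  # ~2 hours to 2 days
    power = np.abs(cwt_morlet(temp, dt, scales)) ** 2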
A graph-based approach to detect spatiotemporal dynamics in satellite image time series
NASA Astrophysics Data System (ADS)
Guttler, Fabio; Ienco, Dino; Nin, Jordi; Teisseire, Maguelonne; Poncelet, Pascal
2017-08-01
Enhancing the frequency of satellite acquisitions represents a key issue for the Earth Observation community nowadays. Repeated observations are crucial for monitoring purposes, particularly when intra-annual processes have to be taken into account. Time series of images constitute a valuable source of information in these cases. The goal of this paper is to propose a new methodological framework to automatically detect and extract spatiotemporal information from satellite image time series (SITS). Existing methods dealing with such kinds of data are usually classification-oriented and cannot provide information about evolutions and temporal behaviors. In this paper we propose a graph-based strategy that combines object-based image analysis (OBIA) with data mining techniques. Image objects computed at each individual timestamp are connected across the time series, generating a set of evolution graphs. Each evolution graph is associated with a particular area within the study site and stores information about its temporal evolution. Such information can be deeply explored at the evolution graph scale or used to compare the graphs and supply a general picture at the study site scale. We validated our framework on two study sites located in the South of France and involving different types of natural, semi-natural and agricultural areas. The results obtained from a Landsat SITS support the quality of the methodological approach and illustrate how the framework can be employed to extract and characterize spatiotemporal dynamics.
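A minimal sketch (Python with networkx) of the evolution-graph construction: segmented objects at consecutive timestamps are linked when their pixel overlap is high enough. The object representation (sets of pixel indices) and the overlap rule are illustrative assumptions, not the paper's exact OBIA criteria.

    import networkx as nx

    def evolution_graph(objects_per_time, min_overlap=0.3):
        # objects_per_time: list (one entry per timestamp) of dicts mapping
        # object id -> set of pixel indices. Objects at t-1 and t are linked
        # when their relative overlap exceeds the threshold.
        g = nx.DiGraph()
        for t, objects in enumerate(objects_per_time):
            for oid, pix in objects.items():
                g.add_node((t, oid), size=len(pix))
            if t == 0:
                continue
            for a, pa in objects_per_time[t - 1].items():
                for b, pb in objects.items():
                    ov = len(pa & pb) / min(len(pa), len(pb))
                    if ov >= min_overlap:
                        g.add_edge((t - 1, a), (t, b), overlap=ov)
        return g

    # toy usage: one object splitting into two at the second timestamp
    t0 = {1: set(range(0, 100))}
    t1 = {1: set(range(0, 50)), 2: set(range(50, 100))}
    print(list(evolution_graph([t0, t1]).edges(data=True)))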
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant.
Moreno-Garcia, Isabel M; Palacios-Garcia, Emilio J; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J; Varo-Martinez, Marta; Real-Calvo, Rafael J
2016-05-26
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performance was analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.
Stable functional networks exhibit consistent timing in the human brain.
Chapeton, Julio I; Inati, Sara K; Zaghloul, Kareem A
2017-03-01
Despite many advances in the study of large-scale human functional networks, the question of timing, stability, and direction of communication between cortical regions has not been fully addressed. At the cellular level, neuronal communication occurs through axons and dendrites, and the time required for such communication is well defined and preserved. At larger spatial scales, however, the relationship between timing, direction, and communication between brain regions is less clear. Here, we use a measure of effective connectivity to identify connections between brain regions that exhibit communication with consistent timing. We hypothesized that if two brain regions are communicating, then knowledge of the activity in one region should allow an external observer to better predict activity in the other region, and that such communication involves a consistent time delay. We examine this question using intracranial electroencephalography captured from nine human participants with medically refractory epilepsy. We use a coupling measure based on time-lagged mutual information to identify effective connections between brain regions that exhibit a statistically significant increase in average mutual information at a consistent time delay. These identified connections result in sparse, directed functional networks that are stable over minutes, hours, and days. Notably, the time delays associated with these connections are also highly preserved over multiple time scales. We characterize the anatomic locations of these connections, and find that the propagation of activity exhibits a preferred posterior to anterior temporal lobe direction, consistent across participants. Moreover, networks constructed from connections that reliably exhibit consistent timing between anatomic regions demonstrate features of a small-world architecture, with many reliable connections between anatomically neighbouring regions and few long range connections. Together, our results demonstrate that cortical regions exhibit functional relationships with well-defined and consistent timing, and the stability of these relationships over multiple time scales suggests that these stable pathways may be reliably and repeatedly used for large-scale cortical communication. Published by Oxford University Press on behalf of the Guarantors of Brain 2017. This work is written by US Government employees and is in the public domain in the United States.
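A minimal sketch (Python with numpy) of the coupling measure's core: a histogram estimate of time-lagged mutual information, scanned over delays to find the lag at which one region's activity best predicts another's. An effective connection in the study additionally requires the increase to be statistically significant and the delay consistent over time, which this sketch omits.

    import numpy as np

    def mutual_info(x, y, bins=16):
        # Histogram estimate of the mutual information I(X;Y) in bits.
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

    def best_delay(x, y, max_lag=100):
        # Delay (in samples) at which knowing x best predicts y.
        lags = np.arange(1, max_lag + 1)
        mi = [mutual_info(x[:-L], y[L:]) for L in lags]
        return lags[int(np.argmax(mi))], max(mi)

    # toy usage: y echoes x with a 25-sample delay plus noise
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    y = np.roll(x, 25) + 0.5 * rng.standard_normal(5000)
    print(best_delay(x, y))   # -> delay near 25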
Tait, Alan R; Voepel-Lewis, Terri; Chetcuti, Stanley J; Brennan-Martinez, Colleen; Levine, Robert
2014-05-01
Standard print and verbal information provided to patients undergoing treatments is often difficult to understand and may impair their ability to be truly informed. This study examined the effect of an interactive multimedia informational program with in-line exercises and corrected feedback on patients' real-time understanding of their cardiac catheterization procedure. A total of 151 adult patients scheduled for diagnostic cardiac catheterization were randomized to receive information about their procedure using either the standard institutional verbal and written information (SI) or an interactive iPad-based informational program (IPI). Subject understanding was evaluated using semi-structured interviews at baseline, immediately following catheterization, and 2 weeks after the procedure. In addition, for those randomized to the IPI, the ability to respond correctly to several in-line exercises was recorded. Subjects' perceptions of, and preferences for, the information delivery were also elicited. Subjects randomized to the IPI program had significantly better understanding following the intervention compared with those randomized to the SI group (8.3±2.4 vs 7.4±2.5, respectively, on a 0-12 scale where 12=complete understanding, P<0.05). First-time correct responses to the in-line exercises ranged from 24.3% to 100%. Subjects reported that the in-line exercises were very helpful (9.1±1.7, 0-10 scale, where 10=extremely helpful) and the iPad program very easy to use (9.0±1.6, 0-10 scale, where 10=extremely easy), suggesting good clinical utility. Results demonstrated the ability of an interactive multimedia program to enhance patients' understanding of their medical procedure. Importantly, the incorporation of in-line exercises permitted identification of knowledge deficits, provided corrected feedback, and confirmed the patients' understanding of treatment information in real time when consent was sought. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Critical Song Features for Auditory Pattern Recognition in Crickets
Meckenhäuser, Gundula; Hennig, R. Matthias; Nawrot, Martin P.
2013-01-01
Many different invertebrate and vertebrate species use acoustic communication for pair formation. In the cricket Gryllus bimaculatus, females recognize their species-specific calling song and localize singing males by positive phonotaxis. The song pattern of males has a clear structure consisting of brief and regular pulses that are grouped into repetitive chirps. Information is thus present on a short and a long time scale. Here, we ask which structural features of the song critically determine the phonotactic performance. To this end we employed artificial neural networks to analyze a large body of behavioral data that measured females’ phonotactic behavior under systematic variation of artificially generated song patterns. In a first step we used four non-redundant descriptive temporal features to predict the female response. The model prediction showed a high correlation with the experimental results. We used this behavioral model to explore the integration of the two different time scales. Our result suggested that only an attractive pulse structure in combination with an attractive chirp structure reliably induced phonotactic behavior to signals. In a further step we investigated all feature sets, each one consisting of a different combination of eight proposed temporal features. We identified feature sets of size two, three, and four that achieve highest prediction power by using the pulse period from the short time scale plus additional information from the long time scale. PMID:23437054
Effects of the 7-8-year cycle in daily mean air temperature as a cross-scale information transfer
NASA Astrophysics Data System (ADS)
Jajcay, Nikola; Hlinka, Jaroslav; Paluš, Milan
2015-04-01
Using a novel nonlinear time-series analysis method, an information transfer from larger to smaller scales of air temperature variability has been observed in daily mean surface air temperature (SAT) data from European stations, as the influence of the phase of slow oscillatory phenomena with periods around 6-11 years on the amplitudes of variability at smaller temporal scales, from a few months to 4-5 years [1]. The strongest effect is exerted by an oscillatory mode with a period close to 8 years, and its influence can be seen in 1-2 °C differences of the SAT means taken conditionally on the phase of the 8-year cycle. The size of this effect, however, changes in space and time. The changes in time are studied using a sliding-window technique, showing that the effect evolves in time and has become stronger and significant during the last decades. The sliding-window technique was used along with a seasonal division of the data, and it has been found that the cycle is most pronounced in the winter season. Different types of surrogate data are applied in order to establish statistical significance and distinguish the effect of the 7-8-year cycle from climate variability on shorter time scales. [1] M. Paluš, Phys. Rev. Lett. 112, 078702 (2014). This study is supported by the Ministry of Education, Youth and Sports of the Czech Republic within the Program KONTAKT II, Project No. LH14001.
Efficient Type Representation in TAL
NASA Technical Reports Server (NTRS)
Chen, Juan
2009-01-01
Certifying compilers generate proofs for low-level code that guarantee safety properties of the code. Type information is an essential part of safety proofs. But the size of type information remains a concern for certifying compilers in practice. This paper demonstrates type representation techniques in a large-scale compiler that achieve both concise type information and efficient type checking. In our 200,000-line certifying compiler, the size of type information is about 36% of the size of pure code and data for our benchmarks, which is, to the best of our knowledge, the best reported result. The type checking time is about 2% of the compilation time.
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time scale algorithms--one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we establish the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator remains consistent as well.
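A toy sketch (Python with numpy) of the mixed time-scale structure described here: each sensor mixes a consensus term over its neighbors with an innovations term from its own local measurement, the consensus weight decaying more slowly than the innovations weight. The weight schedules, ring topology, and observation model are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def distributed_estimate(H, y_fun, A, dim, n_iter=3000):
        # H[i]: local observation matrix of sensor i; y_fun(i, t): its noisy
        # measurement at time t; A: adjacency matrix of the comm. graph.
        n = len(H)
        x = np.zeros((n, dim))
        for t in range(1, n_iter + 1):
            beta = t ** -0.6              # consensus weight (slower decay)
            alpha = 1.0 / t               # innovations weight (faster decay)
            x_new = x.copy()
            for i in range(n):
                nbrs = np.nonzero(A[i])[0]
                consensus = (x[nbrs] - x[i]).sum(axis=0)
                innovation = H[i].T @ (y_fun(i, t) - H[i] @ x[i])
                x_new[i] = x[i] + beta * consensus + alpha * innovation
            x = x_new
        return x

    # toy usage: ring of 6 sensors, each seeing one component of a 2-D field;
    # the field is globally observable only through cooperation
    theta = np.array([1.0, -2.0])
    n, dim = 6, 2
    A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
    H = [np.eye(dim)[[i % dim]] for i in range(n)]
    rng = np.random.default_rng(0)
    y_fun = lambda i, t: H[i] @ theta + 0.5 * rng.standard_normal(1)
    print(distributed_estimate(H, y_fun, A, dim).mean(axis=0))  # near theta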
NASA Astrophysics Data System (ADS)
Krell, N.; Evans, T. P.; Estes, L. D.; Caylor, K. K.
2017-12-01
While international metrics of food security and water availability are generated as spatial averages at the regional to national levels, climate variability impacts are differentially felt at the household level. This project investigated scales of variability of climate impacts on smallholder farmers using social and environmental data in central Kenya. Using sub-daily real-time environmental measurements to monitor smallholder agriculture, we investigated how changes in seasonal precipitation affected food security around Laikipia county from September 2015 to present. We also conducted SMS-based surveys of over 700 farmers to understand farmers' decision-making within the growing season. Our results highlight field-scale heterogeneity in biophysical and social factors governing crop yields using locally sensed real-time environmental data and weekly farmer-reported information about planting, harvesting, irrigation, and crop yields. Our preliminary results show relationships between changes in seasonal precipitation, NDVI, and soil moisture related to crop yields and decision-making at several scales. These datasets present a unique opportunity to collect highly spatially and temporally resolved information from data-poor regions at the household level.
Quantum gravity as an information network self-organization of a 4D universe
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-10-01
I propose a quantum gravity model in which the fundamental degrees of freedom are information bits for both discrete space-time points and links connecting them. The Hamiltonian is a very simple network model consisting of a ferromagnetic Ising model for space-time vertices and an antiferromagnetic Ising model for the links. As a result of the frustration between these two terms, the ground state self-organizes as a new type of low-clustering graph with finite Hausdorff dimension 4. The spectral dimension is lower than the Hausdorff dimension: it coincides with the Hausdorff dimension 4 at a first quantum phase transition corresponding to an IR fixed point, while at a second quantum phase transition, which describes small scales, space-time dissolves into disordered information bits. The large-scale dimension 4 of the universe is related to the upper critical dimension 4 of the Ising model. At finite temperatures the universe graph emerges without a big bang and without singularities from a ferromagnetic phase transition in which space-time itself forms out of a hot soup of information bits. When the temperature is lowered the universe graph unfolds and expands by lowering its connectivity, a mechanism I have called topological expansion. The model admits topological black hole excitations corresponding to graphs containing holes with no space-time inside and with "Schwarzschild-like" horizons with a lower spectral dimension.
Spatio-temporal Granger causality: a new framework
Luo, Qiang; Lu, Wenlian; Cheng, Wei; Valdes-Sosa, Pedro A.; Wen, Xiaotong; Ding, Mingzhou; Feng, Jianfeng
2015-01-01
That physiological oscillations of various frequencies are present in fMRI signals is the rule, not the exception. Herein, we propose a novel theoretical framework, spatio-temporal Granger causality, which allows us to more reliably and precisely estimate the Granger causality from experimental datasets possessing time-varying properties caused by physiological oscillations. Within this framework, Granger causality is redefined as a global index measuring the directed information flow between two time series with time-varying properties. Both theoretical analyses and numerical examples demonstrate that Granger causality is a monotonically increasing function of the temporal resolution used in the estimation. This is consistent with the general principle of coarse graining, which causes information loss by smoothing out very fine-scale details in time and space. Our results confirm that the Granger causality at the finer spatio-temporal scales considerably outperforms the traditional approach in terms of an improved consistency between two resting-state scans of the same subject. To optimally estimate the Granger causality, the proposed theoretical framework is implemented through a combination of several approaches, such as dividing the optimal time window and estimating the parameters at the fine temporal and spatial scales. Taken together, our approach provides a novel and robust framework for estimating the Granger causality from fMRI, EEG, and other related data. PMID:23643924
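As a baseline reference for the quantity being generalized here, the sketch below (Python with numpy) computes classical time-domain Granger causality x -> y as the log-ratio of restricted to full AR residual variances. The order p and the synthetic example are illustrative; the paper's spatio-temporal estimator additionally divides the data into optimal time windows and works at finer scales, which this sketch omits.

    import numpy as np

    def granger_causality(x, y, p=5):
        # Granger causality x -> y with AR order p:
        # ln(var of restricted residuals / var of full residuals).
        n = len(y)
        Y = y[p:]
        lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
        lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
        ones = np.ones((n - p, 1))
        restricted = np.hstack([ones, lags_y])
        full = np.hstack([ones, lags_y, lags_x])
        def rss(A):
            beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
            r = Y - A @ beta
            return r @ r
        return np.log(rss(restricted) / rss(full))

    # toy usage: x drives y with a two-step delay
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)
    y = np.zeros(2000)
    for t in range(2, 2000):
        y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.standard_normal()
    print(granger_causality(x, y), granger_causality(y, x))  # large vs ~0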
USDA-ARS?s Scientific Manuscript database
The separate components of evapotranspiration (ET) provide critical information about the pathways and time scales over which water is returned to the atmosphere, but ecosystem-scale measurements of transpiration (T) and evaporation (E) remain elusive. We propose a novel determination of average E a...
Nash, Denis; Elul, Batya; Rabkin, Miriam; Tun, May; Saito, Suzue; Becker, Mark; Nuwagaba-Biribonwoha, Harriet
2009-11-01
Program monitoring and evaluation (M&E) has the potential to be a cornerstone of health systems strengthening and of evidence-informed implementation and scale-up of HIV-related services in resource-limited settings. We discuss common challenges to M&E systems used in the rapid scale-up of HIV services as well as innovations that may have relevance to systems used to monitor, evaluate, and inform health systems strengthening. These include (1) Web-based applications with decentralized data entry and real-time access to summary reporting; (2) timely feedback of information to site and district staff; (3) site-level integration of traditionally siloed program area indicators; (4) longitudinal tracking of program and site characteristics; (5) geographic information systems; and (6) use of routinely collected aggregate data for epidemiologic analysis and operations research. Although conventionally used in the context of vertical programs, these approaches can form a foundation on which data relevant to other health services and systems can be layered, including prevention services, primary care, maternal-child health, and chronic disease management. Guiding principles for sustainable national M&E systems include country-led development and ownership, support for national programs and policies, interoperability, and employment of an open-source approach to software development.
NASA Astrophysics Data System (ADS)
Tzanis, Andreas
2013-02-01
The Ground Probing Radar (GPR) is a valuable tool for near surface geological, geotechnical, engineering, environmental, archaeological and other work. GPR images of the subsurface frequently contain geometric information (constant or variable-dip reflections) from various structures such as bedding, cracks, fractures, etc. Such features are frequently the target of the survey; however, they are usually not good reflectors and they are highly localized in time and in space. Their scale is therefore a factor significantly affecting their detectability. At the same time, the GPR method is very sensitive to broadband noise from buried small objects, electromagnetic anthropogenic activity and systemic factors, which frequently blurs the reflections from such targets. This paper introduces a method to de-noise GPR data and extract geometric information from scale-and-dip dependent structural features, based on one-dimensional B-Spline Wavelets, two-dimensional directional B-Spline Wavelet (BSW) Filters and two-dimensional Gabor Filters. A directional BSW Filter is built by sidewise arranging s identical one-dimensional wavelets of length L, tapering the s-parallel direction (span) with a suitable window function and rotating the resulting matrix to the desired orientation. The length L of the wavelet defines the temporal and spatial scale to be isolated and the span determines the length over which to smooth (spatial resolution). The Gabor Filter is generated by multiplying an elliptical Gaussian by a complex plane wave; at any orientation the temporal or spatial scale(s) to be isolated are determined by the wavelength λ of the plane wave and the spatial resolution by the spatial aspect ratio γ, which specifies the ellipticity of the support of the Gabor function. At any orientation, both types of filter may be tuned at any frequency or spatial wavenumber by varying the length or the wavelength respectively. The filters can be applied directly to two-dimensional radargrams, in which case they abstract information about given scales at given orientations. Alternatively, they can be rotated to different orientations under adaptive control, so that they remain tuned at a given frequency or wavenumber and the resulting images can be stacked in the LS sense, so as to obtain a complete representation of the input data at a given temporal or spatial scale. In addition to isolating geometrical information for further scrutiny, the proposed filtering methods can be used to enhance the S/N ratio in a manner particularly suitable for GPR data, because the frequency response of the filters mimics the frequency characteristics of the source wavelet. Finally, signal attenuation and temporal localization are closely associated: low attenuation interfaces tend to produce reflections rich in high frequencies and fine-scale localization as a function of time. Conversely, high attenuation interfaces will produce reflections rich in low frequencies and broad localization. Accordingly, the temporal localization characteristics of the filters may be exploited to investigate the characteristics of signal propagation (hence material properties). The method is shown to be very effective in extracting fine to coarse scale information from noisy data and is demonstrated with applications to noisy GPR data from archaeometric and geotechnical surveys.
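A minimal sketch (Python with numpy/scipy) of a Gabor filter tuned to a given wavelength λ and orientation θ with aspect ratio γ, plus an adaptive-orientation scan at fixed tuning. The kernel size, the σ-λ relation, and the max-magnitude combination rule are simplifications (the paper stacks the rotated responses in the LS sense).

    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(wavelength, theta, gamma=0.5, size=31):
        # Real part of a 2-D Gabor filter: an elliptical Gaussian (aspect
        # ratio gamma) multiplied by a plane wave of the given wavelength,
        # rotated to orientation theta (radians).
        half = size // 2
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        yr = -xx * np.sin(theta) + yy * np.cos(theta)
        sigma = 0.56 * wavelength           # common bandwidth choice
        g = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
        return g * np.cos(2 * np.pi * xr / wavelength)

    def directional_scan(radargram, wavelength, n_angles=18):
        # Rotate the filter while keeping its tuning fixed, then keep the
        # strongest per-pixel response over all orientations.
        angles = np.linspace(0, np.pi, n_angles, endpoint=False)
        responses = [convolve(radargram, gabor_kernel(wavelength, th))
                     for th in angles]
        return np.max(np.abs(np.stack(responses)), axis=0)

    # toy usage on a random stand-in for a radargram
    section = np.random.default_rng(0).standard_normal((128, 256))
    dipping = directional_scan(section, wavelength=12.0)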
NASA Astrophysics Data System (ADS)
Conceição, Ricardo; Silva, Hugo Gonçalves; Bennett, Alec; Salgado, Rui; Bortoli, Daniele; Costa, Maria João; Collares Pereira, Manuel
2018-01-01
The spectral response of atmospheric electric potential gradient gives important information about phenomena affecting this gradient at characteristic time scales ranging from years (e.g., solar modulation) to fractions of a second (e.g., turbulence). While long-term time scales have been exhaustively explored, short-term scales have received less attention. At such frequencies, space-charge transport inside the planetary boundary layer becomes a sizeable contribution to the potential gradient variability. For the first time, co-located (Évora, Portugal) measurements of boundary-layer backscatter profiles and the 100-Hz potential gradient are reported. Five campaign days are analyzed, providing evidence for a relation between high-frequency response of the potential gradient and strong dry convection.
Estimating Anesthesia Time Using the Medicare Claim: A Validation Study
Silber, Jeffrey H.; Rosenbaum, Paul R.; Even-Shoshan, Orit; Mi, Lanyu; Kyle, Fabienne; Teng, Yun; Bratzler, Dale W.; Fleisher, Lee A.
2012-01-01
Introduction: Procedure length is a fundamental variable associated with quality of care, though seldom studied on a large scale. We sought to estimate procedure length through information obtained in the anesthesia claim submitted to Medicare, to validate this method for future studies. Methods: The Obesity and Surgical Outcomes Study enlisted 47 hospitals located across New York, Texas and Illinois to study patients undergoing hip, knee, colon and thoracotomy procedures. 15,914 charts were abstracted to determine body mass index and initial patient physiology. Included in this abstraction were induction, cut, close and recovery room times. This chart information was merged with Medicare claims, which included anesthesia Part B billing information. Correlations between chart times and claim times were analyzed, models developed, and median absolute differences in minutes calculated. Results: Of the 15,914 eligible patients, there were 14,369 where both chart and claim times were available for analysis. In these 14,369, the Spearman correlation between chart and claim time was 0.94 (95% CI 0.94, 0.95) and the median absolute difference between chart and claim time was only 5 minutes (95% CI: 5.0, 5.5). The anesthesia claim can also be used to estimate surgical procedure length, with only a modest increase in error. Conclusion: The anesthesia bill found in Medicare claims provides an excellent source of information for studying operative time on a vast scale throughout the United States. However, errors in both chart abstraction and anesthesia claims can occur. Care must be taken in the handling of outliers in these data. PMID:21720242
Bush Encroachment Mapping for Africa - Multi-Scale Analysis with Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Graw, V. A. M.; Oldenburg, C.; Dubovyk, O.
2015-12-01
Bush encroachment, the spread of inedible invasive woody species at the expense of grassland, is a global problem that especially affects the savanna ecosystems of Africa. Livestock is directly affected by the resulting loss of grazing land. For many small-scale farmers in developing countries, livestock represents a type of insurance in times of crop failure or drought. Bush encroachment is also a problem for crop production. Studies on the mapping of bush encroachment have so far focused on small scales using high-resolution data and rarely provide information beyond the national level. Therefore, a process chain was developed using a multi-scale approach to detect bush encroachment for the whole of Africa. The bush encroachment map is calibrated with ground-truth data provided by experts in Southern, Eastern and Western Africa. By up-scaling location-specific information across different levels of remote sensing imagery - 30 m with Landsat images and 250 m with MODIS data - a map is created showing potential and actual areas of bush encroachment on the African continent, thereby providing an innovative approach to mapping bush encroachment at the regional scale. A classification approach links location data based on GPS information from experts to the respective pixels in the remote sensing imagery. Supervised classification is used, with actual bush encroachment observations serving as the training samples for the up-scaling. The classification technique is based on Random Forests and regression trees, a machine learning classification approach. Working on multiple scales and with the help of field data, an innovative approach can be presented showing areas affected by bush encroachment on the African continent. This information can help to prevent further grassland decrease and to identify the regions where land management strategies are of high importance to sustain livestock keeping and thereby also secure livelihoods in rural areas.
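As an illustration of the supervised classification step described above (not the authors' exact pipeline), the following sketch assumes scikit-learn and hypothetical inputs: per-pixel band values sampled at the expert GPS locations, with a binary encroachment label:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_encroachment_classifier(features, labels, n_trees=500):
    # features: (n_samples, n_bands) band values at ground-truth pixels
    # labels: 1 = bush encroachment, 0 = other cover (hypothetical coding)
    clf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                 random_state=0)
    clf.fit(features, labels)
    print("out-of-bag accuracy:", clf.oob_score_)   # internal validation
    return clf

def map_encroachment(clf, image):
    # Classify a (rows, cols, n_bands) image stack pixel by pixel.
    rows, cols, bands = image.shape
    predicted = clf.predict(image.reshape(-1, bands))
    return predicted.reshape(rows, cols)

A classifier trained this way could be applied to the 250 m MODIS stack and, with suitably resampled features, to the 30 m Landsat stack to produce continental and regional maps.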
During running in place, grid cells integrate elapsed time and distance run
Kraus, Benjamin J.; Brandon, Mark P.; Robinson, Robert J.; Connerney, Michael A.; Hasselmo, Michael E.; Eichenbaum, Howard
2015-01-01
Summary: The spatial scale of grid cells may be provided by self-generated motion information or by external sensory information from environmental cues. To determine whether grid cell activity reflects distance traveled or elapsed time independent of external information, we recorded grid cells as animals ran in place on a treadmill. Grid cell activity was only weakly influenced by location but most grid cells and other neurons recorded from the same electrodes strongly signaled a combination of distance and time, with some signaling only distance or time. Grid cells were more sharply tuned to time and distance than non-grid cells. Many grid cells exhibited multiple firing fields during treadmill running, parallel to the periodic firing fields observed in open fields, suggesting a common mode of information processing. These observations indicate that, in the absence of external dynamic cues, grid cells integrate self-generated distance and time information to encode a representation of experience. PMID:26539893
Extracting information from AGN variability
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.; Vogeley, Michael S.; Richards, Gordon T.
2017-09-01
Active galactic nuclei (AGNs) exhibit rapid, high-amplitude stochastic flux variations across the entire electromagnetic spectrum on time-scales ranging from hours to years. The cause of this variability is poorly understood. We present a Green's function-based method for using variability to (1) measure the time-scales on which flux perturbations evolve and (2) characterize the driving flux perturbations. We model the observed light curve of an AGN as a linear differential equation driven by stochastic impulses. We analyse the light curve of the Kepler AGN Zw 229-15 and find that the observed variability behaviour can be modelled as a damped harmonic oscillator perturbed by a coloured noise process. The model power spectrum turns over on time-scale 385 d. On shorter time-scales, the log-power-spectrum slope varies between 2 and 4, explaining the behaviour noted by previous studies. We recover and identify both the 5.6 and 67 d time-scales reported by previous work using the Green's function of the Continuous-time AutoRegressive Moving Average equation rather than by directly fitting the power spectrum of the light curve. These are the time-scales on which flux perturbations grow, and on which flux perturbations decay back to the steady-state flux level, respectively. We make the software package kālī used to study light curves using our method available to the community.
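The model class is easy to reproduce numerically. Below is a minimal sketch (assuming numpy; this is not the kālī estimator itself) that integrates a damped harmonic oscillator driven by noise with an Euler-Maruyama scheme; for simplicity the driving here is white rather than the coloured noise inferred for Zw 229-15:

import numpy as np

def simulate_dho(omega0, zeta, dt, n_steps, noise_amp=1.0, seed=0):
    # x'' + 2*zeta*omega0*x' + omega0**2 * x = stochastic driving
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    light_curve = np.empty(n_steps)
    for i in range(n_steps):
        accel = -2.0 * zeta * omega0 * v - omega0**2 * x
        v += accel * dt + noise_amp * np.sqrt(dt) * rng.standard_normal()
        x += v * dt
        light_curve[i] = x
    return light_curve

# Illustration: a power-spectrum turnover near ~385 d corresponds roughly
# to omega0 ~ 2*pi/385 per day.
lc = simulate_dho(omega0=2 * np.pi / 385.0, zeta=0.9, dt=0.1, n_steps=100000)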
Information-sharing tendency on Twitter and time evolution of tweeting
NASA Astrophysics Data System (ADS)
Kwon, H. W.; Kim, H. S.; Lee, K.; Choi, M. Y.
2013-03-01
While topics on Twitter may be categorized according to their predictability and sustainability, some topics have characteristics depending on the time scale. Here we propose a good measure for the transition of sustainability, which we call the information-sharing tendency, and find that the unpredictability on Twitter is provoked by the exposure of Twitter users to external environments, e.g., mass media and other social network services. In addition, it is demonstrated that the numbers of articles and comments on on-line newspapers serve as plausible measures of exposure. From such measures of exposure, the time evolution of tweeting can be described, when the information-sharing tendency is known.
Berwig, Martin; Dichter, Martin Nikolaus; Albers, Bernd; Wermke, Katharina; Trutschel, Diana; Seismann-Petersen, Swantje; Halek, Margareta
2017-04-17
Caring for people with dementia at home requires a significant amount of time, organization, and commitment. Therefore, informal caregivers, mainly relatives, of people with dementia often feel a high burden. Although on-site support groups are known to have positive effects on the subjective well-being (SWB) and perceived social support of informal caregivers, there are cases in which relatives have either no time or no opportunity to leave the person alone, or in which there are no support groups nearby. The TALKING TIME project aims to close this supply gap by providing structured telephone-based support groups in Germany for the first time. International studies have shown benefits for informal caregivers. The TALKING TIME study is a randomized controlled trial. The effects of the 3-month TALKING TIME intervention will be compared with those of a control group without intervention at two time points (baseline = T0, after 3 months = T1). The control group will receive the TALKING TIME intervention after T1. With a planned sample size of 88 participants, the study is powered to detect an estimated effect size of 0.70 for psychological quality of life, considering an α of 0.05 (two-sided) and a power of 80%. Informal caregivers are eligible if they are 18 years of age or older and have cared for a person with diagnosed dementia for at least four hours, four days per week, in the past six months. The exclusion criteria are psychiatric disorders of the informal caregiver. The primary outcome is the mental component summary of the SF-12 rated by informal caregivers. The secondary outcomes for informal caregivers are the physical component summary of the SF-12, the Perceived Social Support Caregiver Scale (SSCS) score, and the Caregiver Reaction Scale (CRS) score. The secondary outcome for care recipients is the Neuropsychiatric Inventory (NPI-Q). For the process evaluation, different quantitative and qualitative data sources will be collected to address reach, fidelity, dosage and context. The results will provide further information on the effectiveness and optimization of telephone-based support groups for informal caregivers of people with dementia, which can help guide the further development of effective telephone-based social support group interventions. Clinical Trials: NCT02806583, June 9, 2016.
Can correlations among receptors affect the information about the stimulus?
NASA Astrophysics Data System (ADS)
Singh, Vijay; Tchernookov, Martin; Nemenman, Ilya
2014-03-01
In the context of neural information processing, it has been observed that, compared to the case of independent receptors, correlated receptors can often carry more information about the stimulus. We explore similar ideas in the context of molecular information processing, analyzing a cell with receptors whose activity is intrinsically negatively correlated because they compete for the same ligand molecules. We show analytically that, when the involved biochemical interactions are linear, the information between the number of molecules captured by the receptors and the ligand concentration does not depend on correlations among the receptors. For a nonlinear kinetic network, correlations similarly do not change the amount of information for observation times much shorter or much longer than the characteristic time scale of ligand molecule binding and unbinding. However, at intermediate times, correlations can increase the amount of available information. This work has been supported by the James S. McDonnell Foundation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elber, Ron
Atomically detailed computer simulations of complex molecular events have attracted the imagination of many researchers in the field by promising comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade, significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances.
Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.
NASA Astrophysics Data System (ADS)
Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric
2016-04-01
SMAP sensors collect moisture information in the top soil at spatial resolutions of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications for human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales, from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products, including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near-real-time VIC simulation is intended to provide a source of large-scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.
Coping strategies among patients with newly diagnosed amyotrophic lateral sclerosis.
Jakobsson Larsson, Birgitta; Nordin, Karin; Askmark, Håkan; Nygren, Ingela
2014-11-01
To prospectively identify different coping strategies among newly diagnosed amyotrophic lateral sclerosis patients, whether these strategies change over time, and whether physical function, psychological well-being, age and gender correlate with the use of different coping strategies. Amyotrophic lateral sclerosis is a fatal disease with impact on both physical function and psychological well-being. Different coping strategies are used to manage symptoms and disease progression, but knowledge about coping in newly diagnosed amyotrophic lateral sclerosis patients is scarce. This was a prospective study with a longitudinal and descriptive design. A total of 33 patients were included, and evaluations were made at two time points: one to three months and six months after diagnosis. Patients were asked to complete the Motor Neuron Disease Coping Scale and the Hospital Anxiety and Depression Scale. Physical function was estimated using the revised Amyotrophic Lateral Sclerosis Functional Rating Scale. The most commonly used strategies were support and independence. Avoidance/venting and information seeking were seldom used at both time points. The use of information seeking decreased between the two time points. Men did not differ from women, but patients ≤64 years used positive action more often than older patients. The Amyotrophic Lateral Sclerosis Functional Rating Scale score was positively correlated with positive action at time point 1, but not at time point 2. Patients' psychological well-being was correlated with the use of different coping strategies. Support and independence were the most used coping strategies, and the use of different strategies changed over time. Psychological well-being was correlated with different coping strategies in newly diagnosed amyotrophic lateral sclerosis patients. Knowledge about coping strategies in the early stage of the disease may help nurses to improve and develop the care and support for these patients. © 2014 John Wiley & Sons Ltd.
Coarse-Grained Models for Protein-Cell Membrane Interactions
Bradley, Ryan; Radhakrishnan, Ravi
2015-01-01
The physiological properties of biological soft matter are the product of collective interactions, which span many time and length scales. Recent computational modeling efforts have helped illuminate experiments that characterize the ways in which proteins modulate membrane physics. Linking these models across time and length scales in a multiscale model explains how atomistic information propagates to larger scales. This paper reviews continuum modeling and coarse-grained molecular dynamics methods, which connect atomistic simulations and single-molecule experiments with the observed microscopic or mesoscale properties of soft-matter systems essential to our understanding of cells, particularly those involved in sculpting and remodeling cell membranes. PMID:26613047
Jones, Joseph L.; Fulford, Janice M.; Voss, Frank D.
2002-01-01
A system of numerical hydraulic modeling, geographic information system processing, and Internet map serving, supported by new data sources and application automation, was developed that generates inundation maps for forecast floods in near real time and makes them available through the Internet. Forecasts for flooding are generated by the National Weather Service (NWS) River Forecast Center (RFC); these forecasts are retrieved automatically by the system and prepared for input to a hydraulic model. The model, TrimR2D, is a new, robust, two-dimensional model capable of simulating wide varieties of discharge hydrographs and relatively long stream reaches. TrimR2D was calibrated for a 28-kilometer reach of the Snoqualmie River in Washington State, and is used to estimate flood extent, depth, arrival time, and peak time for the RFC forecast. The results of the model are processed automatically by a Geographic Information System (GIS) into maps of flood extent, depth, and arrival and peak times. These maps subsequently are processed into formats acceptable by an Internet map server (IMS). The IMS application is a user-friendly interface to access the maps over the Internet; it allows users to select what information they wish to see presented and allows the authors to define scale-dependent availability of map layers and their symbology (appearance of map features). For example, the IMS presents a background of a digital USGS 1:100,000-scale quadrangle at smaller scales, and automatically switches to an ortho-rectified aerial photograph (a digital photograph that has camera angle and tilt distortions removed) at larger scales so viewers can see ground features that help them identify their area of interest more effectively. For the user, the option exists to select either background at any scale. Similar options are provided for both the map creator and the viewer for the various flood maps. This combination of a robust model, emerging IMS software, and application interface programming should allow the technology developed in the pilot study to be applied to other river systems where NWS forecasts are provided routinely.
A Multi-Scale, Integrated Approach to Representing Watershed Systems
NASA Astrophysics Data System (ADS)
Ivanov, Valeriy; Kim, Jongho; Fatichi, Simone; Katopodes, Nikolaos
2014-05-01
Understanding and predicting process dynamics across a range of scales are fundamental challenges for basic hydrologic research and practical applications. This is particularly true when larger-spatial-scale processes, such as surface-subsurface flow and precipitation, need to be translated to fine space-time scale dynamics of processes, such as channel hydraulics and sediment transport, that are often of primary interest. Inferring characteristics of fine-scale processes from uncertain coarse-scale climate projection information poses additional challenges. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion, and sediment transport, tRIBS+VEGGIE-FEaST. The model aims to take advantage of the current wealth of data representing watershed topography, vegetation, soil, and land use, as well as to explore the hydrological effects of physical factors and their feedback mechanisms over a range of scales. We illustrate how the modeling system connects the precipitation-runoff partitioning process to the dynamics of flow, erosion, and sedimentation, and how the soil's substrate condition can impact the latter processes, resulting in a non-unique response. We further illustrate an approach to using downscaled climate change information with a process-based model to infer the moments of hydrologic variables in future climate conditions and to explore the impact of climate information uncertainty.
Downscaling ocean conditions: Experiments with a quasi-geostrophic model
NASA Astrophysics Data System (ADS)
Katavouta, A.; Thompson, K. R.
2013-12-01
The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited-area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid and spectral nudging to practical ocean forecasting, and to projecting changes in ocean conditions on climate time scales, is discussed briefly.
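A schematic of the spectral nudging idea (a sketch assuming numpy, not the authors' implementation): at each time step, relax only the low-wavenumber Fourier modes of the model field toward the reference large-scale field, leaving the small scales free to develop:

import numpy as np

def spectral_nudge(field, reference, k_cut=0.05, alpha=0.1):
    # field, reference: 2-D arrays on the same grid
    # k_cut: wavenumber (cycles per grid point) defining 'large scale'
    # alpha: nudging strength per time step, in [0, 1]
    F = np.fft.fft2(field)
    R = np.fft.fft2(reference)
    ky = np.fft.fftfreq(field.shape[0])[:, None]
    kx = np.fft.fftfreq(field.shape[1])[None, :]
    large_scale = np.hypot(kx, ky) <= k_cut
    F[large_scale] = (1 - alpha) * F[large_scale] + alpha * R[large_scale]
    return np.fft.ifft2(F).real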
Characterizing the human postural control system using detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Teresa Blázquez, M.; Anguiano, Marta; de Saavedra, Fernando Arias; Lallena, Antonio M.; Carpena, Pedro
2010-01-01
Detrended fluctuation analysis is used to study the behaviour of the time series of the position of the center of pressure, output from the activity of a human postural control system. The results suggest that these trajectories present a crossover in their scaling properties from persistent (for high frequencies, short-range time scale) to anti-persistent (for low frequencies, long-range time scale) behaviours. The values of the scaling exponent found for the persistent parts of the trajectories are very similar for all the cases analysed. The similarity of the results obtained for the measurements done with both eyes open and both eyes closed indicate either that the visual system may be disregarded by the postural control system, while maintaining quiet standing, or that the control mechanisms associated with each type of information (visual, vestibular and somatosensory) cannot be disentangled with this technique.
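For reference, a compact version of the DFA procedure used in such studies (a sketch assuming numpy; first-order detrending): integrate the demeaned series, detrend it in windows of size s, and read the scaling exponent off the slope of log F(s) versus log s:

import numpy as np

def dfa(series, scales):
    # Returns the fluctuation function F(s) for each window size s.
    profile = np.cumsum(series - np.mean(series))    # integrated series
    F = []
    for s in scales:
        n_windows = len(profile) // s
        segments = profile[:n_windows * s].reshape(n_windows, s)
        t = np.arange(s)
        ms_residuals = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            ms_residuals.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms_residuals)))
    return np.asarray(F)

# scales = np.unique(np.logspace(1, 3, 20).astype(int))
# alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
# alpha > 0.5 indicates persistence; alpha < 0.5 anti-persistence.

A crossover such as the one reported above appears as a change of slope between the short-range and long-range portions of the log-log plot.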
Scaling relation between earthquake magnitude and the departure time from P wave similar growth
Noda, Shunta; Ellsworth, William L.
2016-01-01
We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
Delayed and time-cumulative toxicity of imidacloprid in bees, ants and termites
Rondeau, Gary; Sánchez-Bayo, Francisco; Tennekes, Henk A.; Decourtye, Axel; Ramírez-Romero, Ricardo; Desneux, Nicolas
2014-01-01
Imidacloprid, one of the most commonly used insecticides, is highly toxic to bees and other beneficial insects. The regulatory challenge to determine safe levels of residual pesticides can benefit from information about the time-dependent toxicity of this chemical. Using published toxicity data for imidacloprid for several insect species, we construct time-to-lethal-effect toxicity plots and fit temporal power-law scaling curves to the data. The level of toxic exposure that results in 50% mortality after time t is found to scale as t^1.7 for ants, from t^1.6 to t^5 for honeybees, and from t^1.46 to t^2.9 for termites. We present a simple toxicological model that can explain t^2 scaling. Extrapolating the toxicity scaling for honeybees to the lifespan of winter bees suggests that imidacloprid in honey at 0.25 μg/kg would be lethal to a large proportion of bees nearing the end of their life. PMID:24993452
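Scaling exponents of this kind can be estimated by straight-line fits in log-log space. A minimal sketch (assuming numpy; the numbers below are hypothetical, not the published data):

import numpy as np

t = np.array([2.0, 4.0, 8.0, 16.0, 32.0])      # exposure time (days)
ld50 = np.array([90.0, 28.0, 8.6, 2.7, 0.83])  # dose giving 50% mortality

# If LD50(t) ~ t**(-k), the slope of log(LD50) against log(t) is -k.
k = -np.polyfit(np.log(t), np.log(ld50), 1)[0]
print(f"time-scaling exponent k = {k:.2f}")     # ~1.7 for these numbers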
Millisecond-Scale Motor Encoding in a Cortical Vocal Area
NASA Astrophysics Data System (ADS)
Nemenman, Ilya; Tang, Claire; Chehayeb, Diala; Srivastava, Kyle; Sober, Samuel
2015-03-01
Studies of motor control have almost universally examined firing rates to investigate how the brain shapes behavior. In principle, however, neurons could encode information through the precise temporal patterning of their spike trains as well as (or instead of) through their firing rates. Although the importance of spike timing has been demonstrated in sensory systems, it is largely unknown whether timing differences in motor areas could affect behavior. We tested the hypothesis that significant information about trial-by-trial variations in behavior is represented by spike timing in the songbird vocal motor system. We found that neurons in motor cortex convey information via spike timing far more often than via spike rate and that the amount of information conveyed at the millisecond timescale greatly exceeds the information available from spike counts. These results demonstrate that information can be represented by spike timing in motor circuits and suggest that timing variations evoke differences in behavior. This work was supported in part by the National Institutes of Health, National Science Foundation, and James S. McDonnell Foundation.
3-Dimensional Root Cause Diagnosis via Co-analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ziming; Lan, Zhiling; Yu, Li
2012-01-01
With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, the RAS log contains only limited diagnosis information. Moreover, manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.
Scaling, Similarity, and the Fourth Paradigm for Hydrology
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Clark, Martyn; Samaniego, Luis; Verhoest, Niko E. C.; van Emmerik, Tim; Uijlenhoet, Remko; Achieng, Kevin; Franz, Trenton E.; Woods, Ross
2017-01-01
In this synthesis paper addressing hydrologic scaling and similarity, we posit that roadblocks in the search for universal laws of hydrology are hindered by our focus on computational simulation (the third paradigm), and assert that it is time for hydrology to embrace a fourth paradigm of data-intensive science. Advances in information-based hydrologic science, coupled with an explosion of hydrologic data and advances in parameter estimation and modelling, have laid the foundation for a data-driven framework for scrutinizing hydrological scaling and similarity hypotheses. We summarize important scaling and similarity concepts (hypotheses) that require testing, describe a mutual information framework for testing these hypotheses, describe boundary condition, state, flux, and parameter data requirements across scales to support testing these hypotheses, and discuss some challenges to overcome while pursuing the fourth hydrological paradigm. We call upon the hydrologic sciences community to develop a focused effort towards adopting the fourth paradigm and apply this to outstanding challenges in scaling and similarity.
Opto-acoustic breast imaging with co-registered ultrasound
NASA Astrophysics Data System (ADS)
Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Stavros, A. Thomas; Oraevsky, Alexander; Kist, Kenneth; Dornbluth, N. Carol; Otto, Pamela
2014-03-01
We present results from a recent study involving the Imagio™ breast imaging system, which produces fused real-time two-dimensional color-coded opto-acoustic (OA) images that are co-registered and temporally interleaved with real-time gray scale ultrasound using a specialized duplex handheld probe. The use of dual optical wavelengths provides functional blood map images of breast tissue and tumors displayed with high contrast based on total hemoglobin and oxygen saturation of the blood. This provides functional diagnostic information pertaining to tumor metabolism. OA also shows morphologic information about tumor neo-vascularity that is complementary to the morphological information obtained with conventional gray scale ultrasound. This fusion technology conveniently enables real-time analysis of the functional opto-acoustic features of lesions detected by readers familiar with anatomical gray scale ultrasound. We demonstrate co-registered opto-acoustic and ultrasonic images of malignant and benign tumors from a recent clinical study that provide new insight into the function of tumors in-vivo. Results from the Feasibility Study show preliminary evidence that the technology may have the capability to improve characterization of benign and malignant breast masses over conventional diagnostic breast ultrasound alone and to improve the overall accuracy of breast mass diagnosis. In particular, OA improved specificity over that of conventional diagnostic ultrasound, which could potentially reduce the number of negative biopsies performed without missing cancers.
Optimal adaptive control for quantum metrology with time-dependent Hamiltonians.
Pang, Shengshi; Jordan, Andrew N
2017-03-09
Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of quantum Fisher information can be broken with time-dependent Hamiltonians, which reaches T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case.
Optimal adaptive control for quantum metrology with time-dependent Hamiltonians
Pang, Shengshi; Jordan, Andrew N.
2017-01-01
Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of quantum Fisher information can be broken with time-dependent Hamiltonians, which reaches T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case. PMID:28276428
Scale Invariance in Lateral Head Scans During Spatial Exploration.
Yadav, Chetan K; Doreswamy, Yoganarasimha
2017-04-14
Universality connects various natural phenomena through physical principles governing their dynamics, and has provided broadly accepted answers to many complex questions, including information processing in neuronal systems. However, its significance in behavioral systems is still elusive. Lateral head scanning (LHS) behavior in rodents might contribute to spatial navigation by actively managing (optimizing) the available sensory information. Our findings of scale invariant distributions in LHS lifetimes, interevent intervals and event magnitudes, provide evidence for the first time that the optimization takes place at a critical point in LHS dynamics. We propose that the LHS behavior is responsible for preprocessing of the spatial information content, critical for subsequent foolproof encoding by the respective downstream neural networks.
Scale Invariance in Lateral Head Scans During Spatial Exploration
NASA Astrophysics Data System (ADS)
Yadav, Chetan K.; Doreswamy, Yoganarasimha
2017-04-01
Universality connects various natural phenomena through physical principles governing their dynamics, and has provided broadly accepted answers to many complex questions, including information processing in neuronal systems. However, its significance in behavioral systems is still elusive. Lateral head scanning (LHS) behavior in rodents might contribute to spatial navigation by actively managing (optimizing) the available sensory information. Our findings of scale invariant distributions in LHS lifetimes, interevent intervals and event magnitudes, provide evidence for the first time that the optimization takes place at a critical point in LHS dynamics. We propose that the LHS behavior is responsible for preprocessing of the spatial information content, critical for subsequent foolproof encoding by the respective downstream neural networks.
NASA Astrophysics Data System (ADS)
Krejcar, Ondrej
A new class of lightweight mobile devices can run full-scale applications with the same comfort as desktop devices, subject to several limitations. One of these is the limited transfer speed of wireless connectivity. The main area of interest is a model of a radio-frequency-based system enhancement for locating and tracking users of a mobile information system. The experimental framework prototype uses a wireless network infrastructure to let a lightweight mobile device determine its indoor or outdoor position. User location is used for data prebuffering and for pushing information from the server to the user's PDA. All server data are saved as artifacts along with their position information in a building or larger-area environment. Accessing prebuffered data on a lightweight mobile device can greatly improve the response time needed to view large multimedia data. This can help with the design of new full-scale applications for lightweight mobile devices.
The EORTC information questionnaire, EORTC QLQ-INFO25. Validation study for Spanish patients.
Arraras, Juan Ignacio; Manterola, Ana; Hernández, Berta; Arias de la Vega, Fernando; Martínez, Maite; Vila, Meritxell; Eito, Clara; Vera, Ruth; Domínguez, Miguel Ángel
2011-06-01
The EORTC QLQ-INFO25 evaluates the information received by cancer patients. This study assesses the psychometric properties of the QLQ-INFO25 when applied to a sample of Spanish patients. A total of 169 patients with different cancers and stages of disease completed the EORTC QLQ-INFO25, the EORTC QLQ-C30 and the information scales of the inpatient satisfaction module EORTC IN-PATSAT32 on two occasions during the patients' treatment and follow-up period. Psychometric evaluation of the structure, reliability, validity and responsiveness to changes was conducted. Patient acceptability was assessed with a debriefing questionnaire. Multi-trait scaling confirmed the 4 multi-item scales (information about disease, medical tests, treatment and other services) and eight single items. All items met the standards for convergent validity and all except one met the standards of item discriminant validity. Internal consistency for all scales (α>0.70) and the whole questionnaire (α>0.90) was adequate in the three measurements, except for information about the disease (0.67) and other services (0.68) in the first measurement, as was test-retest reliability (intraclass correlations >0.70). Correlations with related areas of IN-PATSAT32 (r>0.40) supported convergent validity. Divergent validity was confirmed through low correlations with EORTC QLQ-C30 scales (r<0.30). The EORTC QLQ-INFO25 discriminated among groups based on gender, age, education, levels of anxiety and depression, treatment line, wish for information and satisfaction. One scale and one item showed changes over time. The EORTC QLQ-INFO25 is a reliable and valid instrument when applied to a sample of Spanish cancer patients. These results are in line with those of the EORTC validation study.
Quantum Metrology beyond the Classical Limit under the Effect of Dephasing
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon; Nakayama, Shojun; Saito, Shiro; Munro, William J.
2018-04-01
Quantum sensors have the potential to outperform their classical counterparts. For classical sensing, the uncertainty of the estimation of the target fields scales inversely with the square root of the measurement time T. On the other hand, by using quantum resources, we can reduce this scaling of the uncertainty with time to 1/T. However, as quantum states are susceptible to dephasing, it has not been clear whether we can achieve sensitivities with a scaling of 1/T for a measurement time longer than the coherence time. Here, we propose a scheme that estimates the amplitude of globally applied fields with an uncertainty of 1/T for an arbitrary time scale under the effect of dephasing. We use one-way quantum-computing-based teleportation between qubits to prevent correlations between the quantum state and its local environment from building up, and we show that such a teleportation protocol can suppress the local dephasing while the information from the target fields keeps growing. Our method has the potential to realize a quantum sensor with a sensitivity far beyond that of any classical sensor.
Modelling spatiotemporal change using multidimensional arrays
NASA Astrophysics Data System (ADS)
Lu, Meng; Appel, Marius; Pebesma, Edzer
2017-04-01
The large variety of remote sensors, model simulations, and in-situ records provides great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods to integrate data from various sources and to analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions, which naturally represents spatiotemporal phenomena that are identified by their geographic locations and recording times. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable and reproducible change modelling process that is communicable between mathematicians, computer scientists, Earth system scientists and stakeholders? This study focuses on detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping time series analysis results; this does not consider spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual-based time series structural change modelling. We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.
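As an illustration of the array-centred workflow (a sketch assuming numpy; a simplification of the analysis described, not the SciDB/R implementation): treat the image time series as a (time, band, rows, cols) array and extract multispectral information with principal component analysis across the band dimension:

import numpy as np

def spectral_pca(cube, n_components=3):
    # cube: (time, band, rows, cols) satellite image time series
    # Returns PC scores with shape (time, n_components, rows, cols).
    t, b, r, c = cube.shape
    X = cube.transpose(0, 2, 3, 1).reshape(-1, b)   # samples x bands
    X = X - X.mean(axis=0)                          # centre each band
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # economy SVD
    scores = X @ Vt[:n_components].T
    return scores.reshape(t, r, c, n_components).transpose(0, 3, 1, 2)

Per-pixel structural-change tests, with spatially correlated residuals handled as described above, would then run on the leading component rather than on a single vegetation index.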
Wood, Fiona; Kowalczuk, Jenny; Elwyn, Glyn; Mitchell, Clive; Gallacher, John
2011-08-01
Population based genetics studies are dependent on large numbers of individuals in the pursuit of small effect sizes. Recruiting and consenting a large number of participants is both costly and time consuming. We explored whether an online consent process for large-scale genetics studies is acceptable for prospective participants using an example online genetics study. We conducted semi-structured interviews with 42 members of the public stratified by age group, gender and newspaper readership (a measure of social status). Respondents were asked to use a website designed to recruit for a large-scale genetic study. After using the website a semi-structured interview was conducted to explore opinions and any issues they would have. Responses were analysed using thematic content analysis. The majority of respondents said they would take part in the research (32/42). Those who said they would decline to participate saw fewer benefits from the research, wanted more information and expressed a greater number of concerns about the study. Younger respondents had concerns over time commitment. Middle aged respondents were concerned about privacy and security. Older respondents were more altruistic in their motivation to participate. Common themes included trust in the authenticity of the website, security of personal data, curiosity about their own genetic profile, operational concerns and a desire for more information about the research. Online consent to large-scale genetic studies is likely to be acceptable to the public. The online consent process must establish trust quickly and effectively by asserting authenticity and credentials, and provide access to a range of information to suit different information preferences.
Wang, Danny J J; Jann, Kay; Fan, Chang; Qiao, Yang; Zang, Yu-Feng; Lu, Hanbing; Yang, Yihong
2018-01-01
Recently, non-linear statistical measures such as multi-scale entropy (MSE) have been introduced as indices of the complexity of electrophysiology and fMRI time-series across multiple time scales. In this work, we investigated the neurophysiological underpinnings of complexity (MSE) of electrophysiology and fMRI signals and their relations to functional connectivity (FC). MSE and FC analyses were performed on simulated data using a neural mass model based brain network model with the Brain Dynamics Toolbox, on animal models with concurrent recording of fMRI and electrophysiology in conjunction with pharmacological manipulations, and on resting-state fMRI data from the Human Connectome Project. Our results show that the complexity of regional electrophysiology and fMRI signals is positively correlated with network FC. The associations between MSE and FC are dependent on the temporal scales or frequencies, with higher associations between MSE and FC at lower temporal frequencies. Our results from theoretical modeling, animal experiment and human fMRI indicate that (1) regional neural complexity and network FC may be two related aspects of the brain's information processing: the more complex a region's neural activity, the higher the FC this region has with other brain regions; (2) MSE at high and low frequencies may represent local and distributed information processing across brain regions. Based on the literature and our data, we propose that the complexity of regional neural signals may serve as an index of the brain's capacity for information processing: increased complexity may indicate greater transition or exploration between different states of brain networks, and thereby a greater propensity for information processing.
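For concreteness, a compact version of the MSE computation (a sketch assuming numpy; the O(n^2) pairwise distance matrix limits it to fairly short series): coarse-grain the signal at each scale and take the sample entropy of the result:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sample_entropy(x, m=2, r_frac=0.15):
    # Negative log of the conditional probability that sequences matching
    # for m points (within tolerance r) also match for m + 1 points.
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    def matching_pairs(length):
        templates = sliding_window_view(x, length)
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return ((d <= r).sum() - len(templates)) / 2.0  # exclude self-matches
    B, A = matching_pairs(m), matching_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.nan

def multiscale_entropy(x, max_scale=20):
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = np.asarray(x[:n * s], dtype=float).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return np.asarray(out)

Plotting the returned values against scale gives the MSE curve; its low-scale and high-scale ends correspond to the high- and low-frequency associations discussed above.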
NASA Astrophysics Data System (ADS)
Teresa Blázquez, M.; Anguiano, Marta; de Saavedra, Fernando Arias; Lallena, Antonio M.; Carpena, Pedro
2009-05-01
The detrended fluctuation analysis is used to study the behavior of different time series obtained from the trajectory of the center of pressure, the output of the activity of the human postural control system. The results suggest that these trajectories present two different regimes in their scaling properties: persistent (for high frequencies, short-range time scale) to antipersistent (for low frequencies, long-range time scale) behaviors. The similitude between the results obtained for the measurements, done with both eyes open and eyes closed, indicate either that the visual system may be disregarded by the postural control system while maintaining the quiet standing, or that the control mechanisms associated with each type of information (visual, vestibular and somatosensory) cannot be disentangled with the type of analysis performed here.
A FAST BAYESIAN METHOD FOR UPDATING AND FORECASTING HOURLY OZONE LEVELS
A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data, and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...
Temporal scaling and spatial statistical analyses of groundwater level fluctuations
NASA Astrophysics Data System (ADS)
Sun, H.; Yuan, L., Sr.; Zhang, Y.
2017-12-01
Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle in the temporal scaling and spatial statistical analysis. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying fractal-scaling behavior that changes with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increment of groundwater level fluctuations exhibits a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., the fractal non-Gaussian property) of groundwater level, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant
Moreno-Garcia, Isabel M.; Palacios-Garcia, Emilio J.; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J.; Varo-Martinez, Marta; Real-Calvo, Rafael J.
2016-01-01
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a grid-connected utility-scale PV power plant in order to ensure the reliability and continuity of its supply. The system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performance was analyzed in a 400 kW transformation center belonging to a 6.1 MW utility-scale PV power plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, the system enables users to control the power quality of the injected signal and the influence of the installation on the distribution grid. PMID:27240365
Decorrelation scales for Arctic Ocean hydrography - Part I: Amerasian Basin
NASA Astrophysics Data System (ADS)
Sumata, Hiroshi; Kauker, Frank; Karcher, Michael; Rabe, Benjamin; Timmermans, Mary-Louise; Behrendt, Axel; Gerdes, Rüdiger; Schauer, Ursula; Shimada, Koji; Cho, Kyoung-Ho; Kikuchi, Takashi
2018-03-01
Any use of observational data for data assimilation requires adequate information of their representativeness in space and time. This is particularly important for sparse, non-synoptic data, which comprise the bulk of oceanic in situ observations in the Arctic. To quantify spatial and temporal scales of temperature and salinity variations, we estimate the autocorrelation function and associated decorrelation scales for the Amerasian Basin of the Arctic Ocean. For this purpose, we compile historical measurements from 1980 to 2015. Assuming spatial and temporal homogeneity of the decorrelation scale in the basin interior (abyssal plain area), we calculate autocorrelations as a function of spatial distance and temporal lag. The examination of the functional form of autocorrelation in each depth range reveals that the autocorrelation is well described by a Gaussian function in space and time. We derive decorrelation scales of 150-200 km in space and 100-300 days in time. These scales are directly applicable to quantify the representation error, which is essential for use of ocean in situ measurements in data assimilation. We also describe how the estimated autocorrelation function and decorrelation scale should be applied for cost function calculation in a data assimilation system.
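Given binned correlation estimates, the decorrelation scale follows from a Gaussian fit. A minimal sketch (assuming numpy/scipy; the correlation values below are hypothetical, for illustration only):

import numpy as np
from scipy.optimize import curve_fit

dist_km = np.array([25.0, 75.0, 125.0, 175.0, 225.0, 275.0])  # bin centres
rho = np.array([0.93, 0.78, 0.55, 0.33, 0.17, 0.08])          # autocorrelation

def gaussian_acf(r, L):
    return np.exp(-(r / L) ** 2)    # Gaussian form found for the basin

L_fit, _ = curve_fit(gaussian_acf, dist_km, rho, p0=[150.0])
print(f"spatial decorrelation scale = {L_fit[0]:.0f} km")

The same functional form, with distance replaced by time lag, yields the temporal scale; the fitted scales then set the representation error used in assimilation.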
Multi-Spatiotemporal Patterns of Residential Burglary Crimes in Chicago: 2006-2016
NASA Astrophysics Data System (ADS)
Luo, J.
2017-10-01
This research explores the patterns of burglary crimes at multiple spatiotemporal scales in Chicago between 2006 and 2016. Two spatial scales are investigated: census block and police beat area. At each spatial scale, three temporal scales are integrated to make spatiotemporal slices: an hourly scale with a two-hour time step from 12:00 am to the end of the day; a daily scale with a one-day step from Sunday to Saturday within a week; and a monthly scale with a one-month step from January to December. A total of six types of spatiotemporal slices will be created as the basis for the analysis. Burglary crimes are spatiotemporally aggregated to the slices based on where and when they occurred. For each type of spatiotemporal slice with burglary occurrences integrated, a spatiotemporal neighborhood will be defined and managed in a spatiotemporal matrix. Hot-spot analysis will identify spatiotemporal clusters within each type of slice. Spatiotemporal trend analysis is conducted to indicate how the clusters shift in space and time. The analysis results will provide helpful information for better-targeted policing and crime prevention policy, such as police patrol scheduling with respect to the times and places covered.
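The hourly slicing can be expressed compactly. A sketch (assuming pandas and a hypothetical table of burglary records with beat and timestamp columns):

import pandas as pd

def two_hour_slices(crimes):
    # crimes: DataFrame with columns ['beat', 'timestamp'], one row per burglary
    ts = pd.to_datetime(crimes['timestamp'])
    slot = ts.dt.hour // 2                      # 12 two-hour bins per day
    counts = (crimes.assign(slot=slot)
                    .groupby(['beat', 'slot'])
                    .size()
                    .unstack(fill_value=0))     # beats x 12 time slots
    return counts

Analogous groupings on day-of-week and month give the daily and monthly slices; hot-spot statistics are then computed over each resulting matrix.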
Informal Nature Experience on the School Playground
ERIC Educational Resources Information Center
Raith, Andreas
2015-01-01
In Germany, all-day care and all-day schooling are currently increasing on a large-scale. The extended time children spend in educational institutions could potentially result in limited access to nature experience for children. On the other hand, it could equally create opportunities for informal nature experience if school playgrounds have a…
Recent Trends in Local-Scale Marine Biodiversity Reflect Community Structure and Human Impacts.
Elahi, Robin; O'Connor, Mary I; Byrnes, Jarrett E K; Dunic, Jillian; Eriksson, Britas Klemens; Hensel, Marc J S; Kearns, Patrick J
2015-07-20
The modern biodiversity crisis reflects global extinctions and local introductions. Human activities have dramatically altered rates and scales of processes that regulate biodiversity at local scales. Reconciling the threat of global biodiversity loss with recent evidence of stability at fine spatial scales is a major challenge and requires a nuanced approach to biodiversity change that integrates ecological understanding. With a new dataset of 471 diversity time series spanning from 1962 to 2015 from marine coastal ecosystems, we tested (1) whether biodiversity changed at local scales in recent decades, and (2) whether we can ignore ecological context (e.g., proximate human impacts, trophic level, spatial scale) and still make informative inferences regarding local change. We detected a predominant signal of increasing species richness in coastal systems since 1962 in our dataset, though net species loss was associated with localized effects of anthropogenic impacts. Our geographically extensive dataset is unlikely to be a random sample of marine coastal habitats; impacted sites (3% of our time series) were underrepresented relative to their global presence. These local-scale patterns do not contradict the prospect of accelerating global extinctions but are consistent with local species loss in areas with direct human impacts and increases in diversity due to invasions and range expansions in lower impact areas. Attempts to detect and understand local biodiversity trends are incomplete without information on local human activities and ecological context. Copyright © 2015 Elsevier Ltd. All rights reserved.
Convective organization in the Pacific ITCZ: Merging OLR, TOVS, and SSM/I information
NASA Technical Reports Server (NTRS)
Hayes, Patrick M.; Mcguirk, James P.
1993-01-01
One of the most striking features of the planet's long-time average cloudiness is the zonal band of concentrated convection lying near the equator. Large-scale variability of the Intertropical Convergence Zone (ITCZ) has been well documented in studies of the planetary spatial scales and seasonal/annual/interannual temporal cycles of convection. Smaller-scale variability is difficult to study over the tropical oceans for several reasons. Conventional surface and upper-air data are virtually non-existent in some regions; diurnal and annual signals overwhelm fluctuations on other time scales; and analyses of variables such as geopotential and moisture are generally less reliable in the tropics. These problems make the use of satellite data an attractive alternative and the preferred means to study variability of tropical weather systems.
On supervised graph Laplacian embedding CA model & kernel construction and its application
NASA Astrophysics Data System (ADS)
Zeng, Junwei; Qian, Yongsheng; Wang, Min; Yang, Yongzhong
2017-01-01
There are many methods to construct a kernel from given data attribute information. The Gaussian radial basis function (RBF) kernel is one of the most popular choices. The key observation is that real-world data carry not only attribute information but also label information indicating the data class. In order to make use of both data attribute information and data label information, in this work we propose a supervised kernel construction method. Supervised information from the training data is integrated into the standard kernel construction process to improve the discriminative property of the resulting kernel. As a key application, a supervised Laplacian embedding cellular automaton model is developed for two-lane heterogeneous traffic flow with a safe-distance rule and large-scale trucks. Based on the properties of traffic flow in China, we re-calibrate the cell length, velocity, random slowing mechanism and lane-change conditions, and use simulation tests to study the relationships among speed, density and flux. The numerical results show that large-scale trucks have strong effects on the traffic flow, which depend on the proportion of large-scale trucks, the random slowing rate and the frequency of lane changes.
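As an illustrative sketch of label-informed kernel construction (not necessarily the authors' formulation), a standard RBF kernel can be reweighted so that same-class training pairs are emphasized; the boost weighting below is a hypothetical choice.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Standard Gaussian RBF kernel from attribute information only.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def supervised_kernel(X, y, gamma=1.0, boost=2.0):
    # Hypothetical supervised construction: amplify entries for
    # same-label pairs, damp entries for different-label pairs.
    K = rbf_kernel(X, X, gamma)
    same = (y[:, None] == y[None, :])
    return K * np.where(same, boost, 1.0 / boost)

X = np.random.rand(6, 3)
y = np.array([0, 0, 1, 1, 0, 1])
K = supervised_kernel(X, y)
print(K.shape)  # (6, 6)
```

Note that elementwise reweighting of this kind is a heuristic and does not by itself guarantee a positive semidefinite kernel.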
Methods and apparatus for determining cardiac output
NASA Technical Reports Server (NTRS)
Cohen, Richard J. (Inventor); Sherman, Derin A. (Inventor); Mukkamala, Ramakrishna (Inventor)
2010-01-01
The present invention provides methods and apparatus for determining a dynamical property of the systemic or pulmonary arterial tree using long time scale information, i.e., information obtained from measurements over time scales greater than a single cardiac cycle. In one aspect, the invention provides a method and apparatus for monitoring cardiac output (CO) from a single blood pressure signal measurement obtained at any site in the systemic or pulmonary arterial tree, or from any related measurement including, for example, fingertip photoplethysmography. According to the method, the time constant of the arterial tree, defined to be the product of the total peripheral resistance (TPR) and the nearly constant arterial compliance, is determined by analyzing the long time scale variations (greater than a single cardiac cycle) in any of these blood pressure signals. Then, according to Ohm's law, a value proportional to CO may be determined from the ratio of the blood pressure signal to the estimated time constant. The proportional CO values derived from this method may be calibrated to absolute CO, if desired, with a single, absolute measure of CO (e.g., thermodilution). The present invention may be applied to invasive radial arterial blood pressure or pulmonary arterial blood pressure signals, which are routinely measured in intensive care units and surgical suites, or to noninvasively measured peripheral arterial blood pressure signals or related noninvasively measured signals, in order to facilitate the clinical monitoring of CO as well as TPR.
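A minimal sketch of the core idea under simplifying assumptions: fit an exponential decay to long-time-scale (beat-averaged) pressure variations to estimate the arterial time constant, then take a value proportional to CO as mean pressure divided by that time constant. The synthetic signal and fitting choices are illustrative, not the patented algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic beat-averaged mean arterial pressure (mmHg), sampled once per second,
# relaxing toward equilibrium over time scales longer than one cardiac cycle.
t = np.arange(0.0, 60.0, 1.0)
tau_true = 2.5  # seconds; plays the role of TPR times arterial compliance
map_signal = 80.0 + 15.0 * np.exp(-t / tau_true) + np.random.normal(0, 0.3, t.size)

def decay(t, p0, dp, tau):
    return p0 + dp * np.exp(-t / tau)

(p0, dp, tau), _ = curve_fit(decay, t, map_signal, p0=(80, 10, 1.0))

# Ohm's law analogy: a value proportional to CO is pressure / time constant.
co_proportional = np.mean(map_signal) / tau
print(f"estimated tau = {tau:.2f} s, proportional CO = {co_proportional:.1f} (arb. units)")
```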
A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.
Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun
2013-06-01
Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation at month-scale, medium-term and long-term time scales is especially needed, given the MSW management upgrades facing many developing countries. Several existing models are available but are of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. At the month scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015, 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually, increasing to 2486.3 thousand tonnes by 2020, 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term.
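A minimal sketch of the SARIMA half of such a hybrid, fitted to synthetic monthly tonnage with statsmodels; the orders (1,1,1)×(1,1,1,12) are illustrative assumptions, not those identified in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly MSW generation (thousand tonnes) with trend and seasonality.
rng = np.random.default_rng(0)
idx = pd.date_range("2000-01", periods=120, freq="MS")
y = pd.Series(60 + 0.4 * np.arange(120) + 10 * np.sin(2 * np.pi * idx.month / 12)
              + rng.normal(0, 2, 120), index=idx)

model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=24).tail())  # two-year-ahead monthly forecasts
```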
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chorover, Jon; Mueller, Karl; O'Day, Peggy Anne
2016-06-30
Objectives of the Project: 1. Determine the process coupling that occurs between mineral transformation and contaminant (U and Sr) speciation in acid-uranium waste weathered Hanford sediments. 2. Establish linkages between molecular-scale contaminant speciation and meso-scale contaminant lability, release and reactive transport. 3. Make conjunctive use of molecular- to bench-scale data to constrain the development of a mechanistic, reactive transport model that includes coupling of contaminant sorption-desorption and mineral transformation reactions. Hypotheses Tested: Uranium and strontium speciation in legacy sediments from the U-8 and U-12 Crib sites can be reproduced in bench-scale weathering experiments conducted on unimpacted Hanford sediments from the same formations; Reactive transport modeling of future uranium and strontium releases from the vadose zone of acid-waste weathered sediments can be effectively constrained by combining molecular-scale information on contaminant bonding environment with grain-scale information on contaminant phase partitioning, and meso-scale kinetic data on contaminant release from the waste-weathered porous media; Although field contamination and laboratory experiments differ in their diagenetic time scales (decades for field vs. months to years for lab), sediment dissolution, neophase nucleation, and crystal growth reactions that occur during the initial disequilibrium induced by waste-sediment interaction leave a strong imprint that persists over subsequent longer-term equilibration time scales and, therefore, give rise to long-term memory effects. Enabling Capabilities Developed: Our team developed an iterative measure-model approach that is broadly applicable to elucidate the mechanistic underpinnings of reactive contaminant transport in geomedia subject to active weathering.
NASA Astrophysics Data System (ADS)
Kirchhoff, C.; Dilling, L.
2011-12-01
Water managers have long experienced the challenges of managing water resources in a variable climate. However, climate change has the potential to reshape the experiential landscape by, for example, increasing the intensity and duration of droughts, shifting precipitation timing and amounts, and changing sea levels. Given the uncertainty in evaluating potential climate risks as well as future water availability and water demands, scholars suggest water managers employ more flexible and adaptive science-based management to manage uncertainty (NRC 2009). While such an approach is appropriate, for adaptive science-based management to be effective both governance and information must be concordant across three measures: fit, interplay and scale (Young 2002) (Note 1). Our research relies on interviews of state water managers and related experts (n=50) and documentary analysis in five U.S. states to understand the drivers and constraints to improving water resource planning and decision-making in a changing climate, using an assessment of fit, interplay and scale as an evaluative framework. We apply this framework to assess and compare how water managers plan and respond to current or anticipated water resource challenges within each state. We hypothesize that better alignment between the data and management framework and the water resource problem improves water managers' ability to understand (via available, relevant, timely information) and respond appropriately (through institutional response mechanisms). In addition, better alignment between governance mechanisms (between the scope of the problem and identified appropriate responses) improves water management. Moreover, because many of the management challenges analyzed in this study concern present-day issues with scarcity brought on by a combination of growth and drought, better alignment of fit, interplay, and scale today will enable and prepare water managers to be more successful in adapting to climate change impacts in the long term. Note 1: For the purposes of this research, the problem of fit deals with the level of concordance between the natural and human systems, while interplay involves how institutional arrangements interact both horizontally and vertically. Lastly, scale considers both spatial and temporal alignment of the physical systems and management structure. For example, to manage water resources effectively in a changing climate suggests having information that informs short-term and long-term changes and having institutional arrangements that seek understanding across temporal scales and facilitate responses based on information available (Young 2002).
Data compression and information retrieval via symbolization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, X.Z.; Tracy, E.R.
Converting a continuous signal into a multisymbol stream is a simple method of data compression which preserves much of the dynamical information present in the original signal. The retrieval of selected types of information from symbolic data involves binary operations and is therefore optimal for digital computers. For example, correlation time scales can be easily recovered, even at high noise levels, by varying the time delay for symbolization. Also, the presence of periodicity in the signal can be reliably detected even if it is weak and masked by a dominant chaotic/stochastic background. © 1998 American Institute of Physics.
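A minimal sketch of recovering a correlation time from symbolic data, assuming the simplest binary partition (thresholding at the median) and using the decay of symbol agreement with delay; the partition and statistic are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic noisy signal: an AR(1) process with known correlation time.
n, phi = 20000, 0.98              # correlation time ~ -1/ln(phi) ~ 50 samples
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
x += rng.normal(0, 1.0, n)        # strong measurement noise

# Symbolize: 1 where the signal exceeds its median, else 0 (then center).
s = (x > np.median(x)).astype(float)
s -= s.mean()

# Agreement of the symbol stream with a delayed copy of itself, versus delay.
delays = np.arange(1, 300)
corr = np.array([np.mean(s[:-d] * s[d:]) / np.var(s) for d in delays])

# Correlation time estimate: first delay where symbol correlation falls below 1/e.
tau = delays[np.argmax(corr < 1 / np.e)]
print("estimated correlation time:", tau, "samples")
```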
Grech, Alana; Sheppard, James; Marsh, Helene
2011-01-01
Background Conservation planning and the design of marine protected areas (MPAs) requires spatially explicit information on the distribution of ecological features. Most species of marine mammals range over large areas and across multiple planning regions. The spatial distributions of marine mammals are difficult to predict using habitat modelling at ecological scales because of insufficient understanding of their habitat needs; however, relevant information may be available from surveys conducted to inform mandatory stock assessments. Methodology and Results We use a 20-year time series of systematic aerial surveys of dugong (Dugong dugon) abundance to create spatially-explicit models of dugong distribution and relative density at the scale of the coastal waters of northeast Australia (∼136,000 km²). We interpolated the corrected data at the scale of 2 km × 2 km planning units using geostatistics. Planning units were classified as low, medium, high and very high dugong density on the basis of the relative density of dugongs estimated from the models and a frequency analysis. Torres Strait was identified as the most significant dugong habitat in northeast Australia and the most globally significant habitat known for any member of the Order Sirenia. The models are used by local, State and Federal agencies to inform management decisions related to the Indigenous harvest of dugongs, gill-net fisheries and Australia's National Representative System of Marine Protected Areas. Conclusion/Significance In this paper we demonstrate that spatially-explicit population models add value to data collected for stock assessments, provide a robust alternative to predictive habitat distribution models, and inform species conservation at multiple scales. PMID:21464933
Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts
2015-01-01
Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...
USDA-ARS?s Scientific Manuscript database
Drought has significant impacts over broad spatial and temporal scales, and information about the timing and extent of such conditions is of critical importance to many end users in the agricultural and water resource management communities. The ability to accurately monitor effects on crops and pr...
NASA Astrophysics Data System (ADS)
Lazrus, H.; Done, J.; Morss, R. E.
2017-12-01
A new branch of climate science, known as decadal prediction, seeks to predict the time-varying trajectory of climate over the next 3-30 years and not just the longer-term trends. Decadal predictions bring climate information into the time horizon of decision makers, particularly those tasked with managing water resources and floods whose master planning is often on the timescale of decades. Information from decadal predictions may help alleviate some aspects of vulnerability by helping to inform decisions that reduce drought and flood exposure and increase adaptive capacities including preparedness, response, and recovery. This presentation will highlight an interdisciplinary project - involving atmospheric and social scientists - on the development of decadal climate information and its use in decision making. The presentation will explore the skill and utility of decadal drought and flood prediction along Colorado's Front Range, an area experiencing rapid population growth and uncertain climate variability and climate change impacts. Innovative statistical and dynamical atmospheric modeling techniques explore the extent to which Colorado precipitation can be predicted on decadal scales using remote Pacific Ocean surface temperature patterns. Concurrently, stakeholder interviews with flood managers in Colorado are being used to explore the potential utility of decadal climate information. Combining the modeling results with results from the stakeholder interviews shows that while there is still significant uncertainty surrounding precipitation on decadal time scales, relevant and well communicated decadal information has potential to be useful for drought and flood management.
Decadal-Scale Forecasting of Climate Drivers for Marine Applications.
Salinger, J; Hobday, A J; Matear, R J; O'Kane, T J; Risbey, J S; Dunstan, P; Eveson, J P; Fulton, E A; Feng, M; Plagányi, É E; Poloczanska, E S; Marshall, A G; Thompson, P A
Climate influences marine ecosystems on a range of time scales, from weather-scale (days) through to climate-scale (hundreds of years). Understanding of interannual to decadal climate variability and its impacts on marine industries has received less attention. Predictability up to 10 years ahead may come from large-scale climate modes in the ocean that can persist over these time scales. In Australia the key drivers of climate variability affecting the marine environment are the Southern Annular Mode, the Indian Ocean Dipole, the El Niño/Southern Oscillation, and the Interdecadal Pacific Oscillation; each has phases that are associated with different ocean circulation patterns and regional environmental variables. The roles of these drivers are illustrated with three case studies of extreme events: a marine heatwave in Western Australia, coral bleaching of the Great Barrier Reef, and flooding in Queensland. Statistical and dynamical approaches are described to generate forecasts of climate drivers that can subsequently be translated into useful information for marine end users making decisions at these time scales. Considerable investment is still needed to support decadal forecasting, including improvement of ocean-atmosphere models, enhancement of observing systems on all scales to support initialization of forecasting models, collection of important biological data, and integration of forecasts into decision support tools. Collaboration between forecast developers and marine resource sectors (fisheries, aquaculture, tourism, biodiversity management, infrastructure) is needed to support forecast-based tactical and strategic decisions that reduce environmental risk over annual to decadal time scales.
NASA Technical Reports Server (NTRS)
Solomon, Sean C.; Jordan, Thomas H.
1993-01-01
Long-wavelength variations in geoid height, bathymetry, and SS-S travel times are all relatable to lateral variations in the characteristic temperature and bulk composition of the upper mantle. The temperature and composition are in turn relatable to mantle convection and the degree of melt extraction from the upper mantle residuum. Thus the combined inversion of the geoid or gravity field, residual bathymetry, and seismic velocity information offers the promise of resolving fundamental aspects of the pattern of mantle dynamics. The use of differential body wave travel times as a measure of seismic velocity information, in particular, permits resolution of lateral variations at scales not resolvable by conventional global or regional-scale seismic tomography with long-period surface waves. These intermediate scale lengths, well resolved in global gravity field models, are crucial for understanding the details of any chemical or physical layering in the mantle and of the characteristics of so-called 'small-scale' convection beneath oceanic lithosphere. In 1991 a three-year project was proposed to the NASA Geophysics Program to carry out a systematic inversion of long-wavelength geoid anomalies, residual bathymetric anomalies, and differential SS-S travel time delays for the lateral variation in characteristic temperature and bulk composition of the oceanic upper mantle. The project was funded as a three-year award, beginning on 1 Jan. 1992.
Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng
2016-01-01
Effective feedback control requires full state-variable information for the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at arbitrary points of the TFM with an unlimited number of sensors. Considering the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time-scale virtual sensor, comprising a speed observer and a vibration observer, is designed to estimate the vibration signals and their time derivatives for the TFM; the speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time-scale virtual sensor are optimized, with the aim of minimizing the estimation error while keeping the observers stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time-scale virtual sensor. PMID:27801840
NASA Astrophysics Data System (ADS)
Pressel, K. G.; Collins, W.; Desai, A. R.
2011-12-01
Deficiencies in the parameterization of boundary layer clouds in global climate models (GCMs) remain one of the greatest sources of uncertainty in climate change predictions. Many GCM cloud parameterizations, which seek to include some representation of subgrid-scale cloud variability, do so by making assumptions regarding the subgrid-scale spatial probability density function (PDF) of total water content. Properly specifying the form and parameters of the total water PDF is an essential step in the formulation of PDF-based cloud parameterizations. In the cloud-free boundary layer, the PDF of total water mixing ratio is equivalent to the PDF of water vapor mixing ratio. Understanding the PDF of water vapor mixing ratio in the cloud-free atmosphere is a necessary step towards understanding the PDF of water vapor in the cloudy atmosphere. A primary challenge in empirically constraining the PDF of water vapor mixing ratio is a distinct lack of spatially distributed observational datasets at or near cloud scale. However, at meso-beta (20-50 km) and larger scales, there is a wealth of information on the spatial distribution of water vapor contained in the physically retrieved water vapor profiles from the Atmospheric Infrared Sounder onboard NASA's Aqua satellite. The scaling (scale-invariance) of the observed water vapor field has been suggested as a means of using observations at satellite-observed (meso-beta) scales to derive information about cloud-scale PDFs. However, doing so requires the derivation of a robust climatology of water vapor scaling from in-situ observations across the meso-gamma (2-20 km) and meso-beta scales. In this work, we present the results of the scaling of high-frequency (10 Hz) time series of water vapor mixing ratio as observed from the 447 m WLEF tower located near Park Falls, Wisconsin. Observations from a tall tower offer an ideal set of observations with which to investigate scaling at meso-gamma and meso-beta scales, requiring only the assumption of Taylor's Hypothesis to convert observed time scales to spatial scales. Furthermore, the WLEF tower holds an instrument suite offering a diverse set of variables at the 396 m, 122 m, and 30 m levels with which to characterize the state of the boundary layer. Three methods are used to compute scaling exponents for the observed time series: poor man's variance spectra, first-order structure functions, and detrended fluctuation analysis. In each case scaling exponents are computed by linear regression. The results for each method are compared and used to build a climatology of scaling exponents. In particular, the results for June 2007 are presented, and it is shown that the scaling of water vapor time series at the 396 m level is characterized by two regimes that are determined by the state of the boundary layer. Finally, the results are compared to, and shown to be roughly consistent with, scaling exponents computed from AIRS observations.
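A minimal sketch of one of the three estimators named above, first-order detrended fluctuation analysis, applied to a synthetic series; the window sizes and test signal are assumptions.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: returns F(s) per scale."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n = len(y) // s                    # number of non-overlapping windows
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # Detrend each window with a linear fit, collect residual variance.
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.asarray(F)

rng = np.random.default_rng(0)
x = rng.normal(size=2 ** 14)               # white noise: expect exponent ~ 0.5
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA scaling exponent: {alpha:.2f}")
```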
Ground-based demonstration of the European Laser Timing (ELT) experiment.
Schreiber, Karl Ulrich; Prochazka, Ivan; Lauber, Pierre; Hugentobler, Urs; Schäfer, Wolfgang; Cacciapuoti, Luigi; Nasca, Rosario
2010-03-01
The development of techniques for the comparison of distant clocks and for the distribution of stable and accurate time scales has important applications in metrology and fundamental physics research. Additionally, the rapid progress of frequency standards in the optical domain is presently demanding additional efforts for improving the performances of existing time and frequency transfer links. Present clock comparison systems in the microwave domain are based on GPS and two-way satellite time and frequency transfer (TWSTFT). European Laser Timing (ELT) is an optical link presently under study in the frame of the ESA mission Atomic Clock Ensemble in Space (ACES). The on-board hardware for ELT consists of a corner cube retro-reflector (CCR), a single-photon avalanche diode (SPAD), and an event timer board connected to the ACES time scale. Light pulses fired toward ACES by a laser ranging station will be detected by the SPAD diode and time tagged in the ACES time scale. At the same time, the CCR will re-direct the laser pulse toward the ground station providing precise ranging information. We have carried out a ground-based feasibility study at the Geodetic Observatory Wettzell. By using ordinary satellites with laser reflectors and providing a second independent detection port and laser pulse timing unit with an independent time scale, it is possible to evaluate many aspects of the proposed time transfer link before the ACES launch.
Chen, Xiaoling; Xie, Ping; Zhang, Yuanyuan; Chen, Yuling; Yang, Fangmei; Zhang, Litai; Li, Xiaoli
2018-01-01
Recently, functional corticomuscular coupling (FCMC) between the cortex and the contralateral muscle has been used to evaluate motor function after stroke. The motor-control system is a closed-loop system regulated by complex self-regulating and interactive mechanisms that operate on multiple spatial and temporal scales, and multiscale analysis can represent this inherent complexity. However, previous studies of FCMC in stroke patients mainly focused on the coupling strength at a single time scale, without considering the inherently directional and multiscale properties of sensorimotor systems. In this paper, a multiscale causal model, named multiscale transfer entropy, was used to quantify the functional connection between the electroencephalogram over the scalp and the electromyogram from the flexor digitorum superficialis (FDS), recorded simultaneously during a steady-state grip task in eight stroke patients and eight healthy controls. Our results showed that healthy controls exhibited higher coupling when the scale reached up to about 12, and that FCMC in the descending direction was stronger at certain scales (1, 7, 12, and 14) than in the ascending direction. Further analysis showed that these multi-time-scale characteristics were concentrated in the beta1 band at scale 11 and the beta2 band at scales 9, 11, 13, and 15. Compared to controls, the multiscale properties of FCMC in stroke patients were altered: the strengths in both directions were reduced, and the gaps between the descending and ascending directions disappeared over all scales. Further analysis in specific bands showed that the reduced FCMC was concentrated in alpha2 at higher scales, and in beta1 and beta2 across almost the entire range of scales. This multiscale analysis confirms that FCMC between the brain and muscles exhibits complex and directional characteristics, and that these characteristics are destroyed in stroke by the structural lesion in the brain, which might disrupt coordination, feedback, and information transmission in efferent control and afferent feedback. The study demonstrates for the first time the multiscale and directional characteristics of FCMC in stroke patients, and provides a preliminary observation for application in clinical assessment following stroke. PMID:29765351
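A minimal sketch of the multiscale transfer entropy idea, assuming simple coarse-graining and a binned estimator with history length 1; the bin count, scales, and surrogate signals are illustrative assumptions rather than the paper's estimator.

```python
import numpy as np

def coarse_grain(x, scale):
    # Non-overlapping window averages, the usual multiscale coarse-graining.
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def transfer_entropy(x, y, bins=4):
    """Binned transfer entropy from x to y (bits), history length 1."""
    # Discretize both signals into equiprobable bins.
    dx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    dy = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    yf, yp, xp = dy[1:], dy[:-1], dx[:-1]   # y future, y past, x past
    n, te = len(yf), 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p_abc = ((yf == a) & (yp == b) & (xp == c)).sum() / n
                if p_abc == 0:
                    continue
                p_bc = ((yp == b) & (xp == c)).sum() / n
                p_ab = ((yf == a) & (yp == b)).sum() / n
                p_b = (yp == b).sum() / n
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=20000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=20000)  # y driven by x with lag 1

for scale in (1, 2, 4, 8):
    xs, ys = coarse_grain(x, scale), coarse_grain(y, scale)
    print(f"scale {scale}: TE(x->y) = {transfer_entropy(xs, ys):.3f} bits")
```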
Monitoring forest dynamics with multi-scale and time series imagery.
Huang, Chunbo; Zhou, Zhixiang; Wang, Di; Dian, Yuanyong
2016-05-01
To track forest dynamics and evaluate forest ecosystem services effectively, timely acquisition of spatial and quantitative information on forestland is necessary. Here, a new method was proposed for mapping forest cover changes by combining multi-scale satellite remote-sensing imagery with time series data. Using time series Normalized Difference Vegetation Index products derived from the Moderate Resolution Imaging Spectroradiometer images (MODIS-NDVI) and Landsat Thematic Mapper/Enhanced Thematic Mapper Plus (TM/ETM+) images as data sources, a hierarchical stepwise analysis from coarse scale to fine scale was developed for detecting forest change areas. At the coarse scale, MODIS-NDVI data with 1-km resolution were used to detect changes in land cover types, and a land cover change map was constructed using NDVI values from the vegetation growing seasons. At the fine scale, based on the results at the coarse scale, Landsat TM/ETM+ data with 30-m resolution were used to precisely detect forest change locations and trends by analyzing time series of forest vegetation indices (IFZ). The method was tested using data for Hubei Province, China. The MODIS-NDVI data from 2001 to 2012 were used to detect land cover changes, and the overall accuracy was 94.02% at the coarse scale. At the fine scale, the available TM/ETM+ images from vegetation growing seasons between 2001 and 2012 were used to locate and verify forest changes in the Three Gorges Reservoir Area, and the overall accuracy was 94.53%. The accuracy of the two-layer hierarchical monitoring results indicated that the multi-scale monitoring method is feasible and reliable.
Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei
2018-04-13
Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geostatistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variation. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
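A minimal sketch of the subsampling logic, assuming the 150 m grid is stored as a 2-D array of rock-exposure percentages: coarser grids are emulated by keeping every k-th cell, and the coefficient of variation is computed per scale. The array contents are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic rock exposure (%) on the original 150 m x 150 m grid.
exposure = np.clip(rng.gamma(2.0, 8.0, size=(120, 160)), 0, 100)

# Subsample every k-th cell to emulate the 300/450/600/750/900 m grids.
for k in (1, 2, 3, 4, 5, 6):
    sub = exposure[::k, ::k]
    cv = sub.std() / sub.mean()
    print(f"{150 * k:4d} m grid: mean = {sub.mean():5.1f} %, CV = {cv:.3f}")
```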
Environment spectrum and coherence behaviours in a rare-earth doped crystal for quantum memory.
Gong, Bo; Tu, Tao; Zhou, Zhong-Quan; Zhu, Xing-Yu; Li, Chuan-Feng; Guo, Guang-Can
2017-12-21
We theoretically investigate the dynamics of the environment and the coherence behaviour of the central ion in a quantum memory based on a rare-earth doped crystal. The interactions between the central ion and the bath spins suppress the flip-flop rate of the neighbouring bath spins and yield a specific environment spectral density S(ω). Under dynamical decoupling pulses, this spectrum provides a general scaling for the coherence envelope and the coherence time, which can extend to an hour-long time scale. The characterized environment spectrum with ultra-long coherence time can be used to implement various quantum communication and information processing protocols.
Tracing information flow on a global scale using Internet chain-letter data
Liben-Nowell, David; Kleinberg, Jon
2008-01-01
Although information, news, and opinions continuously circulate in the worldwide social network, the actual mechanics of how any single piece of information spreads on a global scale have been a mystery. Here, we trace such information-spreading processes at a person-by-person level using methods to reconstruct the propagation of massively circulated Internet chain letters. We find that rather than fanning out widely, reaching many people in very few steps according to “small-world” principles, the progress of these chain letters proceeds in a narrow but very deep tree-like pattern, continuing for several hundred steps. This suggests a new and more complex picture for the spread of information through a social network. We describe a probabilistic model based on network clustering and asynchronous response times that produces trees with this characteristic structure on social-network data. PMID:18353985
U.S. stream flow measurement and data dissemination improve
Hirsch, Robert M.; Costa, John E.
2004-01-01
Stream flow information is essential for many important uses across a broad range of scales, including global water balances, engineering design, flood forecasting, reservoir operations, navigation, water supply, recreation, and environmental management. Growing populations and competing priorities for water, including preservation and restoration of aquatic habitat, are spurring demand for more accurate, timely, and accessible water data. To be most useful, stream flow information must be collected in a standardized manner, with a known accuracy, and for a long and continuous time period.
Detecting climate-change responses of plants and soil organic matter using isotopomers
NASA Astrophysics Data System (ADS)
Schleucher, Jürgen; Ehlers, Ina; Segura, Javier; Haei, Mahsa; Augusti, Angela; Köhler, Iris; Zuidema, Pieter; Nilsson, Mats; Öquist, Mats
2015-04-01
Responses of vegetation and soils to environmental changes will strongly influence future climate, and responses on century time scales are most important for feedbacks on the carbon cycle, climate models, prediction of crop productivity, and adaptation to climate change. That plants respond to increasing CO2 on century time scales has been proven by changes in stomatal index, but very little is known beyond this. In soil, the complexity of soil organic matter (SOM) has hampered a sufficient understanding of the temperature sensitivity of SOM turnover. Here we present new stable isotope methodology that allows detecting shifts in metabolism on long time scales and elucidating SOM turnover on the molecular level. Compound-specific isotope analysis measures isotope ratios of defined metabolites, but as an average over the entire molecule. Here we demonstrate how much more detailed information can be obtained from analyses of intramolecular distributions of stable isotopes, so-called isotopomer abundances. As the key tool, we use nuclear magnetic resonance (NMR) spectroscopy, which allows detecting isotope abundance with intramolecular resolution and without risk of isotope fractionation during analysis. Enzyme isotope fractionations create non-random isotopomer patterns in biochemical metabolites. At natural isotope abundance, these patterns continuously store metabolic information. We present a strategy for using these patterns to extract signals on plant physiology, climate variables, and their interactions. Applied in retrospective analyses to herbarium samples and tree-ring series, we detect century-time-scale metabolic changes in response to increasing atmospheric CO2, with no evidence for acclimatory reactions by the plants. In trees, the increase in photosynthesis expected from increasing CO2 ("CO2 fertilization") was diminished by increasing temperatures, which resolves the discrepancy between expected increases in photosynthesis and the commonly observed lack of biomass increases. Isotopomer patterns are a rich source of metabolic information, which can be retrieved from archives of plant material covering centuries and millennia, the time scales relevant for climate change. Boreal soils contain a huge carbon pool that may be particularly vulnerable to climate change. Biological activity persists in soils under frozen conditions, but it is largely unknown what controls it, and whether it differs from unfrozen conditions. In an incubation experiment, we traced the metabolism of 13C-labeled cellulose by soil microorganisms. NMR analysis revealed that the 13C label was converted both to respired CO2 and to phospholipid fatty acids, indicating that the polymeric substrate cellulose entered both catabolic and anabolic pathways. Both applications demonstrate a fundamental advantage of isotopomer analysis, namely that the abundances directly reflect biochemical processes. This allows obtaining metabolic information on millennial time scales, thus bridging between plant physiology and paleo sciences. It may also be key to characterizing SOM with sufficient resolution to understand current biogeochemical fluxes involving SOM and to identify molecular components and organisms that are key for SOM turnover.
Generation of skeletal mechanism by means of projected entropy participation indices
NASA Astrophysics Data System (ADS)
Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica
2017-11-01
When the dynamics of reactive systems develop very slow and very fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant-pressure, adiabatic batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands from observed hydraulic measurements are generally not capable of forecasting demands and providing uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regionally aggregated demands, at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
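Not the paper's double-seasonal ARIMA, but a minimal stand-in with the same structure: hourly demand regressed on daily (24 h) and weekly (168 h) Fourier harmonics using statsmodels. The synthetic demand series and the number of harmonics are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 24 * 7 * 8                                   # eight weeks of hourly data
t = np.arange(n)
# Synthetic hourly demand with daily (24 h) and weekly (168 h) cycles.
y = (100 + 20 * np.sin(2 * np.pi * t / 24) + 8 * np.sin(2 * np.pi * t / 168)
     + rng.normal(0, 3, n))

def fourier(t, period, K):
    # Sine/cosine pairs for harmonics 1..K of the given period.
    return np.column_stack([f(2 * np.pi * k * t / period)
                            for k in range(1, K + 1) for f in (np.sin, np.cos)])

X = sm.add_constant(np.hstack([fourier(t, 24, 2), fourier(t, 168, 2)]))
res = sm.OLS(y, X).fit()

# One-week-ahead forecast from the fitted harmonics.
tf = np.arange(n, n + 168)
Xf = sm.add_constant(np.hstack([fourier(tf, 24, 2), fourier(tf, 168, 2)]),
                     has_constant="add")
print(res.predict(Xf)[:5])
```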
NASA Astrophysics Data System (ADS)
Schwab, Markus J.; Brauer, Achim; Błaszkiewicz, Mirosław; Raab, Thomas; Wilmking, Martin
2015-04-01
Understanding the causes and effects of present-day climate change on landscapes and the human habitat faces two main challenges: (i) time series of instrumental observation are too short to cover the full range of variability, since mechanisms of climate change and landscape evolution work on different time scales that are often not susceptible to human perception; and (ii) there are distinct regional differences due to location with respect to oceanic/continental climatic influences, the geological underground, and the history and intensity of anthropogenic land use. Both challenges are central to the ICLEA research strategy and demand a high degree of interdisciplinarity. In particular, the need to link observations and measurements of ongoing changes with information from the past taken from natural archives requires joint work of scientists with very different time perspectives: on the one hand, scientists who work at geological time scales of thousands of years and more, and on the other hand, those observing and investigating recent processes at short time scales. The GFZ, Greifswald University and the Brandenburg University of Technology, together with their partner the Polish Academy of Sciences, strive to focus their research capacities and expertise in ICLEA. ICLEA offers young researchers an interdisciplinary and structured education and promotes their early independence through coaching and mentoring. Postdoctoral rotation positions at the ICLEA partner institutions ensure mobility of young researchers and promote dissemination of information and expertise between disciplines. Training, research and analytical workshops between research partners of the ICLEA virtual institute are another important measure to qualify young researchers. The long-term mission of the Virtual Institute is to provide a substantiated data basis for sustained environmental maintenance, based on a profound process understanding at all relevant time scales. The aim is to explore processes of climate and landscape evolution in a historical cultural landscape extending from northeastern Germany into northwestern Poland. The northern-central European lowlands serve as a natural laboratory, providing an ideal case for a systematic and holistic approach. In ICLEA five complementary work packages (WP) are established according to the key research aspects. WP 1 focuses on monitoring, mainly hydrology and soil moisture as well as meteorological parameters. WP 2 links present-day and future monitoring data with the most recent past through analyzing satellite images; this WP also provides larger spatial scales. WP 3-5 focus on different natural archives to obtain a broad variety of high-quality proxy data. Tree rings provide sub-seasonal data for the last centuries up to a few millennia, varved lake sediments cover the entire research time interval at seasonal to decadal resolution, and palaeosoils and geomorphological features also cover the entire period, but not continuously and with lower resolution. Complementary information, such as climate, tree ecophysiological and limnological data, is provided by cooperation with associated partners. Further information about ICLEA: www.iclea.de
Hallmann, Konstantin; Griebeler, Eva Maria
2018-06-01
Allometric relationships linking species characteristics to body size or mass (scaling) are important in biology. However, studies on the scaling of life history traits in reptiles (the nonavian Reptilia) are rather scarce, especially for the clades Crocodilia, Testudines, and Rhynchocephalia (a single extant species, the tuatara). Previous studies on the scaling of reptilian life history traits indicated that they differ from those seen in the other amniotes (mammals and birds), but so far most comparative studies used small species samples and analyses that were not phylogenetically informed. Here, we analyzed the scaling of nine life history traits with adult body mass for crocodiles (n = 22), squamates (n = 294), turtles (n = 52), and reptiles overall (n = 369). We used for the first time a phylogenetically informed approach for crocodiles, turtles, and the whole group of reptiles. We explored differences in scaling relationships between the reptilian clades Crocodilia, Squamata, and Testudines, as well as differences between reptiles, mammals, and birds. Finally, we applied our scaling relationships in order to gain new insights into the degree of exceptionality of the tuatara's life history within reptiles. We observed no difference in the scaling of any of the studied life history traits with body mass between squamates, crocodiles, and turtles, except for clutch size and egg weight, which showed small differences between these groups. Compared to birds and mammals, scaling relationships of reptiles were similar for time-related traits, but they differed for reproductive traits. The tuatara's life history is more similar to that of a similar-sized turtle or crocodile than to that of a squamate.
On the time-scales of magmatism at island-arc volcanoes.
Turner, S P
2002-12-15
Precise information on time-scales and rates of change is fundamental to an understanding of natural processes and the development of quantitative physical models in the Earth sciences. U-series isotope studies are revolutionizing this field by providing time information in the range 10²-10⁴ years, which is similar to that of many modern Earth processes. I review how the application of U-series isotopes has been used to constrain the time-scales of magma formation, ascent and storage beneath island-arc volcanoes. Different elements are distilled off the subducting plate at different times and in different places. Contributions from subducted sediments to island-arc lava sources appear to occur some 350 kyr to 4 Myr prior to eruption. Fluid release from the subducting oceanic crust into the mantle wedge may be a multi-stage process and occurs over a period ranging from a few hundred kyr to less than one kyr prior to eruption. This implies that dehydration commences prior to the initiation of partial melting within the mantle wedge, which is consistent with recent evidence that the onset of melting is controlled by an isotherm and thus the thermal structure within the wedge. U-Pa disequilibria appear to require a component of decompression melting, possibly due to the development of gravitational instabilities. The preservation of large ²²⁶Ra disequilibria permits only a short period of time between fluid addition and eruption. This requires rapid melt segregation, magma ascent by channelled flow and minimal residence time within the lithosphere. The evolution from basalt to basaltic andesite probably occurs rapidly during ascent or in magma reservoirs inferred from some geophysical data to lie within the lithospheric mantle. The flux across the Moho is broadly andesitic, and some magmas subsequently stall in more shallow crustal-level magma chambers, where they evolve to more differentiated compositions on time-scales of a few thousand years or less.
NASA Astrophysics Data System (ADS)
Schuite, Jonathan; Longuevergne, Laurent; Bour, Olivier; Burbey, Thomas J.; Boudin, Frédéric
2017-04-01
Flow through reservoirs such as fractured media is powered by pressure gradients, which also generate measurable poroelastic deformation of the rock body. The combined analysis of ground surface deformation and sub-surface fluid pressure provides valuable insights into a reservoir's structure and hydromechanical properties, which are of interest for deep-seated CO2 or nuclear waste storage, for instance. Amongst all surveying tools, surface tiltmeters offer the possibility to grasp hydraulically induced deformation over a broad range of time scales with a remarkable precision (1 nanoradian). Here, we investigate the information content of transient surface tilt generated by flow in a kilometer-scale sub-vertical fault zone and its surrounding fractured rock matrix. Our approach involves the combined analysis of field data and results of a fully coupled poroelastic model, where fault and matrix are represented as equivalent homogeneous domains. The signature of pressure changes in the fault zone due to pumping cycles is clearly recognizable in the field tilt data, and we aim to explain the peculiar features that appear in: 1) the tilt time series alone, from a set of four instruments; and 2) the ratio of tilt over pressure. With the model, we show that the shape of tilt measurements on both sides of a fault zone is sensitive to its diffusivity and its elastic modulus. In particular, we show that a few well-placed tiltmeters (on each side of a fault) give more information on the medium's properties than spatially extensive surface displacement maps. Furthermore, the ratio of tilt over pressure predominantly carries information about the system's dynamic behavior and the extent of the fault zone, and allows separating the contributions of flow in the different compartments. Hence, tiltmeters are well suited to characterize hydromechanical processes associated with fault zone hydrogeology at short time scales, where space-borne surveying methods fail to capture any deformation signal.
NASA Astrophysics Data System (ADS)
Luchner, Jakob; Anghileri, Daniela; Castelletti, Andrea
2017-04-01
Real-time control of multi-purpose reservoirs can benefit significantly from hydro-meteorological forecast products. Because of their reliability, the most used forecasts span time scales from hours to a few days and are suitable for short-term operation targets such as flood control. In recent years, hydro-meteorological forecasts have become more accurate and reliable on longer time scales, which are more relevant to long-term reservoir operation targets such as water supply. While the forecast quality of such products has been studied extensively, the forecast value, i.e. the operational effectiveness of using forecasts to support water management, has been explored only to a limited extent. It is comparatively easy to identify the most effective forecasting information needed to design reservoir operation rules for flood control, but it is not straightforward to identify which forecast variable and lead time are needed to define effective hedging rules for operational targets with slow dynamics such as water supply. The task is even more complex when multiple targets, with diverse slow and fast dynamics, are considered at the same time. In these cases, the relative importance of different pieces of information, e.g. the magnitude and timing of the peak flow rate and the inflow accumulated over different time lags, may vary depending on the season or the hydrological conditions. In this work, we analyze the relationship between operational forecast value and streamflow forecast horizon for different multi-purpose reservoir trade-offs. We use the Information Selection and Assessment (ISA) framework to identify the most effective forecast variables and horizons for informing multi-objective reservoir operation over short- and long-term temporal scales. The ISA framework is an automatic iterative procedure to discriminate the information with the highest potential to improve multi-objective reservoir operating performance. Forecast variables and horizons are selected using a feature selection technique, which determines the most informative combination in a multivariate regression model of the optimal reservoir releases, based on perfect information at a fixed objective trade-off. The improved reservoir operation is evaluated against optimal reservoir operation conditioned upon perfect information on future disturbances, and against basic reservoir operation using only the day of the year and the reservoir level. Different objective trade-offs are selected to analyze the resulting differences in improved reservoir operation and in the selected forecast variables and horizons. For comparison, the effective streamflow forecast horizon determined by the ISA framework is benchmarked against the performances obtained with a deterministic model predictive control (MPC) optimization scheme. Both the ISA framework and the MPC optimization scheme are applied to the real-world case study of Lake Como, Italy, using perfect streamflow forecast information. The principal operation targets for Lake Como are flood control and downstream water supply, which makes its operation a suitable case study. Results provide critical feedback to reservoir operators on the use of long-term streamflow forecasts, and to the hydro-meteorological forecasting community with respect to the forecast horizon needed from reliable streamflow forecasts.
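A minimal sketch of the selection idea, assuming (as an illustration, not the ISA implementation) a forward feature selector over candidate forecast horizons: each candidate is the inflow accumulated over a different lead time, and the selector keeps those most informative of a stand-in for the optimal release.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
n = 2000
inflow = rng.gamma(2.0, 50.0, n + 30)

# Candidate predictors: inflow accumulated over 1..30-day horizons.
horizons = list(range(1, 31))
X = np.column_stack([np.convolve(inflow, np.ones(h), "valid")[:n] for h in horizons])

# Stand-in for the optimal releases that perfect-information optimization
# would produce: here driven by the 7-day accumulated inflow plus noise.
y = 0.8 * X[:, 6] + rng.normal(0, 20, n)

sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3).fit(X, y)
print("selected horizons (days):",
      [h for h, keep in zip(horizons, sfs.get_support()) if keep])
```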
Using Remotely Sensed Information for Near Real-Time Landslide Hazard Assessment
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Adler, Robert; Peters-Lidard, Christa
2013-01-01
The increasing availability of remotely sensed precipitation and surface products provides a unique opportunity to explore how landslide susceptibility and hazard assessment may be approached at larger spatial scales with higher resolution remote sensing products. A prototype global landslide hazard assessment framework has been developed to evaluate how landslide susceptibility and satellite-derived precipitation estimates can be used to identify potential landslide conditions in near-real time. Preliminary analysis of this algorithm suggests that forecasting errors are geographically variable due to the resolution and accuracy of the current susceptibility map and the application of satellite-based rainfall estimates. This research is currently working to improve the algorithm by considering higher spatial and temporal resolution landslide susceptibility information and by testing different rainfall triggering thresholds, antecedent rainfall scenarios, and various surface products at regional and global scales.
The role of topography on catchment‐scale water residence time
McGuire, K.J.; McDonnell, Jeffery J.; Weiler, M.; Kendall, C.; McGlynn, B.L.; Welker, J.M.; Seibert, J.
2005-01-01
The age, or residence time, of water is a fundamental descriptor of catchment hydrology, revealing information about the storage, flow pathways, and source of water in a single integrated measure. While there has been tremendous recent interest in residence time estimation to characterize watersheds, there are relatively few studies that have quantified residence time at the watershed scale, and fewer still that have extended those results beyond single catchments to larger landscape scales. We examined topographic controls on residence time for seven catchments (0.085–62.4 km²) that represent diverse geologic and geomorphic conditions in the western Cascade Mountains of Oregon. Our primary objective was to determine the dominant physical controls on catchment-scale water residence time and specifically test the hypothesis that residence time is related to the size of the basin. Residence times were estimated by simple convolution models that described the transfer of precipitation isotopic composition to the stream network. We found that base flow mean residence times for exponential distributions ranged from 0.8 to 3.3 years. Mean residence time showed no correlation to basin area (r² < 0.01) but instead was correlated (r² = 0.91) to catchment terrain indices representing the flow path distance and flow path gradient to the stream network. These results illustrate that landscape organization (i.e., topography) rather than basin area controls catchment-scale transport. Results from this study may provide a framework for describing scale-invariant transport across climatic and geologic conditions, whereby the internal form and structure of the basin defines the first-order control on base flow residence time.
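A minimal sketch of the convolution approach, assuming an exponential transit time distribution g(t) = exp(-t/τ)/τ: the stream isotope signal is modeled as the precipitation isotope input convolved with g, and the mean residence time is chosen by least squares. Synthetic weekly data, not the study's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(1040)                        # twenty years of weekly samples
# Synthetic precipitation delta-18O input: seasonal cycle plus noise (per mil).
precip = -10 + 3 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 1, weeks.size)

def stream_signal(precip, tau):
    # Exponential transit time distribution, truncated at 5*tau and normalized.
    t = np.arange(int(5 * tau) + 1)
    g = np.exp(-t / tau)
    g /= g.sum()
    return np.convolve(precip, g)[:len(precip)]

obs = stream_signal(precip, tau=80) + rng.normal(0, 0.1, weeks.size)

# Grid-search the mean residence time that best reproduces the stream signal,
# skipping the convolution warm-up period.
taus = np.arange(10, 200, 2)
errors = [np.mean((stream_signal(precip, tau)[500:] - obs[500:]) ** 2) for tau in taus]
print("best-fit mean residence time:", taus[int(np.argmin(errors))], "weeks")
```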
DEXTER: Disease-Expression Relation Extraction from Text.
Gupta, Samir; Dingerdissen, Hayley; Ross, Karen E; Hu, Yu; Wu, Cathy H; Mazumder, Raja; Vijay-Shanker, K
2018-01-01
Gene expression levels affect biological processes and play a key role in many diseases. Characterizing expression profiles is useful for clinical research, diagnostics, and prognostics of diseases. There are currently several high-quality databases that capture gene expression information, obtained mostly from large-scale studies such as microarray and next-generation sequencing technologies, in the context of disease. The scientific literature is another rich source of information on gene expression-disease relationships that has not only been captured from large-scale studies but has also been observed in thousands of small-scale studies. Expression information obtained from the literature through manual curation can extend expression databases. While many of the existing databases include information from the literature, they are limited by the time-consuming nature of manual curation and have difficulty keeping up with the explosion of publications in the biomedical field. In this work, we describe an automated text-mining tool, Disease-Expression Relation Extraction from Text (DEXTER), to extract information from the literature on gene and microRNA expression in the context of disease. One of the motivations in developing DEXTER was to extend the BioXpress database, a cancer-focused gene expression database that includes data derived from large-scale experiments and manual curation of publications. The literature-based portion of BioXpress lags significantly behind the expression information obtained from large-scale studies and can benefit from our text-mined results. We conducted two different evaluations to measure the accuracy of our text-mining tool and achieved average F-scores of 88.51% and 81.81%, respectively. Also, to demonstrate the ability to extract rich expression information in different disease-related scenarios, we used DEXTER to extract differential expression information for 2024 genes in lung cancer, 115 glycosyltransferases in 62 cancers, and 826 microRNAs in 171 cancers. All extractions using DEXTER are integrated into the literature-based portion of BioXpress. Database URL: http://biotm.cis.udel.edu/DEXTER.
Estimation of Time Scales in Unsteady Flows in a Turbomachinery Rig
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.
2004-01-01
Time scales in turbulent and transitional flow provide a link between experimental data and modeling, both in terms of physical content and for quantitative assessment. The problem of interest here is the definition of time scales in an unsteady flow. Using representative samples of data from the GEAE low-pressure turbine experiment in a low-speed research turbine facility with wake-induced transition, we document several methods to extract dominant frequencies and compare the results. We show that conventional methods of time scale evaluation (based on autocorrelation functions and on Fourier spectra) and wavelet-based methods provide similar information when applied to stationary signals. We also show the greater flexibility of the wavelet-based methods when dealing with intermittent or strongly modulated data, as are encountered in transitioning boundary layers and in flows with unsteady forcing associated with wake passing. We define phase-averaged dominant frequencies that characterize the turbulence associated with freestream conditions and with the passing wakes downstream of a rotor. The relevance of these results for modeling is discussed.
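A minimal sketch contrasting two of the conventional time-scale estimates mentioned above (autocorrelation and Fourier spectrum) on a synthetic stationary signal; the sampling rate and signal are assumptions, not the rig data:

```python
# Sketch: estimate a dominant frequency from (a) the Fourier spectrum and
# (b) the first non-zero-lag peak of the autocorrelation. Both methods
# should agree for a stationary, periodic signal in noise.
import numpy as np

fs = 2000.0                                # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.random.randn(t.size)
x -= x.mean()

# (a) Dominant frequency from the Fourier spectrum.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_fft = freqs[spec[1:].argmax() + 1]       # skip the zero-frequency bin

# (b) Time scale from the first maximum of the autocorrelation at lag > 0.
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]
lag = acf[1:].argmax() + 1
print(f"FFT: {f_fft:.0f} Hz, autocorrelation: {fs / lag:.0f} Hz")
```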
Vergara, Pablo M.; Soto, Gerardo E.; Rodewald, Amanda D.; Meneses, Luis O.; Pérez-Hernández, Christian G.
2016-01-01
Theoretical models predict that animals should make foraging decisions after assessing the quality of available habitat, but most models fail to consider the spatio-temporal scales at which animals perceive habitat availability. We tested three foraging strategies that explain how Magellanic woodpeckers (Campephilus magellanicus) assess the relative quality of trees: 1) Woodpeckers with local knowledge select trees based on the available trees in the immediate vicinity. 2) Woodpeckers lacking local knowledge select trees based on their availability at previously visited locations. 3) Woodpeckers using information from long-term memory select trees based on knowledge about trees available within the entire landscape. We observed foraging woodpeckers and used a Brownian Bridge Movement Model to identify trees available to woodpeckers along foraging routes. Woodpeckers selected trees with a later decay stage than available trees. Selection models indicated that preferences of Magellanic woodpeckers were based on clusters of trees near the most recently visited trees, thus suggesting that woodpeckers use visual cues from neighboring trees. In a second analysis, Cox’s proportional hazards models showed that woodpeckers used information consolidated across broader spatial scales to adjust tree residence times. Specifically, woodpeckers spent more time at trees with larger diameters and in a more advanced stage of decay than trees available along their routes. These results suggest that Magellanic woodpeckers make foraging decisions based on the relative quality of trees that they perceive and memorize information at different spatio-temporal scales. PMID:27416115
Construction of regulatory networks using expression time-series data of a genotyped population.
Yeung, Ka Yee; Dombek, Kenneth M; Lo, Kenneth; Mittler, John E; Zhu, Jun; Schadt, Eric E; Bumgarner, Roger E; Raftery, Adrian E
2011-11-29
The inference of regulatory and biochemical networks from large-scale genomics data is a basic problem in molecular biology. The goal is to generate testable hypotheses of gene-to-gene influences and subsequently to design bench experiments to confirm these network predictions. Coexpression of genes in large-scale gene-expression data implies coregulation and potential gene-gene interactions, but provides little information about the direction of influences. Here, we use both time-series data and genetics data to infer directionality of edges in regulatory networks: time-series data contain information about the chronological order of regulatory events and genetics data allow us to map DNA variations to variations at the RNA level. We generate microarray data measuring time-dependent gene-expression levels in 95 genotyped yeast segregants subjected to a drug perturbation. We develop a Bayesian model averaging regression algorithm that incorporates external information from diverse data types to infer regulatory networks from the time-series and genetics data. Our algorithm is capable of generating feedback loops. We show that our inferred network recovers existing and novel regulatory relationships. Following network construction, we generate independent microarray data on selected deletion mutants to prospectively test network predictions. We demonstrate the potential of our network to discover de novo transcription-factor binding sites. Applying our construction method to previously published data demonstrates that our method is competitive with leading network construction algorithms in the literature.
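A toy sketch of the lag-based directionality idea, using BIC-weighted single-regulator lagged regressions as a greatly simplified stand-in for the paper's Bayesian model averaging algorithm; the data and gene indices are synthetic:

```python
# Toy sketch: regress a target gene's expression at time t on each candidate
# regulator at time t-1, then weight the single-regulator models by BIC to
# obtain averaged, posterior-style inclusion weights.
import numpy as np

rng = np.random.default_rng(1)
n_times, n_genes = 30, 5
expr = rng.normal(size=(n_times, n_genes))
expr[1:, 0] += 0.8 * expr[:-1, 2]        # gene 2 drives gene 0 with lag 1

target = 0
y = expr[1:, target]
bics = []
for reg in range(n_genes):
    x = np.column_stack([np.ones(n_times - 1), expr[:-1, reg]])
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    rss = ((y - x @ beta) ** 2).sum()
    n = y.size
    bics.append(n * np.log(rss / n) + 2 * np.log(n))   # 2 parameters
bics = np.array(bics)
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()
print("weights for candidate regulators of gene 0:", weights.round(2))
```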
Large-scale neuromorphic computing systems
NASA Astrophysics Data System (ADS)
Furber, Steve
2016-10-01
Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.
Behaviors of susceptible-infected epidemics on scale-free networks with identical infectivity
NASA Astrophysics Data System (ADS)
Zhou, Tao; Liu, Jian-Guo; Bai, Wen-Jie; Chen, Guanrong; Wang, Bing-Hong
2006-11-01
In this paper, we propose a susceptible-infected model with identical infectivity, in which, at every time step, each node can only contact a constant number of neighbors. We implemented this model on scale-free networks and found that the infected population grows exponentially, with a time scale proportional to the spreading rate. Furthermore, by numerical simulation, we demonstrated that targeted immunization in the present model is much less efficient than in the standard susceptible-infected model. Finally, we investigate a fast spreading strategy when only local information is available. In contrast to the extensively studied path-finding strategy, the strategy preferring small-degree nodes is more efficient than the one preferring large-degree nodes. Our results indicate the existence of an essential relationship between network traffic and network epidemics on scale-free networks.
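A minimal sketch of the identical-infectivity rule on a scale-free network, assuming networkx; the network size, number of contacts A, and spreading rate lam are illustrative parameters:

```python
# Sketch of the identical-infectivity SI model: at each step every infected
# node contacts exactly A of its neighbors (regardless of its degree), so
# hubs do not contact all their neighbors at once.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=5000, m=3)   # scale-free test network
A, lam = 2, 0.3                             # contacts per step, spreading rate
infected = {0}

for step in range(40):
    new = set()
    for i in infected:
        nbrs = list(G.neighbors(i))
        contacts = random.sample(nbrs, min(A, len(nbrs)))
        new.update(j for j in contacts
                   if j not in infected and random.random() < lam)
    infected |= new
    if step % 10 == 0:
        print(step, len(infected))          # early growth is roughly exponential
```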
Statistical measures of Planck scale signal correlations in interferometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig J.; Kwon, Ohkyung
2015-06-22
A model-independent statistical framework is presented to interpret data from systems where the mean time derivative of positional cross correlation between world lines, a measure of spreading in a quantum geometrical wave function, is measured with a precision smaller than the Planck time. The framework provides a general way to constrain possible departures from perfect independence of classical world lines, associated with Planck scale bounds on positional information. A parametrized candidate set of possible correlation functions is shown to be consistent with the known causal structure of the classical geometry measured by an apparatus, and the holographic scaling of information suggested by gravity. Frequency-domain power spectra are derived that can be compared with interferometer data. As a result, simple projections of sensitivity for specific experimental set-ups suggest that measurements will directly yield constraints on a universal time derivative of the correlation function, and thereby confirm or rule out a class of Planck scale departures from classical geometry.
NASA Astrophysics Data System (ADS)
Erickson, M.; Olaguer, J.; Wijesinghe, A.; Colvin, J.; Neish, B.; Williams, J.
2014-12-01
It is becoming increasingly important to understand the emissions and health effects of industrial facilities. Many areas have no or limited sustained monitoring capabilities, making it difficult to quantify the major pollution sources affecting human health, especially in fence line communities. Developments in real-time monitoring and micro-scale modeling offer unique ways to tackle these complex issues. This presentation will demonstrate the capability of coupling real-time observations with micro-scale modeling to provide real-time information and near real-time source attribution. The Houston Advanced Research Center constructed the Mobile Acquisition of Real-time Concentrations (MARC) laboratory. MARC consists of a Ford E-350 passenger van outfitted with a Proton Transfer Reaction Mass Spectrometer (PTR-MS) and meteorological equipment, allowing fast measurement of various VOCs important to air quality. The data recorded by the van are uploaded to an off-site database and broadcast to a website in real time. This allows off-site personnel to monitor MARC's observations and to provide immediate input to the MARC operators on how best to achieve project objectives. The information stored in the database can also be used to provide near real-time source attribution. An inverse model has been used to ascertain the amount, location, and timing of emissions based on MARC measurements in the vicinity of industrial sites. The inverse model is based on a 3D micro-scale Eulerian forward and adjoint air quality model known as the HARC model. The HARC model uses output from the Quick Urban and Industrial Complex (QUIC) wind model and requires a 3D digital model of the monitored facility based on lidar or industrial permit data. MARC is one of the instrument platforms deployed during the 2014 Benzene and other Toxics Exposure Study (BEE-TEX) in Houston, TX. The main goal of the study is to quantify and explain the origin of ambient exposure to hazardous air pollutants in an industrial fence line community near the Houston Ship Channel. Preliminary results derived from analysis of MARC observations during the BEE-TEX experiment will be presented.
Classical Wave Model of Quantum-Like Processing in Brain
NASA Astrophysics Data System (ADS)
Khrennikov, A.
2011-01-01
We discuss the conjecture on quantum-like (QL) processing of information in the brain. It is not based on a physical quantum brain (e.g., Penrose) with quantum physical carriers of information. In our approach the brain creates the QL representation (QLR) of information in Hilbert space and uses quantum information rules in decision making. The existence of such a QLR was (at least preliminarily) confirmed by experimental data from cognitive psychology. The violation of the law of total probability in these experiments is an important sign of the nonclassicality of the data. In the so-called "constructive wave function approach" such data can be represented by complex amplitudes. We presented [1,2] the QL model of decision making. In this paper we speculate on a possible physical realization of the QLR in the brain: a classical wave model producing the QLR. It is based on the variety of time scales in the brain. Each pair of scales (fine: the background fluctuations of the electromagnetic field; rough: the cognitive image scale) induces a QL representation. The background field plays the crucial role in the creation of "superstrong QL correlations" in the brain.
NASA Astrophysics Data System (ADS)
Schuite, Jonathan; Longuevergne, Laurent; Bour, Olivier; Burbey, Thomas J.; Boudin, Frédérick; Lavenant, Nicolas; Davy, Philippe
2017-12-01
Flow through reservoirs such as fractured media is powered by head gradients, which also generate measurable poroelastic deformation of the rock body. The combined analysis of surface deformation and subsurface pressure provides valuable insights into a reservoir's structure and hydromechanical properties, which are of interest for deep-seated CO2 or nuclear waste storage, for instance. Among all surveying tools, surface tiltmeters offer the possibility to capture hydraulically induced deformations over a broad range of time scales with remarkable precision. Here we investigate the information content of transient surface tilt generated by the pressurization of a kilometer-scale subvertical fault zone. Our approach combines field data with results of a fully coupled poromechanical model. The signature of pressure changes in the fault zone due to pumping cycles is clearly recognizable in field tilt data, and we aim to explain the peculiar features that appear in (1) the tilt time series alone, from a set of four instruments, and (2) the ratio of tilt over pressure. We show that the shape of tilt measurements on both sides of a fault zone is sensitive to its diffusivity and its elastic modulus. The ratio of tilt over pressure predominantly encompasses information about the system's dynamic behavior and the extent of the fault zone, and allows separating the contributions of flow in the different compartments. Hence, tiltmeters are well suited to characterize hydromechanical processes associated with fault zone hydrogeology at short time scales, where spaceborne surveying methods fail to recognize any deformation signal.
Impulse-induced optimum signal amplification in scale-free networks.
Martínez, Pedro J; Chacón, Ricardo
2016-04-01
Optimizing information transmission across a network is an essential task for controlling and manipulating generic information-processing systems. Here, we show how topological amplification effects in scale-free networks of signaling devices are optimally enhanced when the impulse transmitted by periodic external signals (time integral over two consecutive zeros) is maximum. This is demonstrated theoretically by means of a star-like network of overdamped bistable systems subjected to generic zero-mean periodic signals and confirmed numerically by simulations of scale-free networks of such systems. Our results show that the enhancer effect of increasing values of the signal's impulse is due to a correlative increase of the energy transmitted by the periodic signals, while it is found to be resonant-like with respect to the topology-induced amplification mechanism.
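To make the impulse notion concrete: for a sinusoidal drive F(t) = gamma*sin(omega*t), the impulse (time integral between two consecutive zeros) is 2*gamma/omega, so lowering omega at fixed amplitude raises the impulse. The sketch below uses a single overdamped bistable unit rather than the paper's star network, and all parameters are illustrative:

```python
# Sketch: impulse-controlled switching of one overdamped bistable unit
# dx/dt = x - x**3 + F(t). At fixed amplitude, the low-frequency drive
# (impulse 2*gamma/omega = 1.0) switches wells; the high-frequency drive
# (impulse 0.1) averages out and does not.
import numpy as np

def simulate(gamma, omega, dt=1e-3, T=50.0):
    t = np.arange(0, T, dt)
    x = np.empty_like(t)
    x[0] = -1.0                       # start in the left well
    for k in range(t.size - 1):
        F = gamma * np.sin(omega * t[k])
        x[k + 1] = x[k] + dt * (x[k] - x[k]**3 + F)
    return x

for omega in (10.0, 1.0):             # smaller omega -> larger impulse
    x = simulate(gamma=0.5, omega=omega)
    print(f"omega={omega}: reaches right well: {bool((x > 0.5).any())}")
```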
Road Network State Estimation Using Random Forest Ensemble Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yi; Edara, Praveen; Chang, Yohan
Network-scale travel time prediction not only enables traffic management centers (TMC) to proactively implement traffic management strategies, but also allows travelers to make informed decisions about route choices between various origins and destinations. In this paper, a random forest estimator was proposed to predict travel time in a network. The estimator was trained using two years of historical travel time data for a case study network in St. Louis, Missouri. Both temporal and spatial effects were considered in the modeling process. The random forest models predicted travel times accurately during both congested and uncongested traffic conditions. The computational times for the models were low, thus useful for real-time traffic management and traveler information applications.
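A minimal sketch of the approach, assuming scikit-learn; the feature names (hour of day, day of week, upstream travel time) and the synthetic data are placeholders, not the St. Louis dataset:

```python
# Sketch: a random forest regressor trained on temporal and spatial
# features to predict link travel times.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
hour = rng.integers(0, 24, n)              # temporal effect
dow = rng.integers(0, 7, n)                # temporal effect
upstream_tt = rng.gamma(3, 2, n)           # spatial effect: neighboring link
peak = ((hour == 8) | (hour == 17)) & (dow < 5)
y = 5 + 0.8 * upstream_tt + 6 * peak + rng.normal(0, 1, n)

X = np.column_stack([hour, dow, upstream_tt])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")
```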
Characterization of double continuum formulations of transport through pore-scale information
NASA Astrophysics Data System (ADS)
Porta, G.; Ceriotti, G.; Bijeljic, B.
2016-12-01
Information on pore-scale characteristics is becoming increasingly available at unprecedented levels of detail from modern visualization/data-acquisition techniques. These advancements have not been fully matched by operational procedures for translating theoretical findings into a reduced uncertainty in the outputs of the continuum-scale models employed at large scales. We present here a modeling approach that rests on pore-scale information to achieve a complete characterization of a double continuum model of transport and fluid-fluid reactive processes. Our model makes full use of pore-scale velocity distributions to identify mobile and immobile regions. We do so on the basis of a pointwise (in the pore space) evaluation of the relative strength of advection and diffusion time scales, as rendered by spatially variable values of local Péclet numbers. After mobile and immobile regions are demarcated, we build a simplified unit cell which is employed as a representative proxy of the real porous domain. This model geometry is then employed to simplify the computation of the effective parameters embedded in the double continuum transport model, while retaining relevant information from the pore-scale characterization of the geometry and velocity field. We document results which illustrate the applicability of the methodology to predict transport of a passive tracer within two- and three-dimensional media upon comparison with direct pore-scale numerical simulation of transport in the same geometrical settings. We also show preliminary results on the extension of this model to fluid-fluid reactive transport processes. In this context, we focus on results obtained in two-dimensional porous systems. We discuss the impact of the critical quantities required as input to our modeling approach on the continuum-scale outputs. We identify the key limitations of the proposed methodology and discuss its capability also in comparison with alternative approaches grounded, e.g., on nonlocal and particle-based approximations.
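A plausible form of the pointwise demarcation criterion (symbols and threshold assumed for illustration; the paper's exact definition may differ):

```latex
% A location x in the pore space is assigned to the mobile region when the
% local advection time is shorter than the diffusion time over a
% characteristic length \lambda, i.e. when the local P\'eclet number
\mathrm{Pe}(\mathbf{x}) \;=\; \frac{t_{\mathrm{diff}}}{t_{\mathrm{adv}}}
\;=\; \frac{\lambda^{2}/D_{m}}{\lambda/|\mathbf{u}(\mathbf{x})|}
\;=\; \frac{|\mathbf{u}(\mathbf{x})|\,\lambda}{D_{m}} \;>\; \mathrm{Pe}^{*},
% with u(x) the local velocity, D_m the molecular diffusion coefficient,
% and Pe* a threshold value.
```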
Direction of information flow in large-scale resting-state networks is frequency-dependent.
Hillebrand, Arjan; Tewarie, Prejaas; van Dellen, Edwin; Yu, Meichen; Carbo, Ellen W S; Douw, Linda; Gouw, Alida A; van Straaten, Elisabeth C W; Stam, Cornelis J
2016-04-05
Normal brain function requires interactions between spatially separated, and functionally specialized, macroscopic regions, yet the directionality of these interactions in large-scale functional networks is unknown. Magnetoencephalography was used to determine the directionality of these interactions, where directionality was inferred from time series of beamformer-reconstructed estimates of neuronal activation, using a recently proposed measure of phase transfer entropy. We observed well-organized posterior-to-anterior patterns of information flow in the higher-frequency bands (alpha1, alpha2, and beta band), dominated by regions in the visual cortex and posterior default mode network. Opposite patterns of anterior-to-posterior flow were found in the theta band, involving mainly regions in the frontal lobe that were sending information to a more distributed network. Many strong information senders in the theta band were also frequent receivers in the alpha2 band, and vice versa. Our results provide evidence that large-scale resting-state patterns of information flow in the human brain form frequency-dependent reentry loops that are dominated by flow from parieto-occipital cortex to integrative frontal areas in the higher-frequency bands, which is mirrored by a theta band anterior-to-posterior flow.
The validation and translation of Multidimensional Measure of Informed Choice in Greek.
Gourounti, Kleanthi; Sandall, Jane
2011-04-01
The aim was to translate the original English version of the Multidimensional Measure of Informed Choice (MMIC) into Greek, to adapt it culturally to Greece, and to determine its psychometric properties for the assessment of informed choice in antenatal screening for Down syndrome. A survey using self-administered questionnaires was conducted in a public hospital in Athens, Greece, with 135 pregnant women at a gestational age between the 11th and 20th week, just prior to antenatal screening for Down syndrome. Of these women, 96% had a positive attitude towards screening and 45% had a good level of knowledge concerning the screening process for Down syndrome. Using a standard measure of informed choice, validated for use in Greek, it was found that 44% of women made an informed choice, and thus 56% made an uninformed choice. The internal consistency of the scales was good; Cronbach's alpha was 0.76 for the attitude scale and 0.64 for the knowledge scale, suggesting that the items were appropriate measures. A factor analysis of the attitude scale indicated three factors with an eigenvalue over 1.0; these factors accounted for 87% of the variance. This study indicates that the Greek version of the MMIC appears to be a reliable and valid tool for measuring informed choice in antenatal screening for Down syndrome. Given its short length and completion time, it seems to be a practical instrument for use in Greek antenatal clinics.
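For reference, a minimal sketch of the internal-consistency statistic reported above (Cronbach's alpha); the synthetic response matrix is illustrative, not the survey data:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance
# of the total score), for a respondents x items response matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(135, 1))                    # shared attitude factor
responses = latent + rng.normal(0.0, 0.8, (135, 8))   # 8 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```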
Finding the Age of the Earth by Physics or by Faith?
ERIC Educational Resources Information Center
Brush, Stephen G.
1982-01-01
Refutes scientific creationists' arguments that the earth is less than 10,000 years old by presenting information related to the time scales for creation and evolution models, times from stellar distances, Kelvin's estimate of the earth's age, radioactive decay, radiometric dating, and the decay of the earth's magnetic field. (DC)
Generalized activity equations for spiking neural network dynamics.
Buice, Michael A; Chow, Carson C
2013-01-01
Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate than rate neurons because of the inherent disparity in time scales: the spike duration is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.
Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains.
Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano
2016-11-01
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-millisecond scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
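A minimal matrix analogue of the space-by-time idea on synthetic spike counts; the paper's actual methods are richer tensor decompositions (e.g. with orthogonality or non-negativity constraints), and the data here are illustrative:

```python
# Sketch: extract spatial modes (who fires together) and temporal modes
# (when they fire) from trial-averaged spike counts by SVD, then describe
# each single trial by its activation coefficients on those modes.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_bins = 60, 20, 40
spikes = rng.poisson(1.0, size=(n_trials, n_neurons, n_bins)).astype(float)
spikes[:30, :10, 10:20] += rng.poisson(2.0, size=(30, 10, 10))  # stimulus A

U, s, Vt = np.linalg.svd(spikes.mean(axis=0), full_matrices=False)
rank = 3
spatial = U[:, :rank]              # neurons x modes
temporal = Vt[:rank]               # modes x time

# Trial-dependent activation coefficients: project each trial on the modes.
coeffs = np.array([spatial.T @ trial @ temporal.T for trial in spikes])
print("trial coefficients:", coeffs.shape)   # (trials, modes, modes)
print("mode-0 activation, stim A trials vs others:",
      round(coeffs[:30, 0, 0].mean(), 2), round(coeffs[30:, 0, 0].mean(), 2))
```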
Dimension reduction and multiscaling law through source extraction
NASA Astrophysics Data System (ADS)
Capobianco, Enrico
2003-04-01
Through the empirical analysis of financial return generating processes one may find features that are common to other research fields, such as internet network traffic data, physiological studies of human heart beat, speech and sleep time series, and geophysical signals, just to mention well-known cases of study. In particular, long range dependence, intermittency, and heteroscedasticity appear clearly, and consequently power laws and multi-scaling behavior are typical signatures of either the spectral or the time correlation diagnostics. We study these features and the dynamics underlying financial volatility, which can respectively be detected and inferred from high frequency realizations of stock index returns, and show that they vary according to the resolution levels used for both the analysis and the synthesis of the available information. Discovering whether the volatility dynamics are subject to changes in scaling regimes requires the consideration of a model embedding scale-dependent information packets, thus accounting for possible heterogeneous activity occurring in financial markets. Independent component analysis turns out to be an important tool for reducing the dimension of the problem and for calibrating greedy approximation techniques aimed at learning the structure of the underlying volatility.
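A minimal sketch of the source-extraction step using FastICA (assuming scikit-learn) on synthetic multivariate return series; the mixing setup and number of components are illustrative:

```python
# Sketch: recover a small number of independent volatility-driving
# components from a matrix of observed return series.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_obs = 5000
sources = np.column_stack([
    rng.laplace(size=n_obs),               # heavy-tailed component
    np.sign(np.sin(np.arange(n_obs) / 50)) * rng.exponential(size=n_obs),
])
mixing = rng.normal(size=(6, 2))           # 6 observed return series
returns = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(returns)     # estimated independent sources
print("recovered components:", recovered.shape)
```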
Direct Characterization of Ultrafast Energy-Time Entangled Photon Pairs.
MacLean, Jean-Philippe W; Donohue, John M; Resch, Kevin J
2018-02-02
Energy-time entangled photons are critical in many quantum optical phenomena and have emerged as important elements in quantum information protocols. Entanglement in this degree of freedom often manifests itself on ultrafast time scales, making it very difficult to detect, whether one employs direct or interferometric techniques, as photon-counting detectors have insufficient time resolution. Here, we implement ultrafast photon counters based on nonlinear interactions and strong femtosecond laser pulses to probe energy-time entanglement in this important regime. Using this technique and single-photon spectrometers, we characterize all the spectral and temporal correlations of two entangled photons with femtosecond resolution. This enables the witnessing of energy-time entanglement using uncertainty relations and the direct observation of nonlocal dispersion cancellation on ultrafast time scales. These techniques are essential to understand and control the energy-time degree of freedom of light for ultrafast quantum optics.
An Integrated Knowledge Framework to Characterize and Scaffold Size and Scale Cognition (FS2C)
NASA Astrophysics Data System (ADS)
Magana, Alejandra J.; Brophy, Sean P.; Bryan, Lynn A.
2012-09-01
Size and scale cognition is a critical ability associated with reasoning with concepts in different disciplines of science, technology, engineering, and mathematics. As such, researchers and educators have identified the need for young learners and their educators to become scale-literate. Informed by the developmental psychology literature and recent findings in nanoscale science and engineering education, we propose an integrated knowledge framework for characterizing and scaffolding size and scale cognition called the FS2C framework. Five ad hoc assessment tasks were designed, informed by the FS2C framework, with the goal of identifying participants' understandings of size and scale. Findings identified participants' difficulties in discerning the different sizes of microscale and nanoscale objects and a low level of sophistication in identifying scale worlds. Results also showed that the bigger the difference between the sizes of two objects, the more difficult it was for participants to identify how many times one object is bigger or smaller than the other. Similarly, participants had difficulty estimating the approximate sizes of sub-macroscopic objects as well as the sizes of very large objects. Accurately locating objects on a logarithmic scale was also challenging for participants.
Mapping unstable manifolds using drifters/floats in a Southern Ocean field campaign
NASA Astrophysics Data System (ADS)
Shuckburgh, Emily F.
2012-09-01
Ideas from dynamical systems theory have been used in an observational field campaign in the Southern Ocean to provide information on the mixing structure of the flow. Instantaneous snapshots of data from satellite altimetry provide information concerning surface currents at a scale of 100 km or so. We show that by using time series of satellite altimetry we are able to deduce reliable information about the structure of the surface flow at scales as small as 10 km or so. This information was used in near-real time to provide an estimate of the location of stable and unstable manifolds in the vicinity of the Antarctic Circumpolar Current. As part of a large U.K./U.S. observational field campaign (DIMES: Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean) a number of drifters and floats were then released (at the surface and at a depth of approximately 1 km) close to the estimated hyperbolic point at the intersection of the two manifolds, in several locations with apparently different dynamical characteristics. The subsequent trajectories of the drifters/floats have allowed the unstable manifolds to be tracked, and the relative separation of pairs of floats has allowed an estimation of Lyapunov exponents. The results of these deployments have given insight into the strengths and limitations of the satellite data, which do not resolve small scales in the velocity field, and have elucidated the transport and mixing structure of the Southern Ocean at the surface and at depth.
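A minimal sketch of the pair-separation estimate of a finite-time Lyapunov exponent, lambda ~ (1/t) ln(d(t)/d(0)); the separation series below is synthetic, not DIMES data:

```python
# Sketch: estimate the exponential growth rate of drifter-pair separation
# as the least-squares slope of ln(separation) versus time.
import numpy as np

def pair_lyapunov(d0_km: float, sep_km: np.ndarray, days: np.ndarray) -> float:
    """Least-squares slope of ln(d(t)/d(0)) vs time, in 1/day."""
    return np.polyfit(days, np.log(sep_km / d0_km), 1)[0]

rng = np.random.default_rng(0)
days = np.arange(1, 31, dtype=float)
# Synthetic pair separation: exponential growth plus multiplicative noise.
sep = 10.0 * np.exp(0.15 * days) * np.exp(rng.normal(0, 0.1, days.size))
print(f"lambda ~ {pair_lyapunov(10.0, sep, days):.2f} per day")
```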
NASA Astrophysics Data System (ADS)
Hong, Zixuan; Bian, Fuling
2008-10-01
Geographic space, time space, and cognition space are three fundamental and interrelated spaces in geographic information systems for transportation. However, the cognition space and its relationships to the time space and geographic space are often neglected. This paper studies the relationships of these three spaces in urban transportation systems from a new perspective and proposes a novel MDS-SOM transformation method which takes advantage of the techniques of multidimensional scaling (MDS) and self-organizing maps (SOM). The MDS-SOM transformation framework includes three kinds of mapping: the geographic-time transformation, the cognition-time transformation, and the time-cognition transformation. The transformations in our research provide a better understanding of the interactions of these three spaces, and beneficial knowledge is discovered to support transportation analysis and decision making.
NASA Astrophysics Data System (ADS)
Safdari, Hadiseh; Chechkin, Aleksei V.; Jafari, Gholamreza R.; Metzler, Ralf
2015-04-01
Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time.
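For reference, the standard SBM relations behind these statements (notation assumed for illustration):

```latex
% SBM: Langevin dynamics with a power-law, time-dependent diffusivity
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = \sqrt{2 D(t)}\,\xi(t),
\qquad D(t) = \alpha K_\alpha t^{\alpha - 1},
% giving the ensemble-averaged MSD and, for a process aged over a time t_a,
\langle x^2(t) \rangle = 2 K_\alpha t^{\alpha}, \qquad
\langle x^2(t; t_a) \rangle = 2 K_\alpha \left[(t + t_a)^{\alpha} - t_a^{\alpha}\right],
% with subdiffusion for 0 < alpha < 1 and superdiffusion for alpha > 1.
```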
Seabirds as indicators of marine ecosystems: Introduction: A modern role for seabirds as indicators
Piatt, John F.; Sydeman, William J.; Wiese, Francis
2007-01-01
A key requirement for implementing ecosystem-based management is to obtain timely information on significant fluctuations in the ecosystem (Botsford et al. 1997). However, obtaining all necessary information about physical and biological changes at appropriate temporal and spatial scales is a daunting task. Intuitively, one might assume that physical data are more important for the interpretation of ecosystem changes than biological data, but analyses of time series data suggest otherwise: physical data are more erratic and often confusing over the short term compared to biological data, which tend to fluctuate less on annual time scales (Hare & Mantua 2000). Even so, biological time-series may also be confusing when coexisting marine species respond differently to ecosystem variability. For example, while warming temperatures in the Gulf of Alaska following the 1976 to 1977 regime shift favored an increase in gadoids and flatfish, a variety of forage fish and pandalid shrimp species virtually disappeared (Anderson & Piatt 1999). Zooplankton communities in the Gulf of Alaska also demonstrated similar patterns of response (Francis et al. 1998). At the basin scale, favorable conditions for salmon in Alaska following the regime shift were matched inversely by poor conditions in the California Current (Francis et al. 1998). In marine birds, subtropical species increased, while subarctic ones decreased during a warming phase in the southern California Bight. Clearly, no single index can tell the whole story accurately. Multi-species, multi-region, and multi-trophic level approaches are needed to quantify fluctuations in marine ecosystem processes and in the distribution and abundance of its inhabitants, to determine critical parameter thresholds and to use this information in management and marine conservation.
Bitwise efficiency in chaotic models
NASA Astrophysics Data System (ADS)
Jeffress, Stephen; Düben, Peter; Palmer, Tim
2017-09-01
Motivated by the increasing energy consumption of supercomputing for weather and climate simulations, we introduce a framework for investigating the bit-level information efficiency of chaotic models. In comparison with previous explorations of inexactness in climate modelling, the proposed and tested information metric has three specific advantages: (i) it requires only a single high-precision time series; (ii) information does not grow indefinitely for decreasing time step; and (iii) information is more sensitive to the dynamics and uncertainties of the model rather than to the implementation details. We demonstrate the notion of bit-level information efficiency in two of Edward Lorenz's prototypical chaotic models: Lorenz 1963 (L63) and Lorenz 1996 (L96). Although L63 is typically integrated in 64-bit `double' floating point precision, we show that only 16 bits have significant information content, given an initial condition uncertainty of approximately 1% of the size of the attractor. This result is sensitive to the size of the uncertainty but not to the time step of the model. We then apply the metric to the L96 model and find that a 16-bit scaled integer model would suffice given the uncertainty of the unresolved sub-grid-scale dynamics. We then show that, by dedicating computational resources to spatial resolution rather than numeric precision in a field programmable gate array (FPGA), we see up to 28.6% improvement in forecast accuracy, an approximately fivefold reduction in the number of logical computing elements required and an approximately 10-fold reduction in energy consumed by the FPGA, for the L96 model.
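A minimal sketch of the reduced-precision idea on L63, comparing a float64 and a 16-bit float integration; the step size and duration are illustrative, and np.float16 merely stands in for the paper's 16-bit formats:

```python
# Sketch: integrate Lorenz 1963 with a forward-Euler step in float64 and
# float16 and watch the rounding-induced divergence grow; the abstract's
# point is that ~16 bits suffice relative to a ~1% initial-condition
# uncertainty on the attractor.
import numpy as np

def l63_step(state, dt, dtype):
    x, y, z = state
    d = np.array([10.0 * (y - x),
                  x * (28.0 - z) - y,
                  x * y - (8.0 / 3.0) * z], dtype=dtype)
    return (state + dt * d).astype(dtype)

dt, n_steps = 0.01, 1000
hi = np.array([1.0, 1.0, 20.0], dtype=np.float64)
lo = hi.astype(np.float16)
for k in range(n_steps):
    hi = l63_step(hi, dt, np.float64)
    lo = l63_step(lo, dt, np.float16)
    if k % 200 == 0:
        print(k, float(np.abs(hi - lo.astype(np.float64)).max()))
```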
Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture
NASA Astrophysics Data System (ADS)
Hassan, Ezeldin A.
Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions, and liquid plug propagation and rupture in an airway. The simulation of a gaseous non-reactive ethylene jet in an air crossflow represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived for stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, thus requiring prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly variable in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30 degree injection, showing improved results over the constant turbulent Schmidt number model in both the mean and the variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
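For reference, the standard definition behind the adaptive approach (the adaptive scheme replaces the constant value with a time-averaged field estimated from resolved fluctuations):

```latex
% Turbulent Schmidt number relating eddy viscosity to turbulent mass
% diffusivity:
\mathrm{Sc}_t \;=\; \frac{\nu_t}{D_t}
```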
Detectability of Granger causality for subsampled continuous-time neurophysiological processes.
Barnett, Lionel; Seth, Anil K
2017-01-01
Granger causality is well established within the neurosciences for inference of directed functional connectivity from neurophysiological data. These data usually consist of time series which subsample a continuous-time biophysiological process. While it is well known that subsampling can lead to imputation of spurious causal connections where none exist, less is known about the effects of subsampling on the ability to reliably detect causal connections which do exist. We present a theoretical analysis of the effects of subsampling on Granger-causal inference. Neurophysiological processes typically feature signal propagation delays on multiple time scales; accordingly, we base our analysis on a distributed-lag, continuous-time stochastic model, and consider Granger causality in continuous time at finite prediction horizons. Via exact analytical solutions, we identify relationships among sampling frequency, underlying causal time scales and detectability of causalities. We reveal complex interactions between the time scale(s) of neural signal propagation and sampling frequency. We demonstrate that detectability decays exponentially as the sample time interval increases beyond causal delay times, identify detectability "black spots" and "sweet spots", and show that downsampling may potentially improve detectability. We also demonstrate that the invariance of Granger causality under causal, invertible filtering fails at finite prediction horizons, with particular implications for inference of Granger causality from fMRI data. Our analysis emphasises that sampling rates for causal analysis of neurophysiological time series should be informed by domain-specific time scales, and that state-space modelling should be preferred to purely autoregressive modelling. On the basis of a very general model that captures the structure of neurophysiological processes, we are able to help identify confounds, and offer practical insights, for successful detection of causal connectivity from neurophysiological recordings.
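A minimal sketch of the subsampling effect, assuming statsmodels; a 10-step causal delay is tested at several sample intervals on synthetic data, where detection typically weakens once the interval passes the delay:

```python
# Sketch: y is driven by x with a 10-step delay; test x -> y Granger
# causality at several subsampling intervals of the same series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n, delay = 20_000, 10
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(delay, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - delay] + 0.5 * rng.normal()

for step in (1, 5, 20):                       # subsampling intervals
    data = np.column_stack([y[::step], x[::step]])   # tests col 2 -> col 1
    res = grangercausalitytests(data, maxlag=15, verbose=False)
    p = min(r[0]["ssr_ftest"][1] for r in res.values())
    print(f"interval {step}: min p-value across lags = {p:.3g}")
```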
Kellie A. Uyeda; Douglas A. Stow; Dar A. Roberts; Philip J. Riggan
2017-01-01
Multi-temporal satellite imagery can provide valuable information on the patterns of vegetation growth over large spatial extents and long time periods, but corresponding ground-referenced biomass information is often difficult to acquire, especially at an annual scale. In this study, we test the relationship between annual biomass estimated using shrub growth rings...
A Comprehensive Analysis of the Quality of Online Health-Related Information regarding Schizophrenia
ERIC Educational Resources Information Center
Guada, Joseph; Venable, Victoria
2011-01-01
Social workers are major mental health providers and, thus, can be key players in guiding consumers and their families to accurate information regarding schizophrenia. The present study, using the WebMedQual scale, is a comprehensive analysis across a one-year period at two different time points of the top for-profit and nonprofit sites that…
ERIC Educational Resources Information Center
Rozendaal, J.S.; Minnaert, A.; Boekaerts, M.
2005-01-01
This study investigates the influence of teacher-perceived administration of self-regulated learning on students' motivation and information-processing over time. This was done in the context of the Interactive Learning group System (ILS®): a large-scale innovation program in Dutch vocational schools. A total of 185 students were grouped post…
ERIC Educational Resources Information Center
Dibiase, David; Rademacher, Henry J.
2005-01-01
This article explores issues of scalability and sustainability in distance learning. The authors kept detailed records of time they spent teaching a course in geographic information science via the World Wide Web over a six-month period, during which class sizes averaged 49 students. The authors also surveyed students' satisfaction with the…
ERIC Educational Resources Information Center
Vigil, Vannesa T.; Eyer, Julia A.; Hardee, W Paul
2005-01-01
Responding relevantly to an information-soliciting utterance (ISU) is required of a school-age child many times daily. For the child with pragmatic language difficulties, this may be especially problematic, yet clinicians have had few data to design intervention for improving these skills. This small-scale study looks at the ability of a child…
NASA Astrophysics Data System (ADS)
Antonenko, I.; Osinski, G. R.; Battler, M.; Beauchamp, M.; Cupelli, L.; Chanou, A.; Francis, R.; Mader, M. M.; Marion, C.; McCullough, E.; Pickersgill, A. E.; Preston, L. J.; Shankar, B.; Unrau, T.; Veillette, D.
2013-07-01
Remote robotic data provide different information than that obtained from immersion in the field. This significantly affects the geological situational awareness experienced by members of a mission control science team. In order to optimize science return from planetary robotic missions, these limitations must be understood and their effects mitigated to fully leverage the field experience of scientists at mission control. Results from a 13-day analogue deployment at the Mistastin Lake impact structure in Labrador, Canada suggest that scale, relief, geological detail, and time are intertwined issues that impact the mission control science team's effectiveness in interpreting the geology of an area. These issues are evaluated and several mitigation options are suggested. Scale was found to be difficult to interpret without reference to known objects, even when numerical scale data were available. For this reason, embedding intuitive scale-indicating features into image data is recommended. Since relief is not conveyed in 2D images, both 3D data and observations from multiple angles are required. Furthermore, the 3D data must be observed in animation or as anaglyphs, since without such assistance much of the relief information in 3D data is not communicated. Geological detail may also be missed due to the time required to collect, analyze, and request data. We also suggest that these issues can be addressed, in part, by an improved understanding of the operational time costs and benefits of scientific data collection. Robotic activities operate on inherently slow time-scales. This fact needs to be embraced and accommodated. Instead of focusing too quickly on the details of a target of interest, thereby potentially minimizing science return, time should be allocated at first to broader data collection at that target, including preliminary surveys, multiple observations from various vantage points, and a progressively smaller scale of focus. This operational model more closely follows techniques employed by field geologists and is fundamental to the geologic interpretation of an area. Even so, an operational time cost/benefit analysis should be carefully considered in each situation, to determine when such comprehensive data collection would maximize the science return. Finally, it should be recognized that analogue deployments cannot faithfully model the time scales of robotic planetary missions. Analogue missions are limited by the difficulty and expense of fieldwork. Thus, analogue deployments should focus on smaller aspects of robotic missions and test components in a modular way (e.g., dropping communications constraints, limiting mission scope, focusing on a specific problem, spreading the mission over several field seasons, etc.).
NASA Astrophysics Data System (ADS)
Smith, L. A.
2007-12-01
We question the relevance of climate-model-based Bayesian (or other) probability statements for decision support and impact assessment on spatial scales less than continental and temporal averages less than seasonal. Scientific assessment of higher-resolution space- and time-scale information is urgently needed, given the commercial availability of "products" at high spatiotemporal resolution, their provision by nationally funded agencies for use both in industry decision making and governmental policy support, and their presentation to the public as matters of fact. Specifically, we seek to establish necessary conditions for probability forecasts (projections conditioned on a model structure and a forcing scenario) to be taken seriously as reflecting the probability of future real-world events. We illustrate how risk management can profitably employ imperfect models of complicated chaotic systems, following NASA's study of near-Earth PHOs (Potentially Hazardous Objects). Our climate models will never be perfect; nevertheless, the space and time scales on which they provide decision-support-relevant information are expected to improve with the models themselves. Our aim is to establish a set of baselines of internal consistency; these are merely necessary conditions (not sufficient conditions) that physics-based state-of-the-art models are expected to pass if their output is to be judged decision-support relevant. Probabilistic similarity is proposed as one goal which can be obtained even when our models are not empirically adequate. In short, probabilistic similarity requires that, given inputs similar to today's empirical observations and observational uncertainties, we expect future models to produce similar forecast distributions. Expert opinion on the space and time scales on which we might reasonably expect probabilistic similarity may prove of much greater utility than expert elicitation of uncertainty in parameter values in a model that is not empirically adequate; this may help to explain the reluctance of experts to provide information on "parameter uncertainty." Probability statements about the real world are always conditioned on some information set; they may well be conditioned on "False," making them of little value to a rational decision maker. In other instances, they may be conditioned on physical assumptions not held by any of the modellers whose model output is being cast as a probability distribution. Our models will improve a great deal in the next decades, and our insight into the likely climate fifty years hence will improve; maintaining the credibility of the science and the coherence of science-based decision support as our models improve requires a clear statement of our current limitations. What evidence do we have that today's state-of-the-art models provide decision-relevant probability forecasts? On what space and time scales do we currently have quantitative, decision-relevant information for 2050? For 2080?
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
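The nested structure of the scheme (an outer march over time windows and an inner coarse-to-fine sweep over meshes, each stage warm-started from the previous one) can be sketched abstractly. The toy "inversion" below is a gradient descent on a least-squares misfit, a hypothetical stand-in for an eikonal traveltime solver such as tomoQuake; the grid sizes and synthetic data are made up for illustration.

```python
import numpy as np

def regrid(model, n):
    """Resample a 1-D model onto an n-point grid (handles coarsening and refining)."""
    return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, model.size), model)

def invert_on_grid(data_on_grid, model, n_iter=50, lr=0.2):
    """Toy 'inversion': gradient descent on 0.5*||model - data||^2."""
    for _ in range(n_iter):
        model = model - lr * (model - data_on_grid)
    return model

rng = np.random.default_rng(0)
truth = np.sin(2 * np.pi * np.linspace(0, 1, 64))                    # "true" subsurface model
windows = [truth + 0.1 * rng.standard_normal(64) for _ in range(3)]  # data per time window
grids = [8, 16, 64]                                                  # coarse -> fine meshes

model = np.zeros(grids[0])
for data in windows:            # time-marching: warm-start from the previous window's model
    for n in grids:             # multi-grid: capture the macro-scale first, then refine
        model = invert_on_grid(regrid(data, n), regrid(model, n))
print("mean abs error on finest grid:", np.abs(model - truth).mean())
```

The coarse-grid pass has few unknowns and so converges easily toward the global minimum, exactly the motivation the abstract gives for the multi-grid step.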
A Neural Network-based Early Earthquake Warning Model in the California Region
NASA Astrophysics Data System (ADS)
Xiao, H.; MacAyeal, D. R.
2016-12-01
Early earthquake warning systems could reduce loss of life and other economic impacts resulting from natural disasters or man-made calamities. Current systems could be further enhanced by neural network methods. A three-layer neural network model combined with an onsite method was deployed in this paper to improve the recognition time and detection time for large-scale earthquakes. The three-layer neural network early earthquake warning model adopted a vector feature design for sample events occurring within a 150 km radius of the epicenters. The dataset used in this paper contained both destructive events and small-scale events; all data were extracted from the IRIS database to properly train the model. In the training process, the backpropagation algorithm was used to adjust the weight matrices and bias matrices during each iteration. The information in all three channels of the seismometers served as the source in this model. Designed tests indicated that this model could correctly identify the scale of approximately 90 percent of events, and the early detection could provide informative evidence for public authorities to make further decisions. This suggests that a neural network model has the potential to strengthen current early warning systems, since the onsite method may greatly reduce the response time and save more lives in such disasters.
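For concreteness, the core of a three-layer network trained by backpropagation fits in a few lines of numpy. The features, labels, layer sizes and learning rate below are illustrative assumptions; the paper's actual vector feature design from three-channel seismograms is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in features, e.g. short-window amplitudes from 3 seismometer channels.
X = rng.standard_normal((200, 3))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary label: "large event"

W1, b1 = rng.standard_normal((3, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):                     # backpropagation training loop
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    p = sigmoid(h @ W2 + b2)                 # output: P(large event)
    dp = (p - y) / len(X)                    # cross-entropy gradient at the output pre-activation
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)             # backpropagate through the hidden sigmoid
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad                  # plain gradient step on weights and biases

print("train accuracy:", ((p > 0.5) == y).mean())
```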
NASA Astrophysics Data System (ADS)
Rust, A. C.; Cashman, K. V.
2005-12-01
Like many rhyolite tephras, the pyroclastic deposits of the 1300 B.P. eruption of Newberry Volcano, USA, contain minor amounts of obsidian. The H2O and CO2 contents and textures of these clasts vary considerably and provide information on eruption history and dynamics. Early in the eruption, obsidian probably derived from veins of vanguard magma or tuffisite that, together with wall rock fragments, were eroded and incorporated into the eruption column as the vent widened. Later, following a temporary cessation of activity, the proportion of obsidian to lithic fragments increased and new types of obsidian dominated, types that represent remnants of a shallow conduit plug and welded fallback material. Analysis of bubble geometries provides flow parameters and time scales operative for deformation within the shallow conduit. Furthermore, spatial variations in CO2 help constrain welding and wall rock assimilation time scales. Comparison of obsidian characteristics from the Newberry eruption with those of the well-studied Mono Craters eruption shows intriguing differences in obsidian formation that may relate to the nature of the conduit feeding the two events. From this comparison we conclude that obsidian is less likely to provide information on magmatic fragmentation than on time scales and mechanisms of pre-fragmentation magma ascent.
Wadud, Zahid; Hussain, Sajjad; Javaid, Nadeem; Bouk, Safdar Hussain; Alrajeh, Nabil; Alabed, Mohamad Souheil; Guizani, Nadra
2017-09-30
Industrial Underwater Acoustic Sensor Networks (IUASNs) come with intrinsic challenges like long propagation delay, small bandwidth, large energy consumption, three-dimensional deployment, and high deployment and battery replacement cost. Any routing strategy proposed for IUASNs must take these constraints into account. The vector-based forwarding schemes in the literature forward data packets to the sink using holding time and the location information of the sender, forwarder, and sink nodes. Holding time suppresses duplicate data broadcasts; however, it fails to maintain energy and delay fairness in the network. To achieve this, we propose an Energy Scaled and Expanded Vector-Based Forwarding (ESEVBF) scheme. ESEVBF uses the residual energy of the node to scale, and the vector pipeline distance ratio to expand, the holding time. The resulting scaled and expanded holding times of all forwarding nodes differ significantly, which avoids multiple forwardings and thus reduces energy consumption and improves energy balancing in the network. If a node has the minimum holding time among its neighbors, it shrinks the holding time and quickly forwards the data packets upstream. The performance of ESEVBF is analyzed in network scenarios with and without node mobility to ensure its effectiveness. Simulation results show that ESEVBF has low energy consumption, reduces the number of forwarded data copies, and achieves lower end-to-end delay.
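The scaling idea can be illustrated with a toy holding-time function: higher residual energy and greater progress toward the sink should shorten the hold, so the best candidate rebroadcasts first and suppresses the rest. The formula below is an illustrative paraphrase of "scale by residual energy, expand by distance ratio", not the paper's exact expression.

```python
def holding_time(residual_energy, initial_energy, d_forwarder_sink, d_sender_sink,
                 ht_max=2.0):
    """Illustrative ESEVBF-style holding time in seconds (not the paper's equation).

    Nodes with more residual energy and more progress toward the sink (smaller
    forwarder-to-sink distance) hold the packet for less time, so they fire first
    and their broadcast suppresses the slower candidates.
    """
    energy_scale = 1.0 - residual_energy / initial_energy          # depleted -> longer hold
    progress = max(0.0, (d_sender_sink - d_forwarder_sink) / d_sender_sink)
    return ht_max * (energy_scale + (1.0 - progress)) / 2.0

# Two candidate forwarders for the same packet: the fresher, closer node fires first.
print(holding_time(0.9, 1.0, 300.0, 500.0))   # high energy, good progress -> short hold
print(holding_time(0.3, 1.0, 450.0, 500.0))   # depleted, little progress  -> long hold
```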
Revealing the Link between Structural Relaxation and Dynamic Heterogeneity in Glass-Forming Liquids
NASA Astrophysics Data System (ADS)
Wang, Lijin; Xu, Ning; Wang, W. H.; Guan, Pengfei
2018-03-01
Despite the use of glasses for thousands of years, the nature of the glass transition is still mysterious. On approaching the glass transition, the growth of dynamic heterogeneity has long been thought to play a key role in explaining the abrupt slowdown of structural relaxation. However, it still remains elusive whether there is an underlying link between structural relaxation and dynamic heterogeneity. Here, we unravel the link by introducing a characteristic time scale hiding behind an identical dynamic heterogeneity for various model glass-forming liquids. We find that the time scale corresponds to the kinetic fragility of liquids. Moreover, it leads to scaling collapse of both the structural relaxation time and dynamic heterogeneity for all liquids studied, together with a characteristic temperature associated with the same dynamic heterogeneity. Our findings imply that studying the glass transition from the viewpoint of dynamic heterogeneity is more informative than expected.
A real-time interferometer technique for compressible flow research
NASA Technical Reports Server (NTRS)
Bachalo, W. D.; Houser, M. J.
1984-01-01
Strengths and shortcomings in the application of interferometric techniques to transonic flow fields are examined and an improved method is elaborated. Such applications have demonstrated the value of interferometry in obtaining data for compressible flow research. With holographic techniques, interferometry may be applied in large-scale facilities without the use of expensive optics or elaborate vibration isolation equipment. Results obtained using holographic interferometry and other methods demonstrate that reliable qualitative and quantitative data can be acquired. Nevertheless, the conventional method can be difficult to set up and apply, and it cannot produce real-time data. A new interferometry technique is investigated that promises to be easier to apply and can provide real-time information. This single-beam technique has the necessary insensitivity to vibration for large-scale wind tunnel operations. Capabilities of the method and preliminary tests on some laboratory-scale flow fields are described.
Stratigraphy of the Anthropocene.
Zalasiewicz, Jan; Williams, Mark; Fortey, Richard; Smith, Alan; Barry, Tiffany L; Coe, Angela L; Bown, Paul R; Rawson, Peter F; Gale, Andrew; Gibbard, Philip; Gregory, F John; Hounslow, Mark W; Kerr, Andrew C; Pearson, Paul; Knox, Robert; Powell, John; Waters, Colin; Marshall, John; Oates, Michael; Stone, Philip
2011-03-13
The Anthropocene, an informal term used to signal the impact of collective human activity on biological, physical and chemical processes on the Earth system, is assessed using stratigraphic criteria. It is complex in time, space and process, and may be considered in terms of the scale, relative timing, duration and novelty of its various phenomena. The lithostratigraphic signal includes both direct components, such as urban constructions and man-made deposits, and indirect ones, such as sediment flux changes. Already widespread, these are producing a significant 'event layer', locally with considerable long-term preservation potential. Chemostratigraphic signals include new organic compounds, but are likely to be dominated by the effects of CO2 release, particularly via acidification in the marine realm, and man-made radionuclides. The sequence stratigraphic signal is negligible to date, but may become geologically significant over centennial/millennial time scales. The rapidly growing biostratigraphic signal includes geologically novel aspects (the scale of globally transferred species) and will have geologically permanent effects.
NASA Astrophysics Data System (ADS)
Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.
2017-12-01
A better understanding of dynamics in complex systems, such as the Earth's climate, is one of the key challenges for contemporary science and society. A large amount of experimental data requires new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for a better understanding of underlying mechanisms and a key for modeling and prediction of such systems. This study introduces and applies information-theoretic diagnostics to phase and amplitude time series of different wavelet components of the observed data that characterize El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical, as well as statistical, climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to improved prediction. Moreover, the statistical framework we apply here is suitable for directly inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.
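The first step of such an analysis, extracting the phase and amplitude of one wavelet component, can be sketched in a few lines of numpy. The complex Morlet kernel, the 24-month toy oscillation, and the boundary handling below are illustrative assumptions, not the study's actual data or pipeline; the scale-period relation follows the common Torrence-Compo convention.

```python
import numpy as np

def morlet_phase(x, period, w0=6.0, dt=1.0):
    """Instantaneous phase/amplitude of x at one scale via a complex Morlet kernel.

    Scale-period relation (Torrence & Compo 1998): s = period*(w0+sqrt(2+w0^2))/(4*pi).
    """
    s = period * (w0 + np.sqrt(2.0 + w0**2)) / (4.0 * np.pi)
    t = np.arange(-4 * s, 4 * s + dt, dt)                 # kernel support ~ +/- 4 scales
    kernel = np.exp(1j * w0 * t / s) * np.exp(-(t / s)**2 / 2.0)
    wt = np.convolve(x - x.mean(), kernel, mode="same")   # complex wavelet coefficients
    return np.angle(wt), np.abs(wt)                       # phase and amplitude series

# Toy monthly series with an embedded quasi-biennial (24-month) oscillation plus noise.
rng = np.random.default_rng(0)
n = 600
x = np.sin(2 * np.pi * np.arange(n) / 24.0) + 0.5 * rng.standard_normal(n)
phase, amp = morlet_phase(x, period=24.0)
```

Phase and amplitude series extracted at different scales in this way are what the information-theoretic coupling diagnostics are then applied to.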
Predicting survival time in noncurative patients with advanced cancer: a prospective study in China.
Cui, Jing; Zhou, Lingjun; Wee, B; Shen, Fengping; Ma, Xiuqiang; Zhao, Jijun
2014-05-01
Accurate prediction of prognosis for cancer patients is important for good clinical decision making in therapeutic and care strategies. The application of prognostic tools and indicators could improve prediction accuracy. This study aimed to develop a new prognostic scale to predict survival time of advanced cancer patients in China. We prospectively collected items that we anticipated might influence survival time of advanced cancer patients. Participants were recruited from 12 hospitals in Shanghai, China. We collected data including demographic information, clinical symptoms and signs, and biochemical test results. Log-rank tests, Cox regression, and linear regression were performed to develop a prognostic scale. Three hundred twenty patients with advanced cancer were recruited. Fourteen prognostic factors were included in the prognostic scale: Karnofsky Performance Scale (KPS) score, pain, ascites, hydrothorax, edema, delirium, cachexia, white blood cell (WBC) count, hemoglobin, sodium, total bilirubin, direct bilirubin, aspartate aminotransferase (AST), and alkaline phosphatase (ALP) values. The score was calculated by summing the partial scores, ranging from 0 to 30. When using the cutoff points of 7-day, 30-day, 90-day, and 180-day survival time, the scores were calculated as 12, 10, 8, and 6, respectively. We propose a new prognostic scale including KPS, pain, ascites, hydrothorax, edema, delirium, cachexia, WBC count, hemoglobin, sodium, total bilirubin, direct bilirubin, AST, and ALP values, which may help guide physicians in predicting the likely survival time of cancer patients more accurately. More studies are needed to validate this scale in the future.
Cell Phone-Based Expert Systems for Smoking Cessation
2012-03-01
Participants completed the HRI at each timepoint. The Fagerström Nicotine Dependence Scale (FTND), the most widely used tool for assessing severity of nicotine tolerance and dependence, was also administered; it is a six-item, self-report scale.
A Reliable and Real-Time Tracking Method with Color Distribution
Zhao, Zishu; Han, Yuqi; Xu, Tingfa; Li, Xiangmin; Song, Haiping; Luo, Jiqiang
2017-01-01
Occlusion is a challenging problem in visual tracking. In recent years, many trackers have been explored to solve this problem, but most of them cannot track the target in real time because of their heavy computational cost. The spatio-temporal context (STC) tracker was proposed to accelerate the task by calculating context information in the Fourier domain, but its performance in handling occlusion remains limited. In this paper, we take advantage of the high efficiency of the STC tracker and employ salient prior model information based on color distribution to improve robustness. Furthermore, we exploit a scale pyramid for accurate scale estimation. In particular, a new high-confidence update strategy and a re-searching mechanism are used to avoid model corruption and handle occlusion. Extensive experimental results demonstrate that our algorithm outperforms several state-of-the-art algorithms on the OTB2015 dataset. PMID:28994748
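The efficiency claim rests on the convolution theorem: dense correlation scores over a whole frame cost O(N log N) with FFTs instead of O(N·M) with sliding windows. The sketch below is generic Fourier-domain template matching, not the STC formulation itself (real trackers add regularized correlation filters and online updates); the image and patch are synthetic stand-ins.

```python
import numpy as np

def fft_xcorr(image, template):
    """Dense cross-correlation scores for every shift, via the convolution theorem."""
    t = np.zeros_like(image)
    t[: template.shape[0], : template.shape[1]] = template - template.mean()
    # Circular cross-correlation: IFFT( F(image) * conj(F(template)) ).
    return np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(t))).real

rng = np.random.default_rng(0)
img = rng.random((240, 320))
target = img[100:120, 150:170].copy()          # pretend this patch is the tracked object
score = fft_xcorr(img, target)
print(np.unravel_index(score.argmax(), score.shape))   # peak at the patch origin (100, 150)
```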
NASA Technical Reports Server (NTRS)
Waight, Kenneth T., III; Zack, John W.; Karyampudi, V. Mohan
1989-01-01
Initial simulations of the June 28, 1986 Cooperative Huntsville Meteorological Experiment case illustrate the need for mesoscale moisture information in a summertime situation in which deep convection is organized by weak large-scale forcing. A methodology is presented for enhancing the initial moisture field from a combination of IR satellite imagery, surface-based cloud observations, and manually digitized radar data. The Mesoscale Atmospheric Simulation Model is utilized to simulate the events of June 28-29. This procedure ensures that areas known to have precipitation at the time of initialization will be nearly saturated on the grid scale, which should decrease the time needed by the model to produce the observed convection associated with Bonnie (a relatively weak hurricane that had moved onshore two days earlier). This method will also result in an initial distribution of model cloudiness (transmissivity) that is very similar to that of the IR satellite image.
Developing Local Scale, High Resolution, Data to Interface with Numerical Storm Models
NASA Astrophysics Data System (ADS)
Witkop, R.; Becker, A.; Stempel, P.
2017-12-01
High-resolution physical storm models that can rapidly predict storm surge, inundation, rainfall, wind velocity and wave height at the intra-facility scale for any storm affecting Rhode Island have been developed by researchers at the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) (Ginis et al., 2017). At the same time, URI's Marine Affairs Department has developed methods that embed individual geographic points into GSO's models and enable the models to accurately incorporate local-scale, high-resolution data (Stempel et al., 2017). This combination allows URI's storm models to predict any storm's impacts on individual Rhode Island facilities in near real time. The research presented here determines how a coastal Rhode Island town's critical facility managers (FMs) perceive their assets as being vulnerable to quantifiable hurricane-related forces at the individual facility scale and explores methods to elicit this information from FMs in a format usable for incorporation into URI's storm models.
Active Learning of Classification Models with Likert-Scale Feedback.
Xue, Yanbing; Hauskrecht, Milos
2017-01-01
Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
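A minimal sketch of the querying loop with Likert-graded labels is given below. It substitutes plain margin-based uncertainty sampling and a batch SVM for the paper's expected-model-change criterion and incremental solver, and the Likert-to-weight mapping and simulated annotator are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

# Assumed mapping from a 5-point Likert answer to a confidence weight;
# a "3 = unsure" answer would get weight 0 and be effectively ignored by the fit.
likert_weight = {1: 1.0, 2: 0.5, 3: 0.0, 4: 0.5, 5: 1.0}

labeled = np.where(y_true == 0)[0][:3].tolist() + np.where(y_true == 1)[0][:3].tolist()
labels = [int(y_true[i]) for i in labeled]
weights = [1.0] * len(labeled)

clf = SVC(kernel="linear")
for _ in range(25):                                    # active-learning loop
    clf.fit(X[labeled], labels, sample_weight=weights)
    pool = np.setdiff1d(np.arange(len(X)), labeled)
    margin = np.abs(clf.decision_function(X[pool]))    # small margin = most uncertain
    pick = int(pool[margin.argmin()])                  # query that example
    # Simulated annotator: usually "strongly agree/disagree", sometimes less sure.
    sure = rng.random() < 0.7
    answer = (5 if sure else 4) if y_true[pick] == 1 else (1 if sure else 2)
    labeled.append(pick)
    labels.append(1 if answer >= 4 else 0)
    weights.append(likert_weight[answer])

print("accuracy on all points:", clf.score(X, y_true))
```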
USDA-ARS's Scientific Manuscript database
Real-time rainfall accumulation estimates at the global scale are useful for many applications. However, the real-time versions of satellite-based rainfall products are known to contain errors relative to real rainfall observed in situ. Recent studies have demonstrated how information about rainfall ...
Learning From Small-Scale Experimental Evaluations of After School Programs. Snapshot Number 8
ERIC Educational Resources Information Center
Harvard Family Research Project, Harvard University, 2006
2006-01-01
The Harvard Family Research Project (HFRP) Out-of-School Time Program Evaluation Database contains profiles of out-of-school time (OST) program evaluations. Its purpose is to provide accessible information about previous and current evaluations to support the development of high quality evaluations and programs in the OST field. Types of Programs…
NASA Astrophysics Data System (ADS)
Mesinger, F.
The traditional views hold that high-resolution limited area models (LAMs) downscale large-scale lateral boundary information, and that the predictability of small scales is short. Inspection of various rms fits/errors has contributed to these views. It would follow that the skill of LAMs should visibly deteriorate compared to that of their driver models at more extended forecast times. The limited area Eta Model at NCEP has an additional handicap of being driven by LBCs from the previous Avn global model run, which at 0000 and 1200 UTC is estimated to amount to about an 8 h loss in accuracy. This should make its skill relative to that of the Avn deteriorate even faster. These views are challenged by various Eta results, including rms fits to raobs out to 84 h. It is argued that it is the largest scales that contribute the most to the skill of the Eta relative to that of the Avn.
Voltage Imaging of Waking Mouse Cortex Reveals Emergence of Critical Neuronal Dynamics
Scott, Gregory; Fagerholm, Erik D.; Mutoh, Hiroki; Leech, Robert; Sharp, David J.; Shew, Woodrow L.
2014-01-01
Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used, in experiments involving mice, genetically encoded voltage indicator imaging, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerge. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight on how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain. PMID:25505314
NASA Technical Reports Server (NTRS)
Vane, Deborah
1993-01-01
A discussion of the objectives of the Global Energy and Water Cycle Experiment (GEWEX) and the Continental-scale International Project (GCIP) is presented in vugraph form. The objectives of GEWEX are as follows: determine the hydrological cycle by global measurements; model the global hydrological cycle; improve observations and data assimilation; and predict response to environmental change. The objectives of GCIP are as follows: determine the time/space variability of the hydrological cycle over a continental-scale region; develop macro-scale hydrologic models that are coupled to atmospheric models; develop information retrieval schemes; and support regional climate change impact assessment.
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (~10-150 km, <1° latitudinal width), mesoscale (~150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
NASA Astrophysics Data System (ADS)
Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.
2013-12-01
Sampling limitations and current modeling capacity justify the common use of mean temperature values in summaries of historical climate and future projections. However, a monthly mean temperature representing a 1-km2 area on the landscape is often unable to capture the climate complexity driving organismal and ecological processes. Estimates of variability in addition to mean values are more biologically meaningful and have been shown to improve projections of range shifts for certain species. Historical analyses of variance and extreme events at coarse spatial scales, as well as coarse-scale projections, show increasing temporal variability in temperature with warmer means. Few studies have considered how spatial variance changes with warming, and analysis for both temporal and spatial variability across scales is lacking. It is unclear how the spatial variability of fine-scale conditions relevant to plant and animal individuals may change given warmer coarse-scale mean values. A change in spatial variability will affect the availability of suitable habitat on the landscape and thus will influence future species ranges. By characterizing variability across both temporal and spatial scales, we can account for potential bias in species range projections that use coarse climate data and enable improvements to current models. In this study, we use temperature data at multiple spatial and temporal scales to characterize spatial and temporal variability under a warmer climate, i.e., increased mean temperatures. Observational data from the Sierra Nevada (California, USA), experimental climate manipulation data from the eastern and western slopes of the Rocky Mountains (Colorado, USA), projected CMIP5 data for California (USA) and observed PRISM data (USA) allow us to compare characteristics of a mean-variance relationship across spatial scales ranging from sub-meter2 to 10,000 km2 and across temporal scales ranging from hours to decades. Preliminary spatial analysis at fine spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased; the direction and magnitude require further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between (1) experimental climate manipulations and observed conditions and (2) temporal versus spatial variance, i.e., variability in a time series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.
Fuchs, Julian E; Waldner, Birgit J; Huber, Roland G; von Grafenstein, Susanne; Kramer, Christian; Liedl, Klaus R
2015-03-10
Conformational dynamics are central for understanding biomolecular structure and function, since biological macromolecules are inherently flexible at room temperature and in solution. Computational methods are nowadays capable of providing valuable information on the conformational ensembles of biomolecules. However, analysis tools and intuitive metrics that capture dynamic information from in silico generated structural ensembles are limited. In standard work-flows, flexibility in a conformational ensemble is represented through residue-wise root-mean-square fluctuations or B-factors following a global alignment. Consequently, these approaches relying on global alignments discard valuable information on local dynamics. Results inherently depend on global flexibility, residue size, and connectivity. In this study we present a novel approach for capturing positional fluctuations based on multiple local alignments instead of one single global alignment. The method captures local dynamics within a structural ensemble independent of residue type by splitting individual local and global degrees of freedom of protein backbone and side-chains. Dependence on residue type and size in the side-chains is removed via normalization with the B-factors of the isolated residue. As a test case, we demonstrate its application to a molecular dynamics simulation of bovine pancreatic trypsin inhibitor (BPTI) on the millisecond time scale. This allows for illustrating different time scales of backbone and side-chain flexibility. Additionally, we demonstrate the effects of ligand binding on side-chain flexibility of three serine proteases. We expect our new methodology for quantifying local flexibility to be helpful in unraveling local changes in biomolecular dynamics.
Scale-free models for the structure of business firm networks.
Kitsak, Maksim; Riccaboni, Massimo; Havlin, Shlomo; Pammolli, Fabio; Stanley, H Eugene
2010-03-01
We study firm collaborations in the life sciences and the information and communication technology sectors. We propose an approach to characterize industrial leadership using k-shell decomposition, with top-ranking firms in terms of market value in higher k-shell layers. We find that the life sciences industry network consists of three distinct components: a "nucleus," which is a small well-connected subgraph, "tendrils," which are small subgraphs consisting of small-degree nodes connected exclusively to the nucleus, and a "bulk body," which consists of the majority of nodes. Industrial leaders, i.e., the largest companies in terms of market value, are in the highest k-shells of both networks. The nucleus of the life sciences sector is very stable: once a firm enters the nucleus, it is likely to stay there for a long time. At the same time we do not observe the above three components in the information and communication technology sector. We also conduct a systematic study of these three components in random scale-free networks. Our results suggest that the sizes of the nucleus and the tendrils in scale-free networks decrease as the exponent λ of the power-law degree distribution increases, and they disappear for λ ≥ 3. We compare the k-shell structure of random scale-free model networks with two real-world business firm networks in the life sciences and in the information and communication technology sectors. We argue that the observed behavior of the k-shell structure in the two industries is consistent with the coexistence of both preferential and random agreements in the evolution of industrial networks.
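k-shell decomposition itself is a standard graph operation; a minimal sketch with networkx follows. The configuration-model graph is a synthetic stand-in for the firm networks, and reading the top shell as the "nucleus" is a loose proxy for the paper's more careful component definitions.

```python
from collections import Counter
import networkx as nx

# Toy scale-free network: configuration model with a power-law degree sequence.
seq = [min(int(round(d)), 50) for d in nx.utils.powerlaw_sequence(500, 2.5)]
if sum(seq) % 2:                       # the configuration model needs an even degree sum
    seq[0] += 1
G = nx.Graph(nx.configuration_model(seq, seed=42))   # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))            # core_number requires no self-loops

core = nx.core_number(G)               # k-shell index of every node
k_max = max(core.values())
nucleus = [v for v, k in core.items() if k == k_max]
print(f"max shell k={k_max}, top-shell ('nucleus'-like) size={len(nucleus)}")
print("shell sizes:", sorted(Counter(core.values()).items()))
```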
Waszczuk, M A; Zavos, H M S; Gregory, A M; Eley, T C
2016-01-01
Depression and anxiety persist within and across diagnostic boundaries. The manner in which common v. disorder-specific genetic and environmental influences operate across development to maintain internalizing disorders and their co-morbidity is unclear. This paper investigates the stability and change of etiological influences on depression, panic, generalized, separation and social anxiety symptoms, and their co-occurrence, across adolescence and young adulthood. A total of 2619 twins/siblings prospectively reported symptoms of depression and anxiety at mean ages 15, 17 and 20 years. Each symptom scale showed a similar pattern of moderate continuity across development, largely underpinned by genetic stability. New genetic influences contributing to change in the developmental course of the symptoms emerged at each time point. All symptom scales correlated moderately with one another over time. Genetic influences, both stable and time-specific, overlapped considerably between the scales. Non-shared environmental influences were largely time- and symptom-specific, but some contributed moderately to the stability of depression and anxiety symptom scales. These stable, longitudinal environmental influences were highly correlated between the symptoms. The results highlight both stable and dynamic etiology of depression and anxiety symptom scales. They provide preliminary evidence that stable as well as newly emerging genes contribute to the co-morbidity between depression and anxiety across adolescence and young adulthood. Conversely, environmental influences are largely time-specific and contribute to change in symptoms over time. The results inform molecular genetics research and transdiagnostic treatment and prevention approaches.
NASA Astrophysics Data System (ADS)
Huang, Shupei; An, Haizhong; Gao, Xiangyun; Huang, Xuan
2015-09-01
The aim of this research is to investigate the multiscale dynamic linkages between the crude oil price and the stock market in China at the sector level. First, the Haar à trous wavelet transform is implemented to extract multiscale information from the original time series. Furthermore, we incorporate a vector autoregression model to estimate the dynamic relationship pairing the Brent oil price and each sector stock index at each scale. There is strong evidence of bidirectional Granger causality relationships between most of the sector stock indices and the crude oil price in the short, medium and long terms, except for those in the health, utility and consumption sectors. In fact, the impacts of crude oil price shocks vary for different sectors over different time horizons. More precisely, the energy, information, material and telecommunication sector stock indices respond to crude oil price shocks negatively in the short run and positively in the medium and long runs, whereas the finance sector responds positively over all three time horizons. Moreover, the Brent oil price shocks have a stronger influence on the stock indices of sectors other than the health, optional and utility sectors in the medium and long terms than in the short term. The results suggest that investment and policymaking decisions made over different time horizons should be based on the information gathered from the corresponding time scale.
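The Haar à trous transform referenced here is a shift-invariant additive decomposition, so each scale's detail series can be fed directly to a VAR. A minimal numpy sketch, with a toy random-walk "price" and simple boundary handling as assumptions:

```python
import numpy as np

def haar_a_trous(x, levels=4):
    """Shift-invariant Haar a trous decomposition: x = sum(details) + smooth."""
    smooth, details = np.asarray(x, dtype=float), []
    for j in range(levels):
        shifted = np.roll(smooth, 2 ** j)             # c_j[t - 2**j]
        shifted[: 2 ** j] = smooth[: 2 ** j]          # crude boundary handling
        next_smooth = 0.5 * (smooth + shifted)        # c_{j+1} = (c_j + shifted)/2
        details.append(smooth - next_smooth)          # scale-j detail coefficients
        smooth = next_smooth
    return details, smooth

rng = np.random.default_rng(0)
price = np.cumsum(rng.standard_normal(1024))          # toy log-price random walk
details, trend = haar_a_trous(price, levels=5)
assert np.allclose(sum(details) + trend, price)       # exact additive reconstruction
# details[j] isolates fluctuations around horizon ~2**(j+1) observations; these
# per-scale series are what a VAR is fit to, pairing oil and sector index components.
```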
Large-scale Granger causality analysis on resting-state functional MRI
NASA Astrophysics Data System (ADS)
D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel
2016-03-01
We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, as accomplished by the Louvain method. We demonstrate the effectiveness of our approach by recovering the motor and visual cortex from resting-state human brain fMRI data and comparing it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice coefficient (DC). The best DC obtained was 0.59, implying strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
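The core idea, combining dimensionality reduction with a Granger-style prediction-error comparison, can be sketched compactly. The sketch below is a simplified illustration (ordinary least-squares VAR, PCA, a log variance-ratio score), not the authors' exact lsGC formulation; the sizes and the synthetic coupling are made up.

```python
import numpy as np
from sklearn.decomposition import PCA

def granger_influence(X, lag=2, n_comp=5):
    """Toy lsGC-style score: influence of each series on the PCA-reduced system.

    X: (T, N) array of N time series. For each candidate source, the remaining
    series are PCA-compressed and we compare one-step prediction error of the
    reduced state with and without the source's past.
    """
    T, N = X.shape
    scores = np.zeros(N)
    for src in range(N):
        z = PCA(n_components=n_comp).fit_transform(np.delete(X, src, axis=1))
        Y = z[lag:]
        past = np.hstack([z[lag - k - 1 : T - k - 1] for k in range(lag)])
        past_src = np.hstack([X[lag - k - 1 : T - k - 1, [src]] for k in range(lag)])
        e_r = Y - past @ np.linalg.lstsq(past, Y, rcond=None)[0]
        full = np.hstack([past, past_src])
        e_f = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
        scores[src] = np.log(e_r.var() / e_f.var())   # Granger-style log variance ratio
    return scores

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 20))
X[1:, 1] += 0.8 * X[:-1, 0]                 # series 0 drives series 1 at lag 1
print(granger_influence(X).round(3)[:5])    # series 0 should stand out
```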
NASA Astrophysics Data System (ADS)
Palus, Milan
2017-04-01
Deeper understanding of the complex dynamics of the Earth's atmosphere and climate is indispensable for sustainable development, for mitigation and adaptation strategies for global change, and for prediction of and resilience against extreme events. Traditional (linear) approaches cannot explain or even detect nonlinear interactions of dynamical processes evolving on multiple spatial and temporal scales. The combination of nonlinear dynamics and information theory explains synchronization as a process of adjustment of information rates [1] and causal relations (à la Granger) as information transfer [2]. Information born in dynamical complexity, or information transferred among systems on the way to synchronization, might appear to be an abstract quantity; however, information transfer is tied to a transfer of mass and energy, as demonstrated in a recent study using directed (causal) climate networks [2]. Recently, an information transfer across scales of atmospheric dynamics has been observed [3]. In particular, a climate oscillation with a period around 7-8 years has been identified as a factor influencing the variability of surface air temperature (SAT) on shorter time scales. Its influence on the amplitude of the SAT annual cycle was estimated in the range 0.7-1.4 °C, and its effect on the overall variability of the SAT anomalies (SATA) leads to changes of 1.5-1.7 °C in the annual SATA means. The strongest effect of the 7-8 year cycle was observed in the winter SATA means, where it reaches 4-5 °C in central European station and reanalysis data [4]. In the dynamics of the El Niño-Southern Oscillation, three principal time scales have been identified: the annual cycle (AC), the quasi-biennial (QB) mode(s) and the low-frequency (LF) variability. An intricate causal network of information flows among these modes helps to understand the occurrence of extreme El Niño events, characterized by synchronization of the QB modes and the AC, and modulation of the QB amplitude by the LF mode. The latter also influences the phase of the AC and QB modes. These examples provide inspiration for a discussion of how novel data analysis methods, based on topics from nonlinear dynamical systems, their synchronization, (Granger) causality and information transfer, in combination with dynamical and statistical models of different complexity, can help in understanding and predicting climate variability on different scales and in estimating the probability of occurrence of extreme climate events. [1] M. Palus, V. Komarek, Z. Hrncir, K. Sterbova, Phys. Rev. E 63(4), 046211 (2001), http://www.cs.cas.cz/mp/epr/sir1-a.html [2] J. Hlinka, N. Jajcay, D. Hartman, M. Palus, Smooth Information Flow in Temperature Climate Network Reflects Mass Transport, submitted to Chaos, http://www.cs.cas.cz/mp/epr/vetry-a.html [3] M. Palus, Phys. Rev. Lett. 112, 078702 (2014), http://www.cs.cas.cz/mp/epr/xf1-a.html [4] N. Jajcay, J. Hlinka, S. Kravtsov, A. A. Tsonis, M. Palus, Geophys. Res. Lett. 43(2), 902-909 (2016), http://www.cs.cas.cz/mp/epr/xfgrl1-a.html
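Information transfer in this Granger sense can be estimated, in its crudest form, with a binned transfer-entropy estimator. The sketch below uses equiquantal binning and a single lag; the cited studies use considerably more careful conditional mutual information estimators, so treat this only as a minimal illustration on synthetic data.

```python
import numpy as np

def _bin(x, nbins=4):
    """Equiquantal (equal-occupancy) binning of a 1-D series into integer symbols."""
    ranks = np.argsort(np.argsort(x))
    return (ranks * nbins // len(x)).astype(int)

def entropy(*cols, nbins=4):
    """Joint Shannon entropy (nats) of discretized columns."""
    joint = sum(c * nbins**i for i, c in enumerate(cols))
    p = np.bincount(joint) / len(joint)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def transfer_entropy(x, y, lag=1, nbins=4):
    """TE x->y = I(x_t ; y_{t+lag} | y_t) = H(yf,yp) + H(yp,xp) - H(yp) - H(yf,yp,xp)."""
    xp, yp = _bin(x[:-lag], nbins), _bin(y[:-lag], nbins)
    yf = _bin(y[lag:], nbins)
    return (entropy(yf, yp, nbins=nbins) + entropy(yp, xp, nbins=nbins)
            - entropy(yp, nbins=nbins) - entropy(yf, yp, xp, nbins=nbins))

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(20000)   # x drives y at lag 1
print(transfer_entropy(x, y), transfer_entropy(y, x))  # x->y should clearly dominate
```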
The scale-dependent market trend: Empirical evidences using the lagged DFA method
NASA Astrophysics Data System (ADS)
Li, Daye; Kou, Zhun; Sun, Qiankun
2015-09-01
In this paper we conduct an empirical study and test the efficiency of 44 important market indexes at multiple scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations from the non-zero lags while keeping the margin of error small when measuring the local Hurst exponent. Our empirical results illustrate that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (with a local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with economic cycles, it can be concluded that economic cycles can cause anti-persistence at large time scales but that there are also other factors at work. The empirical results support the view that financial markets are multi-fractal, and they indicate that deviations from efficiency and the type of model suited to describing the trend of market prices depend on the forecasting horizon.
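For reference, the unlagged baseline of the method, standard DFA, fits a power law F(s) ∝ s^H to the detrended fluctuation function; a minimal numpy sketch is below. The lagged modification the authors introduce is not reproduced here.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns F(s) for each window scale s."""
    y = np.cumsum(x - np.mean(x))                  # profile (integrated series)
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Linear least-squares detrend within each window, then RMS of residuals.
        resid = segs - np.array([np.polyval(np.polyfit(t, seg, 1), t) for seg in segs])
        F.append(np.sqrt((resid ** 2).mean()))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(8192)                      # uncorrelated noise -> H ~ 0.5
scales = np.unique(np.logspace(1, 3, 12).astype(int))
H = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]   # slope = Hurst estimate
print(round(H, 2))
```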
The Hippocampus, Time, and Memory Across Scales
Howard, Marc W.; Eichenbaum, Howard
2014-01-01
A wealth of experimental studies with animals have offered insights about how neural networks within the hippocampus support the temporal organization of memories. These studies have revealed the existence of “time cells” that encode moments in time, much as the well-known “place cells” map locations in space. Another line of work inspired by human behavioral studies suggests that episodic memories are mediated by a state of temporal context that changes gradually over long time scales, up to at least a few thousand seconds. In this view, the “mental time travel” hypothesized to support the experience of episodic memory corresponds to a “jump back in time” in which a previous state of temporal context is recovered. We suggest that these two sets of findings could be different facets of a representation of temporal history that maintains a record of the last few thousand seconds of experience. The ability to represent long time scales comes at the cost of discarding precise information about when a stimulus was experienced—this uncertainty becomes greater for events further in the past. We review recent computational work that describes a mechanism that could construct such a scale-invariant representation. Taken as a whole, this suggests the hippocampus plays its role in multiple aspects of cognition by representing events embedded in a general spatiotemporal context. The representation of internal time can be useful across nonhippocampal memory systems. PMID:23915126
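One way to picture a scale-invariant record with graded temporal uncertainty, loosely in the spirit of the computational models the review cites, is a bank of leaky integrators whose time constants are spaced geometrically. Everything below (time constants, inputs, durations) is an illustrative assumption, not the authors' model.

```python
import numpy as np

# Geometrically spaced time constants store a log-compressed history: recent events
# are represented finely, older ones coarsely (greater uncertainty further back).
taus = 2.0 ** np.arange(1, 12)         # time constants from 2 to 2048 steps
state = np.zeros_like(taus)

impulse_times, T = [50, 500, 3000], 4000
for t in range(T):
    inp = 1.0 if t in impulse_times else 0.0
    state += (-state + inp) / taus     # Euler step of tau * ds/dt = -s + input

# Slow units still "remember" the early events; fast units have forgotten them,
# so the age of an event is readable from which units remain active.
for tau, s in zip(taus, state):
    print(f"tau={int(tau):5d}  activation={s:.4f}")
```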
Integration of Air Quality & Exposure Models for Health Studies
The presentation describes a new community-scale tool called exposure model for individuals (EMI), which predicts five tiers of individual-level exposure metrics for ambient PM using outdoor concentrations, questionnaires, weather, and time-location information. In this modeling ...
Leveraging Large-Scale Cancer Genomics Datasets for Germline Discovery - TCGA
The session will review how data types have changed over time, focusing on how next-generation sequencing is being employed to yield more precise information about the underlying genomic variation that influences tumor etiology and biology.
Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding.
Huang, Chao; Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard
2016-06-01
Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole-cell intracellular recordings in combination with information-theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state-dependent in the adaptive-threshold case if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive-threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information.
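A toy leaky integrate-and-fire simulation makes the mechanism concrete: the threshold tracks a low-pass-filtered copy of the membrane potential, so sustained depolarization raises it and the neuron responds to input changes rather than absolute level. All constants below are arbitrary illustrative choices, not fitted to the recordings in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.1, 5000                      # ms per step, number of steps
tau_m, tau_th = 10.0, 50.0             # membrane and threshold time constants (ms)
v_rest, th0, alpha = -70.0, -50.0, 0.4 # alpha couples threshold to subthreshold V

v, theta, spikes = v_rest, th0, []
I = 25.0 + 6.0 * rng.standard_normal(T)            # noisy input drive (arbitrary units)
for t in range(T):
    v += dt / tau_m * (-(v - v_rest) + I[t])       # leaky membrane integration
    # Threshold relaxes toward th0 plus a term tracking recent depolarization:
    # a depolarized history raises the effective threshold.
    theta += dt / tau_th * (alpha * (v - v_rest) - (theta - th0))
    if v >= theta:
        spikes.append(t * dt)
        v = v_rest                                 # reset after the action potential
        theta += 2.0                               # spike-triggered threshold jump

print(f"{len(spikes)} spikes, mean rate {len(spikes) / (T * dt / 1000.0):.1f} Hz")
```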
NASA Astrophysics Data System (ADS)
Shiga, Yoichi P.; Tadić, Jovan M.; Qiu, Xuemei; Yadav, Vineet; Andrews, Arlyn E.; Berry, Joseph A.; Michalak, Anna M.
2018-01-01
Recent studies have shown the promise of remotely sensed solar-induced chlorophyll fluorescence (SIF) in informing terrestrial carbon exchange, but analyses have been limited to either plot-level (~1 km2) or hemispheric/global (~10^8 km2) scales due to the lack of a direct measure of carbon exchange at intermediate scales. Here we use a network of atmospheric CO2 observations over North America to explore the value of SIF for informing net ecosystem exchange (NEE) at regional scales. We find that SIF explains space-time NEE patterns at regional (~100 km2) scales better than a variety of other vegetation and climate indicators. We further show that incorporating SIF into an atmospheric inversion leads to a spatial redistribution of NEE estimates over North America, with more uptake attributed to agricultural regions and less to needleleaf forests. Our results highlight the synergy of ground-based and spaceborne carbon cycle observations.
Evidence and Options for Informed Decision-Making to Achieve Arctic Sustainability
NASA Astrophysics Data System (ADS)
Berkman, P. A.
2017-12-01
This presentation will consider the development of evidence and options for informed decision-making that will need to operate across generations to achieve Arctic sustainability. The context of these Arctic decisions is global, recognizing that we live in an interconnected civilization on a planetary scale, as revealed unambiguously by evidence from the 'world' wars in the first half of the 20th century. First, for clarification, data and evidence are not the same. Data is generated from information and observations to answer specific questions, posed with methods from the natural and social sciences as well as indigenous knowledge. These data reveal patterns and trends in our societies and natural world, underscoring the evidence for decisions to address impacts, issues and resources within, across and beyond the boundaries of nations - recognizing that nations still are the principal jurisdictional unit. However, for this decision-support process to be effective, options (without advocacy) - which can be used or ignored explicitly - must be generated from the evidence, taking into consideration stakeholder perspectives and governance records in a manner that will contribute to informed decision-making. The resulting decisions will involve built elements that require capitalization and technology as well as governance mechanisms coming from diverse jurisdictional authorities. The innovation required is to balance economic prosperity, environmental protection and societal well-being. These three pillars of sustainability further involve stability, balancing urgencies of the moment and of future generations, recognizing that children born today will be alive in the 22nd century. Consequently, options for informed decisions must operate across a continuum of urgencies from security time scales to sustainability time scales. This decision-support process is holistic (international, interdisciplinary and inclusive), reflecting the application of science diplomacy to balance national interests and common interests for the benefit of all on Earth.
Depth resolution and preferential sputtering in depth profiling of sharp interfaces
NASA Astrophysics Data System (ADS)
Hofmann, S.; Han, Y. S.; Wang, J. Y.
2017-07-01
The influence of preferential sputtering on the depth resolution of sputter depth profiles is studied for different sputtering rates of the two components at an A/B interface. Surface concentration and intensity depth profiles on both the sputtering time scale (as measured) and the depth scale are obtained by calculations with an extended Mixing-Roughness-Information depth (MRI) model. The results show a clear difference for the two extreme cases: (a) preponderant roughness and (b) preponderant atomic mixing. In case (a), the interface width on the time scale (Δt(16-84%)) increases with preferential sputtering if the faster-sputtering component is on top of the slower-sputtering component, but the true resolution on the depth scale (Δz(16-84%)) stays constant. In case (b), the interface width on the time scale stays constant but the true resolution on the depth scale varies with preferential sputtering. For a similar order of magnitude of the atomic mixing and roughness parameters, a transition state between the two extremes is obtained. While the normalized intensity profile of SIMS represents that of the surface concentration, an additional broadening effect is encountered in XPS or AES through the influence of the mean electron escape depth, which may even cause an additional matrix effect at the interface.
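The Δz(16-84%) measure used here is simply the depth separation of the 84% and 16% levels of the normalized profile. A small numpy sketch, with a hypothetical logistic interface profile standing in for measured data:

```python
import numpy as np

def width_16_84(z, intensity):
    """Interface width: distance between the 84% and 16% normalized-intensity depths."""
    norm = (intensity - intensity.min()) / (intensity.max() - intensity.min())
    # Profile decreases from 1 to 0 across the interface; reverse so xp is increasing.
    z84 = np.interp(0.84, norm[::-1], z[::-1])
    z16 = np.interp(0.16, norm[::-1], z[::-1])
    return abs(z16 - z84)

z = np.linspace(0, 20, 400)                       # depth (nm), hypothetical units
profile = 1.0 / (1.0 + np.exp((z - 10.0) / 1.5))  # logistic A/B interface profile
print(round(width_16_84(z, profile), 2))          # ~4.97 = 2 * 1.5 * ln(0.84/0.16)
```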
Large scale track analysis for wide area motion imagery surveillance
NASA Astrophysics Data System (ADS)
van Leeuwen, C. J.; van Huis, J. R.; Baan, J.
2016-10-01
Wide Area Motion Imagery (WAMI) enables image-based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high-quality track information for more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows quick retrieval of only a part, or a sub-sampling, of the original high-resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale, by skipping to the correct frames and reconstructing the image. Location-based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
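The tile-layout idea can be sketched independently of the actual file format: if each pyramid level is stored contiguously in a predefined (here row-major) order, the tiles covering a requested region at a requested scale reduce to a directly computable set of linear indices to seek to. The tile size, level ordering and function names below are our assumptions, not the authors' specification:

```python
import math
from typing import List, Tuple

TILE = 256  # tile edge in pixels (illustrative choice)

def level_grid(width: int, height: int, level: int) -> Tuple[int, int]:
    """Tile-grid dimensions at a pyramid level (level 0 = full resolution,
    each further level halves the image in both directions)."""
    return (math.ceil(width / 2 ** level / TILE),
            math.ceil(height / 2 ** level / TILE))

def tiles_for_region(width: int, height: int, level: int,
                     x0: int, y0: int, x1: int, y1: int) -> List[int]:
    """Linear indices of the tiles covering region [x0,x1) x [y0,y1) (given in
    full-resolution pixel coordinates) at the requested level. Levels are
    assumed stored back to back, each row-major, so a reader can seek straight
    to every returned tile without touching the rest of the file."""
    gw, gh = level_grid(width, height, level)
    offset = sum(w * h for w, h in
                 (level_grid(width, height, l) for l in range(level)))
    s = TILE * 2 ** level            # full-res pixels spanned by one tile edge
    tx0, ty0 = x0 // s, y0 // s
    tx1, ty1 = min((x1 - 1) // s, gw - 1), min((y1 - 1) // s, gh - 1)
    return [offset + ty * gw + tx
            for ty in range(ty0, ty1 + 1) for tx in range(tx0, tx1 + 1)]

# Tiles needed for a 2 km x 2 km patch at quarter resolution in a 40k x 40k frame:
print(tiles_for_region(40000, 40000, level=2, x0=12000, y0=8000, x1=14000, y1=10000))
```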
ITER L-Mode Confinement Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.M. Kaye and the ITER Confinement Database Working Group
This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated (OH) only. Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER. The L-mode thermal confinement time scaling was determined from a subset of 1312 entries for which the thermal confinement time was provided.
Improving health aid for a better planet: The planning, monitoring and evaluation tool (PLANET)
Sridhar, Devi; Car, Josip; Chopra, Mickey; Campbell, Harry; Woods, Ngaire; Rudan, Igor
2015-01-01
Background: International development assistance for health (DAH) quadrupled between 1990 and 2012, from US$ 5.6 billion to US$ 28.1 billion. This generates an increasing need for transparent and replicable tools that could be used to set investment priorities, monitor the distribution of funding in real time, and evaluate the impact of those investments. Methods: In this paper we present a methodology that addresses these three challenges. We call this approach PLANET, which stands for planning, monitoring and evaluation tool. Fundamentally, PLANET is based on a crowdsourcing approach to obtaining information relevant to the deployment of large-scale programs. Information is contributed in real time by a diverse group of participants involved in the program delivery. Findings: PLANET relies on real-time information from three levels of participants in large-scale programs: funders, managers and recipients. At each level, information is solicited to assess five key risks that are most relevant to each level of operations. The risks at the level of funders involve systematic neglect of certain areas, focus on donors' interests over those of program recipients, ineffective co-ordination between donors, questionable mechanisms of delivery and excessive loss of funding to “middle men”. At the level of managers, the risks are corruption, lack of capacity and/or competence, lack of information and/or communication, undue avoidance of governmental structures / preference for non-governmental organizations, and exclusion of local expertise. At the level of primary recipients, the risks are corruption, parallel operations / “verticalization”, misalignment with local priorities and lack of community involvement, issues with ethics, equity and/or acceptability, and low likelihood of sustainability beyond the end of the program's implementation. Interpretation: PLANET is intended as an additional tool available to policy-makers to prioritize, monitor and evaluate large-scale development programs. In this, it should complement tools such as LiST (for health care/interventions), EQUIST (for health care/interventions) and CHNRI (for health research), which also rely on information from local experts and on local context to set priorities in a transparent, user-friendly, replicable, quantifiable and specific, algorithmic-like manner. PMID:26322228
NASA Astrophysics Data System (ADS)
Purss, M. B. J.; Mueller, N. R.; Killough, B.; Oliver, S. A.
2016-12-01
In 2014 Geoscience Australia launched Water Observations from Space (WOfS) providing a continental-scale water product that shows how often surface water has been observed across Australia by the Landsat satellites since 1987. WOfS is a 23-step band-based decision tree that classifies pixels as water or non-water with 97% overall accuracy. The enabling infrastructure for WOfS is the Australian Geoscience Data Cube (AGDC), a high performance computing system organising Australian earth observation data into a systematic, consistently corrected analysis engine. The Committee on Earth Observation Satellites (CEOS) has adopted the AGDC methodology to create a series of international Data Cubes to provide the same capability to areas that would otherwise not be able to undertake time series analysis of the environment at these scales. The CEOS Systems Engineering Office (SEO) recently completed testing of WOfS using Data Cubes based on the AGDC version 2 over Kenya and Colombia. The results show how Data Cubes can provide water management information at large scales, and provide information in remote locations where other sources of water information are unavailable. The results also show an improvement in water detection capability over the Landsat CFmask. This water management product provides critical insight into the behavior of surface water over time and in particular, the extent of flooding.
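For illustration only, a band-based per-pixel water test in the spirit of such a decision tree can be sketched in a few lines; this stand-in uses a single NDWI split refined by a SWIR check and is emphatically not the 23-step WOfS classifier (band choices and thresholds are our assumptions):

```python
import numpy as np

def classify_water(green: np.ndarray, nir: np.ndarray, swir1: np.ndarray) -> np.ndarray:
    """Illustrative per-pixel water classifier in the spirit of a band-based
    decision tree. NOT the WOfS tree: one NDWI split plus one SWIR split,
    with assumed thresholds, applied to surface reflectance arrays."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    water = ndwi > 0.0          # first split: water reflects more green than NIR
    water &= swir1 < 0.05       # second split: water is dark in SWIR
    return water

# toy surface reflectance values for three pixels
green = np.array([0.08, 0.10, 0.06])
nir   = np.array([0.03, 0.30, 0.04])
swir1 = np.array([0.02, 0.20, 0.01])
print(classify_water(green, nir, swir1))   # -> [ True False  True]
```

Run per observation and accumulated over a stack of scenes, such a per-pixel test yields exactly the kind of "how often was this pixel wet" summary that WOfS reports at continental scale.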
A new approach to data management and its impact on frequency control requirements
NASA Technical Reports Server (NTRS)
Blanchard, D. L.; Fuchs, A. J.; Chi, A. R.
1979-01-01
A new approach to data management consisting of spacecraft and data/information autonomy, and its impact on frequency control requirements, is presented. An autonomous spacecraft is capable of functioning without external intervention for up to 72 hr by enabling the sensors to make observations, maintaining its health and safety, and using logical safety modes when anomalies occur. Data/information are made autonomous by associating all relevant ancillary data, such as time, position, attitude, and sensor identification, with the data/information record of an event onboard the spacecraft. The record is constructed so that the event can be physically identified in a complete and self-contained record that is independent of all other data. All data within a packet will be time tagged to the needed accuracy, and the time markings from packet to packet will be coherent to a UTC time scale.
NASA Astrophysics Data System (ADS)
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
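The core of the cdf approach can be sketched compactly: a daily rainfall total is expanded into an assumed within-day intensity distribution, the nonlinear infiltration-excess function is applied quantile by quantile, and the resulting flux distribution is integrated back to a daily total. The exponential intensity distribution and all parameter values below are illustrative assumptions, not the paper's calibrated forms:

```python
import numpy as np

def daily_runoff_cdf(daily_rain_mm, wet_fraction=0.3, infil_cap_mm_h=10.0, n=2000):
    """Sketch of the cdf approach: describe within-day rainfall intensity by a
    distribution (here exponential, an assumption), apply the nonlinear
    infiltration-excess function quantile by quantile, and integrate the
    resulting flux distribution back to a daily runoff total (mm)."""
    wet_hours = 24.0 * wet_fraction
    mean_intensity = daily_rain_mm / wet_hours          # mm/h while raining
    q = (np.arange(n) + 0.5) / n                        # evenly spaced quantiles
    intensity = -mean_intensity * np.log(1.0 - q)       # exponential quantile fn
    excess = np.maximum(intensity - infil_cap_mm_h, 0)  # infiltration excess
    return excess.mean() * wet_hours                    # integrate over the day

# Same daily total, different sub-daily variability:
print(daily_runoff_cdf(40.0, wet_fraction=0.1))   # short intense bursts
print(daily_runoff_cdf(40.0, wet_fraction=0.8))   # all-day drizzle
```

With identical daily totals, the concentrated-burst day produces far more simulated runoff than the drizzle day, which is precisely the sub-daily effect a model driven by average daily intensity cannot capture.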
An application of Landsat and computer technology to potential water pollution from soil erosion
NASA Technical Reports Server (NTRS)
Campbell, W. J.
1981-01-01
Agricultural activity has been recognized as the primary source of nonpoint source water pollution. Water quality planners have needed information that is timely, accurate, easily reproducible, and relatively inexpensive to utilize in implementing 'Best Management Practices' for water quality. In this paper, a case study shows how the combination of satellite data, which can give accurate land-cover/land-use information, and a computerized geographic information system can assess nonpoint pollution at a regional scale and be cost effective.
Understanding Pitch Perception as a Hierarchical Process with Top-Down Modulation
Balaguer-Ballester, Emili; Clark, Nicholas R.; Coath, Martin; Krumbholz, Katrin; Denham, Susan L.
2009-01-01
Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model, which provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing. PMID:19266015
Degradation of metallic materials studied by correlative tomography
NASA Astrophysics Data System (ADS)
Burnett, T. L.; Holroyd, N. J. H.; Lewandowski, J. J.; Ogurreck, M.; Rau, C.; Kelley, R.; Pickering, E. J.; Daly, M.; Sherry, A. H.; Pawar, S.; Slater, T. J. A.; Withers, P. J.
2017-07-01
There is a huge array of characterization techniques available today, and increasingly powerful computing resources allow for the effective analysis and modelling of large datasets. However, each experimental and modelling tool only spans limited time and length scales. Correlative tomography can be thought of as the extension of correlative microscopy into three dimensions, connecting different techniques, each providing different types of information, or covering different time or length scales. Here the focus is on the linking of time-lapse X-ray computed tomography (CT) and serial section electron tomography using the focussed ion beam (FIB)-scanning electron microscope to study the degradation of metals. Correlative tomography can provide new levels of detail by delivering a multiscale 3D picture of key regions of interest. Specifically, the Xe+ Plasma FIB is used as an enabling tool for large-volume high-resolution serial sectioning of materials, and also as a tool for preparation of microscale test samples and samples for nanoscale X-ray CT imaging. The exemplars presented illustrate general aspects relating to correlative workflows, as well as to the time-lapse characterisation of metal microstructures during various failure mechanisms, including ductile fracture of steel and the corrosion of aluminium and magnesium alloys. Correlative tomography is already providing significant insights into materials behaviour, linking together information from different instruments across different scales. Multiscale and multifaceted workflows will become increasingly routine, providing a feed into multiscale materials models as well as illuminating other areas, particularly where hierarchical structures are of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya
The sudden release of toxic contaminants that reach indoor spaces can be hazardous to building occupants. To respond effectively, the contaminant release must be quickly detected and characterized to determine unobserved parameters, such as release location and strength. Characterizing the release requires solving an inverse problem. Designing a robust real-time sensor system that solves the inverse problem is challenging because the fate and transport of contaminants is complex, sensor information is limited and imperfect, and real-time estimation is computationally constrained. This dissertation uses a system-level approach, based on a Bayes Monte Carlo framework, to develop sensor-system design concepts and methods. I describe three investigations that explore complex relationships among sensors, network architecture, interpretation algorithms, and system performance. The investigations use data obtained from tracer gas experiments conducted in a real building. The influence of individual sensor characteristics on the sensor-system performance for binary-type contaminant sensors is analyzed. Performance tradeoffs among sensor accuracy, threshold level and response time are identified; these attributes could not be inferred without a system-level analysis. For example, more accurate but slower sensors are found to outperform less accurate but faster sensors. Secondly, I investigate how the sensor-system performance can be understood in terms of contaminant transport processes and the model representation that is used to solve the inverse problem. The determination of release location and mass are shown to be related to and constrained by transport and mixing time scales. These time scales explain performance differences among different sensor networks. For example, the effect of longer sensor response times is comparably less for releases with longer mixing time scales. The third investigation explores how information fusion from heterogeneous sensors may improve the sensor-system performance and offset the need for more contaminant sensors. Physics- and algorithm-based frameworks are presented for selecting and fusing information from noncontaminant sensors. The frameworks are demonstrated with door-position sensors, which are found to be more useful in natural airflow conditions, but which cannot compensate for poor placement of contaminant sensors. The concepts and empirical findings have the potential to help in the design of sensor systems for more complex building systems. The research has broader relevance to additional environmental monitoring problems, fault detection and diagnostics, and system design.
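A toy version of the Bayes Monte Carlo estimation step may clarify the framework: release hypotheses are drawn from a prior, pushed through a forward transport model (here a crude distance-decay surrogate), and weighted by the likelihood of the observed binary sensor pattern. The surrogate model, sensor error rate and all numbers are assumptions for illustration, not the dissertation's building model:

```python
import numpy as np
rng = np.random.default_rng(0)

sensors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0]])
threshold, p_err = 0.5, 0.05        # binary sensors alarm above threshold and
observed = np.array([1, 0, 1])      # misreport with 5% probability (assumed)

# Draw release hypotheses (location, strength) from the prior ...
n = 20000
loc = rng.uniform(-10, 10, size=(n, 2))   # uniform prior over the floor plan
strength = rng.uniform(1, 50, size=n)     # uniform prior on source strength

# ... and weight each one by the likelihood of the observed alarm pattern,
# using a crude distance-decay surrogate for fate and transport.
weights = np.ones(n)
for s_xy, obs in zip(sensors, observed):
    d = np.linalg.norm(loc - s_xy, axis=1)
    conc = strength / (1.0 + d**2)            # surrogate forward model
    alarm = conc > threshold
    weights *= np.where(alarm == bool(obs), 1 - p_err, p_err)
weights /= weights.sum()

est_loc = weights @ loc                       # posterior-mean release location
est_strength = weights @ strength             # posterior-mean release strength
print(f"location ~ {est_loc.round(2)}, strength ~ {est_strength:.1f}")
```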
Speed-difficulty trade-off in speech: Chinese versus English
Sun, Yao; Latash, Elizaveta M.; Mikaelian, Irina L.
2011-01-01
This study continues the investigation of the previously described speed-difficulty trade-off in picture description tasks. In particular, we tested the hypothesis that Mandarin Chinese and American English are similar in showing logarithmic dependences between speech time and index of difficulty (ID), while they differ significantly in the amount of time needed to describe simple pictures; this difference increases for more complex pictures and is associated with a proportional difference in the number of syllables used. Subjects (eight Chinese speakers and eight English speakers) were tested in pairs. One subject (the Speaker) described simple pictures, while the other subject (the Performer) tried to reproduce the pictures based on the verbal description as quickly as possible with a set of objects. The Chinese speakers initiated speech production significantly faster than the English speakers. Speech time scaled linearly with ln(ID) in all subjects, but the regression coefficient was significantly higher in the English speakers as compared with the Chinese speakers. The number of errors was somewhat lower in the Chinese participants (not significantly). The Chinese pairs also showed a shorter delay between the initiation of speech and initiation of action by the Performer, shorter movement time by the Performer, and shorter overall performance time. The number of syllables scaled with ID, and the Chinese speakers used significantly smaller numbers of syllables. Speech rate was comparable between the two groups, about 3 syllables/s; it dropped for more complex pictures (higher ID). When asked to reproduce the same pictures without speaking, movement time scaled linearly with ln(ID); the Chinese performers were slower than the English performers. We conclude that natural languages show a speed-difficulty trade-off similar to Fitts’ law; the trade-offs in movement and speech production are likely to originate at a cognitive level. The time advantage of the Chinese participants originates neither from similarity between the simple pictures and Chinese written characters nor from sloppier performance; it is linked to using fewer syllables to transmit the same information. We suggest that natural languages may differ by informational density, defined as the amount of information transmitted by a given number of syllables. PMID:21479658
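The reported scaling invites a small worked example: fitting T = a + b·ln(ID) to speech times recovers a per-language intercept and slope, where a higher slope b corresponds to the steeper difficulty cost reported for the English speakers. The data values here are hypothetical placeholders, not the study's measurements:

```python
import numpy as np

# Hypothetical mean speech times (s) for pictures of increasing index of difficulty
ID = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
T_chinese = np.array([1.9, 2.6, 3.2, 3.9, 4.5])   # made-up values
T_english = np.array([2.3, 3.4, 4.6, 5.7, 6.9])   # made-up values

# Fitts-like law for speech: T = a + b * ln(ID); b measures the difficulty cost
for name, T in (("Chinese", T_chinese), ("English", T_english)):
    b, a = np.polyfit(np.log(ID), T, 1)           # slope first, then intercept
    print(f"{name}: T = {a:.2f} + {b:.2f} ln(ID)")
```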
Kawasaki, Masahiro; Uno, Yutaka; Mori, Jumpei; Kobata, Kenji; Kitajo, Keiichi
2014-01-01
Electroencephalogram (EEG) phase synchronization analyses can reveal large-scale communication between distant brain areas. However, it is not possible to identify the directional information flow between distant areas using conventional phase synchronization analyses. In the present study, we applied transcranial magnetic stimulation (TMS) to the occipital area in subjects who were resting with their eyes closed, and analyzed the spatial propagation of transient TMS-induced phase resetting by using the transfer entropy (TE), to quantify the causal and directional flow of information. The time-frequency EEG analysis indicated that the theta (5 Hz) phase locking factor (PLF) reached its highest value at the distant area (the motor area in this study), with a time lag that followed the peak of the transient PLF enhancements of the TMS-targeted area at the TMS onset. Phase-preservation index (PPI) analyses demonstrated significant phase resetting at the TMS-targeted area and distant area. Moreover, the TE from the TMS-targeted area to the distant area increased clearly during the delay that followed TMS onset. Interestingly, the time lags were almost coincident between the PLF and TE results (152 vs. 165 ms), which provides strong evidence that the emergence of the delayed PLF reflects the causal information flow. Such tendencies were observed only in the higher-intensity TMS condition, and not in the lower-intensity or sham TMS conditions. Thus, TMS may manipulate large-scale causal relationships between brain areas in an intensity-dependent manner. We demonstrated that single-pulse TMS modulated global phase dynamics and directional information flow among synchronized brain networks. Therefore, our results suggest that single-pulse TMS can manipulate both incoming and outgoing information in the TMS-targeted area associated with functional changes.
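Transfer entropy, the directionality measure used here, can be estimated for discretized signals with the plug-in formula TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log₂[p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t)]. A minimal sketch follows; the quantile binning and single-sample history are simplifying assumptions, and EEG practice (embedding dimensions, surrogates, statistics) is considerably more involved:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4, lag=1):
    """Plug-in transfer entropy TE(X -> Y) in bits for quantile-binned signals:
    TE = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]."""
    edges = lambda s: np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
    xd, yd = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    triples = Counter(zip(yd[lag:], yd[:-lag], xd[:-lag]))
    pairs_yy = Counter(zip(yd[lag:], yd[:-lag]))
    pairs_yx = Counter(zip(yd[:-lag], xd[:-lag]))
    singles = Counter(yd[:-lag])
    n = len(yd) - lag
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n                                  # p(y+, y, x)
        p_cond_full = c / pairs_yx[(y0, x0)]             # p(y+ | y, x)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]   # p(y+ | y)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)     # y is driven by past x
print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")  # clearly positive
print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")  # near zero
```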
A framework for simultaneous aerodynamic design optimization in the presence of chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi
Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.
Low NOx heavy fuel combustor concept program
NASA Technical Reports Server (NTRS)
Russell, P.; Beal, G.; Hinton, B.
1981-01-01
A gas turbine technology program to improve and optimize the staged rich-lean low NOx combustor concept is described. Subscale combustor tests were run to develop the design information for optimization of the fuel preparation, rich burn, quick air quench, and lean burn steps of the combustion process. The program provides information for the design of high-pressure full-scale gas turbine combustors capable of providing environmentally clean combustion of minimally processed and synthetic fuels. It is concluded that liquid fuel atomization and mixing, rich zone stoichiometry, rich zone liner cooling, rich zone residence time, and quench zone stoichiometry are important considerations in the design and scale-up of the rich-lean combustor.
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy, and three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals a clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups that correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
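As a sketch of the overall pipeline, the classic cross-sample entropy (Chebyshev-distance template matching, rather than the paper's Kronecker-delta or permutation variants) can serve as the pairwise dissimilarity fed into an off-the-shelf MDS embedding; series groupings, parameters m and r, and the capping of unmatched pairs are our choices:

```python
import numpy as np
from sklearn.manifold import MDS

def cross_sample_entropy(x, y, m=2, r=0.3):
    """Classic cross-sample entropy: -ln(A/B), where B (resp. A) counts
    template pairs of length m (resp. m+1) from the two z-scored series that
    match within tolerance r under the Chebyshev distance."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    def matches(mm):
        xt = np.lib.stride_tricks.sliding_window_view(x, mm)
        yt = np.lib.stride_tricks.sliding_window_view(y, mm)
        d = np.abs(xt[:, None, :] - yt[None, :, :]).max(axis=2)
        return (d < r).sum()
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(2)
series = [np.cumsum(rng.standard_normal(300)) for _ in range(4)]  # random walks
series += [np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)
           for _ in range(4)]                                     # noisy sines

n = len(series)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = cross_sample_entropy(series[i], series[j])
finite = np.isfinite(D)
D[~finite] = 2 * D[finite].max()   # cap unmatched pairs at a large dissimilarity

coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords.round(2))             # walks and sines fall into separate clusters
```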
Decision dynamics of departure times: Experiments and modeling
NASA Astrophysics Data System (ADS)
Sun, Xiaoyan; Han, Xiao; Bao, Jian-Zhang; Jiang, Rui; Jia, Bin; Yan, Xiaoyong; Zhang, Boyu; Wang, Wen-Xu; Gao, Zi-You
2017-10-01
A fundamental problem in traffic science is to understand user-choice behaviors that account for the emergence of complex traffic phenomena. Despite much effort devoted to theoretically exploring departure time choice behaviors, relatively large-scale and systematic experimental tests of theoretical predictions are still lacking. In this paper, we aim to offer a more comprehensive understanding of departure time choice behaviors in terms of a series of laboratory experiments under different traffic conditions and feedback information provided to commuters. In the experiment, the number of recruited players is much larger than the number of choices to better mimic the real scenario, in which a large number of commuters will depart simultaneously in a relatively small time window. Sufficient numbers of rounds are conducted to ensure the convergence of collective behavior. Experimental results demonstrate that collective behavior is close to the user equilibrium, regardless of different scales and traffic conditions. Moreover, the amount of feedback information has a negligible influence on collective behavior but has a relatively stronger effect on individual choice behaviors. Reinforcement learning and Fermi learning models are built to reproduce the experimental results and uncover the underlying mechanism. Simulation results are in good agreement with the experimentally observed collective behaviors.
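A minimal reinforcement-learning simulation of repeated departure-time choice shows the kind of convergence the experiments probe: commuters accumulate propensities for slots that yielded low cost, and usage drifts toward a state where all used slots cost roughly the same (the user equilibrium). The cost forms and parameters are illustrative assumptions, not the experimental design:

```python
import numpy as np
rng = np.random.default_rng(3)

N, K, ROUNDS = 120, 6, 500   # commuters, departure-time slots, repeated rounds
CAP, DELAY_W = 40.0, 0.3     # slot capacity and schedule-delay weight (assumed)
IDEAL = K - 1                # everyone would ideally depart in the last slot

def slot_costs(counts):
    """Travel cost grows with congestion in a slot; schedule-delay cost grows
    with distance from the ideal slot (illustrative linear forms)."""
    return counts / CAP + DELAY_W * np.abs(np.arange(K) - IDEAL)

q = np.ones((N, K))          # Erev-Roth propensities, uniform at the start
for _ in range(ROUNDS):
    p = q / q.sum(axis=1, keepdims=True)
    choice = np.array([rng.choice(K, p=pi) for pi in p])
    counts = np.bincount(choice, minlength=K)
    costs = slot_costs(counts)
    payoff = costs.max() - costs[choice] + 0.1    # higher payoff = lower cost
    q[np.arange(N), choice] += payoff             # reinforce what worked

print("final usage:", counts)
print("slot costs :", slot_costs(counts).round(2))
# At user equilibrium, all *used* slots should carry roughly equal cost.
```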
SPACE AND TIME SCALES IN AMBIENT OZONE DATA. (R825260)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Extreme Precipitation and High-Impact Landslides
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Adler, Robert; Huffman, George; Peters-Lidard, Christa
2012-01-01
It is well known that extreme or prolonged rainfall is the dominant trigger of landslides; however, there remain large uncertainties in characterizing the distribution of these hazards and meteorological triggers at the global scale. Researchers have evaluated the spatiotemporal distribution of extreme rainfall and landslides at local and regional scales primarily using in situ data, yet few studies have mapped rainfall-triggered landslide distribution globally due to the dearth of landslide data and consistent precipitation information. This research uses a newly developed Global Landslide Catalog (GLC) and a 13-year satellite-based precipitation record from Tropical Rainfall Measuring Mission (TRMM) data. For the first time, these two unique products provide the foundation to quantitatively evaluate the co-occurrence of precipitation and rainfall-triggered landslides globally. The GLC, available from 2007 to the present, contains information on reported rainfall-triggered landslide events around the world using online media reports, disaster databases, etc. When evaluating this database, we observed that 2010 had a large number of high-impact landslide events relative to previous years. This study considers how variations in extreme and prolonged satellite-based rainfall are related to the distribution of landslides over the same time scales for three active landslide areas: Central America, the Himalayan Arc, and central-eastern China. Several test statistics confirm that TRMM rainfall generally scales with the observed increase in landslide reports and fatal events for 2010 and previous years over each region. These findings suggest that the co-occurrence of satellite precipitation and landslide reports may serve as a valuable indicator for characterizing the spatiotemporal distribution of landslide-prone areas in order to establish a global rainfall-triggered landslide climatology. This research also considers the sources of this extreme rainfall, citing teleconnections from ENSO as likely contributors to regional precipitation variability. This work demonstrates the potential for using satellite-based precipitation estimates to identify potentially active landslide areas at the global scale in order to improve landslide cataloging and quantify landslide triggering at daily, monthly and yearly time scales.
The epidemic spreading model and the direction of information flow in brain networks.
Meier, J; Zhou, X; Hillebrand, A; Tewarie, P; Stam, C J; Van Mieghem, P
2017-05-15
The interplay between structural connections and emerging information flow in the human brain remains an open research problem. A recent study observed global patterns of directional information flow in empirical data using the measure of transfer entropy. For higher frequency bands, the overall direction of information flow was from posterior to anterior regions whereas an anterior-to-posterior pattern was observed in lower frequency bands. In this study, we applied a simple Susceptible-Infected-Susceptible (SIS) epidemic spreading model on the human connectome with the aim to reveal the topological properties of the structural network that give rise to these global patterns. We found that direct structural connections induced higher transfer entropy between two brain regions and that transfer entropy decreased with increasing distance between nodes (in terms of hops in the structural network). Applying the SIS model, we were able to confirm the empirically observed opposite information flow patterns, and posterior hubs in the structural network seem to play a dominant role in the network dynamics. For small time scales, when these hubs acted as strong receivers of information, the global pattern of information flow was in the posterior-to-anterior direction and in the opposite direction when they were strong senders. Our analysis suggests that these global patterns of directional information flow are the result of an unequal spatial distribution of the structural degree between posterior and anterior regions and their directions seem to be linked to different time scales of the spreading process.
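The spreading model itself is compact. A discrete-time SIS sketch on a toy graph (our own example network with assumed infection and recovery rates, not the human connectome) produces the node state series from which directed measures such as transfer entropy can then be computed:

```python
import numpy as np
rng = np.random.default_rng(4)

# Small undirected network: node 0 is a high-degree "hub", node 6 peripheral
A = np.zeros((7, 7), dtype=int)
for i, j in [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (5, 6)]:
    A[i, j] = A[j, i] = 1

def sis_run(A, beta=0.3, delta=0.2, steps=5000):
    """Discrete-time SIS: each infected neighbour infects with prob beta per
    step; infected nodes recover with prob delta. Returns 0/1 state series."""
    n = A.shape[0]
    state = rng.random(n) < 0.5
    out = np.empty((steps, n), dtype=int)
    for t in range(steps):
        pressure = A @ state                  # number of infected neighbours
        p_inf = 1 - (1 - beta) ** pressure    # prob of catching it this step
        new_inf = (~state) & (rng.random(n) < p_inf)
        recover = state & (rng.random(n) < delta)
        state = (state | new_inf) & ~recover
        out[t] = state
    return out

X = sis_run(A)
print("fraction of time infected per node:", X.mean(axis=0).round(2))
# The hub (node 0) is active most often; directed information flow between
# these 0/1 series can then be quantified with transfer entropy, as in the
# empirical analyses the study compares against.
```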
The crack detection algorithm of pavement image based on edge information
NASA Astrophysics Data System (ADS)
Yang, Chunde; Geng, Mingyue
2018-05-01
As images of pavement cracks are affected by a large amount of complicated noise, such as uneven illumination and water stains, detected cracks are often discontinuous and the main body information at the edge of the cracks is easily lost. To solve this problem, a crack detection algorithm for pavement images based on edge information is proposed. Firstly, the image is pre-processed by a nonlinear gray-scale transform function and a reconstruction filter to enhance the linear characteristic of the crack. At the same time, an adaptive thresholding method is designed to coarsely extract the crack edges according to the gray-scale gradient feature and obtain a crack gradient information map. Secondly, candidate edge points are obtained according to the gradient information, and the edge is detected based on single-pixel percolation processing, which is improved by using the local difference between pixels in a fixed region. Finally, the complete crack is obtained by filling the crack edge. Experimental results show that the proposed method can accurately detect pavement cracks and preserve edge information.
Percolation transport theory and relevance to soil formation, vegetation growth, and productivity
NASA Astrophysics Data System (ADS)
Hunt, A. G.; Ghanbarian, B.
2016-12-01
Scaling laws of percolation theory have been applied to generate the time dependence of vegetation growth rates (both intensively managed and natural) and soil formation rates. The soil depth is thus equal to the solute vertical transport distance, and the soil production function, chemical weathering rates, and C and N storage rates are all given by the time derivative of the soil depth. Approximate numerical coefficients based on the maximum flow rates in soils have been proposed, leading to a broad understanding of such processes. What is now required is an accurate understanding of the variability of the coefficients in the scaling relationships. The present abstract focuses on the scaling relationship for solute transport and soil formation. A soil formation rate relates length, x, and time, t, scales, meaning that the missing coefficient must include information about fundamental space and time scales, x0 and t0. x0 is proposed to be a fundamental mineral heterogeneity scale, i.e. a median particle diameter. t0 is then found from the ratio of x0 and a fundamental flow rate, v0, which is identified with the net infiltration rate. The net infiltration rate is equal to precipitation, P, less evapotranspiration, ET, plus run-on less run-off. Using this hypothesis, it is possible to predict soil depths and formation rates as functions of time and P - ET, the formation rate as a function of depth, and soil calcic and gypsic horizon depths as functions of P - ET. It is also possible to determine when soils are in equilibrium, and to predict relationships between erosion rates and soil formation rates.
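Written out, the scaling relationship sketched above takes the following form (our rendering; the backbone fractal dimension D_b ≈ 1.87 for 3D random percolation is the value typically used in this line of work):

```latex
x(t) = x_0 \left(\frac{t}{t_0}\right)^{1/D_b},
\qquad t_0 = \frac{x_0}{v_0},
\qquad v_0 = P - \mathrm{ET} + \text{run-on} - \text{run-off}
```

The soil production function then follows by differentiation, dx/dt = (x_0 / (D_b t_0)) (t/t_0)^{1/D_b - 1}, a rate that decays as the soil deepens, which is why constraining x_0 and t_0 (and hence v_0) is the key to pinning down the coefficients.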
Three-dimensional time dependent computation of turbulent flow
NASA Technical Reports Server (NTRS)
Kwak, D.; Reynolds, W. C.; Ferziger, J. H.
1975-01-01
The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared to the experiments, and statistical information is extracted from the computer-generated data.
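In standard notation, the filtered momentum equation that underlies this setup reads (our rendering of the usual large-eddy decomposition, consistent with the description above):

```latex
\frac{\partial \bar{u}_i}{\partial t}
+ \frac{\partial}{\partial x_j}\bigl(\bar{u}_i \bar{u}_j\bigr)
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j
```

Here the part of τ_ij computable from the resolved field, L_ij = \overline{\bar{u}_i \bar{u}_j} - \bar{u}_i \bar{u}_j, is the Leonard stress, while the residual part is closed with an eddy viscosity, τ^R_ij ≈ -2 ν_T \bar{S}_ij, matching the treatment described in the abstract.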
A New Apparatus to Evaluate Lubricants for Space Applications: The Spiral Orbit Tribometer (SOT)
NASA Technical Reports Server (NTRS)
Jones, William R., Jr.; Pepper, Stephen V.; Jansen, Mark J.; Nguyen, QuynhGiao N.; Kingsbury, Edward P.; Loewenthal, Stuart H.; Predmore, Roamer E.
2000-01-01
Lubricants used in space mechanisms must be thoroughly tested prior to their selection for critical applications. Traditionally, two types of tests have been used: accelerated and full-scale. Accelerated tests are rapid, economical, and provide useful information for gross screening of candidate lubricants. Although full-scale tests are more believable, because they mimic actual spacecraft conditions, they are expensive and time consuming. The spiral orbit tribometer is a compromise between the two extremes. It rapidly determines the rate of tribochemically induced lubricant consumption, which leads to finite test times, under realistic rolling/pivoting conditions that occur in angular contact bearings.
Multiplex Networks of Cortical and Hippocampal Neurons Revealed at Different Timescales
Timme, Nicholas; Ito, Shinya; Myroshnychenko, Maxym; Yeh, Fang-Chin; Hiolski, Emma; Hottowy, Pawel; Beggs, John M.
2014-01-01
Recent studies have emphasized the importance of multiplex networks – interdependent networks with shared nodes and different types of connections – in systems primarily outside of neuroscience. Though the multiplex properties of networks are frequently not considered, most networks are actually multiplex networks and the multiplex specific features of networks can greatly affect network behavior (e.g. fault tolerance). Thus, the study of networks of neurons could potentially be greatly enhanced using a multiplex perspective. Given the wide range of temporally dependent rhythms and phenomena present in neural systems, we chose to examine multiplex networks of individual neurons with time scale dependent connections. To study these networks, we used transfer entropy – an information theoretic quantity that can be used to measure linear and nonlinear interactions – to systematically measure the connectivity between individual neurons at different time scales in cortical and hippocampal slice cultures. We recorded the spiking activity of almost 12,000 neurons across 60 tissue samples using a 512-electrode array with 60 micrometer inter-electrode spacing and 50 microsecond temporal resolution. To the best of our knowledge, this preparation and recording method represents a superior combination of number of recorded neurons and temporal and spatial recording resolutions to any currently available in vivo system. We found that highly connected neurons (“hubs”) were localized to certain time scales, which, we hypothesize, increases the fault tolerance of the network. Conversely, a large proportion of non-hub neurons were not localized to certain time scales. In addition, we found that long and short time scale connectivity was uncorrelated. Finally, we found that long time scale networks were significantly less modular and more disassortative than short time scale networks in both tissue types. As far as we are aware, this analysis represents the first systematic study of temporally dependent multiplex networks among individual neurons. PMID:25536059
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
...algorithms we proposed improve the time efficiency significantly for large-scale datasets. In the last chapter, we also propose an incremental reseeding... plume detection in hyper-spectral video data.
2005-11-01
...more random. Autonomous systems can exchange entropy statistics for packet streams with no confidentiality concerns, potentially enabling timely and... The analysis began with simulation results, which were validated by analysis of actual data from an Autonomous System (AS). A scale-free network is one... traffic—for example, time series of flux at given nodes and mean path length; it outputs the time series from any node queried and calculates...
Development of Vector Parabolic Equation Technique for Propagation in Urban and Tunnel Environments
2010-09-01
...proportional to the “time” elapsed. By performing various approximations to the transfer function, several approximate absorbing boundary conditions...
An effective online data monitoring and saving strategy for large-scale climate simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin
Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution information; however, how to effectively monitor and record the climate changes based on these large-scale simulation results that are continuously produced in real time still remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with the consideration of practical storage and memory capacity constraints. Finally, our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.
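One simple way to realize such a strategy is a fixed-budget record of the most extreme anomalies seen so far, maintained with a min-heap so each streamed value costs O(log k). This sketch ignores the paper's temporal/spatial balancing and is only a minimal illustration of budget-constrained online selection:

```python
import heapq
import numpy as np

class ExtremeRecorder:
    """Fixed-budget online recorder: keeps the k most extreme anomalies seen
    so far in a simulation output stream, using a min-heap keyed on |anomaly|
    so the least extreme saved record is always the first to be evicted."""
    def __init__(self, budget):
        self.budget, self.heap = budget, []

    def offer(self, step, cell, value, climatology):
        item = (abs(value - climatology), step, cell, value)
        if len(self.heap) < self.budget:
            heapq.heappush(self.heap, item)
        elif item[0] > self.heap[0][0]:
            heapq.heapreplace(self.heap, item)   # evict the least extreme record

rng = np.random.default_rng(5)
rec = ExtremeRecorder(budget=10)
for step in range(1000):                         # streamed simulation steps
    field = 15.0 + rng.standard_normal(500)      # toy gridded temperature field
    for cell in np.argsort(np.abs(field - 15.0))[-3:]:   # per-step candidates
        rec.offer(step, int(cell), float(field[cell]), 15.0)
print(sorted(rec.heap, reverse=True)[:3])        # most extreme saved records
```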
Kotamäki, Niina; Thessler, Sirpa; Koskiaho, Jari; Hannukkala, Asko O.; Huitu, Hanna; Huttula, Timo; Havento, Jukka; Järvenpää, Markku
2009-01-01
Sensor networks are increasingly being implemented for environmental monitoring and agriculture to provide spatially accurate and continuous environmental information and (near) real-time applications. These networks provide a large amount of data, which poses challenges for ensuring data quality and extracting relevant information. In the present paper we describe a river-basin-scale wireless sensor network for agriculture and water monitoring. The network, called SoilWeather, is unique and the first of this type in Finland. The performance of the network is assessed from the user and maintainer perspectives, concentrating on data quality, network maintenance and applications. The results showed that the SoilWeather network has been functioning in a relatively reliable way, but also that maintenance and data quality assurance by automatic algorithms and calibration samples require a lot of effort, especially in continuous water monitoring over large areas. We see great benefits in sensor networks enabling continuous, real-time monitoring, while data quality control and maintenance efforts highlight the need for tight collaboration between sensor and sensor network owners to decrease costs and increase the quality of the sensor data in large-scale applications. PMID:22574050
Traffic-related particulate air pollution exposure in urban areas
NASA Astrophysics Data System (ADS)
Borrego, C.; Tchepel, O.; Costa, A. M.; Martins, H.; Ferreira, J.; Miranda, A. I.
In recent years, there has been an increase in scientific studies confirming that long- and short-term exposure to particulate matter (PM) pollution leads to adverse health effects. The development of a methodology for the determination of accumulated human exposure in urban areas is the main objective of the current work, combining information on concentrations in different microenvironments with population time-activity pattern data. A link between a mesoscale meteorological and dispersion model and a local-scale air quality model was developed to define the boundary conditions for the local-scale application. The time-activity pattern of the population was derived from statistical information for different sub-population groups and linked to digital city maps. Finally, the hourly PM10 concentrations for indoor and outdoor microenvironments were estimated for the Lisbon city centre, which was chosen as the case study, based on the local-scale air quality model application for a selected period. This methodology is a first approach to estimating population exposure, calculated as the total daily values above the thresholds recommended for long- and short-term health effects. The results reveal that in the Lisbon city centre a large number of persons are exposed to PM levels exceeding the legislated limit value.
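The accumulation step behind such estimates is the standard time-weighted microenvironmental sum; in our notation (not necessarily the authors'), the daily exposure of population group g is

```latex
E_g = \sum_{i} C_i \, \Delta t_{g,i}
```

with C_i the PM10 concentration in microenvironment i (indoor or outdoor) and Δt_{g,i} the time group g spends there according to its time-activity pattern, summed over the microenvironments visited during the day.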
Bridging EO Research, Operations and Collaborative Learning
NASA Astrophysics Data System (ADS)
Scarth, Peter
2016-04-01
Building flexible and responsive processing and delivery systems is key to getting EO information used by researchers, policy agents and the public. There are typically three distinct processes we tackle to get product uptake: undertake research, operationalise the validated research, and deliver information and garner feedback in an appropriate way. In many cases however, the gaps between these process elements are large and lead to poor outcomes. Good research may be "lost" and not adopted, there may be resistance to uptake by government or NGOs of significantly better operational products based on EO data, and lack of accessibility means that there is no use of interactive science outputs to improve cross-disciplinary science or to start a dialog with citizens. So one of the most important tasks, if we wish to have broad uptake of EO information and accelerate further research, is to link these processes together in a formal but flexible way. One of the ways to operationalise research output is by building a platform that can take research code and scale it across much larger areas. In remote sensing, this is typically a system that has access to current and historical corrected imagery with a processing pipeline built over the top. To reduce the demand on high-level scientific programmers and allow cross-disciplinary researchers to hack, play and refine, this pipeline needs to be easy to use, collaborative, and linked to existing tools to encourage code experimentation and reuse. It is also critical to have efficient, tight integration with information delivery and extension components so that the science relevant to your user is available quickly and efficiently. The rapid expansion of open data licensing has helped this process, but building top-down web portals and tools without flexibility and regard for end-user needs has limited the use of EO information in many areas. This research reports on the operationalisation of a scale-independent time-series query API that allows the interrogation of the entire current processed Australian Landsat archive in web time. The system containerises data interrogation and time-series tasks to allow easy scaling and expansion, and is currently in operational use by several land management portals across the country to deliver EO land information products to government agents, NGOs and individual farmers. Plans to ingest and process the Sentinel 2 archive are well underway, and the logistics of scaling this globally using an open source project based on the Earth Engine Platform will be discussed.
Long-term wave measurements in a climate change perspective.
NASA Astrophysics Data System (ADS)
Pomaro, Angela; Bertotti, Luciana; Cavaleri, Luigi; Lionello, Piero; Portilla-Yandun, Jesus
2017-04-01
At present, multi-decadal time series of wave data needed for climate studies are generally provided by long-term model simulations (hindcasts) covering the area of interest. Examples, among many, at different scales are global wave hindcasts adopting the wind fields of the ERA-Interim reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF, Reading, U.K.) and regional re-analyses such as that for the Mediterranean Sea (Lionello and Sanna, 2006). Valuable as they are, these estimates are necessarily affected by the approximations involved, the more so because of the problems encountered in modelling processes in small basins using coarse-resolution wind fields (Cavaleri and Bertotti, 2004). On the contrary, multi-decadal observed time series are rare. They have the evident advantage of somehow representing the real evolution of the waves, without the shortcomings associated with the limitation of models in reproducing the actual processes and the real variability within the wave fields. Obviously, observed wave time series are not exempt from problems. They represent very local information, hence their use to describe the wave evolution at large scale is sometimes arguable and, in general, needs the support of model simulations assessing to what extent the local value is representative of a large-scale evolution. Local effects may prevent the identification of trends that are indeed present at large scale. Moreover, regular maintenance, accurate monitoring and metadata information are crucial issues when considering the reliability of a time series for climate applications. Of course, where available, especially if spanning several decades, measured data are of great value for a number of reasons and can provide valuable clues for delving further into the physics of the processes of interest. Waves, as an integrated product of the local climate, can provide compact and meaningful information if measured in an area sensitive to even limited changes of the large-scale pattern. In addition, the availability for the area of interest of a 20-year-long dataset of directional spectra (in frequency and direction) offers an independent, theoretically corresponding and significantly long dataset, allowing the wave problem to be examined from different perspectives. In particular, we investigate the contribution of the individual wave systems that modulate the variability of waves in the Adriatic Sea. A characterization of wave conditions based on wave spectra in fact brings out a more detailed description of the different wave regimes, their associated meteorological conditions and their variation in time and geographical space.
Bridging Empirical and Physical Approaches for Landslide Monitoring and Early Warning
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Kumar, Sujay; Harrison, Ken
2011-01-01
Rainfall-triggered landslides typically occur and are evaluated at local scales, using slope-stability models to calculate coincident changes in driving and resisting forces at the hillslope level in order to anticipate slope failures. Over larger areas, detailed high resolution landslide modeling is often infeasible due to difficulties in quantifying the complex interaction between rainfall infiltration and surface materials as well as the dearth of available in situ soil and rainfall estimates and accurate landslide validation data. This presentation will discuss how satellite precipitation and surface information can be applied within a landslide hazard assessment framework to improve landslide monitoring and early warning by considering two disparate approaches to landslide hazard assessment: an empirical landslide forecasting algorithm and a physical slope-stability model. The goal of this research is to advance near real-time landslide hazard assessment and early warning at larger spatial scales. This is done by employing high resolution surface and precipitation information within a probabilistic framework to provide more physically-based grounding to empirical landslide triggering thresholds. The empirical landslide forecasting tool, running in near real-time at http://trmm.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. The physical approach considers how rainfall infiltration on a hillslope affects the in situ hydro-mechanical processes that may lead to slope failure. Evaluation of these empirical and physical approaches are performed within the Land Information System (LIS), a high performance land surface model processing and data assimilation system developed within the Hydrological Sciences Branch at NASA's Goddard Space Flight Center. LIS provides the capabilities to quantify uncertainty from model inputs and calculate probabilistic estimates for slope failures. Results indicate that remote sensing data can provide many of the spatiotemporal requirements for accurate landslide monitoring and early warning; however, higher resolution precipitation inputs will help to better identify small-scale precipitation forcings that contribute to significant landslide triggering. Future missions, such as the Global Precipitation Measurement (GPM) mission will provide more frequent and extensive estimates of precipitation at the global scale, which will serve as key inputs to significantly advance the accuracy of landslide hazard assessment, particularly over larger spatial scales.
ICLEA - The Virtual Institute of Integrated Climate and Landscape Evolution Analyses
NASA Astrophysics Data System (ADS)
Schwab, Markus; Brauer, Achim; Błaszkiewicz, Mirosław; Blume, Theresa; Raab, Thomas; Wilmking, Martin
2017-04-01
Since 2012, the partners of the virtual institute ICLEA from Germany and Poland have viewed past changes as natural experiments that serve as a guidebook for better anticipation of future changes and their impacts. Since the natural evolution has become increasingly superimposed by human impacts since the Neolithic, we include an in-depth discussion of the impacts of climate and environmental change on societies and vice versa. Understanding the causes and effects of present-day climate change on landscapes and the human habitat faces two main challenges: (I) time series of instrumental observation that are too short to cover the full range of variability, since mechanisms of climate change and landscape evolution work on different time scales, which are often not susceptible to human perception, and (II) distinct regional differences due to the location with respect to oceanic/continental climatic influences, the geological underground, and the history and intensity of anthropogenic land use. Both challenges are central to the ICLEA research strategy and demand a high degree of interdisciplinarity. In particular, the need to link observations and measurements of ongoing changes with information from the past taken from natural archives requires the joint work of scientists with very different time perspectives: on the one hand, scientists who work at geological time scales of thousands of years and more and, on the other hand, those observing and investigating recent processes at short time scales. Five complementary work packages (WP) are established according to the key research aspects: WP 1 focuses on monitoring, mainly hydrology and soil moisture as well as meteorological parameters. WP 2 links present-day and future monitoring data with the most recent past through analysing satellite images; this WP will further provide larger spatial scales. WP 3-5 focus on different natural archives to obtain a broad variety of high-quality proxy data. Tree rings provide sub-seasonal data for the last centuries up to a few millennia, varved lake sediments cover the entire research time interval at seasonal to decadal resolution, and palaeosoils and geomorphological features also cover the entire period, but not continuously and with lower resolution. Complementary information, such as climate, tree ecophysiological and limnological data, is provided through cooperation with associated partners. In these five WP the partners focus their research capacities and expertise in ICLEA. We offer young researchers an interdisciplinary and structured education and promote their early independence through coaching and mentoring. Postdoctoral rotation positions promote the dissemination of information and expertise between disciplines. ICLEA results are published in about 80 peer-reviewed scientific articles and are available to the scientific community. The long-term mission of the Virtual Institute is to provide a substantiated data basis for sustained environmental maintenance based on a profound process understanding at all relevant time scales. The aim is to explore processes of climate and landscape evolution in a historical cultural landscape extending from northeastern Germany into northwestern Poland. The northern-central European lowlands serve as a natural laboratory, providing an ideal case for a systematic and holistic approach. Further information about ICLEA: www.iclea.de
Impact of stock market structure on intertrade time and price dynamics.
Ivanov, Plamen Ch; Yuen, Ainslie; Perakakis, Pandelis
2014-01-01
We analyse times between consecutive transactions for a diverse group of stocks registered on the NYSE and NASDAQ markets, and we relate the dynamical properties of the intertrade times with those of the corresponding price fluctuations. We report that market structure strongly impacts the scale-invariant temporal organisation in the transaction timing of stocks, which we have observed to have long-range power-law correlations. Specifically, we find that, compared to NYSE stocks, stocks registered on the NASDAQ exhibit significantly stronger correlations in their transaction timing on scales within a trading day. Further, we find that companies that transfer from the NASDAQ to the NYSE show a reduction in the correlation strength of transaction timing on scales within a trading day, indicating influences of market structure. We also report a persistent decrease in correlation strength of intertrade times with increasing average intertrade time and with corresponding decrease in companies' market capitalization-a trend which is less pronounced for NASDAQ stocks. Surprisingly, we observe that stronger power-law correlations in intertrade times are coupled with stronger power-law correlations in absolute price returns and higher price volatility, suggesting a strong link between the dynamical properties of intertrade times and the corresponding price fluctuations over a broad range of time scales. Comparing the NYSE and NASDAQ markets, we demonstrate that the stronger correlations we find in intertrade times for NASDAQ stocks are associated with stronger correlations in absolute price returns and with higher volatility, suggesting that market structure may affect price behavior through information contained in transaction timing. These findings do not support the hypothesis of universal scaling behavior in stock dynamics that is independent of company characteristics and stock market structure. Further, our results have implications for utilising transaction timing patterns in price prediction and risk management optimization on different stock markets.
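Long-range power-law correlations of the kind reported here are conventionally quantified with detrended fluctuation analysis (DFA). Below is a minimal sketch of DFA on surrogate intertrade times (my own illustration of the standard method; the paper's exact analysis pipeline is not reproduced here):

```python
import numpy as np

# DFA: integrate the series, split into windows of size s, remove a linear
# trend per window, and track how the residual fluctuation F(s) scales with s.
# A scaling exponent alpha > 0.5 signals long-range positive correlations.
def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))             # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                      # detrend each segment
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=20000)    # surrogate (uncorrelated) intertrade times
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent alpha ~ {alpha:.2f} (0.5 = uncorrelated)")
```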
SLIDE - a web-based tool for interactive visualization of large-scale -omics data.
Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon
2018-06-28
Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow real-time customization of graphics based on user interaction. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE, to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data at multiple resolutions on a single screen. SLIDE is publicly available under BSD license both as an online version and as a stand-alone version at https://github.com/soumitag/SLIDE. Supplementary information is available at Bioinformatics online.
Principles of Temporal Processing Across the Cortical Hierarchy.
Himberger, Kevin D; Chien, Hsiang-Yun; Honey, Christopher J
2018-05-02
The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations, including pooling, normalization and pattern completion, enable these systems to recognize and predict spatial structure while remaining robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
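To make two of the named operations concrete, here is an illustrative sketch of temporal pooling and temporal normalization on a 1-D signal (my own construction for illustration; the review defines the operations conceptually, not as code):

```python
import numpy as np

def temporal_pooling(x, w):
    """Max-pool over non-overlapping windows of w samples."""
    n = len(x) // w
    return x[: n * w].reshape(n, w).max(axis=1)

def temporal_normalization(x, w, eps=1e-8):
    """Divisively normalize each sample by the mean magnitude of the
    preceding w samples, a temporal analogue of contrast normalization."""
    out = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        recent = x[max(0, i - w): i + 1]
        out[i] = x[i] / (np.mean(np.abs(recent)) + eps)
    return out

t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * t) * np.exp(0.2 * t)   # growing-amplitude input
pooled = temporal_pooling(signal, 50)
normalized = temporal_normalization(signal, 100)   # amplitude growth is equalized
print(pooled.shape, normalized.std())
```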
Entropic Barriers for Two-Dimensional Quantum Memories
NASA Astrophysics Data System (ADS)
Brown, Benjamin J.; Al-Shimary, Abbas; Pachos, Jiannis K.
2014-03-01
Comprehensive no-go theorems show that information encoded over local two-dimensional topologically ordered systems cannot support macroscopic energy barriers, and hence will not maintain stable quantum information at finite temperatures for macroscopic time scales. However, it is still well motivated to study low-dimensional quantum memories due to their experimental amenability. Here we introduce a grid of defect lines to Kitaev's quantum double model where different anyonic excitations carry different masses. This setting produces a complex energy landscape which entropically suppresses the diffusion of excitations that cause logical errors. We show numerically that entropically suppressed errors give rise to superexponential inverse temperature scaling and polynomial system size scaling for small system sizes over a low-temperature regime. Curiously, these entropic effects are not present below a certain low temperature. We show that we can vary the system to modify this bound and potentially extend the described effects to zero temperature.
Grünberger, Alexander; Paczia, Nicole; Probst, Christopher; Schendzielorz, Georg; Eggeling, Lothar; Noack, Stephan; Wiechert, Wolfgang; Kohlheyer, Dietrich
2012-05-08
In the continuously growing field of industrial biotechnology, scale-up from laboratory to industrial scale remains a major hurdle in developing competitive bioprocesses. During scale-up, the productivity of single cells may be affected by bioreactor inhomogeneity and population heterogeneity. Currently, these complex interactions are difficult to investigate. In this report, the design, fabrication and operation of a disposable picolitre cultivation system is described, in which environmental conditions can be well controlled on a short time scale and bacterial microcolony growth experiments can be observed by time-lapse microscopy. Three exemplary investigations are discussed, emphasizing the applicability and versatility of the device. Growth and analysis of industrially relevant bacteria with single-cell resolution (in particular Escherichia coli and Corynebacterium glutamicum), starting from one single mother cell to densely packed cultures, is demonstrated. Applying the picolitre bioreactor, 1.5-fold increased growth rates of C. glutamicum wild-type cells were observed compared to typical 1-litre lab-scale batch cultivation. Moreover, the device was used to analyse and quantify the morphological changes of an industrially relevant l-lysine producer, C. glutamicum, after artificially inducing starvation conditions. Instead of a one-week lab-scale experiment, only 1 h was sufficient to reveal the same information. Furthermore, time-lapse microscopy during 24 h picolitre cultivation of an arginine-producing strain containing a genetically encoded fluorescence sensor disclosed time-dependent single-cell productivity and growth, which was not possible with conventional methods.
Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.
1996-12-17
The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.
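The growing and interconnecting of High Amplitude Event regions is, at its core, a thresholding and connected-component operation on a seismic attribute volume. A hedged sketch of that core step follows (my illustration; the patent does not disclose code, and the names here are hypothetical):

```python
import numpy as np
from scipy import ndimage

# Identify candidate HAE regions by thresholding the absolute amplitude of a
# 3-D seismic attribute cube and labelling connected components in 3-D.
def grow_hae_regions(amplitude, threshold):
    mask = np.abs(amplitude) >= threshold
    labels, n = ndimage.label(mask)            # 3-D connectivity labelling
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return labels, sizes

rng = np.random.default_rng(1)
cube = rng.normal(size=(64, 64, 32))           # synthetic attribute volume
cube[20:30, 20:30, 10:15] += 4.0               # one synthetic "bright spot"
labels, sizes = grow_hae_regions(cube, threshold=3.0)
print(f"{len(sizes)} candidate HAE regions; largest = {int(sizes.max())} voxels")
```

In a 4-D workflow, differencing labelled regions between time-separated surveys would expose the small-scale drainage changes the patent describes.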
The Use of Census Migration Data to Approximate Human Movement Patterns across Temporal Scales
Wesolowski, Amy; Buckee, Caroline O.; Pindolia, Deepa K.; Eagle, Nathan; Smith, David L.; Garcia, Andres J.; Tatem, Andrew J.
2013-01-01
Human movement plays a key role in economies and development, the delivery of services, and the spread of infectious diseases. However, it remains poorly quantified partly because reliable data are often lacking, particularly for low-income countries. The most widely available are migration data from human population censuses, which provide valuable information on relatively long timescale relocations across countries, but do not capture the shorter-scale patterns, trips less than a year, that make up the bulk of human movement. Census-derived migration data may provide valuable proxies for shorter-term movements however, as substantial migration between regions can be indicative of well connected places exhibiting high levels of movement at finer time scales, but this has never been examined in detail. Here, an extensive mobile phone usage data set for Kenya was processed to extract movements between counties in 2009 on weekly, monthly, and annual time scales and compared to data on change in residence from the national census conducted during the same time period. We find that the relative ordering across Kenyan counties for incoming, outgoing and between-county movements shows strong correlations. Moreover, the distributions of trip durations from both sources of data are similar, and a spatial interaction model fit to the data reveals the relationships of different parameters over a range of movement time scales. Significant relationships between census migration data and fine temporal scale movement patterns exist, and results suggest that census data can be used to approximate certain features of movement patterns across multiple temporal scales, extending the utility of census-derived migration data. PMID:23326367
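The spatial interaction model mentioned in the abstract is typically of the gravity family. As a minimal sketch (my own assumption about the functional form; the paper's exact specification may differ), a model T_ij = k * P_i^a * P_j^b / d_ij^g can be fit by log-linear least squares:

```python
import numpy as np

def fit_gravity(trips, pop_origin, pop_dest, dist):
    """Fit log T = log k + a log Pi + b log Pj - g log d by least squares."""
    m = trips > 0                              # keep positive flows for the log
    X = np.column_stack([
        np.ones(m.sum()),
        np.log(pop_origin[m]),
        np.log(pop_dest[m]),
        np.log(dist[m]),
    ])
    y = np.log(trips[m])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    logk, a, b, neg_g = coef
    return np.exp(logk), a, b, -neg_g

rng = np.random.default_rng(2)
n = 500
Pi, Pj = rng.uniform(1e4, 1e6, n), rng.uniform(1e4, 1e6, n)
d = rng.uniform(10, 500, n)                    # inter-county distances, km
T = 1e-4 * Pi**0.8 * Pj**0.7 / d**1.5 * rng.lognormal(0, 0.2, n)
k, a, b, g = fit_gravity(T, Pi, Pj, d)
print(f"k={k:.2e}, a={a:.2f}, b={b:.2f}, gamma={g:.2f}")
```

Refitting the same form to flows aggregated at weekly, monthly, and annual windows is one way to compare parameters across movement time scales, as the study does.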
Turbulent transport measurements with a laser Doppler velocimeter.
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.
1972-01-01
The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time-average spectrum contains information only for times shorter than the Lagrangian-integral time scale of the turbulence. To examine the long-time behavior, one must use either extremely small scattering angles, much-longer-wavelength radiation, or a different mode of signal analysis, e.g., FM detection.
Research on the F/A-18E/F Using a 22%-Dynamically-Scaled Drop Model
NASA Technical Reports Server (NTRS)
Croom, M.; Kenney, H.; Murri, D.; Lawson, K.
2000-01-01
Research on the F/A-18E/F configuration was conducted using a 22%-dynamically-scaled drop model to study flight dynamics in the subsonic regime. Several topics were investigated including longitudinal response, departure/spin resistance, developed spins and recoveries, and the falling leaf mode. Comparisons to full-scale flight test results were made and show that the drop model correlates strongly with the airplane even under very dynamic conditions. The capability to use the drop model to expand on the information gained from full-scale flight testing is also discussed. Finally, a preliminary analysis of an unusual inclined spinning motion, dubbed the "cartwheel", is presented here for the first time.
An improved global dynamic routing strategy for scale-free network with tunable clustering
NASA Astrophysics Data System (ADS)
Sun, Lina; Huang, Ning; Zhang, Yue; Bai, Yannan
2016-08-01
An efficient routing strategy can deliver packets quickly and thereby improve network capacity. Node congestion and transmission path length are unavoidable real-time factors for a good routing strategy. Existing dynamic global routing strategies consider only the congestion of neighbor nodes and the shortest path, ignoring the congestion of other key nodes on the path. With the development of detection methods and techniques, global traffic information is readily available and important for the routing choice; reasonable use of this information can effectively improve network routing. We therefore propose an improved global dynamic routing strategy, which considers the congestion of all nodes on the shortest path and incorporates the waiting time of the most congested node into the path cost. We investigate the effectiveness of the proposed routing for scale-free networks with different clustering coefficients. The shortest-path routing strategy and a traffic-awareness routing strategy that considers only the waiting time of neighbor nodes are analyzed for comparison. Simulation results show that network capacity is greatly enhanced compared with shortest-path routing, while congestion builds up more slowly than under the traffic-awareness routing strategy. Increasing the clustering coefficient not only reduces network throughput but also increases the average transmission path length for scale-free networks with tunable clustering. The proposed routing thus helps to ease network congestion and to inform network routing strategy design.
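A minimal sketch of the path-scoring idea, under my own assumptions (the paper gives no code): score each candidate shortest path by hop count plus the queue lengths of all nodes on it, with an extra penalty for the single most congested node, then route on the best-scoring path.

```python
import networkx as nx

def congestion_aware_path(G, src, dst, queue, h=1.0):
    """queue: dict node -> number of packets waiting; h weights the penalty
    for the most congested intermediate node on the path."""
    best_path, best_cost = None, float("inf")
    for path in nx.all_shortest_paths(G, src, dst):
        inner = path[1:-1]                     # intermediate nodes only
        load = sum(queue[v] for v in inner)
        worst = max((queue[v] for v in inner), default=0)
        cost = (len(path) - 1) + load + h * worst
        if cost < best_cost:
            best_path, best_cost = path, cost
    return best_path

G = nx.barabasi_albert_graph(200, 3, seed=0)   # scale-free test network
queue = {v: 0 for v in G}
queue[5] = 50                                  # one heavily congested hub
print(congestion_aware_path(G, 0, 100, queue)) # avoids node 5 when possible
```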
Structural similitude and design of scaled down laminated models
NASA Technical Reports Server (NTRS)
Simitses, G. J.; Rezaeepazhand, J.
1993-01-01
The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance and safety. However, the experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and the factors which affect the accuracy of the scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, the identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting certain types of distortion from exact duplication of the prototype (partial similarity). Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws. Acceptable intervals and limitations for these parameters and scaling laws are then discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under destabilizing loads applied individually, the vibrational characteristics of the same plates, as well as the cylindrical bending of beam-plates.
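As a hedged illustration of the kind of response scaling law similitude theory yields (a textbook classical thin-plate example under my own assumptions, not a result quoted from this paper), consider the natural frequency of a plate of thickness h and span a made of the same material in model and prototype:

```latex
% Classical thin-plate bending gives, for mode (m,n),
%   \omega_{mn} \propto \frac{h}{a^{2}} \sqrt{\frac{E}{\rho\,(1-\nu^{2})}} .
% For a model (m) and prototype (p) of the same material, complete
% similarity therefore implies the response scaling law
\[
  \frac{\omega_{mn}^{(p)}}{\omega_{mn}^{(m)}}
  = \frac{h_p}{h_m}\left(\frac{a_m}{a_p}\right)^{2},
\]
% so a half-thickness, half-span model vibrates at twice the
% prototype frequency.
```

For laminates, the governing-equation approach the abstract describes produces analogous ratios involving the bending stiffnesses D_ij rather than a single modulus.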
Li, Xiaomeng; Dou, Qi; Chen, Hao; Fu, Chi-Wing; Qi, Xiaojuan; Belavý, Daniel L; Armbrecht, Gabriele; Felsenberg, Dieter; Zheng, Guoyan; Heng, Pheng-Ann
2018-04-01
Intervertebral discs (IVDs) are small joints that lie between adjacent vertebrae. The localization and segmentation of IVDs are important for spine disease diagnosis and measurement quantification. However, manual annotation is time-consuming and error-prone with limited reproducibility, particularly for volumetric data. In this work, our goal is to develop an automatic and accurate method based on fully convolutional networks (FCN) for the localization and segmentation of IVDs from multi-modality 3D MR data. Compared with single modality data, multi-modality MR images provide complementary contextual information, which contributes to better recognition performance. However, how to effectively integrate such multi-modality information to generate accurate segmentation results remains to be further explored. In this paper, we present a novel multi-scale and modality dropout learning framework to locate and segment IVDs from four-modality MR images. First, we design a 3D multi-scale context fully convolutional network, which processes the input data in multiple scales of context and then merges the high-level features to enhance the representation capability of the network for handling the scale variation of anatomical structures. Second, to harness the complementary information from different modalities, we present a random modality voxel dropout strategy which alleviates the co-adaption issue and increases the discriminative capability of the network. Our method achieved the 1st place in the MICCAI challenge on automatic localization and segmentation of IVDs from multi-modality MR images, with a mean segmentation Dice coefficient of 91.2% and a mean localization error of 0.62 mm. We further conduct extensive experiments on the extended dataset to validate our method. We demonstrate that the proposed modality dropout strategy with multi-modality images as contextual information improved the segmentation accuracy significantly. Furthermore, experiments conducted on extended data collected from two different time points demonstrate the efficacy of our method on tracking the morphological changes in a longitudinal study. Copyright © 2018 Elsevier B.V. All rights reserved.
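The random modality voxel dropout strategy can be sketched compactly. Below is a minimal reconstruction under stated assumptions (my own illustration, not the authors' code; the dropout granularity and probability are placeholders):

```python
import numpy as np

# Random modality dropout for a 4-modality MR batch shaped
# (batch, modalities, D, H, W): zero each modality channel with probability p,
# but always keep at least one modality per sample, so the network cannot
# co-adapt to any single modality being present.
def random_modality_dropout(x, p=0.25, rng=None):
    rng = rng or np.random.default_rng()
    b, m = x.shape[:2]
    keep = rng.random((b, m)) >= p
    for i in range(b):                         # guarantee one surviving modality
        if not keep[i].any():
            keep[i, rng.integers(m)] = True
    return x * keep[:, :, None, None, None]

x = np.random.rand(2, 4, 8, 32, 32).astype(np.float32)
y = random_modality_dropout(x, p=0.5, rng=np.random.default_rng(3))
print((y.reshape(2, 4, -1).sum(axis=2) > 0).astype(int))  # surviving modalities
```

At test time all modalities are passed through unchanged, as with ordinary dropout.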
Integrating remote sensing, geographic information system and modeling for estimating crop yield
NASA Astrophysics Data System (ADS)
Salazar, Luis Alonso
This thesis explores various aspects of the use of remote sensing, geographic information system and digital signal processing technologies for broad-scale estimation of crop yield in Kansas. Recent dry and drought years in the Great Plains have emphasized the need for new sources of timely, objective and quantitative information on crop conditions. Crop growth monitoring and yield estimation can provide important information for government agencies, commodity traders and producers in planning harvest, storage, transportation and marketing activities; the sooner this information is available, the lower the economic risk, translating into greater efficiency and increased return on investments. Weather data are normally used when crop yield is forecast, but collecting such data in adequate detail for effective predictions is typically feasible only on small research sites because the collections are expensive and time-consuming. For crop assessment systems to be economical, more efficient methods of data collection and analysis are necessary. The purpose of this research is to use satellite data, which provide 50 times more spatial information about the environment than the weather station network, in a short amount of time and at a relatively low cost. Specifically, we use Advanced Very High Resolution Radiometer (AVHRR) based vegetation health (VH) indices as proxies for the characterization of weather conditions.
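AVHRR vegetation health indices are conventionally built from NDVI and brightness temperature rescaled against their multi-year climatological extremes. A hedged sketch of the standard Kogan-type formulation follows (my assumption that this matches the thesis usage; the weighting alpha is a placeholder):

```python
import numpy as np

def vegetation_health(ndvi, bt, alpha=0.5):
    """ndvi, bt: arrays shaped (years, weeks, pixels).
    VCI rescales NDVI against per-week extremes; TCI does the same for
    brightness temperature (hot = stressed); VHI blends the two."""
    ndvi_min, ndvi_max = ndvi.min(axis=0), ndvi.max(axis=0)
    bt_min, bt_max = bt.min(axis=0), bt.max(axis=0)
    vci = 100 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min + 1e-9)
    tci = 100 * (bt_max - bt) / (bt_max - bt_min + 1e-9)
    vhi = alpha * vci + (1 - alpha) * tci
    return vci, tci, vhi

rng = np.random.default_rng(4)
ndvi = rng.uniform(0.1, 0.8, size=(20, 52, 100))   # 20 years, 52 weeks, 100 pixels
bt = rng.uniform(280, 310, size=(20, 52, 100))
_, _, vhi = vegetation_health(ndvi, bt)
print("share of drought-stressed obs (VHI<40):", (vhi < 40).mean().round(3))
```

Yield models then regress reported yields on such indices accumulated over the critical growth weeks.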
Building a Continental Scale Land Cover Monitoring Framework for Australia
NASA Astrophysics Data System (ADS)
Thankappan, Medhavy; Lymburner, Leo; Tan, Peter; McIntyre, Alexis; Curnow, Steven; Lewis, Adam
2012-04-01
Land cover information is critical for national reporting and decision making in Australia. A review of information requirements for reporting on national environmental indicators identified the need for consistent land cover information to be compared against a baseline. A Dynamic Land Cover Dataset (DLCD) for Australia has been developed by Geoscience Australia and the Australian Bureau of Agriculture and Resource Economics and Sciences (ABARES) recently, to provide a comprehensive and consistent land cover information baseline to enable monitoring and reporting for sustainable farming practices, water resource management, soil erosion, and forests at national and regional scales. The DLCD was produced from the analysis of Enhanced Vegetation Index (EVI) data at 250-metre resolution derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) for the period from 2000 to 2008. The EVI time series data for each pixel was modelled as 12 coefficients based on the statistical, phenological and seasonal characteristics. The time series were then clustered in coefficients spaces and labelled using ancillary information on vegetation and land use at the catchment scale. The accuracy of the DLCD was assessed using field survey data over 25,000 locations provided by vegetation and land management agencies in State and Territory jurisdictions, and by ABARES. The DLCD is seen as the first in a series of steps to build a framework for national land cover monitoring in Australia. A robust methodology to provide annual updates to the DLCD is currently being developed at Geoscience Australia. There is also a growing demand from the user community for land cover information at better spatial resolution than currently available through the DLCD. Global land cover mapping initiatives that rely on Earth observation data offer many opportunities for national and international programs to work in concert and deliver better outcomes by streamlining efforts on development and validation of land cover products. Among the upcoming missions, the Global Monitoring for Environment and Security (GMES) Sentinel-2 satellites are seen as an important source of optical data for updating land cover information in Australia. This paper outlines the DLCD development, key applications that inform nationally significant issues, further work on updating the DLCD that would enable transition to a national land cover monitoring framework, challenges and approaches to delivering land cover information at higher spatial resolutions on a continental scale, and the potential value of data from the Sentinel-2 mission in supporting land cover monitoring in Australia and globally.
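The DLCD workflow reduces each pixel's EVI time series to a small coefficient vector and clusters pixels in coefficient space. Here is an illustrative sketch of that pattern (my own reduced construction with 9 features instead of the 12 coefficients used for the DLCD; all details are assumptions):

```python
import numpy as np

def evi_features(evi, period=23):
    """evi: (n_pixels, n_steps) time series (e.g., 16-day MODIS composites).
    Features: basic statistics plus first/second seasonal harmonics."""
    t = np.arange(evi.shape[1])
    w = 2 * np.pi / period
    basis = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t),
                             np.sin(2 * w * t), np.cos(2 * w * t)])
    harm, *_ = np.linalg.lstsq(basis, evi.T, rcond=None)        # (5, n_pixels)
    stats = np.column_stack([evi.mean(1), evi.std(1), evi.min(1), evi.max(1)])
    return np.column_stack([stats, harm.T])                     # 9 features/pixel

def kmeans(X, k=6, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lbl == j].mean(0) if (lbl == j).any() else C[j]
                      for j in range(k)])
    return lbl

evi = np.random.default_rng(5).random((1000, 230))   # 10 years of composites
labels = kmeans(evi_features(evi))
print(np.bincount(labels))
```

The clusters are then labelled into land cover classes using ancillary vegetation and land-use information, as the abstract describes.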
Multiscale Observation System for Sea Ice Drift and Deformation
NASA Astrophysics Data System (ADS)
Lensu, M.; Haapala, J. J.; Heiler, I.; Karvonen, J.; Suominen, M.
2011-12-01
The drift and deformation of sea ice cover is most commonly followed from successive SAR images. The time interval between the images is seldom less than one day, which provides a rather crude approximation of the motion fields, as ice can move tens of kilometers per day. This is particularly so from the viewpoint of operational services seeking to provide real-time information for ice-navigating ships and other end users, as leads are closed and opened or ridge fields created on time scales of one hour or less. The ice forecast models are in need of better temporal resolution for ice motion data as well. We present experiences from a multiscale monitoring system set up in the Bay of Bothnia, the northernmost basin of the Baltic Sea. The basin generates difficult ice conditions every winter while the ports are kept open with the help of an icebreaker fleet. The key addition to SAR imagery is the use of coastal radars for the monitoring of coastal ice fields. An independent server is used to tap the radar signal and process it to suit ice monitoring purposes. This is done without interfering with the basic use of the radars, the ship traffic monitoring. About 20 images per minute are captured and sent to the headquarters for motion field extraction, website animation and distribution. This provides a very detailed real-time picture of the ice movement and deformation within 20 km range. The real-time movements are followed in addition with ice drifter arrays and using AIS ship identification data, from which the translation of ship channels due to ice drift can be inferred. Associated with the operational setup is an extensive research effort that uses the data for ice drift model enhancement. The Baltic ice models seek to forecast conditions relevant to ship traffic, especially hazardous ones like severe ice compression. The main missing link here is downscaling, or the relation of local-scale ice dynamics and kinematics to ice-model-scale behaviour. The data flow, when combined with SAR images, gives information on how large-scale ice cover motions manifest as local-scale deformations. The research also includes ice stress measurements for relating the kinematic state and modeled stresses to local-scale ice cover stresses, and ice thickness mappings with profiling sonars and EM methods. Downscaling results based on a four-month campaign during winter 2011 are presented.
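Motion field extraction from successive radar frames is commonly done by image cross-correlation. A hedged sketch of the FFT phase-correlation variant follows (my illustration; the operational processing chain is not published here):

```python
import numpy as np

# Estimate the rigid displacement of the ice field between two frames from
# the peak of the normalized cross-power spectrum (phase correlation).
def phase_correlation_shift(a, b):
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2: dy -= a.shape[0]   # wrap into signed range
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(6)
frame1 = rng.random((256, 256))                       # radar frame at t0
frame2 = np.roll(frame1, shift=(5, -8), axis=(0, 1))  # ice moved 5 px, -8 px
print(phase_correlation_shift(frame2, frame1))        # -> (5, -8)
```

Applying this per sub-window rather than to the full frame yields the spatially varying motion field from which deformation (opening, closing, shear) is computed.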
The physics of flocking: Correlation as a compass from experiments to theory
NASA Astrophysics Data System (ADS)
Cavagna, Andrea; Giardina, Irene; Grigera, Tomás S.
2018-01-01
Collective behavior in biological systems is a complex topic, to say the least. It runs wildly across scales in both space and time, involving taxonomically vastly different organisms, from bacteria and cell clusters, to insect swarms and up to vertebrate groups. It entails concepts as diverse as coordination, emergence, interaction, information, cooperation, decision-making, and synchronization. Amid this jumble, however, we cannot help noting many similarities between collective behavior in biological systems and collective behavior in statistical physics, even though none of these organisms remotely looks like an Ising spin. Such similarities, though somewhat qualitative, are startling, and regard mostly the emergence of global dynamical patterns qualitatively different from individual behavior, and the development of system-level order from local interactions. It is therefore tempting to describe collective behavior in biology within the conceptual framework of statistical physics, in the hope to extend to this new fascinating field at least part of the great predictive power of theoretical physics. In this review we propose that the conceptual cornerstone of this ambitious program be that of correlation. To illustrate this idea we address the case of collective behavior in bird flocks. Two key threads emerge, as two sides of one single story: the presence of scale-free correlations and the dynamical mechanism of information transfer. We discuss first static correlations in starling flocks, in particular the experimental finding of their scale-free nature, the formulation of models that account for this fact using maximum entropy, and the relation of scale-free correlations to information transfer. This is followed by a dynamic treatment of information propagation (propagation of turns across a flock), starting with a discussion of experimental results and following with possible theoretical explanations of those, which require the addition of behavioral inertia to existing theories of flocking. We finish with the definition and analysis of space-time correlations and their relevance to the detection of inertial behavior in the absence of external perturbations.
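The central observable in the starling work is the connected correlation of velocity fluctuations as a function of mutual distance, whose zero-crossing defines a correlation length that grows with flock size ("scale-free"). A sketch under my own assumptions (standard in this literature, not the authors' code):

```python
import numpy as np

def velocity_correlation(pos, vel, bins=20):
    """Binned correlation C(r) of normalized velocity fluctuations."""
    dv = vel - vel.mean(axis=0)                # fluctuations about the mean velocity
    dv /= np.linalg.norm(dv, axis=1, keepdims=True) + 1e-12
    n = len(pos)
    i, j = np.triu_indices(n, k=1)
    r = np.linalg.norm(pos[i] - pos[j], axis=1)
    c = (dv[i] * dv[j]).sum(axis=1)
    edges = np.linspace(0, r.max(), bins + 1)
    idx = np.digitize(r, edges) - 1
    C = np.array([c[idx == b].mean() if (idx == b).any() else np.nan
                  for b in range(bins)])
    return 0.5 * (edges[:-1] + edges[1:]), C

rng = np.random.default_rng(7)
pos = rng.random((400, 3)) * 50                       # bird positions, m
vel = np.tile([1.0, 0, 0], (400, 1)) + 0.3 * rng.normal(size=(400, 3))
r, C = velocity_correlation(pos, vel)
print(np.round(C[:5], 3))
```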
A Bayesian method for assessing multiscale species-habitat relationships
Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.
2017-01-01
Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces computation time of studies. We present a simulation study to validate our method and apply our method to a case-study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and testing hypotheses of scaling relationships.
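To convey the latent-indicator idea, here is a heavily simplified sketch (my own construction; BLISS itself uses reversible-jump MCMC with full priors, not the BIC-scored Metropolis used below): a discrete indicator selects which spatial scale of one covariate enters a linear model of abundance, and its posterior is sampled directly.

```python
import numpy as np

def sample_scale_indicator(y, X_scales, n_iter=5000, seed=0):
    """Metropolis over a discrete scale indicator; each scale is scored by a
    Gaussian-likelihood BIC approximation to its marginal likelihood."""
    rng = np.random.default_rng(seed)
    def log_score(X):
        A = np.column_stack([np.ones_like(y), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ beta) ** 2)
        n = len(y)
        return -0.5 * (n * np.log(rss / n) + 2 * np.log(n))   # -BIC/2
    k = rng.integers(len(X_scales))
    counts = np.zeros(len(X_scales), dtype=int)
    for _ in range(n_iter):
        prop = rng.integers(len(X_scales))                    # symmetric proposal
        if np.log(rng.random()) < log_score(X_scales[prop]) - log_score(X_scales[k]):
            k = prop
        counts[k] += 1
    return counts / n_iter                                    # posterior over scales

rng = np.random.default_rng(8)
n = 300
X_scales = [rng.normal(size=n) for _ in range(4)]   # covariate measured at 4 scales
y = 2.0 * X_scales[2] + rng.normal(size=n)          # true scale: index 2
print(np.round(sample_scale_indicator(y, X_scales), 2))
```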
Evidence-based adaptation and scale-up of a mobile phone health information service.
L'Engle, Kelly; Plourde, Kate F; Zan, Trinity
2017-01-01
The research base recommending the use of mobile phone interventions for health improvement is growing at a rapid pace. The use of mobile phones to deliver health behavior change and maintenance interventions in particular is gaining a robust evidence base across geographies, populations, and health topics. However, research on best practices for successfully scaling mHealth interventions is not keeping pace, despite the availability of frameworks for adapting and scaling health programs. m4RH-Mobile for Reproductive Health-is an SMS, or text message-based, health information service that began in two countries and over a period of 7 years has been adapted and scaled to new population groups and new countries. Success can be attributed to following key principles for scaling up health programs, including continuous stakeholder engagement; ongoing monitoring, evaluation, and research including extensive content and usability testing with the target audience; strategic dissemination of results; and use of marketing and sustainability principles for social initiatives. This article investigates how these factors contributed to vertical, horizontal, and global scale-up of the m4RH program. Vertical scale of m4RH is demonstrated in Tanzania, where the early engagement of stakeholders including the Ministry of Health catalyzed expansion of m4RH content and national-level program reach. Ongoing data collection has provided real-time data for decision-making, information about the user base, and peer-reviewed publications, yielding government endorsement and partner hand-off for sustainability of the m4RH platform. Horizontal scale-up and adaptation of m4RH has occurred through expansion to new populations in Rwanda, Uganda, and Tanzania, where best practices for design and implementation of mHealth programs were followed to ensure the platform meets the needs of target populations. m4RH also has been modified and packaged for global scale-up through licensing and toolkit development, research into new business/distribution models, and serving as the foundation for derivative NGO and quasi-governmental mHealth platforms. The m4RH platform provides an excellent case study of how to apply best practices to successfully scale up mobile phone interventions for health improvement. Applying principles of scale can inform the successful scale-up, sustainability, and potential impact of mHealth programs across health topics and settings.
Continuous information flow fluctuations
NASA Astrophysics Data System (ADS)
Rosinberg, Martin Luc; Horowitz, Jordan M.
2016-10-01
Information plays a pivotal role in the thermodynamics of nonequilibrium processes with feedback. However, much remains to be learned about the nature of information fluctuations in small-scale devices and their relation with fluctuations in other thermodynamics quantities, like heat and work. Here we derive a series of fluctuation theorems for information flow and partial entropy production in a Brownian particle model of feedback cooling and extend them to arbitrary driven diffusion processes. We then analyze the long-time behavior of the feedback-cooling model in detail. Our results provide insights into the structure and origin of large deviations of information and thermodynamic quantities in autonomous Maxwell's demons.
Scale-free models for the structure of business firm networks
NASA Astrophysics Data System (ADS)
Kitsak, Maksim; Riccaboni, Massimo; Havlin, Shlomo; Pammolli, Fabio; Stanley, H. Eugene
2010-03-01
We study firm collaborations in the life sciences and the information and communication technology sectors. We propose an approach to characterize industrial leadership using k -shell decomposition, with top-ranking firms in terms of market value in higher k -shell layers. We find that the life sciences industry network consists of three distinct components: a “nucleus,” which is a small well-connected subgraph, “tendrils,” which are small subgraphs consisting of small degree nodes connected exclusively to the nucleus, and a “bulk body,” which consists of the majority of nodes. Industrial leaders, i.e., the largest companies in terms of market value, are in the highest k -shells of both networks. The nucleus of the life sciences sector is very stable: once a firm enters the nucleus, it is likely to stay there for a long time. At the same time we do not observe the above three components in the information and communication technology sector. We also conduct a systematic study of these three components in random scale-free networks. Our results suggest that the sizes of the nucleus and the tendrils in scale-free networks decrease as the exponent of the power-law degree distribution λ increases, and disappear for λ≥3 . We compare the k -shell structure of random scale-free model networks with two real-world business firm networks in the life sciences and in the information and communication technology sectors. We argue that the observed behavior of the k -shell structure in the two industries is consistent with the coexistence of both preferential and random agreements in the evolution of industrial networks.
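k-shell decomposition itself is a standard graph operation. A minimal sketch with networkx (an illustrative assumption, not the paper's data pipeline):

```python
import networkx as nx

# k-shell decomposition of a scale-free test graph; nodes in the highest
# shell play the role of the "nucleus", and low-degree nodes attached only
# to it correspond to the "tendrils" described in the abstract.
G = nx.barabasi_albert_graph(1000, 3, seed=0)
core = nx.core_number(G)                      # node -> k-shell index
k_max = max(core.values())
nucleus = [v for v, k in core.items() if k == k_max]
print(f"max shell k={k_max}, nucleus size={len(nucleus)}")
```

Repeating this for graphs generated with different power-law exponents lambda reproduces the qualitative finding that the nucleus shrinks and vanishes as lambda approaches 3.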
Bayen, Eléonore; Jourdan, Claire; Ghout, Idir; Darnoux, Emmanuelle; Azerad, Sylvie; Vallat-Azouvi, Claire; Weiss, Jean-Jacques; Aegerter, Philippe; Pradat-Diehl, Pascale; Joël, Marie-Eve; Azouvi, Philippe
2016-01-01
Prospective assessment of informal caregiver (IC) burden 4 years after the traumatic brain injury of a relative. Longitudinal cohort study (metropolitan Paris, France). Home dwelling adults (N = 98) with initially severe traumatic brain injury and their primary ICs. Informal caregiver objective burden (Resource Utilization in Dementia measuring Informal Care Time [ICT]), subjective burden (Zarit Burden Inventory), monetary self-valuation of ICT (Willingness-to-pay, Willingness-to-accept). Informal caregivers were women (81%) assisting men (80%) of mean age of 37 years. Fifty-five ICs reported no objective burden (ICT = 0) and no/low subjective burden (average Zarit Burden Inventory = 12.1). Forty-three ICs reported a major objective burden (average ICT = 5.6 h/d) and a moderate/severe subjective burden (average Zarit Burden Inventory = 30.3). In multivariate analyses, higher objective burden was associated with poorer Glasgow Outcome Scale-Extended scores, with more severe cognitive disorders (Neurobehavioral Rating Scale-revised) and with no coresidency status; higher subjective burden was associated with poorer Glasgow Outcome Scale-Extended scores, more Neurobehavioral Rating Scale-revised disorders, drug-alcohol abuse, and involvement in litigation. Economic valuation showed that on average, ICs did not value their ICT as free and preferred to pay a mean Willingness-to-pay = €17 per hour to be replaced instead of being paid for providing care themselves (Willingness-to-accept = €12). Four years after a severe traumatic brain injury, 44% of ICs experienced a heavy multidimensional burden.
Action detection by double hierarchical multi-structure space-time statistical matching model
NASA Astrophysics Data System (ADS)
Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang
2018-03-01
To address the complex information in videos and low detection efficiency, an action detection model based on neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain similarity matrices on both large and small scales, combining double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, a multi-scale composite template extends the model to multi-view application. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.
Complex dynamics of our economic life on different scales: insights from search engine query data.
Preis, Tobias; Reith, Daniel; Stanley, H Eugene
2010-12-28
Search engine query data deliver insight into the behaviour of individuals who are the smallest possible scale of our economic life. Individuals are submitting several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010 that are offered by the search engine Google for scientific use, providing information about our economic life on an aggregated collective level. We ask the question whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as a complex system of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with weekly search volume of corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series with which we find a clear tendency that search volume time series and transaction volume time series show recurring patterns.
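The core comparison is a weekly-scale correlation between two time series. A tiny sketch (illustration only, on synthetic data; the study uses Google search volumes and S&P 500 transaction volumes):

```python
import numpy as np

rng = np.random.default_rng(9)
weeks = 300
common = np.cumsum(rng.normal(size=weeks))            # shared latent driver
search_volume = common + rng.normal(scale=1.0, size=weeks)
transaction_volume = common + rng.normal(scale=1.0, size=weeks)

# correlate week-over-week changes, not raw levels, to avoid the spurious
# correlation that trending series produce
ds, dt = np.diff(search_volume), np.diff(transaction_volume)
r = np.corrcoef(ds, dt)[0, 1]
print(f"weekly correlation r = {r:.2f}")
```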
Peck, Steven L
2014-10-01
It is becoming clear that handling the inherent complexity found in ecological systems is an essential task for finding ways to control insect pests of tropical livestock such as tsetse flies and Old and New World screwworms. In particular, challenging multivalent management programs, such as Area Wide Integrated Pest Management (AW-IPM), face daunting problems of complexity at multiple spatial scales, ranging from landscape-level processes to those of smaller scales such as the parasite loads of individual animals. Daunting temporal challenges also await resolution, such as matching management time frames to those found on ecological and even evolutionary temporal scales. How does one deal with representing processes with models that involve multiple spatial and temporal scales? Agent-based models (ABM), combined with geographic information systems (GIS), may allow for understanding, predicting and managing pest control efforts in livestock pests. This paper argues that by incorporating digital ecologies in our management efforts, clearer and more informed decisions can be made. I also point out the power of these models in making better predictions in order to anticipate the range of outcomes possible or likely. Copyright © 2014 International Atomic Energy Agency 2014. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Darmenova, K.; Higgins, G.; Kiley, H.; Apling, D.
2010-12-01
Current General Circulation Models (GCMs) provide a valuable estimate of both natural and anthropogenic climate changes and variability on global scales. At the same time, future climate projections calculated with GCMs are not of sufficient spatial resolution to address regional needs. Many climate impact models require information at scales of 50 km or less, so dynamical downscaling is often used to estimate the smaller-scale information based on larger-scale GCM output. To address current deficiencies in local planning and decision making with respect to regional climate change, our research is focused on performing dynamical downscaling with the Weather Research and Forecasting (WRF) model and developing decision aids that translate the regional climate data into actionable information for users. Our methodology involves the development of climatological indices of extreme weather and heating/cooling degree days based on WRF ensemble runs initialized with the NCEP-NCAR reanalysis and the European Center/Hamburg Model (ECHAM5). Results indicate that the downscaled simulations provide the detailed output required by state and local governments and the private sector to develop climate adaptation plans. In addition, we evaluated the WRF performance in long-term climate simulations over the Southwestern US and validated it against observational datasets.
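Degree-day indices of the kind mentioned are simple accumulations against a base temperature. A hedged sketch (my assumption about the definitions; the abstract does not list formulas, and 18 C is the conventional base):

```python
import numpy as np

def degree_days(t_mean_daily, base=18.0):
    """Heating/cooling degree days from daily mean temperature (deg C)."""
    hdd = np.clip(base - t_mean_daily, 0, None).sum()
    cdd = np.clip(t_mean_daily - base, 0, None).sum()
    return hdd, cdd

rng = np.random.default_rng(10)
doy = np.arange(365)
t = 12 + 14 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 2, 365)
hdd, cdd = degree_days(t)
print(f"HDD = {hdd:.0f}, CDD = {cdd:.0f}")
```

In the downscaling workflow the same accumulation would be applied to each WRF ensemble member's daily output to yield a distribution of the index.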
Ram K. Deo; Matthew B. Russell; Grant M. Domke; Christopher W. Woodall; Michael J. Falkowski; Warren B. Cohen
2017-01-01
The publicly accessible archive of Landsat imagery and increasing regional-scale LiDAR acquisitions offer an opportunity to periodically estimate aboveground forest biomass (AGB) from 1990 to the present to alignwith the reporting needs ofNationalGreenhouseGas Inventories (NGHGIs). This study integrated Landsat time-series data, a state-wide LiDAR dataset, and a recent...
NASA Astrophysics Data System (ADS)
Katsoulakis, Markos A.; Vlachos, Dionisios G.
2003-11-01
We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies a detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous time microscopic MC and CGMC simulations are compared under far from equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by q2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous time MC simulations that vary from q3 for short potentials to q4 for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
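The basic CGMC state variable is the particle count per q x q block of the microscopic lattice. A simple sketch of that spatial coarse-graining step (my illustration; the paper derives the coarse transition rates rigorously, which is the hard part omitted here):

```python
import numpy as np

def coarse_grain(occupancy, q):
    """Coarse-grain a 2-D lattice-gas occupancy field by q x q blocks,
    replacing each block with its particle count (0..q^2)."""
    n = occupancy.shape[0]
    assert n % q == 0, "lattice size must be divisible by q"
    blocks = occupancy.reshape(n // q, q, n // q, q)
    return blocks.sum(axis=(1, 3))

rng = np.random.default_rng(11)
micro = (rng.random((64, 64)) < 0.3).astype(int)     # microscopic configuration
for q in (2, 4, 8):
    cg = coarse_grain(micro, q)
    print(f"q={q}: coarse lattice {cg.shape}, mean coverage "
          f"{cg.mean() / q**2:.3f}")
```

Consistent with the abstract, coarse-graining in space by q also coarse-grains time by roughly q^2, which is where the reported CPU savings originate.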
NASA Astrophysics Data System (ADS)
Boubakri, S.; Rhinane, H.
2017-11-01
The monitoring of water quality is, in most cases, managed in the laboratory and not on a real-time basis. Besides being lengthy, this process does not provide the specifications required to describe the evolution of the quality parameters of interest. This study presents the integration of Geographic Information Systems (GIS) with wireless sensor networks (WSN), aiming to create a system able to detect parameters such as temperature, salinity and conductivity at a Moroccan catchment scale and transmit the information to the support station. This information is displayed and evaluated in a GIS using maps and a spatial dashboard to monitor water quality in real time.
Optimizing height presentation for aircraft cockpit displays
NASA Astrophysics Data System (ADS)
Jordan, Chris S.; Croft, D.; Selcon, Stephen J.; Markin, H.; Jackson, M.
1997-02-01
This paper describes an experiment conducted to investigate the type of display symbology that most effectively conveys height information to users of head-down plan-view radar displays. The experiment also investigated the use of multiple information sources (redundancy) in the design of such displays. Subjects were presented with eight different height display formats. These formats were constructed from a control, and/or one, two, or three sources of redundant information. The three sources were letter coding, analogue scaling, and toggling (spatially switching the position of the height information from above to below the aircraft symbol). Subjects were required to indicate altitude awareness via a four-key, forced-choice keyboard response. Error scores and response times were taken as performance measures. There were three main findings. First, there was a significant performance advantage when the altitude information was presented above and below the symbol to aid the representation of height information. Second, the analogue scale, a line whose length indicated altitude, proved significantly detrimental to performance. Finally, no relationship was found between the number of redundant information sources employed and performance. The implications for future aircraft and displays are discussed in relation to current aircraft tactical displays and in the context of perceptual psychological theory.
Crowe, A S; Booty, W G
1995-05-01
A multi-level pesticide assessment methodology has been developed to permit regulatory personnel to undertake a variety of assessments of the potential for pesticides used in agricultural areas to contaminate the groundwater regime, at an increasingly detailed geographical scale of investigation. The multi-level approach accounts for the variety of assessment objectives, the level of detail required in the assessment, the restrictions on the availability and accuracy of data, the time available to undertake the assessment, and the expertise of the decision maker. The level 1 (regional scale) assessment is designed to prioritize districts having a potentially high risk of groundwater contamination from the application of a specific pesticide to a particular crop. The level 2 (local scale) assessment is used to identify critical areas for groundwater contamination, at a soil polygon scale, within a district. The level 3 (soil profile scale) assessment allows the user to evaluate specific factors influencing pesticide leaching and persistence, and to determine the extent and timing of leaching, through simulation of the migration of a pesticide within a soil profile. Because of the scale of investigation, the limited amount of data required, and the qualitative nature of the assessment results, the level 1 and level 2 assessments are designed primarily for quick and broad guidance related to management practices. A level 3 assessment is more complex, requires considerably more data and expertise on the part of the user, and hence is designed to verify the potential for contamination identified during a level 1 or 2 assessment. The system combines environmental modelling, geographical information systems, extensive databases, data management systems, expert systems, and pesticide assessment models to form an environmental information system for assessing the potential for pesticides to contaminate groundwater.
National Earthquake Information Center Seismic Event Detections on Multiple Scales
NASA Astrophysics Data System (ADS)
Patton, J.; Yeck, W. L.; Benz, H.; Earle, P. S.; Soto-Cordero, L.; Johnson, C. E.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center (NEIC) monitors seismicity on local, regional, and global scales using automatic picks from more than 2,000 near-real-time seismic stations. This presents unique challenges in automated event detection due to the high variability in data quality, network geometries and density, and distance-dependent variability in observed seismic signals. To lower the overall detection threshold while minimizing false detection rates, NEIC has begun to test the incorporation of new detection and picking algorithms, including multiband (Lomax et al., 2012) and kurtosis (Baillard et al., 2014) pickers, and a new Bayesian associator (Glass 3.0). The Glass 3.0 associator allows for simultaneous processing of variably scaled detection grids, each with a unique set of nucleation criteria (e.g., nucleation threshold, minimum associated picks, nucleation phases) to meet specific monitoring goals. We test the efficacy of these new tools on event detection in networks of various scales and geometries, compare our results with previous catalogs, and discuss lessons learned. For example, we find that on local and regional scales, rapid nucleation of small events may require event nucleation with both P and higher-amplitude secondary phases (e.g., S or Lg). We provide examples of the implementation of a scale-independent associator for an induced seismicity sequence (local-scale), a large aftershock sequence (regional-scale), and for monitoring global seismicity. Baillard, C., Crawford, W. C., Ballu, V., Hibert, C., & Mangeney, A. (2014). An automatic kurtosis-based P- and S-phase picker designed for local seismic networks. Bulletin of the Seismological Society of America, 104(1), 394-409. Lomax, A., Satriano, C., & Vassallo, M. (2012). Automatic picker developments and optimization: FilterPicker - a robust, broadband picker for real-time seismic monitoring and earthquake early-warning, Seism. Res. Lett., 83, 531-540, doi: 10.1785/gssrl.83.3.531.
NASA Astrophysics Data System (ADS)
Becker-Reshef, Inbal
In recent years there has been a dramatic increase in the demand for timely, comprehensive global agricultural intelligence. The issue of food security has rapidly risen to the top of government agendas around the world as the recent lack of food access led to unprecedented food prices, hunger, poverty, and civil conflict. Timely information on global crop production is indispensable for combating the growing stress on the world's crop production, for stabilizing food prices, developing effective agricultural policies, and for coordinating responses to regional food shortages. Earth Observations (EO) data offer a practical means for generating such information as they provide global, timely, cost-effective, and synoptic information on crop condition and distribution. Their utility for crop production forecasting has long been recognized and demonstrated across a wide range of scales and geographic regions. Nevertheless, it is widely acknowledged that EO data could be better utilized within operational monitoring systems, and thus there is a critical need for research focused on developing practical, robust methods for agricultural monitoring. Within this context, this dissertation focused on advancing EO-based methods for crop yield forecasting and on demonstrating the potential relevance of adopting EO-based crop forecasts for providing timely, reliable agricultural intelligence. This thesis contributed to the field by developing and testing a robust EO-based method for wheat production forecasting at state to national scales using available and easily accessible data. The model was developed in Kansas (KS) using coarse resolution normalized difference vegetation index (NDVI) time series data in conjunction with out-of-season wheat masks and was directly applied in Ukraine to assess its transferability. The model estimated yields within 7% of final estimates in KS and within 10% in Ukraine, 6 weeks prior to harvest. The relevance of adopting such methods to provide timely, reliable information to crop commodity markets is demonstrated through a 2010 case study.
Peak and Tail Scaling of Breakthrough Curves in Hydrologic Tracer Tests
NASA Astrophysics Data System (ADS)
Aquino, T.; Aubeneau, A. F.; Bolster, D.
2014-12-01
Power law tails, a marked signature of anomalous transport, have been observed in solute breakthrough curves time and time again in a variety of hydrologic settings, including in streams. However, due to the low concentrations at which they occur they are notoriously difficult to measure with confidence. This leads us to ask if there are other associated signatures of anomalous transport that can be sought. We develop a general stochastic transport framework and derive an asymptotic relation between the tail scaling of a breakthrough curve for a conservative tracer at a fixed downstream position and the scaling of the peak concentration of breakthrough curves as a function of downstream position, demonstrating that they provide equivalent information. We then quantify the relevant spatiotemporal scales for the emergence of this asymptotic regime, where the relationship holds, in the context of a very simple model that represents transport in an idealized river. We validate our results using random walk simulations. The potential experimental benefits and limitations of these findings are discussed.
What Is a Complex Innovation System?
Katz, J. Sylvan
2016-01-01
Innovation systems are sometimes referred to as complex systems, something that is intuitively understood but poorly defined. A complex system dynamically evolves in non-linear ways, giving it unique properties that distinguish it from other systems. In particular, a common signature of complex systems is scale-invariant emergent properties. A scale-invariant property can be identified because it is solely described by a power law function, f(x) = kx^α, where the exponent, α, is a measure of scale-invariance. The focus of this paper is to describe and illustrate that innovation systems have properties of a complex adaptive system; in particular, scale-invariant emergent properties indicative of their complex nature can be quantified and used to inform public policy. The global research system is an example of an innovation system. Peer-reviewed publications containing knowledge are a characteristic output. Citations or references to these articles are an indirect measure of the impact the knowledge has on the research community. Peer-reviewed papers indexed in Scopus and in the Web of Science were used as data sources to produce measures of size and impact. These measures are used to illustrate how scale-invariant properties can be identified and quantified. It is demonstrated that the distribution of impact has a reasonable likelihood of being scale-invariant, with scaling exponents that tended toward a value of less than 3.0 with the passage of time and decreasing group sizes. Scale-invariant correlations are shown between the evolution of impact and size with time, and between field impact and size at points in time. The recursive or self-similar nature of scale-invariance suggests that any smaller innovation system within the global research system is likely to be complex with scale-invariant properties too. PMID:27258040
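For readers who want to reproduce the flavor of such an analysis, the following is a minimal sketch of a standard continuous maximum-likelihood estimator for a power-law exponent. The synthetic citation data and the x_min threshold are hypothetical, and the paper's actual fitting procedure may differ.

```python
import numpy as np

def powerlaw_mle(x, x_min):
    """Continuous maximum-likelihood estimate of alpha for p(x) ~ x**(-alpha),
    restricted to the tail x >= x_min (Clauset-style estimator)."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= x_min]
    n = tail.size
    alpha = 1.0 + n / np.log(tail / x_min).sum()
    return alpha, (alpha - 1.0) / np.sqrt(n)   # estimate and asymptotic std. error

# Synthetic stand-in for citation counts (hypothetical, true alpha = 2.9):
rng = np.random.default_rng(1)
x_min = 5.0
citations = x_min * (1.0 - rng.random(10_000)) ** (-1.0 / 1.9)
alpha, se = powerlaw_mle(citations, x_min)
print(f"alpha = {alpha:.2f} +/- {se:.2f}")
```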
Spatial and Temporal Scaling of Thermal Infrared Remote Sensing Data
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Goel, Narendra S.
1995-01-01
Although remote sensing has a central role to play in the acquisition of synoptic data obtained at multiple spatial and temporal scales to facilitate our understanding of local and regional processes as they influence the global climate, the use of thermal infrared (TIR) remote sensing data in this capacity has received only minimal attention. This results from some fundamental challenges that are associated with employing TIR data collected at different space and time scales, either with the same or different sensing systems, and also from other problems that arise in applying a multiple scaled approach to the measurement of surface temperatures. In this paper, we describe some of the more important problems associated with using TIR remote sensing data obtained at different spatial and temporal scales, examine why these problems appear as impediments to using multiple scaled TIR data, and provide some suggestions for future research activities that may address these problems. We elucidate the fundamental concept of scale as it relates to remote sensing and explore how space and time relationships affect TIR data from a problem-dependency perspective. We also describe how linear and non-linear observation-versus-parameter relationships affect the quantitative analysis of TIR data. Some insight is given on how the atmosphere between target and sensor influences the accurate measurement of surface temperatures and how these effects will be compounded in analyzing multiple scaled TIR data. Last, we describe some of the challenges in modeling TIR data obtained at different space and time scales and discuss how multiple scaled TIR data can be used to provide new and important information for measuring and modeling land-atmosphere energy balance processes.
SEPARATING DIFFERENT SCALES OF MOTION IN TIME SERIES OF METEOROLOGICAL VARIABLES. (R825260)
Ground-based measurements of ionospheric dynamics
NASA Astrophysics Data System (ADS)
Kouba, Daniel; Chum, Jaroslav
2018-05-01
Different methods are used to study and monitor ionospheric dynamics from the ground: Digisonde Drift Measurements (DDM) and Continuous Doppler Sounding (CDS). For the first time, we present a comparison between the two methods on specific examples. Both methods provide information about the vertical drift velocity component. The DDM provides more information about the drift velocity vector and the detected reflection points; however, it is limited by a relatively low time resolution. In contrast, the strength of CDS is its high time resolution. Both methods can be used for real-time monitoring of medium-scale travelling ionospheric disturbances. We conclude that it is advantageous to use both methods simultaneously if possible: CDS for disturbance detection and analysis, and DDM for reflection height control.
Cao, Gang; Li, Hai-Ou; Tu, Tao; Wang, Li; Zhou, Cheng; Xiao, Ming; Guo, Guang-Can; Jiang, Hong-Wen; Guo, Guo-Ping
2013-01-01
A basic requirement for quantum information processing is the ability to universally control the state of a single qubit on timescales much shorter than the coherence time. Although ultrafast optical control of a single spin has been achieved in quantum dots, scaling up such methods remains a challenge. Here we demonstrate complete control of the quantum-dot charge qubit on the picosecond scale, orders of magnitude faster than the previously measured electrically controlled charge- or spin-based qubits. We observe tunable qubit dynamics in a charge-stability diagram, in a time domain, and in a pulse amplitude space of the driven pulse. The observations are well described by Landau–Zener–Stückelberg interference. These results establish the feasibility of a full set of all-electrical single-qubit operations. Although our experiment is carried out in a solid-state architecture, the technique is independent of the physical encoding of the quantum information and has the potential for wider applications. PMID:23360992
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spittler, T.E.; Sydnor, R.H.; Manson, M.W.
1990-01-01
The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purposes of large-scale mapping of these landslides are: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.
Large-Scale Earthquake Countermeasures Act and the Earthquake Prediction Council in Japan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rikitake, T.
1979-08-07
The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. This act aims at mitigating earthquake hazards by designating an area to be an area under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warning statement based on imminent prediction information, when possible. In an emergency case as defined by the law, the prime minister will be empowered to take various actions which cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurrence. A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster in June 1979. An extremely dense observation network has been constructed over the area.
Steiner, Amanda R W; Petkus, Andrew J; Nguyen, Hoang; Wetherell, Julie Loebach
2013-08-01
Information processing bias was evaluated in a sample of 25 older adults with generalized anxiety disorder (GAD) over the course of 12 weeks of escitalopram pharmacotherapy. Using the CANTAB Affective Go/No Go test, treatment response, as measured by the Hamilton Anxiety Rating Scale (HAMA), Penn State Worry Questionnaire (PSWQ), and Generalized Anxiety Disorder Severity Scale (GADSS), was predicted from a bias score (i.e., the difference between response latencies for negative and positive words) using mixed-models regression. A more positive bias score across time predicted better response to treatment. Faster responses to positive words relative to negative words were associated with greater symptomatic improvement over time, as reflected by scores on the GADSS. There was a trend toward significance for PSWQ scores and no significant effects related to HAMA outcomes. These preliminary findings offer further insights into the role of biased cognitive processing of emotional material in the manifestation of late-life anxiety symptoms. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Feldmann, P.; Gessner, M.; Gabbrielli, M.; Klempt, C.; Santos, L.; Pezzè, L.; Smerzi, A.
2018-03-01
Recent experiments demonstrated the generation of entanglement by quasiadiabatically driving through quantum phase transitions of a ferromagnetic spin-1 Bose-Einstein condensate in the presence of a tunable quadratic Zeeman shift. We analyze, in terms of the Fisher information, the interferometric value of the entanglement accessible by this approach. In addition to the Twin-Fock phase studied experimentally, we unveil a second regime, in the broken axisymmetry phase, which provides Heisenberg scaling of the quantum Fisher information and can be reached on shorter time scales. We identify optimal unitary transformations and an experimentally feasible optimal measurement prescription that maximize the interferometric sensitivity. We further ascertain that the Fisher information is robust with respect to nonadiabaticity and measurement noise. Finally, we show that the quasiadiabatic entanglement preparation schemes admit higher sensitivities than dynamical methods based on fast quenches.
Are large-scale flow experiments informing the science and management of freshwater ecosystems?
Olden, Julian D.; Konrad, Christopher P.; Melis, Theodore S.; Kennard, Mark J.; Freeman, Mary C.; Mims, Meryl C.; Bray, Erin N.; Gido, Keith B.; Hemphill, Nina P.; Lytle, David A.; McMullen, Laura E.; Pyron, Mark; Robinson, Christopher T.; Schmidt, John C.; Williams, John G.
2013-01-01
Greater scientific knowledge, changing societal values, and legislative mandates have emphasized the importance of implementing large-scale flow experiments (FEs) downstream of dams. We provide the first global assessment of FEs to evaluate their success in advancing science and informing management decisions. Systematic review of 113 FEs across 20 countries revealed that clear articulation of experimental objectives, while not universally practiced, was crucial for achieving management outcomes and changing dam-operating policies. Furthermore, changes to dam operations were three times less likely when FEs were conducted primarily for scientific purposes. Despite the recognized importance of riverine flow regimes, four-fifths of FEs involved only discrete flow events. Over three-quarters of FEs documented both abiotic and biotic outcomes, but only one-third examined multiple taxonomic responses, thus limiting how FE results can inform holistic dam management. Future FEs will present new opportunities to advance scientifically credible water policies.
Small-scale modification to the lensing kernel
NASA Astrophysics Data System (ADS)
Hadzhiyska, Boryana; Spergel, David; Dunkley, Joanna
2018-02-01
Calculations of the cosmic microwave background (CMB) lensing power implemented into the standard cosmological codes such as camb and class usually treat the surface of last scatter as an infinitely thin screen. However, since the CMB anisotropies are smoothed out on scales smaller than the diffusion length due to the effect of Silk damping, the photons which carry information about the small-scale density distribution come from slightly earlier times than the standard recombination time. The dominant effect is the scale dependence of the mean redshift associated with the fluctuations during recombination. We find that fluctuations at k = 0.01 Mpc⁻¹ come from a characteristic redshift of z ≈ 1090, while fluctuations at k = 0.3 Mpc⁻¹ come from a characteristic redshift of z ≈ 1130. We then estimate the corrections to the lensing kernel and the related power spectra due to this effect. We conclude that neglecting it would result in a deviation from the true value of the lensing kernel at the half percent level at small CMB scales. For an all-sky, noise-free experiment, this corresponds to a ~0.1σ shift in the observed temperature power spectrum on small scales (2500 ≲ l ≲ 4000).
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time-stepping process.
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons containing that node, whose weighted edges are scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
Fractal Folding and Medium Viscoelasticity Contribute Jointly to Chromosome Dynamics
NASA Astrophysics Data System (ADS)
Polovnikov, K. E.; Gherardi, M.; Cosentino-Lagomarsino, M.; Tamm, M. V.
2018-02-01
Chromosomes are key players in cell physiology, and their dynamics provides valuable information about their physical organization. In both prokaryotes and eukaryotes, the short-time motion of chromosomal loci has been described with a Rouse model in a simple or viscoelastic medium. However, little emphasis has been put on the influence of the folded organization of chromosomes on the local dynamics. Clearly, stress propagation, and thus dynamics, must be affected by such organization, but a theory allowing us to extract such information from data, e.g., on two-point correlations, is lacking. Here, we describe a theoretical framework able to answer this general polymer dynamics question. We provide a scaling analysis of the stress-propagation time between two loci at a given arclength distance along the chromosomal coordinate. The results suggest a precise way to assess folding information from the dynamical coupling of chromosome segments. Additionally, we realize this framework in a specific model of a polymer whose long-range interactions are designed to make it fold in a fractal way, immersed in a medium characterized by subdiffusive fractional Langevin motion with a tunable scaling exponent. This allows us to derive explicit analytical expressions for the correlation functions.
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
Rodríguez-González, Ana María; Rodríguez-Míguez, Eva; Duarte-Pérez, Ana; Díaz-Sanisidro, Eduardo; Barbosa-Álvarez, Ángel; Clavería, Ana
2017-03-01
To describe the burden on informal carers of dependent people and to identify related variables. Descriptive observational cross-sectional study. Primary health care in the southern area of Pontevedra. 97 caregivers of dependent persons. We collected socioeconomic data and health conditions of caregivers and dependent persons, time spent on daily care, and caregiver burden (abbreviated Zarit scale) through personal interviews. In addition to describing the sample, including burden level, mean comparisons were used to identify characteristics that influenced Zarit scale scores, and logistic regression was used to analyse characteristics that increase the likelihood of experiencing burden. 61.9% of caregivers are subject to intense burden. The scale item contributing most to caregiver burden is the lack of time for oneself, followed by negative effects on interpersonal relationships. Mean comparisons show that degree of kinship, number of care hours, caregiver health, and aggressiveness of the dependent person produce significant differences in Zarit scale scores. The physical and psychological health of caregivers and the aggressiveness of the dependent person are associated with the likelihood of developing caregiver burden. Informal caregivers of dependent persons show a high level of burden, related both to their own characteristics and to those of the dependent persons. Caregiver burden points to the need for public policies on dependence to adopt an integrated caregiver-dependent perspective. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty with the incorporation of observed data. For large-scale problems, its storage and update consume excessive memory and computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can specify how much of it to store based on correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated; the off-diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g., 0.95, every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to experiment. This new scheme is tested with 1D examples first, and the estimated results and uncertainty are compared with those of the traditional full-storage method. In the end, a large-scale numerical model is utilized to validate the new scheme.
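A minimal sketch of the storage idea, assuming a 1D grid, an exponential covariance model, and a cutoff of three correlation scales; these are all illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from scipy.sparse import csc_matrix

def sparse_exp_covariance(coords, variance, corr_scale, cutoff=3.0):
    """Exponential covariance on a 1D grid, stored in CSC format.

    Entries beyond `cutoff` correlation scales are dropped, since they carry
    little information for the inversion (illustrative sketch)."""
    rows, cols, vals = [], [], []
    for i, xi in enumerate(coords):
        for j, xj in enumerate(coords):
            d = abs(xi - xj)
            if d <= cutoff * corr_scale:
                rows.append(i)
                cols.append(j)
                vals.append(variance * np.exp(-d / corr_scale))
    n = len(coords)
    return csc_matrix((vals, (rows, cols)), shape=(n, n))

coords = np.linspace(0.0, 100.0, 201)   # hypothetical 1D grid
scale = 10.0
for iteration in range(5):
    cov = sparse_exp_covariance(coords, variance=1.0, corr_scale=scale)
    # ... the inverse-modeling step would update the diagonal terms here ...
    scale *= 0.95   # shorten the correlation scale to reflect reduced uncertainty
    print(iteration, cov.nnz, f"scale={scale:.2f}")
```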
NASA Astrophysics Data System (ADS)
Lima, C. H.; Lall, U.
2010-12-01
Flood frequency statistical analysis most often relies on stationarity assumptions, where distribution moments (e.g., mean, standard deviation) and associated flood quantiles do not change over time. In this sense, one expects that flood magnitudes and their frequency of occurrence will remain constant as observed in the historical record. However, evidence of inter-annual and decadal climate variability and anthropogenic change, as well as an apparent increase in the number and magnitude of flood events across the globe, have made the stationarity assumption questionable. Here, we show how to estimate flood quantiles (e.g., the 100-year flood) at ungauged basins without assuming stationarity. A statistical model based on the well-known flow-area scaling law is proposed to estimate flood flows at ungauged basins. The slope and intercept scaling-law coefficients are assumed time varying, and a hierarchical Bayesian model is used to include climate information and reduce parameter uncertainties. Cross-validated results from 34 streamflow gauges located in a nested basin in Brazil show that the proposed model is able to estimate flood quantiles at ungauged basins with remarkable skill compared with data-based estimates using the full record. The model as developed in this work is also able to simulate sequences of flood flows under global climate change, provided an appropriate climate index derived from a General Circulation Model is used as a predictor. The time-varying flood frequency estimates can be used for insurance pricing, in a forecast mode for flood preparedness, and for the timing and location of infrastructure investments. [Figure: non-stationary 95% interval estimate for the 100-year flood, compared with the 95% interval estimated from data; the average distribution of the 100-year flood is also shown.]
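A simplified, stationary version of the flow-area scaling law can be fit by ordinary least squares in log-log space, as in the sketch below; the data are synthetic and the time-varying hierarchical Bayesian machinery of the paper is not reproduced.

```python
import numpy as np

# Flow-area scaling law: Q = a * A**b, i.e. log Q = log a + b * log A.
# Synthetic "observed" annual-maximum floods at gauged basins (hypothetical):
rng = np.random.default_rng(2)
area = 10.0 ** rng.uniform(2, 5, size=34)                    # drainage areas, km^2
flood = 0.8 * area ** 0.75 * np.exp(0.2 * rng.standard_normal(34))

b, log_a = np.polyfit(np.log(area), np.log(flood), deg=1)    # slope, intercept
a = np.exp(log_a)
print(f"Q ~ {a:.2f} * A^{b:.2f}")

# Point estimate at an ungauged basin of 1500 km^2 (the paper's model would
# additionally let a and b vary with time and climate indices):
print(f"predicted flood: {a * 1500.0 ** b:.0f} m^3/s")
```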
Concept utilizing telex network for operational management requirements
NASA Astrophysics Data System (ADS)
Kowalczyk, E.
1982-09-01
The simplest and least expensive means of ensuring fast transmission of documented (recorded) information on a country-wide scale for all kinds of users is the telex network, which at this time is fully automated in Poland (except for foreign message traffic) and has more than 17,000 subscribers. As a digital network, the telex network constitutes, in principle, a base network that is ready to be used to transmit information as part of remote data transmission systems.
A three-item scale for the early prediction of stroke recovery.
Baird, A E; Dambrosia, J; Janket, S; Eichbaum, Q; Chaves, C; Silver, B; Barber, P A; Parsons, M; Darby, D; Davis, S; Caplan, L R; Edelman, R E; Warach, S
2001-06-30
Accurate assessment of prognosis in the first hours of stroke is desirable for best patient management. We aimed to assess whether the extent of ischaemic brain injury on magnetic resonance diffusion-weighted imaging (MR DWI) could provide additional prognostic information to clinical factors. In a three-phase study we studied 66 patients from a North American teaching hospital who had: MR DWI within 36 h of stroke onset; the National Institutes of Health Stroke Scale (NIHSS) score measured at the time of scanning; and the Barthel Index measured no later than 3 months after stroke. We used logistic regression to derive a predictive model for good recovery. This logistic regression model was applied to an independent series of 63 patients from an Australian teaching hospital, and we then developed a three-item scale for the early prediction of stroke recovery. Combined measurements of the NIHSS score (p=0.01), time in hours from stroke onset to MR DWI (p=0.02), and the volume of ischaemic brain tissue on MR DWI (p=0.04) gave the best prediction of stroke recovery. The model was externally validated on the Australian sample with 0.77 sensitivity and 0.88 specificity. Three likelihood levels for stroke recovery, low (0-2), medium (3-4), and high (5-7), were identified on the three-item scale. The combination of clinical and MR DWI factors provided better prediction of stroke recovery than any factor alone, shortly after admission to hospital. This information was incorporated into a three-item scale for clinical use.
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-05-01
We propose a finite-size scaling analysis method for binary stochastic processes X(t) ∈ {0, 1} based on the second-moment correlation length ξ of the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of variables taking 1 among the recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with the critical exponents β = 1 and ν∥ = 2.
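The analog variant of this model is straightforward to simulate. The sketch below assumes a mixing weight p between the independent and history-dependent components (the abstract does not fix this parameter) and uses one common convention for the second-moment correlation length.

```python
import numpy as np

def simulate(T, r, q, p, rng):
    """Analog model: with probability p draw an independent Bernoulli(q)
    variable, otherwise take 1 with probability z, the fraction of ones
    among the last r variables (f(z) = z). The mixing weight p is an
    assumption; the abstract does not specify it."""
    x = np.zeros(T, dtype=int)
    x[:r] = rng.random(r) < q            # arbitrary initialization
    for t in range(r, T):
        z = x[t - r:t].mean()
        prob = q if rng.random() < p else z
        x[t] = rng.random() < prob
    return x

def correlation_length(x, t_max):
    """Second-moment correlation length from the autocorrelation C(t)
    (one common convention; prefactors differ across papers)."""
    x = x - x.mean()
    c = np.array([np.mean(x[:-t] * x[t:]) for t in range(1, t_max)])
    c = np.clip(c, 0.0, None)            # guard against noisy negative tails
    t = np.arange(1, t_max)
    return np.sqrt((t**2 * c).sum() / (2.0 * c.sum()))

rng = np.random.default_rng(3)
x = simulate(T=100_000, r=50, q=0.3, p=0.5, rng=rng)
print("xi =", correlation_length(x, t_max=500))
```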
NASA Astrophysics Data System (ADS)
Cao, Yang; Zhang, Jing; Yang, Mingxiang; Lei, Xiaohui
2017-07-01
At present, most Defense Meteorological Satellite Program Operational Linescan System (DMSP/OLS) night-time light data are applied to large-scale regional development assessment, while few are applied to the study of earthquakes and other disasters. This study extracted night-time light information before and after the earthquake within Wenchuan county using DMSP/OLS night-time light data. The analysis results show that the night-time light index and average intensity of Wenchuan county decreased by about 76% and 50%, respectively, from 2007 to 2008. From 2008 to 2011, the two indicators increased by about 200% and 556%, respectively. These results show that night-time light data can be used to extract information on earthquakes and to evaluate the occurrence of earthquakes and other disasters.
Sako, Binta; Leerlooijer, Joanne N; Lelisa, Azeb; Hailemariam, Abebe; Brouwer, Inge D; Tucker Brown, Amal; Osendarp, Saskia J M
2018-04-01
Child malnutrition remains high in Ethiopia, and inadequate complementary feeding is a contributing factor. In this context, a community-based intervention was designed to provide locally made complementary food for children 6-23 months, using a bartering system, in four Ethiopian regions. After a pilot phase, the intervention was scaled up from 8 to 180 localities. We conducted a process evaluation to determine enablers and barriers for the scaling up of this intervention. Eight study sites were selected to perform 52 key informant interviews and 31 focus group discussions with purposely selected informants. For analysis, we used a framework describing six elements of successful scaling up: socio-political context, attributes of the intervention, attributes of the implementers, appropriate delivery strategy, the adopting community, and use of research to inform the scale-up process. A strong political will, alignment of the intervention with national priorities, and integration with the health care system were instrumental in the scaling up. The participatory approach in decision-making reinforced ownership at community level, and training about complementary feeding motivated mothers and women's groups to participate. However, the management of the complex intervention, limited human resources, and lack of incentives for female volunteers proved challenging. In the bartering model, the barter rate was accepted, but the bartering was hindered by unavailability of cereals and limited financial and material resources to contribute, threatening the project's sustainability. Scaling up strategies for nutrition interventions require sufficient time, thorough planning, and assessment of the community's capacity to contribute human, financial, and material resources. © 2017 The Authors. Maternal and Child Nutrition Published by John Wiley & Sons, Ltd.
Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas
2018-02-01
There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.
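The cost crossover described here amounts to a simple break-even calculation, sketched below with purely hypothetical cost figures; the study's actual estimates differ by site.

```python
# Break-even survey count for electronic vs. paper-and-pencil data capture.
# All figures are hypothetical placeholders, not the study's estimates.
tablet_fixed = 20_000.0       # upfront cost of tablets and setup
cost_per_paper = 4.0          # printing plus manual data entry, per survey
cost_per_tablet = 0.5         # charging, sync, field support, per survey

break_even = tablet_fixed / (cost_per_paper - cost_per_tablet)
print(f"electronic capture is cheaper beyond ~{break_even:.0f} surveys")
```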
Framework for Real-Time All-Hazards Global Situational Awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omitaomu, Olufemi A; Fernandez, Steven J; Bhaduri, Budhendra L
Information systems play a pivotal role in emergency response by making consequence analysis models based on up-to-date data available to decision makers. While consequence analysis models have been used for years on local scales, their application on national and global scales has been constrained by lack of non-proprietary data. This chapter describes how this has changed, using a framework for real-time all-hazards situational awareness called the Energy Awareness and Resiliency Standardized Services (EARSS) as an example. EARSS is a system of systems developed to collect non-proprietary data from diverse open content sources to develop a geodatabase of critical infrastructures all over the world. The EARSS system shows that it is feasible to provide global disaster alerts by producing valuable information such as text messages about detected hazards, emailed reports about affected areas, estimates of the expected number of impacted people and their demographic characteristics, identification of critical infrastructures that may be affected, and analysis of potential downstream effects. This information is provided in real time to federal agencies and subscribers all over the world for decision making in humanitarian assistance and emergency response. The system also uses live streams of power outages, weather, and satellite surveillance data as events unfold. This, in turn, is combined with other public domain or open content information, such as media reports and postings on social networking websites, for complete coverage of the situation as events unfold. Working with up-to-date information from the EARSS system, emergency responders on the ground can pre-position their staff and resources, such as emergency generators and ice, where they are most needed.
Temporal Organization of Sound Information in Auditory Memory.
Song, Kun; Luo, Huan
2017-01-01
Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address this issue, we designed an auditory memory-transfer study, combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transfer from a random white noise sound to its locally temporally reversed version on various temporal scales in seven experiments. We demonstrate a U-shaped memory-transfer pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulated temporal scale can account for the memory-transfer results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured in discrete temporal chunks in long-term auditory memory representation.
Sriyudthsak, Kansuporn; Shiraishi, Fumihide; Hirai, Masami Yokota
2016-01-01
The high-throughput acquisition of metabolome data is greatly anticipated for the complete understanding of cellular metabolism in living organisms. A variety of analytical technologies have been developed to acquire large-scale metabolic profiles under different biological or environmental conditions. Time series data are useful for predicting the most likely metabolic pathways because they provide important information regarding the accumulation of metabolites, which implies causal relationships in the metabolic reaction network. Considerable effort has been undertaken to utilize these data for constructing a mathematical model that merges system properties and quantitatively characterizes a whole metabolic system in toto. However, technical difficulties remain between the provision and the utilization of such data. Although hundreds of metabolites can be measured, providing information on the metabolic reaction system, the simultaneous measurement of thousands of metabolites is still challenging. In addition, it is nontrivial to logically predict the dynamic behaviors of unmeasurable metabolite concentrations without sufficient information on the metabolic reaction network. Consolidating the advantages of the advancements in both metabolomics and mathematical modeling thus remains to be accomplished. This review outlines the conceptual basis of, and recent advances in, both research fields. It also highlights the potential for constructing a large-scale mathematical model by estimating model parameters from time series metabolome data in order to comprehensively understand metabolism at the systems level.
High-throughput cocrystal slurry screening by use of in situ Raman microscopy and multi-well plate.
Kojima, Takashi; Tsutsumi, Shunichirou; Yamamoto, Katsuhiko; Ikeda, Yukihiro; Moriwaki, Toshiya
2010-10-31
Cocrystals have attracted much attention as a means to improve poor physicochemical properties, since cocrystal formers crystallize with ionic as well as nonionic drugs. Cocrystal screening has usually been conducted by crystallization, slurry, and co-grinding techniques; however, the sensitivity, cost, and time of screening were limited by issues such as dissociation of the cocrystal during crystallization and the cost and time required for the slurry and co-grinding methods. To overcome these issues, a novel high-throughput cocrystal slurry screening method was developed using an in situ Raman microscope and a multi-well plate. Cocrystal screening of indomethacin was conducted with 46 cocrystal formers, and potential cocrystals were prepared on a large scale for characterization by powder X-ray diffractometry, thermal analysis, Raman microscopy, and (1)H NMR spectroscopy. Compared with the characterization of the scaled-up cocrystals, the screening indicated that indomethacin formed novel cocrystals with D/L-mandelic acid, nicotinamide, lactamide, and benzamide that were not obtained in previously reported crystallization-based screening. In addition, the screening provided not only information on cocrystal formation within a day, but also information on the equilibrium of cocrystal formation and polymorphic transformation in one screening. The information obtained in this screening allows effective solid form selection, saving cost and time during development. Copyright © 2010 Elsevier B.V. All rights reserved.
Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams
NASA Astrophysics Data System (ADS)
Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping
2018-06-01
A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and that composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach is promising to combine the advantages of the continuous-scale approach (rapid calculations) and direct discrete-scale approach (accurate prediction of local radiative quantities).
NASA Astrophysics Data System (ADS)
Song, Z. N.; Sui, H. G.
2018-04-01
High-resolution remote sensing images carry important strategic information, especially for quickly finding time-sensitive targets such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present in a large, arbitrary remote sensing image, rather than detecting it on a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false alarm rates when detecting tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not just to compare the similarity of image blocks, but to quickly find specific targets in a huge image. Taking airplanes as an example, this paper presents an effective method for searching for aircraft targets in large-scale optical remote sensing images. Firstly, an improved visual attention model that utilizes saliency detection and a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from the full image in one evaluation is adopted to search for small airplane objects, without a region proposal step. Unlike sliding-window and region-proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show the proposed method can quickly identify airplanes in large-scale images.
Taking it to the streets: delivering on deployment.
Carr, Dafna; Welch, Vickie; Fabik, Trish; Hirji, Nadir; O'Connor, Casey
2009-01-01
From inception to deployment, the Wait Time Information System (WTIS) project faced significant challenges associated with time, scope and complexity. It involved not only the creation and deployment of two large-scale province-wide systems (the WTIS and Ontario's Client Registry/Enterprise Master Patient Index) within aggressive time frames, but also the active engagement of 82 Ontario hospitals, scores of healthcare leaders and several thousand clinicians who would eventually be using the new technology and its data. The provincial WTIS project team also had to be able to adapt and evolve their planning in an environment that was changing day-by-day. This article looks at the factors that allowed the team to take the WTIS out to the field and shares the approach, processes and tools used to deploy this complex and ambitious information management and information technology (IM/IT) initiative.
Multifractal Approach to Time Clustering of Earthquakes. Application to Mt. Vesuvio Seismicity
NASA Astrophysics Data System (ADS)
Codano, C.; Alonzo, M. L.; Vilardo, G.
The clustering structure of Vesuvian earthquakes is investigated by means of statistical tools: the inter-event time distribution, the running mean test, and multifractal analysis. The first cannot clearly discriminate between a Poissonian process and a clustered one, owing to the difficulty of distinguishing an exponential distribution from a power-law one. The running mean test reveals the clustering of the earthquakes, but loses information about the structure of the distribution at global scales. The multifractal approach can highlight the clustering at small scales, while the global behaviour remains Poissonian. The clustering of the events is then interpreted in terms of diffusive processes of stress in the earth's crust.
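The difficulty of telling an exponential inter-event time distribution from a power-law one can be made concrete by comparing tail log-likelihoods, as in the sketch below; the data are synthetic and this generic comparison is not the authors' procedure.

```python
import numpy as np

def loglik_exponential_tail(x, x_min):
    """Exponential fit to the tail x >= x_min (memoryless, so the
    conditional law is again exponential in x - x_min)."""
    s = x - x_min
    lam = 1.0 / s.mean()                 # MLE rate
    return s.size * np.log(lam) - lam * s.sum()

def loglik_powerlaw_tail(x, x_min):
    """Continuous power-law fit p(x) ~ x**(-alpha) on the same tail."""
    logs = np.log(x / x_min)
    alpha = 1.0 + x.size / logs.sum()    # MLE exponent
    return x.size * np.log((alpha - 1.0) / x_min) - alpha * logs.sum()

rng = np.random.default_rng(4)
dt = rng.exponential(scale=3600.0, size=5000)   # hypothetical inter-event times, s
x_min = np.quantile(dt, 0.5)
tail = dt[dt >= x_min]
print("exponential:", loglik_exponential_tail(tail, x_min))
print("power law  :", loglik_powerlaw_tail(tail, x_min))
```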
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S. L.
1998-08-25
Fluid Catalytic Cracking (FCC) technology is the most important process used by the refinery industry to convert crude oil to valuable lighter products such as gasoline. Process development is generally very time consuming, especially when a small pilot unit is being scaled up to a large commercial unit, because of the lack of information to aid in the design of scaled-up units. Such information can now be obtained by analysis based on pilot-scale measurements and computer simulation that includes the controlling physics of the FCC system. A computational fluid dynamics (CFD) code, ICRKFLO, has been developed at Argonne National Laboratory (ANL) and has been successfully applied to the simulation of catalytic petroleum cracking risers. It employs hybrid hydrodynamic-chemical kinetic coupling techniques, enabling the analysis of an FCC unit with complex chemical reaction sets containing tens or hundreds of subspecies. The code has been continuously validated against pilot-scale experimental data. It is now being used to investigate the effects of scaling up FCC units. Among FCC operating conditions, the feed injection conditions are found to have a strong impact on the product yields of scaled-up FCC units. The feed injection conditions appear to affect flow and heat transfer patterns, and the interaction of hydrodynamics and cracking kinetics causes the product yields to change accordingly.
Scaling behavior of online human activity
NASA Astrophysics Data System (ADS)
Zhao, Zhi-Dan; Cai, Shi-Min; Huang, Junming; Fu, Yan; Zhou, Tao
2012-11-01
The rapid development of Internet technology enables humans to explore the web and record the traces of online activities. From the analysis of these large-scale data sets (i.e., traces), we can get insights into the dynamic behavior of human activity. In this letter, the scaling behavior and complexity of human activity in e-commerce, such as rating music, books, and movies, are comprehensively investigated by using the detrended fluctuation analysis technique and the multiscale entropy method. Firstly, the interevent time series of rating behaviors for these three types of media show similar scaling properties, with exponents ranging from 0.53 to 0.58, which implies that the collective behaviors of rating media follow a process embodying self-similarity and long-range correlation. Meanwhile, by dividing the users into three groups based on their activity (i.e., ratings per unit time), we find that the scaling exponents of the interevent time series differ between the three groups; these results suggest that stronger long-range correlations exist in some of these collective behaviors. Furthermore, their information complexities vary across the three groups. To explain the differences between the collective behaviors of the three groups, we study the dynamic behavior of human activity at the individual level and find that the dynamic behaviors of a few users have extremely small scaling exponents associated with long-range anticorrelations. By comparing the interevent time distributions of four representative users, we find that bimodal distributions may bring forth the extraordinary scaling behaviors. These results of the analysis of online human activity in e-commerce may not only provide insight into its dynamic behaviors but may also be applied to acquire potential economic interest.
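A minimal DFA-1 implementation illustrates how such scaling exponents are obtained from an interevent time series; the data below are synthetic and uncorrelated, so the exponent should come out near 0.5 rather than the 0.53-0.58 reported above.

```python
import numpy as np

def dfa(series, scales):
    """Detrended fluctuation analysis (DFA-1): returns the scaling exponent
    of F(s) ~ s**h from a least-squares fit in log-log space."""
    profile = np.cumsum(series - np.mean(series))
    flucts = []
    for s in scales:
        n_seg = profile.size // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, deg=1)        # local linear detrending
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    h, _ = np.polyfit(np.log(scales), np.log(flucts), deg=1)
    return h

rng = np.random.default_rng(5)
interevent = rng.exponential(1.0, 100_000)          # hypothetical rating gaps
scales = np.unique(np.logspace(1, 3.5, 20).astype(int))
print("DFA exponent:", dfa(interevent, scales))     # ~0.5 for uncorrelated data
```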
Charrois, Jeffrey W A; Hrudey, Steve E
2007-02-01
North American drinking water utilities are increasingly incorporating alternative disinfectants, such as chloramines, in order to comply with disinfection by-product (DBP) regulations. N-Nitrosodimethylamine (NDMA) is a non-halogenated DBP, associated with chloramination, with a drinking water unit risk two to three orders of magnitude greater than those of currently regulated halogenated DBPs. We quantified NDMA from two full-scale chloraminating water treatment plants in Alberta between 2003 and 2005 and conducted bench-scale chloramination/breakpoint experiments to assess NDMA formation. Distribution system NDMA concentrations varied and tended to increase with increasing distribution residence time. Bench-scale disinfection experiments resulted in peak NDMA production near the theoretical monochloramine maximum in the sub-breakpoint region of the disinfection curve. Breakpoints for the raw and partially treated waters tested ranged from 1.9:1 to 2.4:1 (Cl2:total NH3-N, M:M). Bench-scale experiments with free-chlorine contact (2 h) before chloramination resulted in significant reductions in NDMA formation (up to 93%) compared to no free-chlorine contact time. Risk-tradeoff issues involving alternative disinfection methods and unregulated DBPs, such as NDMA, are emerging as a major water quality and public health information gap.
McCannon, Melinda; O'Neal, Pamela V
2003-08-01
A national survey was conducted to determine the information technology skills nurse administrators consider critical for new nurses entering the work force. The sample consisted of 2,000 randomly selected members of the American Organization of Nurse Executives. Seven hundred fifty-two usable questionnaires were returned, for a response rate of 38%. The questionnaire used a 5-point Likert scale and consisted of 17 items that assessed various technology skills and demographic information. The questionnaire was developed and pilot tested with content experts to establish content validity. Descriptive analysis of the data revealed that using e-mail effectively, operating basic Windows applications, and searching databases were critical information technology skills. The most critical information technology skill involved knowing nursing-specific software, such as bedside charting and computer-activated medication dispensers. To effectively prepare nursing students with technology skills needed at the time of entry into practice, nursing faculty need to incorporate information technology skills into undergraduate nursing curricula.
St Clair, James J. H.; Burns, Zackory T.; Bettaney, Elaine M.; Morrissey, Michael B.; Otis, Brian; Ryder, Thomas B.; Fleischer, Robert C.; James, Richard; Rutz, Christian
2015-01-01
Social-network dynamics have profound consequences for biological processes such as information flow, but are notoriously difficult to measure in the wild. We used novel transceiver technology to chart association patterns across 19 days in a wild population of the New Caledonian crow—a tool-using species that may socially learn, and culturally accumulate, tool-related information. To examine the causes and consequences of changing network topology, we manipulated the environmental availability of the crows' preferred tool-extracted prey, and simulated, in silico, the diffusion of information across field-recorded time-ordered networks. Here we show that network structure responds quickly to environmental change and that novel information can potentially spread rapidly within multi-family communities, especially when tool-use opportunities are plentiful. At the same time, we report surprisingly limited social contact between neighbouring crow communities. Such scale dependence in information-flow dynamics is likely to influence the evolution and maintenance of material cultures. PMID:26529116
The role of fanatics in consensus formation
NASA Astrophysics Data System (ADS)
Gündüç, Semra
2015-08-01
A model of opinion dynamics with two types of agents as social actors is presented, using the Ising thermodynamic model as a template for the dynamics. The agents are either opportunists, which occupy fixed sites and interact with their neighbors, or fanatics/missionaries, which move randomly from site to site seeking, with the help of the opportunists, to convert agents of the opposite opinion. Here, the moving agents act as an external influence on the opportunists to convert them to the opposite opinion. It is shown by numerical simulations that such opinion-formation dynamics may explain some details of consensus formation even when one of the opinions is held by a minority. Regardless of the distribution of opinions, societies of different sizes exhibit different opinion-formation behavior and time scales. In order to understand the general behavior, scaling relations are studied, obtained by comparing opinion-formation processes in societies with varying population and varying numbers of randomly moving agents. For the proposed model, two types of scaling relations are observed. In fixed-size societies, increasing the number of randomly moving agents gives a scaling relation for the time scale of the opinion-formation process. The second type of scaling relation is due to size-dependent information propagation in finite but large systems, namely finite-size scaling.
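A minimal sketch of this kind of dynamics: Ising-like opportunists on a lattice with Glauber updates, plus randomly moving fanatics that bias the local field toward one opinion. All parameters (lattice size, temperature, field strength, number of fanatics) are illustrative assumptions, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(2)
L, n_fanatics, T, h = 50, 30, 2.0, 2.0       # illustrative parameters
spins = rng.choice([-1, 1], size=(L, L))     # opportunists' opinions
fanatics = rng.integers(0, L, size=(n_fanatics, 2))  # positions; all push +1

for step in range(50 * L * L):
    i, j = rng.integers(0, L, size=2)
    nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
       + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
    # Fanatics present at the site add an external field of strength h each
    field = nb + h * np.sum((fanatics[:, 0] == i) & (fanatics[:, 1] == j))
    # Glauber (heat-bath) flip probability toward the local field
    p_up = 1.0 / (1.0 + np.exp(-2.0 * field / T))
    spins[i, j] = 1 if rng.random() < p_up else -1
    fanatics += rng.integers(-1, 2, size=fanatics.shape)  # random walk
    fanatics %= L                             # periodic boundaries

print(f"final mean opinion: {spins.mean():+.2f}")
```

Sweeping `L` at fixed `n_fanatics`, or `n_fanatics` at fixed `L`, and recording the time to consensus is the kind of experiment from which the two scaling relations above would be read off.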
Spatiotemporal patterns of drought at various time scales in Shandong Province of Eastern China
NASA Astrophysics Data System (ADS)
Zuo, Depeng; Cai, Siyang; Xu, Zongxue; Li, Fulin; Sun, Wenchao; Yang, Xiaojing; Kan, Guangyuan; Liu, Pin
2018-01-01
The temporal variations and spatial patterns of drought in Shandong Province of Eastern China were investigated by calculating the standardized precipitation evapotranspiration index (SPEI) at 1-, 3-, 6-, 12-, and 24-month time scales. Monthly precipitation and air temperature time series during the period 1960-2012 were collected at 23 meteorological stations uniformly distributed over the region. The non-parametric Mann-Kendall test was used to explore the temporal trends of precipitation, air temperature, and the SPEI drought index. S-mode principal component analysis (PCA) was applied to identify the spatial patterns of drought. The results showed that an insignificant decreasing trend in annual total precipitation was detected at most stations, a significant increase of annual average air temperature occurred at all 23 stations, and a significant decreasing trend in the SPEI was mainly detected at the coastal stations for all time scales. The frequency of occurrence of extreme and severe drought at different time scales generally increased over the decades; higher frequency and larger affected areas of extreme and severe droughts occurred as the time scale increased, especially for northwestern Shandong Province and the Jiaodong Peninsula. The spatial pattern of drought for SPEI-1 comprises three regions: the eastern Jiaodong Peninsula, northwestern Shandong, and southern Shandong. As the time scale increased to 3, 6, and 12 months, the ordering of the three regions changed to northwestern Shandong, the eastern Jiaodong Peninsula, and southern Shandong. For SPEI-24, the location identified by the first rotated EOF (REOF1) was slightly shifted from northwestern to western Shandong, and REOF2 and REOF3 identified two further weak patterns at the southern and northern edges of the Jiaodong Peninsula, respectively. The potential causes of drought and the impact of drought on agriculture in the study area are also discussed. The temporal variations and spatial patterns of drought obtained in this study provide valuable information for water resources planning and drought disaster prevention and mitigation in Eastern China.
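The SPEI at a k-month time scale can be sketched as follows. This simplified variant standardizes the k-month accumulated climatic water balance through an empirical CDF (the SPEI literature typically fits a log-logistic distribution instead); the monthly precipitation and PET arrays below are invented stand-ins.

```python
import numpy as np
from scipy import stats

def spei(precip, pet, k):
    """Simplified SPEI at a k-month scale from monthly P and PET series."""
    d = precip - pet                          # climatic water balance
    dk = np.convolve(d, np.ones(k), mode="valid")   # k-month accumulation
    # Empirical plotting-position CDF, then map to the standard normal
    ranks = stats.rankdata(dk)
    cdf = (ranks - 0.44) / (len(dk) + 0.12)   # Gringorten plotting position
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(3)
months = 12 * 53                              # 1960-2012, as in the study
precip = rng.gamma(2.0, 30.0, months)         # stand-in monthly data (mm)
pet = rng.gamma(4.0, 15.0, months)
for k in (1, 3, 12):
    s = spei(precip, pet, k)
    print(f"SPEI-{k}: share of months below -1.5 = {(s < -1.5).mean():.3f}")
```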
Iskarous, Khalil; Mooshammer, Christine; Hoole, Phil; Recasens, Daniel; Shadle, Christine H.; Saltzman, Elliot; Whalen, D. H.
2013-01-01
Coarticulation and invariance are two topics at the center of theorizing about speech production and speech perception. In this paper, a quantitative scale is proposed that places coarticulation and invariance at the two ends of the scale. This scale is based on physical information flow in the articulatory signal, and uses Information Theory, especially the concept of mutual information, to quantify these central concepts of speech research. Mutual Information measures the amount of physical information shared across phonological units. In the proposed quantitative scale, coarticulation corresponds to greater and invariance to lesser information sharing. The measurement scale is tested by data from three languages: German, Catalan, and English. The relation between the proposed scale and several existing theories of coarticulation is discussed, and implications for existing theories of speech production and perception are presented. PMID:23927125
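The core quantity of that scale, mutual information between the articulatory signals associated with two phonological units, can be estimated with a simple histogram estimator. A minimal sketch; the binning and the synthetic signals are assumptions, not the authors' data or estimator settings.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(4)
v = rng.standard_normal(5000)                # shared articulatory component
x = v + 0.5 * rng.standard_normal(5000)      # signal during unit 1
y = v + 0.5 * rng.standard_normal(5000)      # signal during unit 2
print(f"I(X;Y) = {mutual_information(x, y):.2f} bits "
      "(more sharing -> the coarticulation end of the scale)")
```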
ICLEA - The Virtual Institute of Integrated Climate and Landscape Evolution Analyses
NASA Astrophysics Data System (ADS)
Schwab, Markus J.; Brauer, Achim; Błaszkiewicz, Mirosław; Blume, Theresa; Itzerott, Sibylle; Raab, Thomas; Wilmking, Martin; Iclea Team
2016-04-01
In the Virtual Institute ICLEA we view past changes as natural experiments and as a guide for better anticipating future changes and their impacts. Since natural evolution has been increasingly overprinted by human impacts since the Neolithic, we include an in-depth discussion of the impacts of climate and environmental change on societies and vice versa. The partners focus their research capacities and expertise in ICLEA, offer young researchers an interdisciplinary and structured education, and promote their early independence through coaching and mentoring. Training, research, and analytical workshops between the research partners of ICLEA are an important measure to qualify young researchers. Understanding the causes and effects of present-day climate change on landscapes and the human habitat faces two main challenges: (I) instrumental observation time series that are too short to cover the full range of variability, since mechanisms of climate change and landscape evolution work on different time scales, often not susceptible to human perception; and (II) distinct regional differences due to location with respect to oceanic/continental climatic influences, the geological subsurface, and the history and intensity of anthropogenic land use. Both challenges are central to the ICLEA research strategy and demand a high degree of interdisciplinarity. In particular, the need to link observations and measurements of ongoing changes with information from the past taken from natural archives requires the joint work of scientists with very different time perspectives: on the one hand, scientists who work at geological time scales of thousands of years and more and, on the other hand, those observing and investigating recent processes at short time scales. The long-term mission of the Virtual Institute is to provide a substantiated data basis for sustained environmental maintenance, based on a profound process understanding at all relevant time scales. The aim is to explore processes of climate and landscape evolution in a historical cultural landscape extending from northeastern Germany into northwestern Poland. The northern-central European lowlands will serve as a natural laboratory, providing an ideal case for a systematic and holistic approach. Five complementary work packages (WP) are established according to the key research aspects: WP 1 focuses on monitoring, mainly of hydrology and soil moisture as well as meteorological parameters. WP 2 links present-day and future monitoring data with the most recent past through the analysis of satellite images; this WP also extends the analysis to larger spatial scales. WP 3-5 focus on different natural archives to obtain a broad variety of high-quality proxy data: tree rings provide sub-seasonal data for the last centuries up to a few millennia; varved lake sediments cover the entire research time interval at seasonal to decadal resolution; and palaeosoils and geomorphological features also cover the entire period, but not continuously and with lower resolution. Complementary information, such as climate, tree ecophysiological, and limnological data, is provided by cooperation with associated partners. Further information about ICLEA: www.iclea.de
Very high resolution time-lapse photography for plant and ecosystems research
USDA-ARS?s Scientific Manuscript database
Very high resolution gigapixel photography increasingly is being used to support a broad range of ecosystem and physical process research because it offers an inexpensive means of simultaneously collecting information at a range of spatial scales. Recently, methods have been developed to incorporate...
Investigation of the near subsurface using acoustic to seismic coupling
USDA-ARS?s Scientific Manuscript database
Agricultural, hydrological and civil engineering applications have realized a need for information of the near subsurface over large areas. In order to obtain this spatially distributed data over such scales, the measurement technique must be highly mobile with a short acquisition time. Therefore, s...
Power Grid Data Analysis with R and Hadoop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Gibson, Tara D.; Kleese van Dam, Kerstin
This book chapter presents an approach to analysis of large-scale time-series sensor information based on our experience with power grid data. We use the R-Hadoop Integrated Programming Environment (RHIPE) to analyze a 2TB data set and present code and results for this analysis.
Patient satisfaction with nursing care in an urban and suburban emergency department.
Wright, Greg; Causey, Sherry; Dienemann, Jacqueline; Guiton, Paula; Coleman, Frankie Sue; Nussbaum, Marcy
2013-10-01
Patient satisfaction is an important outcome measure in the emergency department (ED). When waiting is unavoidable, its negative effect may be lessened by communicating the expected wait time, providing affective support, health information, and decisional control, and ensuring competent providers. This controlled quasi-experimental design used a convenience sample. The patient questionnaire included demographics, expected and perceived wait time, receipt of comfort items, information, and engaging activities and their perceived helpfulness for coping with waiting, and the Consumer Emergency Care Satisfaction Scale measure of patient satisfaction with nursing. The systematic offering of comfort items, clinical information, and engaging activities was statistically analyzed for impact on perceived wait times, helpfulness in waiting, and satisfaction with nursing care. The interventions were supported by the data as helpful for coping with waiting and were significantly related to satisfaction with nursing care. Interventions were less helpful for suburban patients, who were also less satisfied. Nurses can influence patient satisfaction in the ED through communication and caring behaviors.
Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
The scaling of geographic ranges: implications for species distribution models
Yackulic, Charles B.; Ginsberg, Joshua R.
2016-01-01
There is a need for timely science to inform policy and management decisions; however, we must also strive to provide predictions that best reflect our understanding of ecological systems. Species distributions evolve through time and reflect responses to environmental conditions that are mediated through individual and population processes. Species distribution models that reflect this understanding, and explicitly model dynamics, are likely to give more accurate predictions.
NASA Astrophysics Data System (ADS)
Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.; Fattebert, Jean-Luc; McKeown, Joseph T.
2018-01-01
A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu-Ni thin-film alloys. Experiments—conducted in the DTEM—consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid-liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu-Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying from ~0.1 to ~0.6 m s⁻¹. After an ‘incubation’ time, the velocity of the planar solid-liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Finally, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid-liquid interface velocity) were consistent at similar time (μs) and spatial scales (μm).
The Triggering of Large-Scale Waves by CME Initiation
NASA Astrophysics Data System (ADS)
Forbes, Terry
Studies of the large-scale waves generated at the onset of a coronal mass ejection (CME) can provide important information about the processes in the corona that trigger and drive CMEs. The size of the region where the waves originate can indicate the location of the magnetic forces that drive the CME outward, and the rate at which compressive waves steepen into shocks can provide a measure of how the driving forces develop in time. However, in practice it is difficult to separate the effects of wave formation from wave propagation. The problem is particularly acute for the corona because of the multiplicity of wave modes (e.g. slow versus fast MHD waves) and the highly nonuniform structure of the solar atmosphere. At the present time large-scale numerical simulations provide the best hope for deconvolving wave propagation and formation effects from one another.
Faithfulness of Recurrence Plots: A Mathematical Proof
NASA Astrophysics Data System (ADS)
Hirata, Yoshito; Komuro, Motomasa; Horai, Shunsuke; Aihara, Kazuyuki
It is known in practice that a recurrence plot, a two-dimensional visualization of time series data, can contain almost all information about the underlying dynamics except its spatial scale, because a rough shape of the original time series can be recovered from the recurrence plot even if the original time series is multivariate. We here provide a mathematical proof that the metric defined by a recurrence plot [Hirata et al., 2008] is equivalent to the Euclidean metric under mild conditions.
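A recurrence plot of the kind discussed is straightforward to construct: threshold the pairwise distances between (possibly delay-embedded) states. A minimal sketch with illustrative embedding parameters and a synthetic signal:

```python
import numpy as np

def recurrence_plot(x, dim=3, delay=5, eps=None):
    """Binary recurrence matrix from a scalar series via delay embedding."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * dist.max()   # 10% of max distance, a common rule of thumb
    return (dist < eps).astype(int)

t = np.linspace(0, 16 * np.pi, 800)
x = np.sin(t) + 0.05 * np.random.default_rng(5).standard_normal(len(t))
R = recurrence_plot(x)
print(f"recurrence rate: {R.mean():.3f}")  # diagonal lines signal determinism
```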
NASA Astrophysics Data System (ADS)
Gattano, C.; Lambert, S.; Bizouard, C.
2017-12-01
In the context of selecting sources that define the celestial reference frame, we compute astrometric time series of all VLBI radio sources from observations in the International VLBI Service database. The time series are then analyzed with the Allan variance in order to estimate their astrometric stability. From the results, we establish a new classification that takes into account information across all time scales. The algorithm is flexible in its definition of a "stable source" through an adjustable threshold.
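The stability estimate behind such a classification is the Allan variance of the coordinate time series at a range of averaging lengths. A minimal non-overlapping implementation on synthetic data; real use would feed in the VLBI astrometric series.

```python
import numpy as np

def allan_variance(y, tau):
    """Non-overlapping Allan variance at averaging length tau (samples)."""
    m = len(y) // tau
    means = y[:m * tau].reshape(m, tau).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(6)
series = rng.standard_normal(4096)           # stand-in source-position series
for tau in (1, 4, 16, 64):
    adev = np.sqrt(allan_variance(series, tau))
    print(f"tau={tau:3d}  Allan deviation = {adev:.3f}")
# For white noise the Allan deviation falls as tau**-0.5; a flattening or
# rise at some tau flags astrometric instability at that time scale.
```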
NASA Astrophysics Data System (ADS)
Albers, D. J.; Hripcsak, George
2010-02-01
Statistical physics and information theory are applied to the clinical chemistry measurements present in a patient database containing 2.5 million patients' data over a 20-year period. Despite the seemingly naive approach of aggregating all patients over all times (with respect to particular clinical chemistry measurements), both a diurnal signal in the decay of the time-delayed mutual information and the presence of two sub-populations with differing health are detected. This provides a proof in principle that the highly fragmented data in electronic health records have potential for being useful in defining disease and human phenotypes.
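The diurnal signal is read off the decay of the time-delayed mutual information I(x(t); x(t+τ)) as the lag τ grows. A minimal sketch using a histogram estimator on a synthetic hourly series with a 24-hour component (an illustrative stand-in, not the patient data):

```python
import numpy as np

def mi(x, y, bins=12):
    """Histogram estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(7)
hours = np.arange(20000)
x = np.sin(2 * np.pi * hours / 24) + rng.standard_normal(len(hours))
for lag in (1, 6, 12, 24, 48):
    print(f"lag {lag:3d} h: I = {mi(x[:-lag], x[lag:]):.3f} bits")
# A bump in the decay curve at 24 h is the diurnal signature described above.
```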
On the use of variability time-scales as an early classifier of radio transients and variables
NASA Astrophysics Data System (ADS)
Pietka, M.; Staley, T. D.; Pretorius, M. L.; Fender, R. P.
2017-11-01
We have shown previously that a broad correlation between the peak radio luminosity and the variability time-scales, approximately L ∝ τ⁵, exists for variable synchrotron emitting sources and that different classes of astrophysical sources occupy different regions of luminosity and time-scale space. Based on those results, we investigate whether the most basic information available for a newly discovered radio variable or transient - their rise and/or decline rate - can be used to set initial constraints on the class of events from which they originate. We have analysed a sample of ≈800 synchrotron flares, selected from light curves of ≈90 sources observed at 5-8 GHz, representing a wide range of astrophysical phenomena, from flare stars to supermassive black holes. Selection of outbursts from the noisy radio light curves has been done automatically in order to ensure reproducibility of results. The distribution of rise/decline rates for the selected flares is modelled as a Gaussian probability distribution for each class of object, and further convolved with estimated areal density of that class in order to correct for the strong bias in our sample. We show in this way that comparing the measured variability time-scale of a radio transient/variable of unknown origin can provide an early, albeit approximate, classification of the object, and could form part of a suite of measurements used to provide early categorization of such events. Finally, we also discuss the effect scintillating sources will have on our ability to classify events based on their variability time-scales.
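The classification idea (model each class's rise-rate distribution as a Gaussian in the log, weight by areal density, and read off a posterior for a new transient) can be sketched directly. All class parameters below are invented placeholders, not the fitted values from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical class models: mean/std of log10(rise rate) plus an
# areal-density prior weight per class (all numbers illustrative)
classes = {
    "flare star": {"mu": -1.0, "sigma": 0.6, "density": 10.0},
    "XRB":        {"mu":  0.5, "sigma": 0.5, "density": 0.5},
    "AGN":        {"mu":  1.5, "sigma": 0.8, "density": 2.0},
}

def classify(log_rise_rate):
    """Posterior over classes for one measured rise rate."""
    w = {name: c["density"] * stats.norm.pdf(log_rise_rate, c["mu"], c["sigma"])
         for name, c in classes.items()}
    total = sum(w.values())
    return {name: v / total for name, v in w.items()}

for name, p in classify(0.8).items():
    print(f"{name:10s} P = {p:.2f}")
```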
Matching Fishers’ Knowledge and Landing Data to Overcome Data Missing in Small-Scale Fisheries
Damasio, Ludmila de Melo Alves; Lopes, Priscila F. M.; Guariento, Rafael D.; Carvalho, Adriana R.
2015-01-01
Background In small-scale fishery, information provided by fishers has been useful to complement current and past lack of knowledge on species and environment. Methodology Through interviews, 82 fishers from the largest fishing communities on the north and south borders of a Brazilian northeastern coastal state provided estimates of the catch per unit effort (CPUE) and rank of species abundance of their main target fishes for three time points: current year (2013 at the time of the research), 10, and 20 years past. This information was contrasted to other available data sources: scientific sampling of fish landing (2013), governmental statistics (2003), and information provided by expert fishers (1993), respectively. Principal Findings Fishers were more accurate when reporting information about their maximum CPUE for 2013, but except for three species, which they estimated accurately, fishers overestimated their mean CPUE per species. Fishers were also accurate at establishing ranks of abundance of their main target species for all periods. Fishers' beliefs that fish abundance has not changed over the last 10 years (2003–2013) were corroborated by governmental and scientific landing data. Conclusions The comparison between official and formal landing records and fishers' perceptions revealed that fishers are accurate when reporting maximum CPUE, but not when reporting mean CPUE. Moreover, fishers are less precise the less common a species is in their catches, suggesting that they could provide better information for management purposes on their current target species. PMID:26176538
Dynamical downscaling of wind fields for wind power applications
NASA Astrophysics Data System (ADS)
Mengelkamp, H.-T.; Huneke, S.; Geyer, J.
2010-09-01
Investments in wind power require information on the long-term mean wind potential and its temporal variations on daily to annual and decadal time scales. This information is rarely available at specific wind farm sites. Short-term on-site measurements are usually performed over a 12-month period only. These data have to be set into a long-term perspective through correlation with long-term consistent wind data sets. Preliminary wind information is often requested to select favourable wind sites on regional and country-wide scales. The lack of high-quality wind measurements at weather stations was the motivation to start high-resolution wind field simulations. The simulations are basically a refinement of global-scale reanalysis data by means of high-resolution simulations with an atmospheric mesoscale model using high-resolution terrain and land-use data. The 3-dimensional representation of the atmospheric state, available every six hours at 2.5 degree resolution over the globe and known as the NCAR/NCEP reanalysis data, forms the boundary conditions for continuous simulations with the non-hydrostatic atmospheric mesoscale model MM5. MM5 is nested in itself down to a horizontal resolution of 5 × 5 km². The simulation is performed for different European countries, covers the period 2000 to present, and is continuously updated. Model variables are stored every 10 minutes for various heights. We have analysed the wind field primarily. The wind data set is consistent in space and time and provides information on the regional distribution of the long-term mean wind potential, the temporal variability of the wind potential, the vertical variation of the wind potential, and the temperature and pressure distribution (air density). In the context of wind power these data are used:
• as an initial estimate of the wind and energy potential;
• for the long-term correlation of wind measurements and turbine production data;
• to provide wind potential maps on regional to country-wide scales;
• to provide input data sets for simulation models;
• to determine the spatial correlation of the wind field in portfolio calculations;
• to calculate the wind turbine energy loss during prescribed downtimes;
• to provide information on the temporal variations of wind and wind turbine energy production.
The time series of wind speed and wind direction are compared to measurements at offshore and onshore locations.
On the Assessment of Global Terrestrial Reference Frame Temporal Variations
NASA Astrophysics Data System (ADS)
Ampatzidis, Dimitrios; Koenig, Rolf; Zhu, Shengyuan
2015-04-01
Global Terrestrial Reference Frames (GTRFs) such as the International Terrestrial Reference Frame (ITRF) provide reliable 4-D position information (3-D coordinates and their evolution through time). The given 3-D velocities play a significant role in precise position acquisition and are estimated from long-term coordinate time series from the space-geodetic techniques DORIS, GNSS, SLR, and VLBI. The temporal evolution of GTRFs is directly connected with their internal stability: the more intense and inhomogeneous the velocity field, the less stable the derived TRF. The quality of a GTRF is mainly assessed by comparing it to each individual technique's reference frame. For example, the comparison of GTRFs to the SLR-only TRF gives a sense of ITRF stability with respect to the geocenter and scale and their associated rates, while the comparison of the ITRF to the VLBI-only TRF can be used for scale validation. However, until now there has been no established methodology for the total assessment (in terms of origin, orientation, and scale) of the temporal evolution of GTRFs and their associated accuracy. We present a new alternative diagnostic tool for the assessment of GTRF temporal evolution based on the well-known time-dependent Helmert-type transformation formula (rates of three shifts, three rotations, and scale). The advantage of the new methodology is that it uses the full velocity field of the TRF, and therefore all points, not just those common to different techniques. It also examines rates of origin, orientation, and scale simultaneously. The methodology is presented and implemented for the two existing GTRFs (the ITRF and the DTRF computed by DGFI), and the results are discussed. The results also allow a direct comparison of each GTRF's dynamic behavior. Furthermore, the correlations of the estimated parameters can provide useful information to the proposed GTRF assessment scheme.
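The diagnostic amounts to fitting the time-dependent Helmert parameters (translation, rotation, and scale rates) to the full station-velocity field by least squares. A minimal sketch with synthetic stations; the 7-parameter rate model is the standard form, but the data here are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 40
X = rng.standard_normal((n, 3)) * 6.4e6      # station positions (m)
true = np.array([0.001, -0.002, 0.0005,      # translation rates (m/yr)
                 1e-9, 2e-9, -1e-9,          # rotation rates (rad/yr)
                 3e-10])                     # scale rate (1/yr)

def design(x):
    """3x7 Helmert-rate design block for one station at position x."""
    A = np.zeros((3, 7))
    A[:, :3] = np.eye(3)                     # translation rates
    A[:, 3:6] = np.array([[0, x[2], -x[1]],  # rotation term: omega x r
                          [-x[2], 0, x[0]],
                          [x[1], -x[0], 0]])
    A[:, 6] = x                              # scale-rate term
    return A

A = np.vstack([design(x) for x in X])
v = A @ true + 5e-4 * rng.standard_normal(3 * n)   # velocities + noise
est, *_ = np.linalg.lstsq(A, v, rcond=None)
print("translation rates (mm/yr):", np.round(est[:3] * 1e3, 2))
print("scale rate (ppb/yr):      ", round(est[6] * 1e9, 2))
```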
Multiscale decoding for reliable brain-machine interface performance over time.
Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M
2017-07-01
Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
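The multiscale idea, one latent kinematic state updated every millisecond by spike-derived observations and every tens of milliseconds by LFP, can be illustrated with a toy scalar Kalman filter carrying two observation streams at different rates. This is a deliberately simplified sketch (Gaussian approximations throughout, invented noise levels), not the authors' decoder.

```python
import numpy as np

rng = np.random.default_rng(9)
T, lfp_every = 2000, 10                      # 1 ms steps; LFP every 10 ms
a, q = 0.995, 1e-3                           # latent joint-angle dynamics
h_spk, r_spk = 1.0, 0.5                      # noisy fast (spike-derived) channel
h_lfp, r_lfp = 1.0, 0.05                     # cleaner slow (LFP) channel

x = np.zeros(T)
for t in range(1, T):                        # simulate latent kinematics
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()

m, P, err = 0.0, 1.0, []
for t in range(T):
    m, P = a * m, a * a * P + q              # predict
    y = h_spk * x[t] + np.sqrt(r_spk) * rng.standard_normal()
    K = P * h_spk / (h_spk * h_spk * P + r_spk)   # fast update, every step
    m, P = m + K * (y - h_spk * m), (1 - K * h_spk) * P
    if t % lfp_every == 0:                   # slow LFP update
        z = h_lfp * x[t] + np.sqrt(r_lfp) * rng.standard_normal()
        K = P * h_lfp / (h_lfp * h_lfp * P + r_lfp)
        m, P = m + K * (z - h_lfp * m), (1 - K * h_lfp) * P
    err.append((m - x[t]) ** 2)
print(f"RMSE with both streams: {np.sqrt(np.mean(err)):.4f}")
```

Dropping the LFP branch and rerunning shows the accuracy loss the slow stream compensates for, which is the robustness argument made above.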
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper deals with the multi-sensor management problem for multi-target tracking. Collaboration between sensors observing the same target means that they are able to fuse their data during the information process. One must then take this possibility into account when computing the optimal sensor-target association at each time step. To solve this problem for a real large-scale system, one must consider both the information and the control aspects of the problem. To unify them, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is the filtering algorithm of Grime, which relaxes the usual fully-connected hypothesis. By fully connected, one means that information in a fully-connected system is distributed everywhere at the same moment, which is unrealistic for a real large-scale system. We model the distributed assignment decision with a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that decentralized assignment control remains possible even though the system is not fully connected.
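The greedy assignment step can be sketched as follows: repeatedly pick the sensor-target pair with the largest expected information gain until every sensor is committed. The gain matrix here is random; in the paper it would be derived from each sensor's (possibly out-of-date) local information sets.

```python
import numpy as np

rng = np.random.default_rng(10)
n_sensors, n_targets = 5, 3
gain = rng.random((n_sensors, n_targets))    # stand-in information gains

assignment = {}
free_sensors = set(range(n_sensors))
while free_sensors:
    # Best remaining (sensor, target) pair by expected gain
    s, t = max(((s, t) for s in free_sensors for t in range(n_targets)),
               key=lambda st: gain[st])
    assignment[s] = t
    free_sensors.remove(s)

for s in sorted(assignment):
    print(f"sensor {s} -> target {assignment[s]}"
          f" (gain {gain[s, assignment[s]]:.2f})")
```

Note that several sensors may be assigned to the same target, which is the desired behavior when fused estimates of a priority target are worth more than coverage.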
The challenges associated with developing science-based landscape scale management plans
Szaro, Robert C.; Boyce, D.A.; Puchlerz, T.
2005-01-01
Planning activities over large landscapes poses a complex set of challenges when trying to balance the implementation of a conservation strategy with a variety of consumptive and nonconsumptive uses. We examine a case in southeast Alaska to illustrate the breadth of these challenges and an approach to developing a science-based resource plan. Not only was the planning area, the Tongass National Forest, USA, exceptionally large (approximately 17 million acres, or 6.9 million ha), but it is also primarily an island archipelago environment. The water system surrounding and running through much of the forest facilitates the movement of people, animals, and plants, but at the same time functions as a barrier to others. This largest temperate rainforest in the world is an exceptional example of the complexity of managing at such a scale, and it also illustrates the role of science in the planning process. As we enter the 21st century, not only has the list of questions needing scientific investigation changed dramatically, but the character of the questions has changed as well. Questions are contentious, cover broad scales in space and time, and are highly complex and interdependent. The provision of unbiased and objective information to all stakeholders is an important step in informed decision-making.
Single-shot optical recording with sub-picosecond resolution spans record nanosecond lengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, Ryan; Heebner, John
2018-01-18
With the advent of electronics, oscilloscopes and photodiodes are now routinely capable of measuring events well below nanosecond resolution. However, these electronic instruments do not currently measure events below 10 ps resolution. From Walden's observation that there is an engineering tradeoff between electronic bit depth and temporal resolution in analog-to-digital converters, this technique is projected to have extremely poor fidelity if it is extended to record single events with picosecond resolution. While this constraint may be circumvented with extensive signal averaging or other multiple-measurement approaches, rare events and nonrepetitive events cannot be observed with this technique. Techniques capable of measuring information in a single shot are often required. There is a general lack of available technologies that are easily scalable to long records with sub-picosecond resolution and are simultaneously versatile in wavelength of operation. Since it is difficult to scale electronic methods to shorter resolutions, we instead aim to scale optical methods to longer records. Demonstrated optical recording methods that have achieved 1 ps resolution and long recording lengths rely on either time scaling to slow down the temporal information or, like Wien, perform time-to-space mapping so that fast events may be captured with a conventional camera.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curry, Judith
This project addressed the challenge of providing weather and climate information to support the operation, management, and planning of wind-energy systems. The need for forecast information is extending to longer projection windows with increasing penetration of wind power into the grid and with diminishing reserve margins to meet peak loads during significant weather events. Maintenance planning and natural gas trading are increasingly influenced by anticipated wind generation on timescales of weeks to months. Future scenarios on decadal time scales are needed to support assessment of wind farm siting, government planning, long-term wind purchase agreements, and the regulatory environment. The challenge of making wind forecasts on these longer time scales is associated with a wide range of uncertainties in general circulation and regional climate models that make them unsuitable for direct use in the design and planning of wind-energy systems. To address this challenge, CFAN has developed a hybrid statistical/dynamical forecasting scheme for delivering probabilistic forecasts on time scales from one day to seven months using what is arguably the best forecasting system in the world (that of the European Centre for Medium-Range Weather Forecasts, ECMWF). The project also provided a framework to assess future wind power by developing scenarios of interannual to decadal climate variability and change. The Phase II research has successfully developed an operational wind power forecasting system for the U.S., which is being extended to Europe and possibly Asia.
NASA Technical Reports Server (NTRS)
Skakun, Sergii; Vermote, Eric; Roger, Jean-Claude; Franch, Belen
2017-01-01
Timely and accurate information on crop yield and production is critical to many applications within agriculture monitoring. Thanks to its coverage and temporal resolution, coarse spatial resolution satellite imagery has always been a source of valuable information for yield forecasting and assessment at national and regional scales. With availability of free images acquired by Landsat-8 and Sentinel-2 remote sensing satellites, it becomes possible to provide temporal resolution of an image every 3-5 days, and therefore, to develop next generation agriculture products at higher spatial resolution (10-30 m). This paper explores the combined use of Landsat-8 and Sentinel-2A for winter crop mapping and winter wheat yield assessment at regional scale. For the former, we adapt a previously developed approach for the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument at 250 m resolution that allows automatic mapping of winter crops taking into account a priori knowledge on crop calendar. For the latter, we use a generalized winter wheat yield forecasting model that is based on estimation of the peak Normalized Difference Vegetation Index (NDVI) from MODIS image time-series, and further downscaled to be applicable at 30 m resolution. We show that integration of Landsat-8 and Sentinel-2A improves both winter crop mapping and winter wheat yield assessment. In particular, the error of winter wheat yield estimates can be reduced up to 1.8 times compared to using a single satellite.
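The yield step reduces to finding the seasonal peak NDVI per unit and regressing yield on it. A minimal sketch with invented numbers; the actual model in the paper is calibrated on MODIS climatology and downscaled to 30 m, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
n_regions, n_dates = 8, 20
t = np.linspace(0, 1, n_dates)               # one growing season
# Stand-in NDVI time series per region, with region-specific amplitude
amp = 0.3 + 0.4 * rng.random((n_regions, 1))
ndvi = 0.3 + amp * np.exp(-((t - 0.5) ** 2) / 0.02)[None, :] \
       + 0.02 * rng.standard_normal((n_regions, n_dates))
peak = ndvi.max(axis=1)                      # peak NDVI per region

# Calibrate a linear yield model on (invented) historical yields, t/ha
yields = 2.0 + 6.0 * (peak - 0.5) + 0.2 * rng.standard_normal(n_regions)
slope, intercept = np.polyfit(peak, yields, 1)
pred = slope * peak + intercept
rmse = np.sqrt(np.mean((pred - yields) ** 2))
print(f"yield = {slope:.2f} * peakNDVI + {intercept:.2f}, RMSE = {rmse:.2f} t/ha")
```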
Liu, Jing-Ying; Liu, Yan-Hui; Yang, Ji-Peng
2014-01-01
The aim of this study was to explore the relationships among study engagement, learning adaptability, and time management disposition in a sample of Chinese baccalaureate nursing students. A convenience sample of 467 baccalaureate nursing students was surveyed in two universities in Tianjin, China. Students completed a questionnaire that included demographic information, the Chinese Utrecht Work Engagement Scale-Student Questionnaire, the Learning Adaptability Scale, and the Adolescence Time Management Disposition Scale. One-way analysis of variance tests were used to assess differences across student characteristics. Pearson correlation was performed to test the correlations among study engagement, learning adaptability, and time management disposition. Hierarchical linear regression analyses were performed to explore the mediating role of time management disposition. The results revealed that study engagement (F = 7.20, P < .01) and learning adaptability (F = 4.41, P < .01) differed across grade groups. Learning adaptability (r = 0.382, P < .01) and time management disposition (r = 0.741, P < .01) were positively related to study engagement. Time management disposition had a partially mediating effect on the relationship between study engagement and learning adaptability. The findings imply that educators should not only promote interventions to increase the engagement of baccalaureate nursing students but also focus on developing and investing in adaptability and time management.
Making sense of past climate changes
NASA Astrophysics Data System (ADS)
Masson-Delmotte, Valérie; Schulz, Michael
2014-05-01
This presentation will summarize the paleoclimate perspective in IPCC AR5, which combines information from natural archives, paleoclimate simulations using both the CMIP5 framework and other simulations, model-data comparisons for model evaluation at hemispheric to regional scales, detection-attribution, and process studies across timescales, such as polar amplification, the carbon cycle, and sea-level change. It will highlight new findings and coordinated efforts within the scientific community that allowed new information to emerge in time for AR5. It will also stress the aspects that could not be covered or assessed, as well as suggestions for the further inclusion of paleoclimate information to inform projections.
Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Blonigan, Patrick J.; Wang, Qiqi
2018-02-01
Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
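The failure of conventional sensitivity analysis for chaotic long-time averages is easy to reproduce. The sketch below estimates d⟨z⟩/dρ for the Lorenz system by finite differences: the estimate does not settle down as the averaging window grows, which is the pathology LSS and MSS are designed to fix. This is a toy illustration of the problem, not the MSS algorithm itself; the reported true sensitivity for these parameters is close to 1 in the literature.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, rho):
    x, y, z = u
    return [10.0 * (y - x), x * (rho - z) - y, x * y - 8.0 / 3.0 * z]

def mean_z(rho, t_end, seed=0):
    """Long-time average of z after discarding the transient."""
    u0 = np.random.default_rng(seed).standard_normal(3) + [0, 0, 25]
    t_eval = np.linspace(50.0, t_end, 2000)   # discard transient before t=50
    sol = solve_ivp(lorenz, (0.0, t_end), u0, args=(rho,),
                    t_eval=t_eval, rtol=1e-6, atol=1e-6)
    return sol.y[2].mean()

rho, d_rho = 28.0, 1e-3
for t_end in (100.0, 400.0, 800.0):
    grad = (mean_z(rho + d_rho, t_end) - mean_z(rho, t_end)) / d_rho
    print(f"T={t_end:5.0f}: finite-difference d<z>/drho ~ {grad:+.3f}")
# The naive estimate swings wildly because nearby chaotic trajectories
# diverge exponentially, drowning the O(d_rho) signal in sampling noise.
```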
Whitehead, Lisa
2009-01-01
Fatigue is a common symptom associated with a wide range of chronic diseases. A large number of instruments have been developed to measure fatigue. An assessment regarding the reliability, validity, and utility of fatigue measures is time-consuming for the clinician and researcher, and few reviews exist on which to draw such information. The aim of this article is to present a critical review of fatigue measures, the populations in which the scales have been used, and the extent to which the psychometric properties of each instrument have been evaluated to provide clinicians and researchers with information on which to base decisions. Seven databases were searched for all articles that measured fatigue and offered an insight into the psychometric properties of the scales used over the period 1980-2007. Criteria for judging the "ideal" measure were developed to encompass scale usability, clinical/research utility, and the robustness of psychometric properties. Twenty-two fatigue measures met the inclusion criteria and were evaluated. A further 17 measures met some of the criteria, but have not been tested beyond initial development, and are reviewed briefly at the end of the article. The review did not identify any instrument that met all the criteria of an ideal instrument. However, a small number of short instruments demonstrated good psychometric properties (Fatigue Severity Scale [FSS], Fatigue Impact Scale [FIS], and Brief Fatigue Inventory [BFI]), and three comprehensive instruments demonstrated the same (Fatigue Symptom Inventory [FSI], Multidimensional Assessment of Fatigue [MAF], and Multidimensional Fatigue Symptom Inventory [MFSI]). Only four measures (BFI, FSS, FSI, and MAF) demonstrated the ability to detect change over time. The clinician and researcher also should consider the populations in which the scale has been used previously to assess its validity with their own patient group, and assess the content of a scale to ensure that the key qualitative aspects of fatigue of the population of interest are covered.
Silberg, Tamar; Tal-Jacobi, Dana; Levav, Miriam; Brezner, Amichai; Rassovsky, Yuri
2015-01-01
Gathering information from parents and teachers following paediatric traumatic brain injury (TBI) has substantial clinical value for diagnostic decisions. Yet, a multi-informant approach has rarely been addressed when evaluating children at the chronic stage post-injury. In the current study, the goals were to examine (1) differences between parents' and teachers' reports on a child's emotional and behavioural problems and (2) the effect of time elapsed since injury on each rater's report. A sample of 42 parents and 42 teachers of children following severe TBI completed two standard rating scales. Receiver Operating Characteristic (ROC) curves were used to determine whether time elapsed since injury reliably distinguished children falling above and below clinical levels. Emotional-behavioural scores of children following severe TBI fell within normal range, according to both teachers and parents. Significant differences were found between parents' reports relatively close to the time of injury and 2 years post-injury. However, no such differences were observed in teachers' ratings. Parents and teachers of children following severe TBI differ in their reports on a child's emotional and behavioural problems. The present study not only underscores the importance of multiple informants, but also highlights, for the first time, the possibility that informants' perceptions may vary across time.
How the Website Usability Elements Impact Performance
NASA Astrophysics Data System (ADS)
Aljukhadar, Muhammad; Senecal, Sylvain
This research builds on the results of a large scale study in which participants performed an informational task on one of 59 websites spanning various industries to examine how the website usability elements (graphical attractiveness, information, interactivity, trust, and ease of use) drive users’ attitudes and intentions toward the website and how these effects vary according to site experience and end product tangibility. Results show that while the effects of site interactivity and graphical attractiveness were more influential for services sites, the effects of site information and trust were stronger for tangibles sites. Alternatively, compared to returning site visitors, first-time visitors perceived the website as less easy to use, needed more time to accomplish the online task, and based positive attitudes and intentions more strongly on the site information and interactivity. The results of a second study performed in a proximate culture largely corroborate these findings.
Getting "Ready" for an Emergency: Emergency Preparedness Series--Part 3
ERIC Educational Resources Information Center
Apel, Laura
2009-01-01
This article presents part 3 of a series of articles giving timely information about potential emergency situations and offering suggestions and new technology that exceptional families can use to prepare for emergencies--everything from localized to large scale emergencies, everything from natural disasters to terrorist attacks. In 2003 the…
The Need for a Harmonized Repository for Next-Generation Human Activity Data
Multi-tiered human time-activity-location data can inform many efforts to describe human exposures to air pollutants and other chemicals on a range of temporal and spatial scales. In the last decade, EPA's Consolidated Human Activity Database (CHAD) has served as a harmonized rep...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
... agenda most likely to achieve high rates of technological innovation. The goals of AMTech include...; Collapsing the time scale of technological innovation; Fostering a robust U.S. innovation system through... key players across the entire innovation lifecycle, AMTech consortia will work toward eliminating...
How-to-Do-It. How Long Is a Long Time?
ERIC Educational Resources Information Center
McComas, William F.
1990-01-01
Presented is an activity designed to help students understand and appreciate the scale and order of the geologic timetable and begin to infer a relationship between biologic, chemical, and geological events. Procedures, background information, student worksheets with answers, and a list of materials are included. (CW)
A Participatory Simulation for Informal Education in Restoration Ecology
ERIC Educational Resources Information Center
Tomlinson, Bill; Baumer, Eric; Yau, Man Lok; Carpenter, F. Lynn; Black, Rebecca
2008-01-01
Constructivist pedagogical approaches have become common in many science curricula. However, while sciences such as physics and chemistry lend themselves to compelling opportunities for interaction (explosions, reactions, objects in motion), certain systems sciences are more challenging for learners to engage with on a short time scale. Applying…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meschede, Dieter; Ueberholz, Bernd; Gomer, Victor
1999-06-11
We are experimenting with individual neutral cesium atoms stored in a magneto-optical trap. The atoms are detected by their resonance fluorescence, and fluorescence fluctuations contain signatures of the atomic internal and external degrees of freedom. This noninvasive probe provides a rich source of information about atomic dynamics at all relevant time scales.
Evapotranspiration estimates derived using multi-platform remote sensing in a semiarid region
USDA-ARS?s Scientific Manuscript database
Evapotranspiration (ET) is a key component of the water balance, especially in arid and semiarid regions. The current study takes advantage of spatially-distributed, near real-time information provided by satellite remote sensing to develop a regional scale ET product derived from remotely-sensed ob...
Reaction Time and Self-Report Psychopathological Assessment: Convergent and Discriminant Validity.
ERIC Educational Resources Information Center
Holden, Ronald R.; Fekken, G. Cynthia
The processing of incoming psychological information along the network, or schemata, of self-knowledge was studied to determine the convergent and discriminant validity of the patterns of schemata-specific response latencies. Fifty-three female and 52 male university students completed the Basic Personality Inventory (BPI). BPI scales assess…
Shang, Jianyuan; Geva, Eitan
2007-04-26
The quenching rate of a fluorophore attached to a macromolecule can be rather sensitive to its conformational state. The decay of the corresponding fluorescence lifetime autocorrelation function can therefore provide unique information on the time scales of conformational dynamics. The conventional way of measuring the fluorescence lifetime autocorrelation function involves evaluating it from the distribution of delay times between photoexcitation and photon emission. However, the time resolution of this procedure is limited by the time window required for collecting enough photons in order to establish this distribution with sufficient signal-to-noise ratio. Yang and Xie have recently proposed an approach for improving the time resolution, which is based on the argument that the autocorrelation function of the delay time between photoexcitation and photon emission is proportional to the autocorrelation function of the square of the fluorescence lifetime [Yang, H.; Xie, X. S. J. Chem. Phys. 2002, 117, 10965]. In this paper, we show that the delay-time autocorrelation function is equal to the autocorrelation function of the square of the fluorescence lifetime divided by the autocorrelation function of the fluorescence lifetime. We examine the conditions under which the delay-time autocorrelation function is approximately proportional to the autocorrelation function of the square of the fluorescence lifetime. We also investigate the correlation between the decay of the delay-time autocorrelation function and the time scales of conformational dynamics. The results are demonstrated via applications to a two-state model and an off-lattice model of a polypeptide.
Neural code alterations and abnormal time patterns in Parkinson’s disease
NASA Astrophysics Data System (ADS)
Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo
2015-04-01
Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for understanding the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At low scales, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at middle scales, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time-pattern length can be expected as the dopamine depletion progresses. Significance. We conclude that a rate code and a time-pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
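A temporal structure function of the kind used here can be sketched in a few lines. The code below applies a second-order structure function to a synthetic interspike-interval series; the real analysis uses GPi recordings, and the paper's exact definition may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
isi = rng.gamma(shape=2.0, scale=5.0, size=10_000)  # synthetic ISIs, ms

def structure_function(x, scales, order=2):
    """S(k) = <|x[i+k] - x[i]|**order> evaluated at each scale k."""
    return np.array([np.mean(np.abs(x[k:] - x[:-k]) ** order) for k in scales])

scales = np.unique(np.logspace(0, 3, 30).astype(int))
S = structure_function(isi, scales)

# A change of slope in log S vs log k would mark the crossover between the
# low-scale (pattern-capable) and middle-scale (rate-coding) regimes.
slope = np.polyfit(np.log(scales), np.log(S), 1)[0]
print(f"overall scaling slope ~ {slope:.2f}")
```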
Turbulent transport measurements with a laser Doppler velocimeter
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.
1972-01-01
The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectrum is obtained in terms of the space time correlation function of the fluid. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time average spectrum contains information only for times shorter than the Lagrangian integral time scale of the turbulence. To examine the long time behavior, one must use either extremely small scattering angles, much longer wavelength radiation or a different mode of signal analysis, e.g., FM detection.
Dickson, Kim E; Kinney, Mary V; Moxon, Sarah G; Ashton, Joanne; Zaka, Nabila; Simen-Kapeu, Aline; Sharma, Gaurav; Kerber, Kate J; Daelmans, Bernadette; Gülmezoglu, A; Mathai, Matthews; Nyange, Christabel; Baye, Martina; Lawn, Joy E
2015-01-01
The Every Newborn Action Plan (ENAP) and Ending Preventable Maternal Mortality targets cannot be achieved without high quality, equitable coverage of interventions at and around the time of birth. This paper provides an overview of the methodology and findings of a nine paper series of in-depth analyses which focus on the specific challenges to scaling up high-impact interventions and improving quality of care for mothers and newborns around the time of birth, including babies born small and sick. The bottleneck analysis tool was applied in 12 countries in Africa and Asia as part of the ENAP process. Country workshops engaged technical experts to complete a tool designed to synthesise "bottlenecks" hindering the scale up of maternal-newborn intervention packages across seven health system building blocks. We used quantitative and qualitative methods and literature review to analyse the data and present priority actions relevant to different health system building blocks for skilled birth attendance, emergency obstetric care, antenatal corticosteroids (ACS), basic newborn care, kangaroo mother care (KMC), treatment of neonatal infections and inpatient care of small and sick newborns. The 12 countries included in our analysis account for the majority of global maternal (48%) and newborn (58%) deaths and stillbirths (57%). Our findings confirm previously published results that the interventions with the most perceived bottlenecks are facility-based where rapid emergency care is needed, notably inpatient care of small and sick newborns, ACS, treatment of neonatal infections and KMC. Health systems building blocks with the highest rated bottlenecks varied for different interventions. Attention needs to be paid to the context specific bottlenecks for each intervention to scale up quality care. Crosscutting findings on health information gaps inform two final papers on a roadmap for improvement of coverage data for newborns and indicate the need for leadership for effective audit systems. Achieving the Sustainable Development Goal targets for ending preventable mortality and provision of universal health coverage will require large-scale approaches to improving quality of care. These analyses inform the development of systematic, targeted approaches to strengthening of health systems, with a focus on overcoming specific bottlenecks for the highest impact interventions.
Bioclimatic predictors for supporting ecological applications in the conterminous United States
O'Donnel, Michael S.; Ignizio, Drew A.
2012-01-01
The U.S. Geological Survey (USGS) has developed climate indices, referred to as bioclimatic predictors, which highlight the climate conditions best related to species physiology. A set of 20 bioclimatic predictors were developed as Geographic Information Systems (GIS) continuous raster surfaces for each year between 1895 and 2009. The Parameter-elevation Regression on Independent Slopes Model (PRISM) and down-scaled PRISM data, which included both averaged multi-year and averaged monthly climate summaries, were used to develop these multi-scale bioclimatic predictors. Bioclimatic predictors capture information about annual conditions (annual mean temperature, annual precipitation, annual range in temperature and precipitation) as well as seasonal mean climate conditions and intra-year seasonality (temperature of the coldest and warmest months, precipitation of the wettest and driest quarters). Examining climate over time is useful when quantifying the effects of climate change on species' distributions for past, current, and forecasted scenarios. These data, which have not been readily available to scientists, can provide biologists and ecologists with relevant, multi-scaled climate data to augment research on the responses of species to changing climate conditions. The relationships established between species demographics and distributions on the one hand and bioclimatic predictors on the other can inform land managers of climatic effects on species during decision-making.
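For illustration, a handful of widely used bioclimatic predictors can be computed from 12 monthly values per grid cell as follows; the definitions below follow the common ANUCLIM/WorldClim conventions and may differ in detail from the USGS predictor set.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic monthly climate for one grid cell (stand-ins for PRISM values).
tmean = 10 + 12 * np.sin(2 * np.pi * (np.arange(12) - 3) / 12)  # deg C
precip = rng.uniform(20, 120, size=12)                          # mm/month

bio1 = tmean.mean()              # BIO1: annual mean temperature
bio4 = tmean.std() * 100         # BIO4: temperature seasonality (std * 100)
bio5 = tmean.max()               # BIO5: max temperature of warmest month
bio6 = tmean.min()               # BIO6: min temperature of coldest month
bio7 = bio5 - bio6               # BIO7: annual temperature range
bio12 = precip.sum()             # BIO12: annual precipitation

# Quarter indices use 3-month running windows, wrapping around the year.
q = np.array([precip[np.arange(i, i + 3) % 12].sum() for i in range(12)])
bio16, bio17 = q.max(), q.min()  # wettest / driest quarter precipitation

print(f"BIO1={bio1:.1f} BIO4={bio4:.0f} BIO7={bio7:.1f} "
      f"BIO12={bio12:.0f} BIO16={bio16:.0f} BIO17={bio17:.0f}")
```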
Extended-Range Forecasts at Climate Prediction Center: Current Status and Future Plans
NASA Astrophysics Data System (ADS)
Kumar, A.
2016-12-01
Motivated by a user need for forecast information on extended-range time scales (i.e., weeks 2-4), in recent years the Climate Prediction Center (CPC) has made considerable efforts towards developing the required forecasts and testing their feasibility. Forecasts targeting this particular time scale face a unique challenge: the skill contributed by atmospheric initial conditions is small (because of the rapid decay of the memory associated with those initial conditions), while the short time averages for which forecasts are made do not benefit much from the skill associated with anomalous boundary conditions either. Despite these challenges, CPC has embarked on providing an experimental outlook for the weeks 3-4 average. The talk will summarize the status of CPC's current suite of extended-range forecast products and discuss some future plans.
Impacts of rural land-use on overland flow and sediment transport
NASA Astrophysics Data System (ADS)
Fraser, S. L.; Jackson, B. M.; Norton, K. P.
2013-12-01
The loss of fertile topsoil over time, due to erosive processes, could have a major impact on New Zealand's economy as well as being devastating to individual land owners. Improved management of land use is needed to protect soil from erosion by overland flow and aeolian processes. Effects of soil erosion and sedimentation result in an annual nationwide cost of NZ$123 million. Many previous New Zealand studies have focused on large-scale soil movement from landsliding and gully erosion, including identifying risk areas. However, long-term small-scale erosion and degradation has been largely overlooked in the literature. Although small-scale soil erosion is less apparent than mass movement, cumulative small-scale soil loss over many years may have a significant impact on future land productivity. One approach to assessing the role of soil degradation is through the application of landscape models. Because data collection is time-consuming and limited in the scales over which it can be carried out, many models are unique to a particular land type, land use, or locality. Collection of additional datasets can broaden the use of such models by informing model representation and enhancing parameterisation. The Land Use Capability Index (LUCI), developed by Jackson et al. (2013), is an example of a model that will benefit from additional datasets. LUCI is a multi-criteria GIS tool, designed to inform land management decisions by identifying areas of potential change based on land characteristics and land use options. LUCI topographically routes overland flow and sediment using existing land characteristic maps, additionally incorporating sub-field-scale data. The model can then utilise these data to enhance prediction at landscape scale. This study focuses on the influence of land use on small-scale sediment transport, and on enhancing process representation and parameterisation to improve the predictive ability of models such as LUCI. Data are currently being collected in a small catchment at the foothills of the Tararua ranges, lower North Island of New Zealand. Gerlach traps are deployed in a step-like array on a number of hillslopes to provide a comprehensive dataset of overland flow and sediment volume for rainfall events of different magnitude. ArcGIS is used to calculate a contributing area for each trap. The study provides quantitative data linking overland flow to event magnitude for the rural land uses of pasture versus regenerating native forest at multiple slope angles. These data, along with measured soil depth/slope relationships and stream monitoring data, are used to inform process representation and parameterisation of LUCI at hillslope scale. LUCI is then used to explore implications at landscape scale. The data and modelling are intended to provide information to help in long-term land management decisions. Jackson, B., Pagella, T., Sinclair, F., Orellana, B., Henshaw, A., Reynolds, B., McIntyre, N., Wheater, H., and Eycott, A. 2013. Polyscape: A GIS mapping framework providing efficient and spatially explicit landscape-scale valuation of multiple ecosystem services. Landscape and Urban Planning, 112: 74-88.
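As a rough illustration of the contributing-area computation mentioned above, the sketch below implements simple D8 routing on a toy DEM; LUCI's routing and the ArcGIS workflow used in the study are considerably more sophisticated.

```python
import numpy as np

def d8_contributing_area(dem, cell_area=1.0):
    """Accumulate area downslope; each cell drains to its steepest neighbour."""
    rows, cols = dem.shape
    area = np.full(dem.shape, cell_area)
    for idx in np.argsort(dem, axis=None)[::-1]:     # process high to low
        r, c = divmod(idx, cols)
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, target = drop, (rr, cc)
        if target is not None:
            area[target] += area[r, c]               # pass area downslope
    return area

dem = np.array([[5.0, 4.0, 3.0],
                [4.0, 3.0, 2.0],
                [3.0, 2.0, 1.0]])
print(d8_contributing_area(dem))   # bottom-right cell collects the hillslope
```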
ERIC Educational Resources Information Center
Fichten, Catherine S.; Asuncion, Jennison V.; Nguyen, Mai N.; Wolforth, Joan; Budd, Jillian; Barile, Maria; Gaulin, Chris; Martiniello, Natalie; Tibbs, Anthony; Ferraro, Vittoria; Amsel, Rhonda
2009-01-01
Data on how well information and communication technology (ICT) needs of 1354 Canadian college and university students with disabilities are met on and off campus were collected using the newly developed Positives Scale (Postsecondary Information Technology Initiative Scale). The measure contains 26 items which use a 6-point Likert scale (1 =…
Real-time modeling of primitive environments through wavelet sensors and Hebbian learning
NASA Astrophysics Data System (ADS)
Vaccaro, James M.; Yaworsky, Paul S.
1999-06-01
Modeling the world through sensory input necessarily provides a unique perspective for the observer. Given a limited perspective, objects and events cannot always be encoded precisely but must involve crude, quick approximations to deal with sensory information in a real-time manner. As an example, when avoiding an oncoming car, a pedestrian needs to identify the fact that a car is approaching before ascertaining the model or color of the vehicle. In our methodology, we use wavelet-based sensors with self-organized learning to encode basic sensory information in real time. The wavelet-based sensors provide the necessary transformations, while a rank-based Hebbian learning scheme encodes a self-organized environment through translation-, scale-, and orientation-invariant sensors. Such a self-organized environment is made possible by combining wavelet sets which are orthonormal, log-scale with linear orientation, and have automatically generated membership functions. In earlier work we used Gabor wavelet filters, rank-based Hebbian learning, and an exponential modulation function to encode textural information from images. Many different types of modulation are possible, but based on biological findings the exponential modulation function provided a good approximation of first-spike coding of 'integrate and fire' neurons. These types of Hebbian encoding schemes (e.g., exponential modulation) are useful for quick response and learning, provide several advantages over contemporary neural network learning approaches, and have been found to quantize data nonlinearly. By combining wavelets with Hebbian learning we can provide a real-time front end for modeling an intelligent process, such as the autonomous control of agents in a simulated environment.
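A hedged sketch of rank-order Hebbian learning with exponential modulation, in the spirit described: filter responses are ranked, and the weight update of each input is damped by exp(-rank/lambda), mimicking first-spike coding. The exact form of the update rule and all parameters are assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(3)
n_filters, lr, lam = 32, 0.05, 4.0   # learning rate and rank decay (assumed)

w = rng.random(n_filters)            # weights of one output unit
responses = rng.random(n_filters)    # stand-in wavelet filter responses

# Rank the responses: rank 0 = strongest response ("first spike").
ranks = np.empty(n_filters, dtype=int)
ranks[np.argsort(responses)[::-1]] = np.arange(n_filters)

modulation = np.exp(-ranks / lam)    # exponential rank modulation
w += lr * modulation * responses     # modulated Hebbian update
w /= np.linalg.norm(w)               # normalization keeps weights bounded

print(w.round(3)[:8])
```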
Atmospheric CO2 variations on millennial-scale during MIS 6
NASA Astrophysics Data System (ADS)
Shin, Jinhwa; Grilli, Roberto; Chappellaz, Jérôme; Teste, Grégory; Nehrbass-Ahles, Christoph; Schmidely, Loïc; Schmitt, Jochen; Stocker, Thomas; Fischer, Hubertus
2017-04-01
Understanding natural carbon cycle/climate feedbacks on various time scales is highly important for predicting future climate changes. Paleoclimate records of Antarctic temperatures, relative sea level, and foraminiferal isotope and pollen records in sediment cores from the Portuguese margin have shown climate variations on millennial time scales over Marine Isotope Stage 6 (MIS 6; from approximately 135 to 190 kyr BP). These proxy data suggest that iceberg calving in the North Atlantic results in cooling in the Northern hemisphere and warming in Antarctica through changes in the Atlantic Meridional Overturning Circulation, a pattern explained by a bipolar see-saw in the ocean (Margari et al., 2010). Atmospheric CO2 reconstructions from Antarctic ice cores can provide key information on how atmospheric CO2 concentrations are linked to millennial-scale climate changes. However, existing CO2 records cannot be used to address this relationship because they lack suitable temporal resolution. In this work, we will present a new CO2 record with improved time resolution, obtained from the Dome C ice core (75˚ 06'S, 123˚ 24'E) spanning the MIS 6 period, using dry extraction methods. We will examine millennial-scale features in atmospheric CO2 and their possible links with other proxies covering MIS 6. Margari, V., Skinner, L. C., Tzedakis, P. C., Ganopolski, A., Vautravers, M., and Shackleton, N. J.: The nature of millennial scale climate variability during the past two glacial periods, Nat. Geosci., 3, 127-131, 2010.
NASA Astrophysics Data System (ADS)
Juniati, E.; Arrofiqoh, E. N.
2017-09-01
Land cover information can be extracted from remote sensing data by digital classification. In practice, some analysts are more comfortable using visual interpretation to retrieve land cover information; however, visual interpretation is highly influenced by the subjectivity and knowledge of the interpreter, and it is time-consuming. Digital classification can be done in several ways, depending on the chosen mapping approach and the assumptions made about the data distribution. This study compared several classification methods applied to different data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 imagery, and orthophotos. In practice, these data are used to produce land cover maps at 1:50,000 scale for Landsat, 1:25,000 scale for SPOT, and 1:5,000 scale for orthophotos, but using visual interpretation to retrieve the information. Maximum likelihood classification (MLC), a pixel-based parametric approach, was applied to these data, as was an Artificial Neural Network classifier, a pixel-based non-parametric approach. Moreover, the study applied object-based classifiers to the data. The classification system implemented is the land cover classification of the Indonesian topographic map. The classification was applied to each data source, with the aim of recognizing the patterns and assessing the consistency of the land cover maps produced from each dataset. Furthermore, the study analyses the benefits and limitations of each method.
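The pixel-based, parametric branch of this comparison, Gaussian maximum likelihood classification, can be sketched compactly; the class statistics and band values below are synthetic stand-ins.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Minimal Gaussian maximum likelihood classifier (MLC): fit a mean vector and
# covariance per class from training pixels, then label each pixel by the
# class with the highest likelihood.

def fit_mlc(samples_by_class):
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False))
            for c, x in samples_by_class.items()}

def classify(pixels, params):
    scores = np.column_stack([
        multivariate_normal.logpdf(pixels, mean=m, cov=S)
        for m, S in params.values()])
    labels = np.array(list(params.keys()))
    return labels[scores.argmax(axis=1)]

rng = np.random.default_rng(4)
train = {                                # synthetic 4-band training spectra
    "water":  rng.normal([30, 25, 20, 10], 3, size=(50, 4)),
    "forest": rng.normal([40, 60, 45, 90], 5, size=(50, 4)),
}
params = fit_mlc(train)
pixels = rng.normal([38, 58, 44, 85], 6, size=(5, 4))
print(classify(pixels, params))
```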
Multi-Scale Stochastic Resonance Spectrogram for fault diagnosis of rolling element bearings
NASA Astrophysics Data System (ADS)
He, Qingbo; Wu, Enhao; Pan, Yuanyuan
2018-04-01
It is not easy to identify an incipient defect in a rolling element bearing by analyzing the vibration data, because of the disturbance of background noise. The weak and unrecognizable transient fault signal of a mechanical system can be enhanced by the stochastic resonance (SR) technique, which utilizes the noise in the system. However, it is challenging for the SR technique to identify sensitive fault information in non-stationary signals. This paper proposes a new method called the multi-scale SR spectrogram (MSSRS) for bearing defect diagnosis. The new method considers the non-stationary property of defective bearing vibration signals and treats every scale of the time-frequency distribution (TFD) as a modulation system. The SR technique is then applied to each modulation system at each frequency in the TFD. The SR results are sensitive to the defect information because the energy of the transient vibration is distributed in a limited frequency band of the TFD. Collecting the spectra of the SR outputs at all frequency scales then generates the MSSRS. The proposed MSSRS is able to deal well with non-stationary transient signals and can highlight the defect-induced frequency component corresponding to the impulse information. Experimental results with practical defective bearing vibration data have shown that the proposed method outperforms former SR methods and exhibits a good application prospect in rolling element bearing fault diagnosis.
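The core idea, applying a bistable stochastic-resonance system scale by scale across a time-frequency distribution, can be sketched as follows. Parameter values, normalization, and the synthetic signal are assumptions; the published MSSRS construction includes further steps.

```python
import numpy as np
from scipy.signal import stft

def bistable_sr(s, a=1.0, b=1.0, dt=0.05):
    """Euler integration of the overdamped bistable system dx = (ax - bx^3 + s)dt."""
    x, out = 0.0, np.empty_like(s)
    for i, u in enumerate(s):
        x += dt * (a * x - b * x ** 3 + u)
        out[i] = x
    return out

rng = np.random.default_rng(5)
fs = 2000
t = np.arange(0, 2.0, 1 / fs)
# Weak periodic impulse train buried in heavy noise (bearing-like stand-in).
sig = 0.3 * np.sin(2 * np.pi * 90 * t) * (np.sin(2 * np.pi * 7 * t) > 0.95)
sig = sig + rng.normal(0.0, 0.5, t.size)

f, seg_t, Z = stft(sig, fs=fs, nperseg=256)
# SR applied independently to the demeaned envelope at every frequency scale.
sr_rows = np.array([bistable_sr(np.abs(row) - np.abs(row).mean()) for row in Z])

# Spectra of the per-scale SR outputs assemble the multi-scale SR spectrogram.
mssrs = np.abs(np.fft.rfft(sr_rows, axis=1))
print(mssrs.shape)
```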
NASA Astrophysics Data System (ADS)
Austin, Kemen G.; González-Roglich, Mariano; Schaffer-Smith, Danica; Schwantes, Amanda M.; Swenson, Jennifer J.
2017-05-01
Deforestation continues across the tropics at alarming rates, with repercussions for ecosystem processes, carbon storage and long term sustainability. Taking advantage of recent fine-scale measurement of deforestation, this analysis aims to improve our understanding of the scale of deforestation drivers in the tropics. We examined trends in forest clearings of different sizes from 2000-2012 by country, region and development level. As tropical deforestation increased from approximately 6900 kha yr-1 in the first half of the study period, to >7900 kha yr-1 in the second half of the study period, >50% of this increase was attributable to the proliferation of medium and large clearings (>10 ha). This trend was most pronounced in Southeast Asia and in South America. Outside of Brazil >60% of the observed increase in deforestation in South America was due to an upsurge in medium- and large-scale clearings; Brazil had a divergent trend of decreasing deforestation, >90% of which was attributable to a reduction in medium and large clearings. The emerging prominence of large-scale drivers of forest loss in many regions and countries suggests the growing need for policy interventions which target industrial-scale agricultural commodity producers. The experience in Brazil suggests that there are promising policy solutions to mitigate large-scale deforestation, but that these policy initiatives do not adequately address small-scale drivers. By providing up-to-date and spatially explicit information on the scale of deforestation, and the trends in these patterns over time, this study contributes valuable information for monitoring, and designing effective interventions to address deforestation.
Memory and betweenness preference in temporal networks induced from time series
NASA Astrophysics Data System (ADS)
Weng, Tongfeng; Zhang, Jie; Small, Michael; Zheng, Rui; Hui, Pan
2017-02-01
We construct temporal networks from time series via unfolding the temporal information into an additional topological dimension of the networks. Thus, we are able to introduce memory entropy analysis to unravel the memory effect within the considered signal. We find distinct patterns in the entropy growth rate of the aggregate network at different memory scales for time series with different dynamics ranging from white noise, 1/f noise, autoregressive process, periodic to chaotic dynamics. Interestingly, for a chaotic time series, an exponential scaling emerges in the memory entropy analysis. We demonstrate that the memory exponent can successfully characterize bifurcation phenomenon, and differentiate the human cardiac system in healthy and pathological states. Moreover, we show that the betweenness preference analysis of these temporal networks can further characterize dynamical systems and separate distinct electrocardiogram recordings. Our work explores the memory effect and betweenness preference in temporal networks constructed from time series data, providing a new perspective to understand the underlying dynamical systems.
Multiscale multifractal detrended cross-correlation analysis of financial time series
NASA Astrophysics Data System (ADS)
Shi, Wenbin; Shang, Pengjian; Wang, Jing; Lin, Aijing
2014-06-01
In this paper, we introduce a method called multiscale multifractal detrended cross-correlation analysis (MM-DCCA). The method allows us to extend the description of the cross-correlation properties between two time series. MM-DCCA may provide new ways of measuring the nonlinearity of two signals, and it presents much richer information than multifractal detrended cross-correlation analysis (MF-DCCA) by sweeping the entire range of scales at which the multifractal structure of a complex system can be examined. Moreover, to illustrate the advantages of this approach we use MM-DCCA to analyze the cross-correlation properties between financial time series. We show that this new method can be adapted to the stock markets under investigation and can provide a more faithful and more interpretable description of the dynamic mechanism between financial time series than traditional MF-DCCA. We also propose reducing the scale ranges when analyzing short time series, so that inherent properties which remain hidden when a wide range is used may be revealed.
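The building block shared by MF-DCCA and MM-DCCA, the detrended cross-correlation fluctuation F(s), can be sketched as below; MM-DCCA additionally sweeps sub-ranges of scales (and moments q) to map the exponent as a surface, which is omitted here.

```python
import numpy as np

def dcca_fluctuation(x, y, scales):
    """Detrended cross-correlation fluctuation F(s) over window sizes s."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    F = []
    for s in scales:
        n_win = len(X) // s
        t = np.arange(s)
        cov = []
        for w in range(n_win):
            seg = slice(w * s, (w + 1) * s)
            # Linear detrend of both integrated profiles within the window.
            px = np.polyval(np.polyfit(t, X[seg], 1), t)
            py = np.polyval(np.polyfit(t, Y[seg], 1), t)
            cov.append(np.mean((X[seg] - px) * (Y[seg] - py)))
        F.append(np.sqrt(np.abs(np.mean(cov))))
    return np.array(F)

rng = np.random.default_rng(6)
common = np.cumsum(rng.normal(size=4096))   # shared long-memory component
x = common + rng.normal(size=4096)
y = common + rng.normal(size=4096)

scales = np.unique(np.logspace(1, 3, 15).astype(int))
F = dcca_fluctuation(x, y, scales)
slope = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"cross-correlation scaling exponent ~ {slope:.2f}")
```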
NASA Astrophysics Data System (ADS)
Khalil, Nagi
2018-04-01
The homogeneous cooling state (HCS) of a granular gas described by the inelastic Boltzmann equation is reconsidered. As usual, particles are taken as inelastic hard disks or spheres, but now the coefficient of normal restitution α is allowed to take negative values, which is a simple way of modeling more complicated inelastic interactions. The distribution function of the HCS is studied in the long-time limit, as well as at intermediate times. In the long-time limit, the relevant information of the HCS is given by a scaling distribution function, where the time dependence occurs through a dimensionless velocity c. For positive α, this scaling function remains close to the Gaussian distribution in the thermal region, its cumulants and exponential tails being well described by the first Sonine approximation. In contrast, for negative α, the distribution function becomes multimodal, with maxima away from the origin, and its observable tails are algebraic. The latter is a consequence of an unbalanced relaxation-dissipation competition, and is demonstrated analytically in a limiting case, thanks to a reduction of the Boltzmann equation to a Fokker-Planck-like equation. Finally, a generalized scaling solution to the Boltzmann equation is also found. Apart from the time dependence occurring through the dimensionless velocity, it depends on time through a new parameter β measuring the departure of the HCS from its long-time limit. It is shown that this generalized solution describes the time evolution of the HCS for almost all times. The relevance of the new scaling is also discussed.
Snow, topography, and the diurnal cycle in streamflow
Lundquist, J.D.; Knowles, N.; Dettinger, M.; Cayan, D.
2002-01-01
Because snowmelt processes are spatially complex, point measurements, particularly in mountainous regions, are often inadequate to resolve basin-scale characteristics. Satellite measurements provide good spatial sampling but are often infrequent in time, particularly during cloudy weather. Fortunately, hourly measurements of river discharge provide another widely available, but as yet underutilized, source of data, giving direct information on basin output at a fine temporal scale. The hour of maximum discharge recorded each day reflects the travel time between peak melt and the time most water reaches the gauge. Traditional theories, based on numerical models of melt-water percolation through a snowpack and on localized, small-basin observations, report that the hour of daily maximum flow becomes earlier as the snowpack thins and matures, reflecting shorter travel times for surface melt to reach the base of the snowpack. However, an examination of hourly discharge from 100 basins in the Western United States, ranging in size from 1.3 km2 to 10,813 km2, reveals a more complex situation. The sequences of seasonal evolution of the hour of maximum discharge are unique to each basin, but within a given basin are remarkably consistent between years, regardless of the size of the snowpack. This seems to imply that basin topography strongly influences the timing of peak flow. In most of the basins examined, at the end of the melt season the hour of maximum discharge shifts to later in the day, reflecting increased travel times as the snowline retreats to higher elevations.
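Extracting the hour of daily maximum discharge from an hourly gauge record is straightforward; the sketch below uses a synthetic snowmelt-like series for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
idx = pd.date_range("2001-04-01", "2001-07-31 23:00", freq="h")
# Synthetic discharge: seasonal rise plus a late-afternoon diurnal peak.
hours = idx.hour.to_numpy()
day = (idx - idx[0]).days.to_numpy()
q = (1 + 0.02 * day) * (1 + 0.3 * np.sin(2 * np.pi * (hours - 18) / 24))
flow = pd.Series(q + rng.normal(0, 0.02, len(idx)), index=idx)

# Hour of maximum discharge for each day; its seasonal drift is the diagnostic.
peak_hour = flow.groupby(flow.index.date).apply(lambda d: d.idxmax().hour)
print(peak_hour.head())
```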
Cross-scale impact of climate temporal variability on ecosystem water and carbon fluxes
Paschalis, Athanasios; Fatichi, Simone; Katul, Gabriel G.; ...
2015-08-07
While the importance of ecosystem functioning is undisputed in the context of climate change and Earth system modeling, the role of short-scale temporal variability of hydrometeorological forcing (~1 h) on the related ecosystem processes remains to be fully understood. Here, the various impacts of meteorological forcing variability on water and carbon fluxes across a range of scales are explored using numerical simulations. Synthetic meteorological drivers that highlight dynamic features of the short temporal scale in series of precipitation, temperature, and radiation are constructed. These drivers force a mechanistic ecohydrological model that propagates information content into the dynamics of water and carbon fluxes for an ensemble of representative ecosystems. The focus of the analysis is on the cross-scale effect of the short-scale forcing variability on the modeled evapotranspiration and ecosystem carbon assimilation. Interannual variability of water and carbon fluxes is emphasized in the analysis. The main study inferences are summarized as follows: (a) short-scale variability of meteorological input does affect water and carbon fluxes across a wide range of time scales, spanning from the hourly to the annual and longer scales; (b) different ecosystems respond to the various characteristics of the short-scale variability of the climate forcing in various ways, depending on the dominant factors limiting system productivity; (c) whenever short-scale variability of meteorological forcing influences primarily fast processes such as photosynthesis, its impact on the slow-scale variability of water and carbon fluxes is small; and (d) whenever short-scale variability of the meteorological forcing impacts slow processes such as the movement and storage of water in the soil, the effects of the variability can propagate to annual and longer time scales.
Scale in Remote Sensing and GIS: An Advancement in Methods Towards a Science of Scale
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.
1998-01-01
The term "scale", both in space and time, is central to remote sensing and geographic information systems (GIS). The emergence and widespread use of GIS technologies, including remote sensing, has generated significant interest in addressing scale as a generic topic, and in the development and implementation of techniques for dealing explicitly with the vicissitudes of scale as a multidisciplinary issue. As science becomes more complex and utilizes databases that are capable of performing complex space-time data analyses, it becomes paramount that we develop the tools and techniques needed to operate at multiple scales, to work with data whose scales are not necessarily ideal, and to produce results that can be aggregated or disaggregated in ways that suit the decision-making process. Contemporary science is constantly coping with compromises, and the data available for a particular study rarely fit perfectly with the scales at which the processes being investigated operate, or the scales that policy-makers require to make sound, rational decisions. This presentation discusses some of the problems associated with scale as related to remote sensing and GIS, and describes some of the questions that need to be addressed in approaching the development of a multidisciplinary "science of scale". Techniques for dealing with multiple scaled data that have been developed or explored recently are described as a means for recognizing scale as a generic issue, along with associated theory and tools that can be of simultaneous value to a large number of disciplines. These can be used to seek answers to a host of interrelated questions in the interest of providing a formal structure for the management and manipulation of scale and its universality as a key concept from a multidisciplinary perspective.
How much a galaxy knows about its large-scale environment?: An information theoretic perspective
NASA Astrophysics Data System (ADS)
Pandey, Biswajit; Sarkar, Suman
2017-05-01
The small-scale environment characterized by the local density is known to play a crucial role in deciding galaxy properties, but the role of the large-scale environment in galaxy formation and evolution remains less clear. We propose an information theoretic framework to investigate the influence of the large-scale environment on galaxy properties and apply it to data from the Galaxy Zoo project, which provides visual morphological classifications of ~1 million galaxies from the Sloan Digital Sky Survey. We find a non-zero mutual information between morphology and environment that decreases with increasing length-scale but persists throughout the entire range of length-scales probed. We estimate the conditional mutual information and the interaction information between morphology and environment by conditioning the environment on different length-scales, and find a synergic interaction between them that operates up to length-scales of at least ~30 h-1 Mpc. Our analysis indicates that these interactions largely arise from the mutual information shared between the environments on different length-scales.
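A plug-in estimate of the mutual information between a binned environment variable and a binary morphology label can be sketched as follows; the paper's estimator and environment definition are more elaborate, and the synthetic data here merely mimic a density-morphology relation.

```python
import numpy as np

def mutual_information(labels, env, n_bins=10):
    """Plug-in MI (bits) between a binary label and a quantile-binned variable."""
    edges = np.quantile(env, np.linspace(0, 1, n_bins + 1)[1:-1])
    env_bins = np.digitize(env, edges)
    joint, _, _ = np.histogram2d(labels, env_bins, bins=(2, n_bins))
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(8)
env = rng.lognormal(size=20_000)                      # local density proxy
p_spiral = 1 / (1 + np.exp(0.8 * np.log(env)))        # denser -> fewer spirals
labels = (rng.random(20_000) < p_spiral).astype(int)

print(f"I(morphology; environment) ~ {mutual_information(labels, env):.3f} bits")
```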
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susandi, Armi, E-mail: armi@meteo.itb.ac.id; Tamamadin, Mamad, E-mail: mamadtama@meteo.itb.ac.id; Djamal, Erizal, E-mail: erizal-jamal@yahoo.com
This paper describes an information system for a rice planting calendar that helps farmers determine the time for rice planting. The information includes a rainfall prediction at the ten-day (dasarian) scale, overlaid on a map of rice fields to produce a map of rice planting at the village level. The rainfall prediction was produced by stochastic modeling, using Fast Fourier Transform (FFT) and non-linear least squares methods to fit the model function to the rainfall data. In this research, the Fourier series has been modified into a non-linear function to follow the recent, non-stationary character of rainfall. The results have also been validated in four steps: R-square, RMSE, R-skill, and comparison with field data. The information system (cyber extension) provides information such as the rainfall prediction, the predicted planting time, and an interactive space for farmers to respond to the information submitted. Interfaces for interactive response will be critical to improving the prediction accuracy of the information, for both rainfall and planting time. Building this information system involved mapping the rice planting prediction, converting file formats, developing the database system, developing the website, and publishing the website. Because the map is overlaid on the Google map, the map files must be converted to the .kml file format.
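The modeling idea, picking dominant periods from the FFT of a dekadal rainfall series and then fitting by non-linear least squares, can be sketched as below. The operational model's modified, non-stationary Fourier form is not reproduced here; this is a plain stationary-harmonic stand-in.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)
n = 36 * 5                                   # five years of dekadal rainfall
t = np.arange(n, dtype=float)
rain = 80 + 50 * np.sin(2 * np.pi * t / 36) + rng.normal(0, 15, n)

# Dominant frequency from the periodogram (zero frequency excluded).
spec = np.abs(np.fft.rfft(rain - rain.mean()))
f_dom = np.fft.rfftfreq(n)[1:][spec[1:].argmax()]

def model(t, c0, c1, a, phi):
    # Trend plus one harmonic at the FFT-selected frequency.
    return c0 + c1 * t + a * np.sin(2 * np.pi * f_dom * t + phi)

p, _ = curve_fit(model, t, rain, p0=[rain.mean(), 0.0, rain.std(), 0.0])
forecast = model(np.arange(n, n + 36, dtype=float), *p)   # next year, by dekad
print(forecast[:6].round(1))
```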
1978-04-01
…simulation logic. SAIFE accounts for the following factors: (1) aircraft design analysis; (2) component and full-scale fatigue testing; (3) production reliability; production, service, and corrosion defects; crack or corrosion detection probability; crack…
2013-01-01
…almost half of the cases, related to delays in hemorrhage control during transportation or in resuscitation efforts. Earlier detection of hemorrhagic… restored to normal values with fluid resuscitation [31]. On further analysis, we found that SampEn retained its ability to discriminate survivors from… information about HR dynamics in the neonatal ICU setting. They have developed a real-time index, termed HR characteristics, which takes into…
Size-selective sorting in bubble streaming flows: Particle migration on fast time scales
NASA Astrophysics Data System (ADS)
Thameem, Raqeeb; Rallabandi, Bhargav; Hilgenfeldt, Sascha
2015-11-01
Steady streaming from ultrasonically driven microbubbles is an increasingly popular technique in microfluidics because such devices are easily manufactured and generate powerful and highly controllable flows. Combining streaming and Poiseuille transport flows allows for passive size-sensitive sorting at particle sizes and selectivities much smaller than the bubble radius. The crucial particle deflection and separation takes place over very small times (milliseconds) and length scales (20-30 microns) and can be rationalized using a simplified geometric mechanism. A quantitative theoretical description is achieved through the application of recent results on three-dimensional streaming flow field contributions. To develop a more fundamental understanding of the particle dynamics, we use high-speed photography of trajectories in polydisperse particle suspensions, recording the particle motion on the time scale of the bubble oscillation. Our data reveal the dependence of particle displacement on driving phase, particle size, oscillatory flow speed, and streaming speed. With this information, the effective repulsive force exerted by the bubble on the particle can be quantified, showing for the first time how fast, selective particle migration is effected in a streaming flow. We acknowledge support by the National Science Foundation under grant number CBET-1236141.
NASA Astrophysics Data System (ADS)
Young, F.; Siegel, Edward Carl-Ludwig
2011-03-01
(so MIScalled) "complexity" with INHERENT BOTH SCALE-Invariance Symmetry-RESTORING, AND 1/ω^(1.000…) "pink" Zipf-law Archimedes-HYPERBOLICITY INEVITABILITY power-spectrum power-law decay algebraicity. Their CONNECTION is via the simple-calculus SCALE-Invariance Symmetry-RESTORING logarithm-function derivative: (d/dω) ln(ω) = 1/ω, i.e. (d/dω)[SCALE-Invariance Symmetry-RESTORING](ω) = 1/ω. Via the Noether-theorem relation of continuous symmetries to conservation laws: (d/dω)[inter-scale 4-current 4-divergence = 0](ω) = 1/ω. Hence (so MIScalled) "complexity" is information inter-scale conservation, in agreement with Anderson-Mandell [Fractals of Brain/Mind, G. Stamov ed. (1994)] experimental psychology!!!; i.e. (so MIScalled) "complexity" is UTTER-SIMPLICITY!!! Versus COMPLICATEDNESS: either PLUS (additive) vs. TIMES (multiplicative) COMPLICATIONS of various system specifics. COMPLICATEDNESS MEASURES DEVIATIONS FROM complexity's UTTER-SIMPLICITY!!!: [SCALE-Invariance Symmetry-BREAKING] MINUS [SCALE-Invariance Symmetry-RESTORING], via power-spectrum power-law algebraicity decay DIFFERENCES: ["red"-Pareto] MINUS ["pink"-Zipf Archimedes-HYPERBOLICITY INEVITABILITY]!!!
Time-series modeling of long-term weight self-monitoring data.
Helander, Elina; Pavel, Misha; Jimison, Holly; Korhonen, Ilkka
2015-08-01
Long-term self-monitoring of weight is beneficial for weight maintenance, especially after weight loss. Connected weight scales accumulate time-series information over the long term and hence enable time-series analysis of the data. Such analysis can reveal individual patterns, provide more sensitive detection of significant weight trends, and enable more accurate and timely prediction of weight outcomes. However, long-term self-weighing data present several challenges that complicate the analysis; in particular, irregular sampling, missing data, and periodic (e.g., diurnal and weekly) patterns are common. In this study, we apply a time-series modeling approach to daily weight time series from two individuals and describe the information that can be extracted from this kind of data. We study the properties of the weight time-series data, missing data and its link to individual behavior, periodic patterns, and weight-series segmentation. Being able to understand behavior through weight data and to give relevant feedback should support positive interventions on health behaviors.
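One simple treatment of such data, placing observations on a regular daily grid (exposing the missing days) and fitting a trend plus a day-of-week pattern by least squares, is sketched below; the study's actual time-series models are richer.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(10)
days = pd.date_range("2014-01-01", periods=365, freq="D")
weekend = np.where(days.dayofweek >= 5, 0.4, 0.0)          # weekend bump, kg
true_w = 80 - 0.005 * np.arange(365) + weekend
obs = pd.Series(true_w + rng.normal(0, 0.3, 365), index=days)
obs = obs[rng.random(365) < 0.7]             # irregular self-weighing

grid = obs.reindex(days)                     # daily grid; NaN marks missed days
mask = grid.notna().to_numpy()
t = np.arange(365, dtype=float)[mask]
dow = pd.get_dummies(days.dayofweek).to_numpy().astype(float)[mask]

X = np.column_stack([np.ones_like(t), t, dow[:, 1:]])   # drop one dummy column
beta, *_ = np.linalg.lstsq(X, grid.dropna().to_numpy(), rcond=None)
print(f"estimated trend: {beta[1] * 7:+.3f} kg/week")
```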
NASA Technical Reports Server (NTRS)
Borella, H. M.; Estes, J. E.; Ezra, C. E.; Scepan, J.; Tinney, L. R.
1982-01-01
For two test sites in Pennsylvania the interpretability of commercially acquired low-altitude and existing high-altitude aerial photography is documented in terms of time, costs, and accuracy for Anderson Level II land use/land cover mapping. Information extracted from the imagery is to be used in the evaluation process for siting energy facilities. Land use/land cover maps were drawn at 1:24,000 scale using commercially flown color infrared photography obtained from the United States Geological Survey's EROS Data Center. Detailed accuracy assessment of the maps generated by manual image analysis was accomplished employing a stratified unaligned sampling scheme with adequate class representation. Both 'area-weighted' and 'by-class' accuracies were documented and field-verified. A discrepancy map was also drawn to illustrate differences in classification between the two map scales. Results show that the 1:24,000 scale map set was more accurate (99% versus 94%, area-weighted) than the 1:62,500 scale set, especially when sampled by class (96% versus 66%). The 1:24,000 scale maps were also more time-consuming and costly to produce, due mainly to higher image acquisition costs.
A post-Bertalanffy Systemics Healthcare Competitive Framework Proposal.
Fiorini, Rodolfo A; Santacroce, Giulia F
2014-01-01
The health information community can take advantage of a new evolutive categorization cybernetic framework. A systemic concept of the principles organizing nature is proposed. It can be used as a multiscaling reference framework to develop successful, competitive antifragile systems and new information management strategies for advanced healthcare organizations (HOs) and high-reliability organizations (HROs). The expected impacts are multifarious and articulated at different system scale levels: the major one is that, for the first time, ideal Biomedical Engineering system categorization levels can be matched exactly to practical system modeling interaction styles, with no paradigmatic operational ambiguity and no information loss.
Sako, Binta; Leerlooijer, Joanne N.; Lelisa, Azeb; Hailemariam, Abebe; Brouwer, Inge D.; Tucker Brown, Amal
2017-01-01
Abstract Child malnutrition remains high in Ethiopia, and inadequate complementary feeding is a contributing factor. In this context, a community‐based intervention was designed to provide locally made complementary food for children 6–23 months, using a bartering system, in four Ethiopian regions. After a pilot phase, the intervention was scaled up from 8 to 180 localities. We conducted a process evaluation to determine enablers and barriers for the scaling up of this intervention. Eight study sites were selected to perform 52 key informant interviews and 31 focus group discussions with purposely selected informants. For analysis, we used a framework describing six elements of successful scaling up: socio‐political context, attributes of the intervention, attributes of the implementers, appropriate delivery strategy, the adopting community, and use of research to inform the scale‐up process. A strong political will, alignment of the intervention with national priorities, and integration with the health care system were instrumental in the scaling up. The participatory approach in decision‐making reinforced ownership at community level, and training about complementary feeding motivated mothers and women's groups to participate. However, the management of the complex intervention, limited human resources, and lack of incentives for female volunteers proved challenging. In the bartering model, the barter rate was accepted, but the bartering was hindered by unavailability of cereals and limited financial and material resources to contribute, threatening the project's sustainability. Scaling up strategies for nutrition interventions require sufficient time, thorough planning, and assessment of the community's capacity to contribute human, financial, and material resources. PMID:29063698
Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong
2012-11-01
By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep-cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (∼0.8). The waking and REM stages have a significantly higher average scaling exponent than the light sleep stages (∼0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolution of local scale invariance shows that the physiological states change abruptly, although in the experiments great efforts were made to keep conditions unchanged. The global behavior of a single physiological signal may thus lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (∼10^{2} points), while the original DE method sometimes underestimates scale-invariance exponents or even fails to detect scale-invariant behavior. The BEDE method is sensitive to trends in time series: the existence of trends may lead to an unreasonably high value of the scaling exponent and consequently to mistaken conclusions.
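Ordinary diffusion entropy analysis, the baseline that BEDE improves on, is compact enough to sketch: treat the series as the jumps of a walker, histogram the displacements over windows of length l, and fit S(l) = A + δ ln l. The balanced entropy estimator itself is not reproduced here.

```python
import numpy as np

def diffusion_entropy(xi, window_lengths, n_bins=60):
    """Shannon entropy S(l) of the window-sum (displacement) distribution."""
    csum = np.concatenate(([0.0], np.cumsum(xi)))
    S = []
    for l in window_lengths:
        disp = csum[l:] - csum[:-l]                  # overlapping window sums
        p, edges = np.histogram(disp, bins=n_bins, density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        S.append(-np.sum(p * np.log(p)) * dx)
    return np.array(S)

rng = np.random.default_rng(11)
xi = rng.normal(size=20_000)                 # uncorrelated noise: delta ~ 0.5
ls = np.unique(np.logspace(0.5, 2.5, 12).astype(int))
delta = np.polyfit(np.log(ls), diffusion_entropy(xi, ls), 1)[0]
print(f"scaling exponent delta ~ {delta:.2f}")
```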
NASA Astrophysics Data System (ADS)
Robinet, Jérémy; von Hebel, Christian; van der Kruk, Jan; Govers, Gerard; Vanderborght, Jan
2016-04-01
As highlighted by many authors, classical and geophysical techniques for measuring soil moisture, such as destructive soil sampling, neutron probes, or Time Domain Reflectometry (TDR), have some major drawbacks: among other things, they provide point-scale information and are often intrusive and time-consuming. ElectroMagnetic Induction (EMI) instruments are often cited as a promising alternative hydrogeophysical method, providing soil moisture measurements more efficiently at scales ranging from the hillslope to the catchment. The overall objective of our research project is to investigate whether a combination of geophysical techniques at various scales can be used to study the impact of land use change on temporal and spatial variations of soil moisture and soil properties. In our work, apparent electrical conductivity (ECa) patterns are obtained with an EMI multiconfiguration system. Depth profiles of ECa were subsequently inferred through a calibration-inversion procedure based on TDR data. The spatial patterns of these profiles were linked to soil profile and soil water content distributions. Two catchments with contrasting land use (agriculture vs. natural forest) were selected in a subtropical region in the south of Brazil. On selected slopes within the catchments, combined EMI and TDR measurements were carried out simultaneously, under different atmospheric and soil moisture conditions. Ground-truth data for soil properties were obtained through soil sampling and auger profiles. The comparison of these data provided information about the potential of the EMI technique to deliver qualitative and quantitative information on the variability of soil moisture and soil properties.
Equation-of-State Scaling Factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-28
Equation-of-State scaling factors are needed when using a tabular EOS in which the user-defined material isotopic fractions differ from the actual isotopic fractions used by the table. Additionally, if a material is dynamically changing its isotopic structure, then an EOS scaling will again be needed, and will vary in time and location. The procedure that allows use of a table to obtain information about a similar material with average atomic mass M_s and average atomic number Z_s is described below. The procedure is exact for a fully ionized ideal gas. However, if the atomic number is replaced by the effective ionization state, the procedure can be applied to partially ionized material as well, which extends the applicability of the scaling approximation continuously from low to high temperatures.
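The fully ionized ideal-gas case, for which the procedure is exact, makes the scaling transparent: P = (1 + Z) ρ k_B T / (A m_u), so a single factor relates two materials at the same density and temperature. The sketch below illustrates that statement only; it is not the report's general procedure, and the material parameters are illustrative.

```python
kB = 1.380649e-23       # J/K
amu = 1.66053907e-27    # kg

def pressure_ideal(rho, T, A, Z):
    """Fully ionized ideal-gas pressure (ions plus electrons)."""
    return (1.0 + Z) * rho * kB * T / (A * amu)

A_t, Z_t = 12.0, 6.0    # material the table was built for (illustrative)
A_s, Z_s = 14.0, 7.0    # similar user-defined material (illustrative)

rho, T = 1.0e3, 1.0e6   # kg/m^3, K
scale = ((1.0 + Z_s) / A_s) / ((1.0 + Z_t) / A_t)

print(pressure_ideal(rho, T, A_s, Z_s))            # direct evaluation
print(scale * pressure_ideal(rho, T, A_t, Z_t))    # table value times factor
```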
Lead isotopic studies of lunar soils - Their bearing on the time scale of agglutinate formation
NASA Technical Reports Server (NTRS)
Church, S. E.; Tilton, G. R.; Chen, J. H.
1976-01-01
Fines (smaller than 75 microns) and bulk soil were studied to analyze the loss of volatile lead; losses of the order of 10% to 30% of radiogenic lead during the production of agglutinates are assessed. Lead isotope data from fines-agglutinate pairs are analyzed for information on the time scale of micrometeorite bombardment, from the chords generated by the data in concordia diagrams. The resulting mean lead-loss ages were compared to spallogenic gas exposure ages for all samples. Labile parentless radiogenic Pb residing preferentially on or in the fines is viewed as possibly responsible for aberrant lead-loss ages. Bulk soils plot above the concordia curve (in a field of excess radiogenic Pb) for all samples with anomalous ages.
Monthly means of selected climate variables for 1985 - 1989
NASA Technical Reports Server (NTRS)
Schubert, S.; Wu, C.-Y.; Zero, J.; Schemm, J.-K.; Park, C.-K.; Suarez, M.
1992-01-01
Meteorologists are accustomed to viewing instantaneous weather maps, since these contain the most relevant information for the task of producing short-range weather forecasts. Climatologists, on the other hand, tend to deal with long-term means, which portray the average climate. The recent emphasis on dynamical extended-range forecasting and, in particular, on measuring and predicting short-term climate change makes it important that we become accustomed to looking at variations on monthly and longer time scales. A convenient tool is provided for researchers to familiarize themselves with the variability which occurs in selected parameters on these time scales. The format of the document was chosen to help facilitate the intercomparison of various parameters and to highlight the year-to-year variability in monthly means.
Large-scale semidefinite programming for many-electron quantum mechanics.
Mazziotti, David A
2011-02-25
The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop an SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004)]. We illustrate with (i) the dissociation of N2 and (ii) the metal-to-insulator transition of H50. For H50 the SDP problem has 9.4×10^6 variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics.
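As a schematic of the optimization class involved, the following toy semidefinite program minimizes Tr(KD) over positive semidefinite matrices D under a trace constraint. It uses cvxpy as a generic solver purely for illustration (an assumption on tooling; the paper's large-scale method is a specialized solver, and K here is a made-up stand-in for a reduced Hamiltonian):

```python
# Toy illustration of the SDP form behind variational 2-RDM methods:
# minimize Tr(K @ D) over positive semidefinite D with linear constraints.
import cvxpy as cp
import numpy as np

K = np.array([[1.0, 0.5],
              [0.5, -1.0]])        # stand-in "reduced Hamiltonian"
D = cp.Variable((2, 2), symmetric=True)

constraints = [D >> 0,             # positivity (N-representability-like)
               cp.trace(D) == 1]   # trace normalization
problem = cp.Problem(cp.Minimize(cp.trace(K @ D)), constraints)
problem.solve()
print("optimal energy bound:", problem.value)
print("optimal D:\n", D.value)
```

Variational 2-RDM calculations have this same shape, a linear objective in the 2-RDM with positivity and linear contraction constraints, only at vastly larger scale.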
Kelly, Glenn; Simpson, Grahame K; Brown, Suzanne; Kremer, Peter; Gillett, Lauren
2017-05-23
The objectives were to test the properties, via a psychometric study, of the Overt Behaviour Scale-Self-Report (OBS-SR), a version of the OBS-Adult Scale developed to provide a client perspective on challenging behaviours after acquired brain injury. Study sample 1 consisted of 37 patients with primary brain tumour (PBT) and a family-member informant. Sample 2 consisted of 34 clients with other acquired brain injury (mixed brain injury, MBI) and a service-provider informant. Participants completed the OBS-SR (at two time points), and the Awareness Questionnaire (AQ) and Mayo Portland Adaptability Inventory-III (MPAI-III) once; informants completed the OBS-Adult and AQ once only. PBT-informant dyads displayed "good" levels of agreement (ICC(2,k) = .74; OBS-SR global index). Although MBI-informant dyads displayed no agreement (ICC(2,k) = .22; OBS-SR global index), the sub-group (17/29) rated by clinicians as having moderate to good levels of awareness displayed "fair" agreement (ICC(2,k) = .58; OBS-SR global index). Convergent/divergent validity was demonstrated by significant correlations between OBS-SR subscales and MPAI-III subscales with behavioural content (coefficients in the range .36-.61). Scores had good reliability across one week (ICC(2,k) = .69). The OBS-SR took approximately 15 minutes to complete. It was concluded that the OBS-SR demonstrated acceptable reliability and validity, providing a useful resource in understanding clients' perspectives about their behaviour.
Scattering - a probe to Earth's small scale structure
NASA Astrophysics Data System (ADS)
Rost, S.; Earle, P.
2009-05-01
Much of the short-period teleseismic wavefield shows strong evidence for scattered waves in extended codas trailing the main arrivals predicted by ray theory. This energy mainly originates from high-frequency body waves interacting with fine-scale volumetric heterogeneities in the Earth. Studies of this energy have revealed much of what we know about Earth's structure at scale lengths around 10 km, from crust to core. From these data we can gain important information about the mineral-physical and geochemical constitution of the Earth that is inaccessible to many other seismic imaging techniques. Previous studies used scattered energy related to PKP, PKiKP, and Pdiff to identify and map the small-scale structure of the mantle and core. We will present observations related to the core phases PKKP and P'P' to study fine-scale mantle heterogeneities. These phases are maximum travel-time phases with respect to perturbations at their reflection points. This allows observation of the scattered energy as precursors to the main phase, avoiding common problems with traditional coda phases, which arrive after the main pulse. The precursory arrival of the scattered energy allows the separation of deep Earth and crustal contributions to the scattered wavefield for certain source-receiver configurations. Using the information from these scattered phases, we identify regions of the mantle that show increased scattering potential, likely linked to larger-scale mantle structure identified in seismic tomography and geodynamical models.
NASA Astrophysics Data System (ADS)
Holá, Markéta; Kalvoda, Jiří; Nováková, Hana; Škoda, Radek; Kanický, Viktor
2011-01-01
LA-ICP-MS and solution-based ICP-MS, in combination with electron microprobe (EMP) analysis, are presented as methods for determining the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales. The first was a line scan through the whole fish scale perpendicular to the growth rings: an ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer, while ablation conditions producing a deeper crater gave average values over the external HAP layer and the collagen basal plate. The second, depth profiling using spot analysis, was tested on fish scales for the first time; spot analysis yields information about the depth profile of the elements at a selected position on the sample. The combination of all mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with solution-based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement with both EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.
Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity, making large-scale inference infeasible. This is especially true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large-scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.
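To make the ODE-based framing concrete, here is a deliberately simplified stand-in (not the authors' Expectation Propagation method): fit linear dynamics dx/dt ≈ Ax to a simulated time series by least squares on finite differences, then threshold the entries of A to call edges. All rates, noise levels, and thresholds are invented for illustration.

```python
# Simplified stand-in for ODE-based network inference: recover a linear
# dynamics matrix from noisy time series and read edges off its entries.
import numpy as np

rng = np.random.default_rng(0)
n, T, dt = 5, 200, 0.05
A_true = np.zeros((n, n))
A_true[0, 1] = 0.8; A_true[2, 0] = -0.6; A_true[3, 4] = 0.5
np.fill_diagonal(A_true, -1.0)       # self-decay keeps trajectories bounded

# Simulate noisy time series with Euler steps.
X = np.zeros((T, n)); X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = X[t] + dt * (A_true @ X[t]) + 0.01 * rng.normal(size=n)

# Least-squares estimate: (X[t+1] - X[t]) / dt ≈ A X[t].
dX = (X[1:] - X[:-1]) / dt
B, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)  # solves X[:-1] @ B = dX
A_hat = B.T

edges = np.abs(A_hat) > 0.3          # crude threshold on off-diagonals
np.fill_diagonal(edges, False)
print("recovered edges (i <- j):", list(zip(*np.nonzero(edges))))
```

The Bayesian EP approach in the paper replaces the point estimate with an approximate posterior over network structures and handles non-linearity and perturbations, but the input-output relationship is the same: time series in, directed network out.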
Wavelet Analysis of Turbulent Spots and Other Coherent Structures in Unsteady Transition
NASA Technical Reports Server (NTRS)
Lewalle, Jacques
1998-01-01
This is a secondary analysis of a portion of the Halstead data. The hot-film traces from an embedded stage of a low-pressure turbine have been extensively analyzed by Halstead et al. In this project, wavelet analysis is used to develop a quantitative characterization of individual coherent structures in terms of size, amplitude, phase, convection speed, etc., as well as phase-averaged time scales. The purposes of the study are (1) to extract information about turbulent time scales for comparison with unsteady model results (e.g., k-epsilon); phase-averaged maps of dominant time scales will be presented; and (2) to evaluate any differences between wake-induced and natural spots that might affect model performance. Preliminary results, subject to verification with data at higher frequency resolution, indicate that spot properties are independent of their phase relative to the wake footprints; therefore, requirements for the physical content of models are kept relatively simple. Incidentally, we also observed that spot substructures can be traced over several stations; further study will examine their possible impact.
A functional model for characterizing long-distance movement behaviour
Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya M.
2016-01-01
Advancements in wildlife telemetry techniques have made it possible to collect large data sets of highly accurate animal locations at a fine temporal resolution. These data sets have prompted the development of a number of statistical methodologies for modelling animal movement. Telemetry data sets are often collected for purposes other than fine-scale movement analysis. These data sets may differ substantially from those that are collected with technologies suitable for fine-scale movement modelling and may consist of locations that are irregular in time, are temporally coarse or have large measurement error. These data sets are time-consuming and costly to collect but may still provide valuable information about movement behaviour. We developed a Bayesian movement model that accounts for error from multiple data sources as well as movement behaviour at different temporal scales. The Bayesian framework allows us to calculate derived quantities that describe temporally varying movement behaviour, such as residence time, speed and persistence in direction. The model is flexible, easy to implement and computationally efficient. We apply this model to data from Colorado Canada lynx (Lynx canadensis) and use derived quantities to identify changes in movement behaviour.
Cross-correlations and influence in world gold markets
NASA Astrophysics Data System (ADS)
Lin, Min; Wang, Gang-Jin; Xie, Chi; Stanley, H. Eugene
2018-01-01
Using the detrended cross-correlation analysis (DCCA) coefficient and the detrended partial cross-correlation analysis (DPCCA) coefficient, we investigate cross-correlations and net cross-correlations among five major world gold markets (London, New York, Shanghai, Tokyo, and Mumbai) at different time scales. We propose multiscale influence measures for examining the influence of individual markets on other markets and on the entire system. We find (i) that the cross-correlations, net cross-correlations, and net influences among the five gold markets vary across time scales, (ii) that the cross-market correlation between London and New York at each time scale is intense and inherent, meaning that the influence of other gold markets on the London-New York market is negligible, (iii) that the remaining cross-market correlations (i.e., those other than London-New York) are greatly affected by other gold markets, and (iv) that the London gold market significantly affects the other four gold markets and dominates the world-wide gold market. Our multiscale findings give market participants and market regulators new information on cross-market linkages in the world-wide gold market.
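For readers unfamiliar with the coefficient, the sketch below computes a simplified ρ_DCCA(n), the detrended covariance normalized by the two detrended variances at box size n, using non-overlapping boxes and synthetic series (standard implementations often use overlapping boxes; this is not the paper's exact pipeline):

```python
# Minimal sketch of the DCCA cross-correlation coefficient rho_DCCA(n).
import numpy as np

def detrended_residuals(x, n):
    """Integrate the demeaned series and remove a linear trend per box."""
    profile = np.cumsum(x - np.mean(x))
    n_boxes = len(profile) // n
    t = np.arange(n)
    resid = []
    for b in range(n_boxes):
        seg = profile[b * n:(b + 1) * n]
        coef = np.polyfit(t, seg, 1)
        resid.append(seg - np.polyval(coef, t))
    return np.concatenate(resid)

def rho_dcca(x, y, n):
    rx = detrended_residuals(x, n)
    ry = detrended_residuals(y, n)
    f2_xy = np.mean(rx * ry)             # detrended covariance
    f_x = np.sqrt(np.mean(rx ** 2))      # DFA fluctuation of x
    f_y = np.sqrt(np.mean(ry ** 2))      # DFA fluctuation of y
    return f2_xy / (f_x * f_y)

rng = np.random.default_rng(1)
z = np.cumsum(rng.normal(size=2000))     # common random-walk factor
x = z + rng.normal(size=2000)            # two series sharing that factor
y = z + rng.normal(size=2000)
for n in (10, 50, 250):                  # time scales
    print(f"rho_DCCA(n={n}) = {rho_dcca(x, y, n):+.2f}")
```

Scanning n is exactly what "at different time scales" means above: the coefficient can be weak at short scales and strong at long scales, or vice versa.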
NASA Astrophysics Data System (ADS)
Coughlan, Michael R.
2016-05-01
Forest managers are increasingly recognizing the value of disturbance-based land management techniques such as prescribed burning. Unauthorized, "arson" fires are common in the southeastern United States where a legacy of agrarian cultural heritage persists amidst an increasingly forest-dominated landscape. This paper reexamines unauthorized fire-setting in the state of Georgia, USA from a historical ecology perspective that aims to contribute to historically informed, disturbance-based land management. A space-time permutation analysis is employed to discriminate systematic, management-oriented unauthorized fires from more arbitrary or socially deviant fire-setting behaviors. This paper argues that statistically significant space-time clusters of unauthorized fire occurrence represent informal management regimes linked to the legacy of traditional land management practices. Recent scholarship has pointed out that traditional management has actively promoted sustainable resource use and, in some cases, enhanced biodiversity often through the use of fire. Despite broad-scale displacement of traditional management during the 20th century, informal management practices may locally circumvent more formal and regionally dominant management regimes. Space-time permutation analysis identified 29 statistically significant fire regimes for the state of Georgia. The identified regimes are classified by region and land cover type and their implications for historically informed disturbance-based resource management are discussed.
Threats to information security of real-time disease surveillance systems.
Henriksen, Eva; Johansen, Monika A; Baardsgaard, Anders; Bellika, Johan G
2009-01-01
This paper presents the main results from a qualitative risk assessment of information security aspects for a new real-time disease surveillance approach in general, and for the Snow surveillance system in particular. All possible security threats and acceptable solutions, and the implications these solutions had for the design of the system, were discussed. Approximately 30 threats were identified. None of these originally received an unacceptably high risk level, but two received a medium risk level, one of which was concluded to be unacceptable after further investigation. Of the remaining low-risk threats, some have severe consequences and thus require particular assessment. Since it is very important to identify and solve all security threats before real-time solutions can be used on a wide scale, additional investigations are needed.
Lifetime evaluation of large format CMOS mixed signal infrared devices
NASA Astrophysics Data System (ADS)
Linder, A.; Glines, Eddie
2015-09-01
New large-scale foundry processes continue to produce reliable products, and the devices built on them continue to use industry best practices to screen for failure mechanisms and validate long lifetimes. Failure-in-time (FIT) analysis, in conjunction with foundry qualification information, can be used to evaluate large-format device lifetimes; this analysis is a helpful tool when zero-failure life tests are typical. The reliability of the device is estimated by applying the failure rate to the use conditions. JEDEC publications continue to be the industry-accepted methods.
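When a life test ends with zero failures, the usual JEDEC-style estimate is a chi-squared upper bound on the failure rate. A minimal sketch, assuming that standard estimator is what is meant by failure-in-time analysis here (the device counts and acceleration factor are made up):

```python
# Upper-bound FIT (failures per 1e9 device-hours) from a zero-failure
# life test via the standard chi-squared estimator.
from scipy.stats import chi2

def fit_upper_bound(device_hours, failures=0, confidence=0.60,
                    acceleration=1.0):
    """Upper-bound failure rate in FIT at the given confidence level."""
    dof = 2 * (failures + 1)
    lam = chi2.ppf(confidence, dof) / (2.0 * device_hours * acceleration)
    return lam * 1.0e9

# Example: 77 devices for 1000 h at stress, acceleration factor 100.
print(f"{fit_upper_bound(77 * 1000.0, acceleration=100.0):.1f} FIT")
```

The point of the zero-failure bound is that "no failures observed" still yields a quantitative, confidence-qualified lifetime claim once device-hours and acceleration are accounted for.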
From coupled elementary units to the complexity of the glass transition.
Rehwald, Christian; Rubner, Oliver; Heuer, Andreas
2010-09-10
Supercooled liquids display fascinating properties upon cooling such as the emergence of dynamic length scales. Different models strongly vary with respect to the choice of the elementary subsystems as well as their mutual coupling. Here we show via computer simulations of a glass former that both ingredients can be identified via analysis of finite-size effects within the continuous-time random walk framework. The subsystems already contain complete information about thermodynamics and diffusivity, whereas the coupling determines structural relaxation and the emergence of dynamic length scales.
Not Fully Developed Turbulence in the Dow Jones Index
NASA Astrophysics Data System (ADS)
Trincado, Estrella; Vindel, Jose María
2013-08-01
The shape of the curves relating the scaling exponents of the structure functions to the order of these functions is shown to distinguish the Dow Jones index from other stock market indices. We conclude from the shape differences that the information-loss rate for the Dow Jones index is reduced at smaller time scales, while it grows for other indices. This anomaly is due to the construction of the index, in particular to its dependence on a single market parameter: price. Prices are subject to turbulence bursts, which act against full development of turbulence.
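The scaling exponents in question come from structure functions S_q(τ) = ⟨|x(t+τ) - x(t)|^q⟩ ∼ τ^ζ(q); the sketch below estimates ζ(q) by log-log regression on a synthetic random walk (illustrative only, not index data; for a Gaussian walk ζ(q) = q/2, and curvature of ζ(q) versus q signals the intermittency discussed above):

```python
# Estimate structure-function scaling exponents zeta(q) from a series.
import numpy as np

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=20000))        # stand-in log-price series
taus = np.array([1, 2, 4, 8, 16, 32, 64])    # time scales

def zeta(q):
    S = [np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(S), 1)
    return slope

for q in (1, 2, 3, 4):
    print(f"q={q}: zeta ≈ {zeta(q):.2f} (random-walk expectation {q / 2})")
```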
Entanglement renormalization, quantum error correction, and bulk causality
NASA Astrophysics Data System (ADS)
Kim, Isaac H.; Kastoryano, Michael J.
2017-04-01
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of the holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
Quantitative modeling of soil genesis processes
NASA Technical Reports Server (NTRS)
Levine, E. R.; Knox, R. G.; Kerber, A. G.
1992-01-01
For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.
Perez, Samara; Shapiro, Gilla K; Tatar, Ovidiu; Joyal-Desmarais, Keven; Rosberger, Zeev
2016-10-01
Parents' human papillomavirus (HPV) vaccination decision-making is strongly influenced by their attitudes and beliefs toward vaccination. To date, psychometrically evaluated HPV vaccination attitudes scales have been narrow in their range of measured beliefs and often limited to attitudes surrounding female HPV vaccination. The study aimed to develop a comprehensive, validated and reliable HPV vaccination attitudes and beliefs scale among parents of boys. Data were collected from Canadian parents of 9- to 16-year-old boys using an online questionnaire completed in 2 waves with a 7-month interval. Based on existing vaccination attitudes scales, a set of 61 attitude and belief items was developed. Exploratory and confirmatory factor analyses were conducted. Internal consistency was evaluated with Cronbach's α and stability over time with intraclass correlations. The HPV Attitudes and Beliefs Scale (HABS) was informed by 3117 responses at time 1 and 1427 at time 2. The HABS contains 46 items organized in 9 factors: Benefits (10 items), Threat (3 items), Influence (8 items), Harms (6 items), Risk (3 items), Affordability (3 items), Communication (5 items), Accessibility (4 items), and General Vaccination Attitudes (4 items). Model fit indices at time 2 were: χ²/df = 3.13, standardized root mean square residual = 0.056, root mean square error of approximation (confidence interval) = 0.039 (0.037-0.04), comparative fit index = 0.962 and Tucker-Lewis index = 0.957. Cronbach's αs were greater than 0.8 and intraclass correlations of factors were greater than 0.6. The HABS is the first psychometrically tested scale of HPV attitudes and beliefs among parents of boys available for use in English and French. Further testing among parents of girls and young adults and assessing predictive validity are warranted.
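The internal-consistency statistic reported above is straightforward to compute; a minimal sketch with synthetic responses (illustrative only, not the HABS data):

```python
# Cronbach's alpha for one factor's items: alpha = k/(k-1) * (1 - sum of
# item variances / variance of the summed score).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 1))                    # shared attitude
responses = latent + 0.7 * rng.normal(size=(300, 4))  # 4 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")     # should exceed 0.8
```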
A multi-scale approach to designing therapeutics for tuberculosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje
Approximately one third of the world's population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.
A One-Dimensional Global-Scaling Erosive Burning Model Informed by Blowing Wall Turbulence
NASA Technical Reports Server (NTRS)
Kibbey, Timothy P.
2014-01-01
A derivation of turbulent flow parameters, combined with data from erosive burning test motors and blowing-wall tests, results in erosive burning model candidates useful in one-dimensional internal ballistics analysis and capable of scaling across wide ranges of motor size. The real-time burn rate data come from three test campaigns of subscale segmented solid rocket motors tested at two facilities. The flow theory admits the important effect of the blowing wall on the turbulent friction coefficient, using blowing-wall data to determine the blowing-wall friction coefficient. The erosive burning behavior of full-scale motors is now predicted more closely than with other recent models.
Christensen, Bruce K; Girard, Todd A; Bagby, R Michael
2007-06-01
An eight-subtest short form (SF8) of the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III), maintaining equal representation of each index factor, was developed for use with psychiatric populations. Data were collected from a mixed inpatient/outpatient sample (99 men and 101 women) referred for neuropsychological assessment. Psychometric analyses revealed an optimal SF8 comprising Vocabulary, Similarities, Arithmetic, Digit Span, Picture Completion, Matrix Reasoning, Digit Symbol Coding, and Symbol Search, scored by linear scaling. Expanding on previous short forms, the current SF8 maximizes the breadth of information and reduces administration time while maintaining the original WAIS-III factor structure.
2011-04-30
Can lexical link analysis (LLA) identify internal constructs useful for management? This year, we started to develop LLA from a demonstration to an operational capability and facilitate a
Facilitating the exploitation of ERTS imagery using snow enhancement techniques
NASA Technical Reports Server (NTRS)
Wobber, F. J. (Principal Investigator); Martin, K. R.; Sheffield, C.; Russell, O.; Amato, R. V.
1973-01-01
The author has identified the following significant results. EarthSat has established an effective mail-based method for obtaining timely ground truth (snow depth) information over an extensive area. The method is both efficient and inexpensive compared with the cost of a similarly scaled direct field-checking effort. Additional geological information has been acquired that is not shown on existing geological maps of the area. Excellent-quality snow-free ERTS-1 transparencies of the test areas have been received and are being analyzed.
Demonstration of Wavelet Techniques in the Spectral Analysis of Bypass Transition Data
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.; Sohn, Ki-Hyeon
1997-01-01
A number of wavelet-based techniques for the analysis of experimental data are developed and illustrated. A multiscale analysis based on the Mexican hat wavelet is demonstrated as a tool for acquiring physical and quantitative information not obtainable by standard signal analysis methods. Experimental data for the analysis came from simultaneous hot-wire velocity traces in a bypass transition of the boundary layer on a heated flat plate. A pair of traces (two components of velocity) at one location was excerpted. A number of ensemble and conditional statistics related to dominant time scales for energy and momentum transport were calculated. The analysis revealed a lack of energy-dominant time scales inside turbulent spots but identified transport-dominant scales inside spots that account for the largest part of the Reynolds stress. Momentum transport was much more intermittent than were energetic fluctuations. This work is the first step in a continuing study of the spatial evolution of these scale-related statistics, the goal being to apply the multiscale analysis results to improve the modeling of transitional and turbulent industrial flows.
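A minimal sketch of the kind of Mexican-hat multiscale analysis described above, applied to a synthetic trace (the signal, width range, and dominant-scale readout are illustrative assumptions, not the study's data or pipeline):

```python
# Mexican-hat (Ricker) wavelet multiscale analysis by direct convolution.
import numpy as np

def ricker(points, a):
    """Mexican-hat wavelet of width parameter a, sampled at `points` points."""
    t = np.arange(points) - (points - 1) / 2.0
    arg = (t / a) ** 2
    return (2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)) * (1 - arg) * np.exp(-arg / 2)

def cwt_ricker(signal, widths):
    """Wavelet coefficients at each width, via same-length convolution."""
    out = np.empty((len(widths), len(signal)))
    for i, a in enumerate(widths):
        w = ricker(min(10 * int(a), len(signal)), a)
        out[i] = np.convolve(signal, w, mode="same")
    return out

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 2048)
trace = np.sin(2 * np.pi * 40 * t) * (t > 0.5) + 0.2 * rng.normal(size=t.size)
coeffs = cwt_ricker(trace, widths=np.arange(1, 32))

# Energy-dominant scale at each instant: argmax over widths of |coeff|^2.
dominant = np.argmax(coeffs ** 2, axis=0) + 1
print("median dominant width before/after t=0.5:",
      np.median(dominant[t < 0.5]), np.median(dominant[t > 0.5]))
```

Conditional statistics like the spot-versus-non-spot comparisons above amount to exactly this: slicing such scale maps by an event indicator and comparing the dominant scales.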
Comprehensive analysis of information dissemination in disasters
NASA Astrophysics Data System (ADS)
Zhang, N.; Huang, H.; Su, Boni
2016-11-01
China is a country that experiences a large number of disasters. The number of deaths caused by large-scale disasters and accidents in the past 10 years is around 900,000. More than 92.8 percent of these deaths could have been avoided had an effective pre-warning system been deployed. Knowledge of the information dissemination characteristics of different information media, taking into consideration governmental assistance (information published by a government) during disasters in urban areas, plays a critical role in increasing the time available to respond and reducing the number of deaths and economic losses. In this paper we develop a comprehensive information dissemination model to optimize the efficiency of pre-warning mechanisms. This model can also be used to disseminate information to evacuees making real-time evacuation plans. We analyzed individual information dissemination models for pre-warning in disasters, considering 14 media: short message service (SMS), phone, television, radio, news portals, Wechat, microblogs, email, newspapers, loudspeaker vehicles, loudspeakers, oral communication, and passive information acquisition via visual and auditory senses. Since governmental assistance is very useful in a disaster, we calculated the sensitivity of the governmental assistance ratio. The results provide useful references for information dissemination during disasters in urban areas.
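As a toy illustration of how such a model behaves (not the paper's 14-media model; the media list, reach rates, and the treatment of governmental assistance as a rate multiplier on official channels are all invented for this sketch):

```python
# Toy multi-media pre-warning coverage: treat each medium as an independent
# Poisson exposure process and boost official channels by a government
# assistance factor.
import numpy as np

# Hypothetical per-minute reach rates for a few media.
rates = {"SMS": 0.020, "TV": 0.010, "radio": 0.006,
         "social media": 0.015, "word of mouth": 0.004}
official = {"SMS", "TV", "radio"}

def informed_fraction(minutes, gov_assist=1.0):
    """Fraction of residents informed after `minutes` of dissemination."""
    total_rate = sum(r * (gov_assist if m in official else 1.0)
                     for m, r in rates.items())
    return 1.0 - np.exp(-total_rate * minutes)

for g in (1.0, 2.0):   # doubling official-channel output
    print(f"gov x{g:.0f}: {informed_fraction(30, g):.0%} informed in 30 min")
```

Sweeping the assistance factor, as in the sensitivity analysis above, shows how much coverage an extra unit of official dissemination buys at a given warning lead time.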
Clinical and experimental characteristics of "hypothetically psychosis prone" college students.
Cadenhead, K; Kumar, C; Braff, D
1996-01-01
The study of individuals at the boundaries of schizophrenia has historically involved genetic relatives of schizophrenia patients or individuals who meet criteria for schizotypal personality disorder (SPD). Recently, many investigators have turned to the use of psychometric scales, developed to measure psychotic traits or vulnerability to developing schizophrenia, to screen large populations of college students in order to identify individuals who are "psychosis prone" or "schizotypal". To help answer the question of whether students identified with psychometric scales are indeed psychosis prone, we screened 1115 college students with the Perceptual Aberration/Magical Ideation (PerMag) and Physical Anhedonia (PhysAn) Scales. Individuals who scored 2 standard deviations (SD) above the mean on the scales were selected as experimental subjects (N = 13 PerMag, N = 10 PhysAn), and a subpopulation of matched subjects who scored less than 0.5 SD above the mean were selected as control subjects (N = 24). All subjects then received a full battery of tests, including structured clinical interviews, the MMPI, and psychophysiological measures of information processing, including prepulse inhibition and habituation of the human startle response, visual backward masking and reaction time measures. The results suggest that the PerMag scale, but not the PhysAn scale, identifies individuals with some psychotic, affective and anxiety symptoms when compared to the controls. Neither scale predicts a diagnosis of schizotypal personality disorder or deficits on measures of information processing that characterize schizophrenia or schizotypal personality disordered patients.
Advanced Algorithms for Local Routing Strategy on Complex Networks
Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K.; Dong, Chuanfei; Miao, Lixin; Wang, Binghong
2016-01-01
Despite the significant improvement on network performance provided by global routing strategies, their applications are still limited to small-scale networks, due to the need for acquiring global information of the network which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than that of global routing strategies. In view of this, three algorithms are proposed and a thorough investigation is conducted in this paper. These algorithms include a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases by over ten-fold and the average transmission time 〈T〉 decreases by 70–90 percent, both of which are key physical quantities to assess the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategy enjoys great superiority over global routing strategy not only in terms of the reduction of computational expense, but also in terms of the flexibility of implementation, especially for large-scale networks.
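A minimal sketch of the first of the three ideas, node duplication avoidance, in a bare-bones local router (a toy on a networkx graph, not the paper's traffic model; queue dynamics and the other two algorithms are omitted):

```python
# Local routing with optional node-duplication avoidance: each hop uses
# only neighbor information, never a global path computation.
import random
import networkx as nx

def route_packet(G, source, target, max_hops=10_000, avoid_revisits=True):
    """Hop to random neighbors until the target is reached; optionally
    prefer neighbors not yet visited (falling back when trapped)."""
    current, visited, hops = source, {source}, 0
    while current != target and hops < max_hops:
        neighbors = list(G.neighbors(current))
        if avoid_revisits:
            fresh = [n for n in neighbors if n not in visited]
            neighbors = fresh or neighbors
        current = random.choice(neighbors)
        visited.add(current)
        hops += 1
    return hops

random.seed(5)
G = nx.barabasi_albert_graph(500, 3, seed=5)
pairs = [tuple(random.sample(range(500), 2)) for _ in range(200)]
for avoid in (False, True):
    mean_T = sum(route_packet(G, s, t, avoid_revisits=avoid)
                 for s, t in pairs) / len(pairs)
    print(f"avoid_revisits={avoid}: mean transmission time ≈ {mean_T:.0f}")
```

Even this crude version shows the mechanism: refusing to revisit nodes removes wasted hops, which is one route to the reduced 〈T〉 reported above.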
NASA Astrophysics Data System (ADS)
Justice, C. J.
2015-12-01
80% of Tanzania's population is involved in the agriculture sector. Despite this national dependence, agricultural reporting is minimal and monitoring efforts are in their infancy. The cropland mask developed through this study provides the framework for agricultural monitoring by informing analysis of crop conditions, dispersion, and intensity at a national scale. Tanzania is dominated by smallholder agricultural systems with an average field size of less than one hectare (Sarris et al, 2006). At this field scale, previous classifications of agricultural land in Tanzania using coarse-resolution MODIS data are insufficient to inform a working monitoring system. The nation-wide cropland mask in this study was developed using composited Landsat tiles from a 2010-2013 time series. Decision tree classification methods were used, with representative training areas collected for agriculture and non-agriculture and appropriate indices chosen to separate these classes (Hansen et al, 2013). Validation was done using a random sample and high-resolution satellite images to compare agriculture and non-agriculture samples from the study area. The techniques used in this study were successful and have the potential to be adapted for other countries, allowing targeted monitoring efforts to improve food security, inform market prices, and support agricultural policy.
The Preservation of Cued Recall in the Acute Mentally Fatigued State: A Randomised Crossover Study.
Flindall, Ian Richard; Leff, Daniel Richard; Pucks, Neysan; Sugden, Colin; Darzi, Ara
2016-01-01
The objective of this study is to investigate the impact of acute mental fatigue on the recall of clinical information in the non-sleep-deprived state. Acute mental fatigue in the non-sleep-deprived subject is rarely studied in the medical workforce. Patient handover has been highlighted as an area of high risk especially in fatigued subjects. This study evaluates the deterioration in recall of clinical information over 2 h with cognitively demanding work in non-sleep-deprived subjects. A randomised crossover study involving twenty medical students assessed free (presentation) and cued (MCQ) recall of clinical case histories at 0 and 2 h under low and high cognitive load using the N-Back task. Acute mental fatigue was assessed through the Visual Analogue Scale, Stanford Scale and NASA-TLX Mental Workload Rating Scale. Free recall is significantly impaired by increased cognitive load (p < 0.05) with subjects demonstrating perceived mental fatigue during the high cognitive load assessment. There was no significant difference in the amount of information retrieved by cued recall under high and low cognitive load conditions (p = 1). This study demonstrates the loss of clinical information over a short time period involving a mentally fatiguing, high cognitive load task. Free recall for the handover of clinical information is unreliable. Memory cues maintain recall of clinical information. This study provides evidence towards the requirement for standardisation of a structured patient handover. The use of memory cues (involving recognition memory and cued recall methodology) would be beneficial in a handover checklist to aid recall of clinical information and supports evidence for their adoption into clinical practice.
The effect of orthostatic stress on multiscale entropy of heart rate and blood pressure.
Turianikova, Zuzana; Javorka, Kamil; Baumert, Mathias; Calkovska, Andrea; Javorka, Michal
2011-09-01
Cardiovascular control acts over multiple time scales, which introduces a significant amount of complexity to heart rate and blood pressure time series. Multiscale entropy (MSE) analysis has been developed to quantify the complexity of a time series over multiple time scales. In previous studies, MSE analyses identified impaired cardiovascular control and increased cardiovascular risk in various pathological conditions. Despite the increasing acceptance of the MSE technique in clinical research, information underpinning the involvement of the autonomic nervous system in the MSE of heart rate and blood pressure is lacking. The objective of this study is to investigate the effect of orthostatic challenge on the MSE of heart rate and blood pressure variability (HRV, BPV) and the correlation between MSE (complexity measures) and traditional linear (time and frequency domain) measures. MSE analysis of HRV and BPV was performed in 28 healthy young subjects on 1000 consecutive heart beats in the supine and standing positions. Sample entropy values were assessed on scales of 1-10. We found that MSE of heart rate and blood pressure signals is sensitive to changes in autonomic balance caused by postural change from the supine to the standing position. The effect of orthostatic challenge on heart rate and blood pressure complexity depended on the time scale under investigation. Entropy values did not correlate with the mean values of heart rate and blood pressure and showed only weak correlations with linear HRV and BPV measures. In conclusion, the MSE analysis of heart rate and blood pressure provides a sensitive tool to detect changes in autonomic balance as induced by postural change.
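A minimal sketch of the MSE computation itself, coarse-graining followed by sample entropy with the common m = 2, r = 0.15×SD settings (synthetic data; clinical pipelines add artifact handling and preprocessing):

```python
# Multiscale entropy: sample entropy of coarse-grained series, scales 1-10.
import numpy as np

def sample_entropy(x, m, r):
    def match_fraction(mm):
        # All length-mm templates; Chebyshev distance between every pair.
        tpl = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        dist = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        n = len(tpl)
        return (np.sum(dist <= r) - n) / (n * (n - 1))  # drop self-matches
    return -np.log(match_fraction(m + 1) / match_fraction(m))

def multiscale_entropy(x, scales, m=2):
    x = np.asarray(x, dtype=float)
    r = 0.15 * x.std()                  # tolerance from the original series
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)   # coarse-graining
        out.append(sample_entropy(coarse, m, r))
    return out

rng = np.random.default_rng(6)
rr = 0.8 + 0.01 * np.cumsum(rng.normal(size=1000))      # toy RR intervals
for s, e in zip(range(1, 11), multiscale_entropy(rr, range(1, 11))):
    print(f"scale {s}: SampEn = {e:.2f}")
```

The scale-dependence reported above corresponds to the shape of this SampEn-versus-scale curve changing between the supine and standing recordings.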
Sensitivity to timing and order in human visual cortex.
Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2015-03-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds.
The neural processing of taste
Lemon, Christian H; Katz, Donald B
2007-01-01
Although there have been many recent advances in the field of gustatory neurobiology, our knowledge of how the nervous system is organized to process information about taste is still far from complete. Many studies on this topic have focused on understanding how gustatory neural circuits are spatially organized to represent information about taste quality (e.g., "sweet", "salty", "bitter", etc.). Arguments pertaining to this issue have largely centered on whether taste is carried by dedicated neural channels or a pattern of activity across a neural population. But there is now mounting evidence that the timing of neural events may also importantly contribute to the representation of taste. In this review, we attempt to summarize recent findings in the field that pertain to these issues. Both space and time are variables likely related to the mechanism of the gustatory neural code: information about taste appears to reside in spatial and temporal patterns of activation in gustatory neurons. What is more, the organization of the taste network in the brain would suggest that the parameters of space and time extend to the neural processing of gustatory information on a much grander scale.
Trace contaminant determination in fish scale by laser-ablation technique
NASA Astrophysics Data System (ADS)
Lee, Ida; Coutant, C. C.; Arakawa, E. T.
1993-10-01
Laser ablation on the rings of fish scales has been used to analyze the historical accumulation of polychlorinated biphenyls (PCBs) in striped bass in the Watts Bar Reservoir. Rings on a fish scale grow in a pattern that forms a record of the fish's chemical intake. In conjunction with the migration patterns of fish monitored by ecologists, relative PCB concentrations in the seasonal rings of fish scales can be used to study the PCB distribution in the reservoir. In this study, a tightly focused laser beam from a XeCl excimer laser was used to ablate and ionize a small portion of a fish scale placed in a vacuum chamber. The ions were identified and quantified by a time-of-flight mass spectrometer. Studies of this type can provide valuable information for the Department of Energy (DOE) off-site clean-up efforts as well as identifying the impacts of other sources on local aquatic populations.
The Role of Social Media in the Civic Co-Management of Urban Infrastructure Resilience
NASA Astrophysics Data System (ADS)
Turpin, E.; Holderness, T.; Wickramasuriya, R.
2014-12-01
As cities evolve to become increasingly complex systems of people and interconnected infrastructure, the impacts of extreme events and long-term climatological change are significantly heightened (Walsh et al. 2011). Understanding the resilience of urban systems and the impacts of infrastructure failure is therefore key to understanding the adaptability of cities to climate change (Rosenzweig 2011). Such information is particularly critical in developing nations, which are predicted to bear the brunt of climate change (Douglas et al. 2008) but often lack the resources and data required to make informed decisions regarding infrastructure and societal resilience (e.g., Paar & Rekittke 2011). We propose that mobile social media in a people-as-sensors paradigm provides a means of monitoring the response of a city to cascading infrastructure failures induced by extreme weather events. Such an approach is welcomed in developing nations, where crowd-sourced data are increasingly being used as an alternative to missing or incomplete formal data sources to help solve infrastructure challenges (Holderness 2014). In this paper we present PetaJakarta.org as a case study that harnesses the power of social media to gather, sort and display information about flooding for residents of Jakarta, Indonesia in real time, recuperating the failures of infrastructure and monitoring systems through a web of social media connections. Our GeoSocial Intelligence Framework enables the capture and comprehension of significant time-critical information to support decision-making, and serves as a means of transparent communication, while maintaining user privacy, to enable civic co-management processes that aid city-scale climate adaptation and resilience. PetaJakarta empowers community residents to collect and disseminate situational information about flooding, via the social media network Twitter, to provide city-scale decision support for Jakarta's Emergency Management Team, and a neighbourhood-scale public information service for individuals and communities to alert them of nearby flood events. References: Douglas, I., et al. (2008) Environment & Urbanization; Holderness, T. (2014) IEEE Technology & Society Magazine; Paar, P. & Rekittke, J. (2011) Future Internet; Rosenzweig, C. (2011) Scientific American; Walsh, C. L., et al. (2011) Urban Design & Planning.
NASA Astrophysics Data System (ADS)
Wright, D. J.
2013-12-01
In the early 1990s the author came of age as the technology driving the geographic information system (GIS) was beginning to successfully 'handle' geospatial data at a range of scales and formats, and a wide array of information technology products emerged from an expanding GIS industry. However, that small community struggled to reflect the diverse research efforts at play in understanding the deeper issues surrounding geospatial data, and the impediments to the effective use of that data. It was from this need that geographic information science (GIScience) arose, to ensure in part that GIS did not fall into the trap of being a technology in search of applications, a one-time, one-off, non-intellectual 'bag of tricks' with no substantive theory underpinning it, suitable only for a static period of time (e.g., Goodchild, 1992). The community has since debated the issue of "tool versus science", which has also played a role in defining GIS as an actual profession. In turn, GIS has contributed to "methodological versus substantive" questions in science, leading to understandings of how the Earth works versus how the Earth should look. In the author's experience, the multidimensional structuring and scaling of data, with integrative and innovative approaches to analyzing, modeling, and developing extensive spatial data from selected places on land and at sea, have revealed how theory and application are in no way mutually exclusive; it may often be application that advances theory, rather than vice versa. Increasingly, both the system and the science of geographic information have welcomed strong collaborations among computer scientists, information scientists, and domain scientists to solve complex scientific questions. As such, they have paralleled the emergence and acceptance of "data science." Now that we are squarely in an era of regional- to global-scale observation and simulation of the Earth, producing data that are too big, move too fast, and do not fit the structures and processing capacity of conventional database systems, the author reflects on the potential of the GIS/GIScience world to contribute to the training and professional advancement of data science.
NASA Technical Reports Server (NTRS)
Bivolaru, Daniel (Inventor); Cutler, Andrew D. (Inventor); Danehy, Paul M. (Inventor)
2015-01-01
A system is described that simultaneously measures the translational temperature, bulk velocity, and density in gases by collecting, referencing, and analyzing nanosecond time-scale Rayleigh-scattered light from molecules. A narrow-band pulsed laser source is used to probe two widely separated measurement locations, one of which is used as a reference. The elastically scattered photons containing information from both measurement locations are collected at the same time and analyzed spectrally using a planar Fabry-Perot interferometer. A practical means is provided of referencing the velocity measurement to the laser frequency, and the density and temperature measurements to the information from the reference location maintained at constant properties.