Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-17
... comparable to the obligations proposed in this filing: [table excerpt; columns: % Time, % Series, Classes. Market-Makers, CBOE (current rule): 60% of series, all classes collectively. PMMs, CBOE (current rule): 99% of the time, 90% of series, class-by-class] ...
miniSEED: The Backbone Data Format for Seismological Time Series
NASA Astrophysics Data System (ADS)
Ahern, T. K.; Benson, R. B.; Trabant, C. M.
2017-12-01
In 1987, the International Federation of Digital Seismograph Networks (FDSN) adopted the Standard for the Exchange of Earthquake Data (SEED) format to be used for data archiving and exchange of seismological time series data. Since that time, the format has evolved to accommodate new capabilities and features. For example, a notable change in 1992 allowed the format, which includes both the comprehensive metadata and the time series samples, to be used in two additional forms: 1) a container for metadata only, called "dataless SEED", and 2) a stand-alone structure for time series, called "miniSEED". While specifically designed for seismological data and related metadata, this format has proven useful for a wide variety of geophysical time series data. Many FDSN data centers now store temperature, pressure, infrasound, tilt and other time series measurements in this internationally used format. Since April 2016, members of the FDSN have been in discussions to design a next generation miniSEED format to accommodate current and future needs, to further generalize the format, and to address a number of historical problems or limitations. We believe the correct approach is to simplify the header, allow for arbitrary header additions, expand the current identifiers, and allow for anticipated future identifiers which are currently unknown. We also believe the primary goal of the format is efficient archiving, selection and exchange of time series data. By focusing on these goals we avoid trying to generalize the format too broadly into specialized areas such as efficient, low-latency delivery, or including unbounded non-time series data. Our presentation will provide an overview of this format and highlight its most valuable characteristics for time series data from any geophysical domain or beyond.
Towards a study of synoptic-scale variability of the California current system
NASA Technical Reports Server (NTRS)
1985-01-01
A West Coast satellite time series advisory group was established to consider the scientific rationale for the development of complete west coast time series of imagery of sea surface temperature (as derived by the Advanced Very High Resolution Radiometer on the NOAA polar orbiter) and near-surface phytoplankton pigment concentrations (as derived by the Coastal Zone Color Scanner on Nimbus 7). The scientific and data processing requirements for such time series are also considered. It is determined that such time series are essential if a number of scientific questions regarding the synoptic-scale dynamics of the California Current System are to be addressed. These questions concern both biological and physical processes.
Voltage and Current Clamp Transients with Membrane Dielectric Loss
Fitzhugh, R.; Cole, K. S.
1973-01-01
Transient responses of a space-clamped squid axon membrane to step changes of voltage or current are often approximated by exponential functions of time, corresponding to a series resistance and a membrane capacity of 1.0 μF/cm². Curtis and Cole (1938, J. Gen. Physiol. 21:757) found, however, that the membrane had a constant phase angle impedance z = z₁(jωτ)^(−α), with a mean α = 0.85. (α = 1.0 for an ideal capacitor; α < 1.0 may represent dielectric loss.) This result is supported by more recently published experimental data. For comparison with experiments, we have computed functions expressing voltage and current transients with constant phase angle capacitance, a parallel leakage conductance, and a series resistance, at nine values of α from 0.5 to 1.0. A series in powers of t^α provided a good approximation for short times; one in powers of t^(−α), for long times; for intermediate times, a rational approximation matching both series for a finite number of terms was used. These computations may help in determining experimental series resistances and parallel leakage conductances from membrane voltage or current clamp data. PMID:4754194
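A minimal numerical sketch of the constant-phase-angle impedance named in the abstract, z = z₁(jωτ)^(−α): its phase is −απ/2 at every frequency, which is what "constant phase angle" means. The values of z₁, τ, and α below are illustrative, not taken from the paper.

```python
import numpy as np

# Constant-phase-angle impedance z = z1 * (j*w*tau)^(-alpha).
# For alpha = 0.85 (the paper's mean value) the phase should be
# -0.85 * pi/2 at all frequencies.
def cpa_impedance(omega, z1=1.0, tau=1.0, alpha=0.85):
    return z1 * (1j * omega * tau) ** (-alpha)

omega = np.logspace(-2, 2, 5)          # a few frequencies over 4 decades
z = cpa_impedance(omega)
phases = np.angle(z)                    # identical at every frequency
print(np.allclose(phases, -0.85 * np.pi / 2))  # True
```

An ideal capacitor corresponds to α = 1 (phase −π/2); α < 1 tilts the phase, which is the signature of dielectric loss discussed in the abstract.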
More on Time Series Designs: A Reanalysis of Mayer and Kozlow's Data.
ERIC Educational Resources Information Center
Willson, Victor L.
1982-01-01
Differentiating between time-series design and time-series analysis, examines design considerations and reanalyzes data previously reported by Mayer and Kozlow in this journal. The current analysis supports the analysis performed by Mayer and Kozlow but puts the results on a somewhat firmer statistical footing. (Author/JN)
75 FR 47284 - Secretary's Priorities for Discretionary Grant Programs
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-05
... the most currently available data. Interrupted time series design [4] means a type of quasi... findings. [4] A single subject or single case design is an adaptation of an interrupted time series design...), interrupted time series designs (as defined in this notice), or regression discontinuity designs (as defined...
Burau, J.R.; Simpson, M.R.; Cheng, R.T.
1993-01-01
Water-velocity profiles were collected at the west end of Carquinez Strait, San Francisco Bay, California, from March to November 1988, using an acoustic Doppler current profiler (ADCP). These data are a series of 10-minute-averaged water velocities collected at 1-meter vertical intervals (bins) in the 16.8-meter water column, beginning 2.1 meters above the estuary bed. To examine the vertical structure of the horizontal water velocities, the data are separated into individual time-series by bin and then used for time-series plots, harmonic analysis, and for input to digital filters. Three-dimensional graphic renditions of the filtered data are also used in the analysis. Harmonic analysis of the time-series data from each bin indicates that the dominant (12.42 hour or M2) partial tidal currents reverse direction near the bottom, on average, 20 minutes sooner than M2 partial tidal currents near the surface. Residual (nontidal) currents derived from the filtered data indicate that currents near the bottom are predominantly up-estuary during the neap tides and down-estuary during the more energetic spring tides.
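The harmonic analysis mentioned above can be sketched as a least-squares fit of the M2 constituent (period 12.42 h) to the velocity series from one bin. The synthetic amplitude, phase, and residual current below are illustrative, not values from the Carquinez Strait data.

```python
import numpy as np

# Fit mean flow plus one M2 tidal constituent by least squares and
# recover the constituent's amplitude and phase from the cos/sin terms.
M2_PERIOD_H = 12.42
t = np.arange(0, 30 * 24, 1 / 6.0)        # 30 days of 10-minute samples, in hours
omega = 2 * np.pi / M2_PERIOD_H
true_amp, true_phase = 0.8, 0.6           # assumed m/s and radians
u = 0.05 + true_amp * np.cos(omega * t - true_phase)  # residual + M2 current

# Design matrix: [mean, cos(M2), sin(M2)]
A = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
mean, c, s = np.linalg.lstsq(A, u, rcond=None)[0]
amp, phase = np.hypot(c, s), np.arctan2(s, c)
print(round(amp, 3), round(phase, 3))     # recovers ~0.8 and ~0.6
```

Comparing the fitted phase between the bottom and surface bins is what yields the 20-minute lead reported in the abstract.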
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-17
... market in 60% of the non-adjusted option series [4] of each registered class that have a time to... [table excerpt; columns: % Time, % Series, Classes. C2 (current rule): 99% of the time, 60% of series, class-by-class. NOM: 90% of a trading day...] required to provide continuous quotes for the same amount of time in the same percentage of series as...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-08
.... (``Bloomberg''), FactSet Research Systems, Inc. (``FactSet'') and Thomson Reuters (``Reuters''). Real time data... reasonably related to the current value of the underlying index at the time such series are first opened for... to which such series relates at or about the time such series of options is first opened for trading...
NASA Astrophysics Data System (ADS)
Smith, John N.; Smethie, William M.; Yashayev, Igor; Curry, Ruth; Azetsu-Scott, Kumiko
2016-11-01
Time series measurements of the nuclear fuel reprocessing tracer 129I and the gas ventilation tracer CFC-11 were undertaken on the AR7W section in the Labrador Sea (1997-2014) and on Line W (2004-2014), located over the US continental slope off Cape Cod, to determine advection and mixing time scales for the transport of Denmark Strait Overflow Water (DSOW) within the Deep Western Boundary Current (DWBC). Tracer measurements were also conducted in 2010 over the continental rise southeast of Bermuda to intercept the equatorward flow of DSOW by interior pathways. The Labrador Sea tracer and hydrographic time series data were used as input functions in a boundary current model that employs transit time distributions to simulate the effects of mixing and advection on downstream tracer distributions. Model simulations of tracer levels in the boundary current core and adjacent interior (shoulder) region with which mixing occurs were compared with the Line W time series measurements to determine boundary current model parameters. These results indicate that DSOW is transported from the Labrador Sea to Line W via the DWBC on a time scale of 5-6 years corresponding to a mean flow velocity of 2.7 cm/s while mixing between the core and interior regions occurs with a time constant of 2.6 years. A tracer section over the southern flank of the Bermuda rise indicates that the flow of DSOW that separated from the DWBC had undergone transport through interior pathways on a time scale of 9 years with a mixing time constant of 4 years.
NASA Astrophysics Data System (ADS)
Saturnino, Diana; Olsen, Nils; Finlay, Chris
2017-04-01
High-precision magnetic measurements collected by satellites such as Swarm or CHAMP, flying at altitudes between 300 and 800 km, allow for improved geomagnetic field modelling. An accurate description of the internal (core and crust) field must account for contributions from other sources, such as the ionosphere and magnetosphere. However, the description of the rapidly changing external field contributions, particularly during the quiet times from which the data are selected, constitutes a major challenge in the construction of such models. Our study attempts to obtain improved knowledge of ionospheric field contributions during quiet-time conditions, in particular during night local times. We use two different datasets: ground magnetic observatory time series (obtained below the ionospheric E-layer currents), and Swarm satellite measurements acquired above these currents. First, we remove from the data estimates of the core, lithospheric and large-scale magnetospheric magnetic contributions as given by the CHAOS-6 model, to obtain corrected time series. Then, we focus on the differences of the corrected time series: for a pair of ground magnetic observatories, we determine the time series of the difference, and similarly we determine time series differences at satellite altitude, given by the difference between the Swarm Alpha and Charlie satellites taken in the vicinity of the ground observatory locations. The resulting difference time series are analysed with regard to their temporal and spatial variations, with emphasis on measurements during night local times.
NASA Astrophysics Data System (ADS)
Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.
2017-08-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
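The dimensionality-reduction idea behind the DWT step can be sketched with a hand-rolled Haar transform: a rainfall series is summarized by a few coarse coefficients that can later be inverted. This is a generic illustration, not the authors' code, and the toy series is invented.

```python
import numpy as np

# One level of the Haar DWT: pairwise (orthonormal) averages and differences.
def haar_step(x):
    x = x.reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def haar_dwt(x, levels):
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

rain = np.array([0, 0, 5, 3, 0, 1, 8, 2], dtype=float)  # toy daily rainfall
approx, details = haar_dwt(rain, levels=2)
print(approx.size)  # 2 coarse coefficients summarize 8 samples
```

Because the transform is orthonormal, energy is preserved across scales, so inference can be done on the (far fewer) coarse coefficients while retaining a faithful, invertible description of the series.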
Fulcher, Ben D; Jones, Nick S
2017-11-22
Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data.
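hctsa itself computes over 7,700 features; the core idea of mapping a raw time series to a fixed-length vector of interpretable features can be sketched in a few lines. The four features below are our own minimal choices, not part of the hctsa library.

```python
import numpy as np

# Map a time series to a small, interpretable feature vector
# (a toy stand-in for the thousands of features hctsa computes).
def features(x):
    x = np.asarray(x, dtype=float)
    diffs = np.diff(x)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "lag1_autocorr": np.corrcoef(x[:-1], x[1:])[0, 1],
        "mean_abs_change": np.abs(diffs).mean(),
    }

noise = features(np.random.default_rng(0).standard_normal(1000))
trend = features(np.arange(1000.0))
# A strongly trending series has far higher lag-1 autocorrelation than noise:
print(noise["lag1_autocorr"] < 0.2 < trend["lag1_autocorr"])  # True
```

Feature vectors like this are what downstream classifiers consume when relating, say, movement dynamics to genotype.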
Issues with the "Time for English" Textbook Series at Egyptian Primary Schools: An Evaluative Study
ERIC Educational Resources Information Center
Abdallah, Mahmoud Mohammad Sayed
2016-01-01
This study mainly aims at evaluating "Time for English", a new English language-learning (ELL) textbook series currently taught at mainstream Egyptian primary schools. This involves: (1) identifying--from senior and expert language teachers' perspectives--to what extent the textbook series (primary one to six) conform with the national…
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between ''voltage drives current'' and ''current drives voltage'' as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools.
A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets, but little research exists on how these tools compare to each other in practice. This work introduces and defines exploratory causal analysis (ECA) to address this issue, along with the concept of data causality in the taxonomy of causal studies introduced in this work. The motivation is to provide a framework for exploring potential causal structures in time series data sets. ECA is used on several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and intuitive notions of causality) for many simple systems but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.
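One of the tools an ECA comparison would include is a Granger-style statistic, which can be sketched directly: predict each series from its own lag, with and without the other series' lag, and treat the error reduction as evidence of a driving direction. This is a generic stand-in, not the thesis's PAI or leaning methods, and the coupled series are synthetic.

```python
import numpy as np

def resid_var(target, predictors):
    """Residual variance of a least-squares fit of target on predictors."""
    A = np.column_stack(predictors + [np.ones_like(target)])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

def granger_gain(x, y):
    """Fractional error reduction in predicting x when y's lag is added."""
    restricted = resid_var(x[1:], [x[:-1]])
    full = resid_var(x[1:], [x[:-1], y[:-1]])
    return 1 - full / restricted

rng = np.random.default_rng(1)
y = rng.standard_normal(2000)                      # driver: white noise
x = np.zeros(2000)
x[1:] = 0.9 * y[:-1] + 0.1 * rng.standard_normal(1999)  # x is driven by lagged y

print(granger_gain(x, y) > granger_gain(y, x))  # True: evidence that y drives x
```

ECA's point is that running several such tools on the same data and inspecting where they disagree is itself informative.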
Data and methodological problems in establishing state gasoline-conservation targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, D.L.; Walton, G.H.
The Emergency Energy Conservation Act of 1979 gives the President the authority to set gasoline-conservation targets for states in the event of a supply shortage. This paper examines data and methodological problems associated with setting state gasoline-conservation targets. The target-setting method currently used is examined and found to have some flaws. Ways of correcting these deficiencies through the use of Box-Jenkins time-series analysis are investigated. A successful estimation of Box-Jenkins models for all states included the estimation of the magnitude of the supply shortages of 1979 in each state and a preliminary estimation of state short-run price elasticities, which were found to vary about a median value of -0.16. The time-series models identified were very simple in structure and lent support to the simple consumption growth model assumed by the current target method. The authors conclude that the flaws in the current method can be remedied either by replacing the current procedures with time-series models or by using the models in conjunction with minor modifications of the current method.
Dst and a map of average equivalent ring current: 1958-2007
NASA Astrophysics Data System (ADS)
Love, J. J.
2008-12-01
A new Dst index construction is made using the original hourly magnetic-observatory data collected over the years 1958-2007; stations: Hermanus South Africa, Kakioka Japan, Honolulu Hawaii, and San Juan Puerto Rico. The construction method we use is generally consistent with the algorithm defined by Sugiura (1964), and which forms the basis for the standard Kyoto Dst index. This involves corrections for observatory baseline shifts, subtraction of the main-field secular variation, and subtraction of specific harmonics that approximate the solar-quiet (Sq) variation. Fourier analysis of the observatory data reveals the nature of Sq: it consists primarily of periodic variation driven by the Earth's rotation, the Moon's orbit, the Earth's orbit, and, to some extent, the solar cycle. Cross coupling of the harmonics associated with each of the external periodic driving forces results in a seemingly complicated Sq time series that is sometimes considered to be relatively random and unpredictable, but which is, in fact, well described in terms of Fourier series. Working in the frequency domain, Sq can be filtered out, and, upon return to the time domain, the local disturbance time series (Dist) for each observatory can be recovered. After averaging the local disturbance time series from each observatory, the global magnetic disturbance time series Dst is obtained. Analysis of this new Dst index is compared with that produced by Kyoto, and various biases and differences are discussed. The combination of the Dist and Dst time series can be used to explore the local-time/universal-time symmetry of an equivalent ring current. Individual magnetic storms can have a complicated disturbance field that is asymmetrical in longitude, presumably due to partial ring currents. Using 50 years of data we map the average local-time magnetic disturbance, finding that it is very nearly proportional to Dst. 
To our surprise, the primary asymmetry in mean magnetic disturbance is not between midnight and noon, but rather between dawn and dusk, with greatest mean disturbance occurring at dusk. As a result, proposed corrections to Dst for magnetopause and tail currents might be reasonably reconsidered.
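The frequency-domain Sq removal described above can be sketched with an FFT filter: zero the Fourier bins at the solar-daily harmonics of an hourly series, then return to the time domain to leave the disturbance (Dist). The synthetic Sq amplitudes and storm-like disturbance below are illustrative only.

```python
import numpy as np

# 64 days of hourly "observatory" values: periodic Sq plus a storm signal.
n_days, fs = 64, 24
t = np.arange(n_days * fs) / fs                    # time in days
sq = 12 * np.sin(2 * np.pi * t) + 5 * np.sin(2 * np.pi * 2 * t)  # 24 h + 12 h
dist = -40 * np.exp(-((t - 30) ** 2) / 2.0)        # a "storm" centred on day 30
signal = sq + dist

# Work in the frequency domain: zero the daily harmonics, then invert.
spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)     # cycles per day
for k in (1, 2, 3, 4):
    spec[np.isclose(freqs, k)] = 0
recovered = np.fft.irfft(spec, signal.size)

print(np.max(np.abs(recovered - dist)) < 1.0)      # Sq removed, storm preserved
```

Because Sq is dominated by periodic drivers (rotation, lunar orbit, annual cycle), its energy sits in a handful of known bins, which is why this filtering works so cleanly; the real construction also handles lunar and annual lines and their cross-coupled harmonics.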
Interactive digital signal processor
NASA Technical Reports Server (NTRS)
Mish, W. H.; Wenger, R. M.; Behannon, K. W.; Byrnes, J. B.
1982-01-01
The Interactive Digital Signal Processor (IDSP) is examined. It consists of a set of time series analysis Operators, each of which operates on an input file to produce an output file. The operators can be executed in any order that makes sense and recursively, if desired. The operators are the various algorithms used in digital time series analysis work. User written operators can be easily interfaced to the system. The system can be operated both interactively and in batch mode. In IDSP a file can consist of up to n (currently n=8) simultaneous time series. IDSP currently includes over thirty standard operators that range from Fourier transform operations, design and application of digital filters, and eigenvalue analysis, to operators that provide graphical output, allow batch operation, and support editing and display of information.
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
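The two-stage architecture described above (classify the flight input, then use a class-specific model and a basis representation for the variable-length output curves) can be sketched with simple stand-ins. The nearest-centroid classifier, polynomial basis, and all data below are invented illustrations, not the paper's Treed Gaussian Processes.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "flight input" clusters standing in for success/failure flight types.
X_success = rng.normal(0, 1, (50, 3))
X_failure = rng.normal(4, 1, (50, 3))
centroids = {"success": X_success.mean(0), "failure": X_failure.mean(0)}

def classify(x):
    """Stage 1: assign the input to the nearest class centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

# Stage 2: represent output curves by coefficients on fixed basis functions,
# so variable-length curves reduce to a fixed-length coefficient vector.
t = np.linspace(0, 1, 100)
basis = np.vstack([np.ones_like(t), t, t**2])   # 3 coefficients per curve

x_new = np.array([4.2, 3.8, 4.1])
label = classify(x_new)                          # -> "failure"
coeffs = np.array([1.0, -0.5, 2.0])              # would come from the class model
curve = coeffs @ basis                           # reconstructed time series output
```

Learning the input-to-coefficient mapping per class is what lets the model emit full time series outputs and time-to-failure estimates rather than a single label.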
NASA Astrophysics Data System (ADS)
Ramírez-Rojas, A.; Flores-Marquez, L. E.
2009-12-01
The short-time prediction of seismic phenomena is currently an important problem in the scientific community. In particular, the electromagnetic processes associated with seismic events have attracted great interest since the VAN method was implemented. The most important features of this methodology are the seismic electric signals (SES) observed prior to strong earthquakes. SES have been observed in electromagnetic series linked to EQs in Greece, Japan and Mexico. By means of the so-called natural time domain, introduced by Varotsos et al. (2001), they could characterize signals of dichotomic nature observed in different systems, like SES and ionic current fluctuations in membrane channels. In this work we analyze SES observed in geoelectric time series monitored in Guerrero, México. Our analysis concerns two strong earthquakes that occurred on October 24, 1993 (M=6.6) and September 14, 1995 (M=7.3). The time series of the first one displayed a seismic electric signal six days before the main shock; in the second case the time series displayed dichotomous-like fluctuations some months before the EQ. We present the first results of the analysis in the natural time domain for the two cases, which seem to agree with the results reported by Varotsos. P. Varotsos, N. Sarlis, and E. Skordas, Practica of the Athens Academy 76, 388 (2001).
Root System Water Consumption Pattern Identification on Time Series Data
Figueroa, Manuel; Pope, Christopher
2017-01-01
In agriculture, soil and meteorological sensors are used along low-power networks to capture data, which allows for optimal resource usage and minimal environmental impact. This study uses time series analysis methods for outlier detection and pattern recognition on soil moisture sensor data to identify irrigation and consumption patterns and to improve a soil moisture prediction and irrigation system. This study compares three new algorithms with the current detection technique in the project; the results greatly decrease the number of false positives detected. The best result is obtained by the Series Strings Comparison (SSC) algorithm averaging a precision of 0.872 on the testing sets, vastly improving the current system's 0.348 precision. PMID:28621739
True random bit generators based on current time series of contact glow discharge electrolysis
NASA Astrophysics Data System (ADS)
Rojas, Andrea Espinel; Allagui, Anis; Elwakil, Ahmed S.; Alawadhi, Hussain
2018-05-01
Random bit generators (RBGs) in today's digital information and communication systems employ high-rate physical entropy sources such as electronic, photonic, or thermal time series signals. However, the proper functioning of such physical systems is bound by specific constraints that make them in some cases weak and susceptible to external attacks. In this study, we show that the electrical current time series of contact glow discharge electrolysis, which is a dc voltage-powered micro-plasma in liquids, can be used for generating random bit sequences in a wide range of high dc voltages. The current signal is quantized into a binary stream by first using a simple moving average function, which makes the distribution centered around zero, and then applying logical operations which enable the binarized data to pass all tests in the industry-standard randomness test suite of the National Institute of Standards and Technology. Furthermore, the robustness of this RBG against power supply attacks has been examined and verified.
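The quantization scheme described above can be sketched: subtract a simple moving average so the signal is centred around zero, then take the sign of each sample as a bit. The input below is synthetic noise with drift, not a real glow-discharge current, and the paper's exact logical operations are not reproduced.

```python
import numpy as np

def to_bits(current, window=32):
    """Centre the signal with a moving average, then threshold the sign."""
    kernel = np.ones(window) / window
    moving_avg = np.convolve(current, kernel, "valid")
    centred = current[window - 1:] - moving_avg   # distribution centred on zero
    return (centred > 0).astype(np.uint8)

rng = np.random.default_rng(7)
# Synthetic stand-in for the measured current: noise plus a slow drift.
signal = np.linspace(0, 5, 10000) + rng.standard_normal(10000)
bits = to_bits(signal)
print(abs(bits.mean() - 0.5) < 0.05)   # roughly balanced zeros and ones
```

Centering removes the slow drift of the raw current so the sign bit is unbiased; in the paper, additional logical operations whiten the stream before the NIST randomness tests.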
NASA Astrophysics Data System (ADS)
Lu, Wen-Ting; Zhao, Hong-Kang; Wang, Jian
2018-03-01
Photon heat current tunneling through a series coupled two mesoscopic Josephson junction (MJJ) system biased by dc voltages has been investigated by employing the nonequilibrium Green's function approach. The time-oscillating photon heat current is contributed by the superposition of different current branches associated with the frequencies of MJJs ω_j (j = 1, 2). Nonlinear behaviors are exhibited to be induced by the self-inductance, Coulomb interaction, and interference effect relating to the coherent transport of Cooper pairs in the MJJs. Time-oscillating pumping photon heat current is generated in the absence of temperature difference, while it becomes zero after time-averaging. The combination of ω_j and Coulomb interactions in the MJJs determines the concrete heat current configuration. As the external and intrinsic frequencies ω_j and ω_0 of MJJs match some specific combinations, resonant photon heat current exhibits sinusoidal behaviors with large amplitudes. Symmetric and asymmetric evolutions versus time t with respect to ω_1 t and ω_2 t are controlled by the applied dc voltages V_1 and V_2. The dc photon heat current formula is a special case of the general time-dependent heat current formula when the bias voltages are set to zero. The Aharonov-Bohm effect has been investigated, and versatile oscillation structures of photon heat current can be achieved by tuning the magnetic fluxes threading through separating MJJs.
Design of a 9-loop quasi-exponential waveform generator
NASA Astrophysics Data System (ADS)
Banerjee, Partha; Shukla, Rohit; Shyam, Anurag
2015-12-01
In an under-damped L-C-R series circuit, the current follows a damped sinusoidal waveform. But if a number of sinusoidal waveforms of decreasing time period, generated in an L-C-R circuit, are combined within the first quarter cycle of the time period, then a quasi-exponential output current waveform can be achieved. In an L-C-R series circuit, a quasi-exponential current waveform shows a rising current derivative and thereby finds many applications in pulsed power. Here, we describe the design and experimental details of a 9-loop quasi-exponential waveform generator, including the design of its magnetic switches. In the experiment, an output current of 26 kA has been achieved. It is shown how well the experimentally obtained output current profile matches the numerically computed output.
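The under-damped discharge that forms each building block above can be sketched numerically: for a capacitor charged to V0, the loop current is the damped sinusoid i(t) = V0/(ωd·L)·exp(−Rt/2L)·sin(ωd·t). The component values below are illustrative, not those of the 9-loop generator.

```python
import numpy as np

# Under-damped series L-C-R discharge of a capacitor charged to V0.
R, L, C, V0 = 0.05, 1e-6, 1e-5, 10e3          # ohms, henry, farad, volts (assumed)
alpha = R / (2 * L)                            # damping rate
w0 = 1 / np.sqrt(L * C)                        # natural frequency
wd = np.sqrt(w0**2 - alpha**2)                 # ringing frequency (w0 > alpha)

t = np.linspace(0, 3 * 2 * np.pi / wd, 3000)
i = V0 / (wd * L) * np.exp(-alpha * t) * np.sin(wd * t)
print(w0 > alpha)   # True: the circuit is under-damped, so the current rings
```

Summing several such loops with successively shorter periods, each switched in during the first quarter cycle, is what builds up the quasi-exponential current rise the abstract describes.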
A wavelet based approach to measure and manage contagion at different time scales
NASA Astrophysics Data System (ADS)
Berger, Theo
2015-10-01
We decompose financial return series of US stocks into different time scales with respect to different market regimes. First, we examine the dependence structure of the decomposed financial return series and analyze the impact of the current financial crisis on contagion and changing interdependencies, as well as upper and lower tail dependence, for different time scales. Second, we demonstrate to which extent the information of different time scales can be used in the context of portfolio management. As a result, minimizing the variance of short-run noise outperforms a portfolio that minimizes the variance of the return series.
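The portfolio step above can be sketched: compute minimum-variance weights from the covariance of the short-run components of the return series rather than the raw returns. Here first differences stand in for the finest wavelet detail scale, and the two-asset series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
common = rng.standard_normal(1024)                 # shared market factor
r1 = common + 0.5 * rng.standard_normal(1024)      # less noisy asset
r2 = common + 1.5 * rng.standard_normal(1024)      # noisier asset
returns = np.vstack([r1, r2])

# Finest-scale proxy (Haar-detail-like): scaled first differences.
detail = np.diff(returns, axis=1) / np.sqrt(2)

# Minimum-variance weights w = C^{-1} 1 / (1' C^{-1} 1) on the detail scale.
cov = np.cov(detail)
w = np.linalg.solve(cov, np.ones(2))
w /= w.sum()
print(bool(w[0] > w[1]))   # True: more weight on the less noisy asset
```

A full multi-scale decomposition would give one covariance matrix per scale, letting the manager target the variance of short-run noise specifically, which is the comparison the abstract reports.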
Distributions-per-level: a means of testing level detectors and models of patch-clamp data.
Schröder, I; Huth, T; Suitchmezian, V; Jarosik, J; Schnell, S; Hansen, U P
2004-01-01
Level or jump detectors generate the reconstructed time series from a noisy record of patch-clamp current. The reconstructed time series is used to create dwell-time histograms for the kinetic analysis of the Markov model of the investigated ion channel. It is shown here that some additional lines in the software of such a detector can provide a powerful new means of patch-clamp analysis. For each current level that can be recognized by the detector, an array is declared. The new software assigns every data point of the original time series to the array that belongs to the actual state of the detector. From the data sets in these arrays distributions-per-level are generated. Simulated and experimental time series analyzed by Hinkley detectors are used to demonstrate the benefits of these distributions-per-level. First, they can serve as a test of the reliability of jump and level detectors. Second, they can reveal beta distributions as resulting from fast gating that would usually be hidden in the overall amplitude histogram. Probably the most valuable feature is that the malfunctions of the Hinkley detectors turn out to depend on the Markov model of the ion channel. Thus, the errors revealed by the distributions-per-level can be used to distinguish between different putative Markov models of the measured time series.
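The distributions-per-level idea above can be sketched with a naive threshold "detector": every point of the record is appended to the array of its detected level, and a per-level histogram is built from each array. The two-level channel, noise level, and threshold detector below are simplified stand-ins for the paper's Hinkley detector.

```python
import numpy as np

rng = np.random.default_rng(3)
# Noisy two-level (closed/open) record standing in for patch-clamp current.
true_level = rng.integers(0, 2, 5000)                 # 0 = closed, 1 = open (1 pA)
record = true_level * 1.0 + 0.2 * rng.standard_normal(5000)

# Naive level detector: threshold at half the open-channel current.
detected = (record > 0.5).astype(int)

# One array per recognized level; each yields a distribution-per-level.
arrays = {lvl: record[detected == lvl] for lvl in (0, 1)}
dists = {lvl: np.histogram(a, bins=40, range=(-1, 2))[0]
         for lvl, a in arrays.items()}

print(abs(arrays[1].mean() - 1.0) < 0.05)  # open-level distribution near 1 pA
```

With a real Hinkley detector and fast gating, the per-level distributions would show the beta-distribution broadening and detector malfunctions that the abstract argues are diagnostic of the underlying Markov model.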
The time series approach to short term load forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagan, M.T.; Behr, S.M.
The application of time series analysis methods to load forecasting is reviewed. It is shown that Box and Jenkins time series models, in particular, are well suited to this application. The logical and organized procedures for model development using the autocorrelation function make these models particularly attractive. One of the drawbacks of these models is their inability to accurately represent the nonlinear relationship between load and temperature. A simple procedure for overcoming this difficulty is introduced, and several Box and Jenkins models are compared with a forecasting procedure currently used by a utility company.
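The autocorrelation-based model development the abstract refers to can be illustrated with the simplest Box-Jenkins model, an AR(1) fitted by the Yule-Walker estimate; this is a generic sketch with invented load data, not the utility's forecasting procedure:

```python
def fit_ar1(series):
    """Fit an AR(1) model x_t - mu = phi * (x_{t-1} - mu) + e_t
    using the Yule-Walker estimate phi = c1 / c0 (lag-1 autocorrelation)."""
    n = len(series)
    mu = sum(series) / n
    c0 = sum((x - mu) ** 2 for x in series) / n          # lag-0 autocovariance
    c1 = sum((series[t] - mu) * (series[t - 1] - mu)
             for t in range(1, n)) / n                   # lag-1 autocovariance
    return mu, c1 / c0

def forecast_ar1(series, mu, phi):
    """One-step-ahead conditional-mean forecast."""
    return mu + phi * (series[-1] - mu)

# Hypothetical hourly load values (MW).
load = [100, 104, 103, 107, 106, 110, 109, 113]
mu, phi = fit_ar1(load)
next_load = forecast_ar1(load, mu, phi)
```

In practice the autocorrelation and partial autocorrelation functions guide the choice of higher-order ARIMA models, but the identification-estimation-forecast loop is the same.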
Hydrodynamic measurements in Suisun Bay, California, 1992-93
Gartner, Jeffrey W.; Burau, Jon R.
1999-01-01
Sea level, velocity, temperature, and salinity (conductivity and temperature) data collected in Suisun Bay, California, from December 11, 1992, through May 31, 1993, by the U.S. Geological Survey are documented in this report. Sea-level data were collected at four locations and temperature and salinity data were collected at seven locations. Velocity data were collected at three locations using acoustic Doppler current profilers and at four other locations using point velocity meters. Sea-level and velocity data are presented in three forms: (1) harmonic analysis results, (2) time-series plots (sea level, current speed, and current direction versus time), and (3) time-series plots of the low-pass filtered data. Temperature and salinity data are presented as plots of raw and low-pass filtered time series. The velocity and salinity data collected during this study document a period when the residual current patterns and salt field were significantly altered by large Delta outflow (three peaks in excess of 2,000 cubic meters per second). Residual current profiles were consistently seaward with magnitudes that fluctuated primarily in concert with Delta outflow and secondarily with the spring-neap tide cycle. The freshwater inputs advected salinity seaward of Suisun Bay for most of this study. Except for a 10-day period at the beginning of the study, dynamically significant salinities (>2) were seaward of Suisun Bay, which resulted in little or no gravitational circulation transport.
NASA Astrophysics Data System (ADS)
Spaans, K.; Hooper, A. J.
2017-12-01
The short revisit time and high data acquisition rates of current satellites have resulted in increased interest in the development of deformation monitoring and rapid disaster response capability, using InSAR. Fast, efficient data processing methodologies are required to deliver the timely results necessary for this, and also to limit computing resources required to process the large quantities of data being acquired. Contrary to volcano or earthquake applications, urban monitoring requires high resolution processing, in order to differentiate movements between buildings, or between buildings and the surrounding land. Here we present Rapid time series InSAR (RapidSAR), a method that can efficiently update high resolution time series of interferograms, and demonstrate its effectiveness over urban areas. The RapidSAR method estimates the coherence of pixels on an interferogram-by-interferogram basis. This allows for rapid ingestion of newly acquired images without the need to reprocess the earlier acquired part of the time series. The coherence estimate is based on ensembles of neighbouring pixels with similar amplitude behaviour through time, which are identified on an initial set of interferograms, and need be re-evaluated only occasionally. By taking into account scattering properties of points during coherence estimation, a high quality coherence estimate is achieved, allowing point selection at full resolution. The individual point selection maximizes the amount of information that can be extracted from each interferogram, as no selection compromise has to be reached between high and low coherence interferograms. In other words, points do not have to be coherent throughout the time series to contribute to the deformation time series. We demonstrate the effectiveness of our method over urban areas in the UK. 
We show how the algorithm successfully extracts high-density time series from full-resolution Sentinel-1 interferograms and distinguishes clearly between buildings and surrounding vegetation or streets. The fact that new interferograms can be processed separately from the remainder of the time series helps manage the high data volumes, both in space and time, generated by current missions.
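The per-interferogram coherence estimate at the core of the approach can be sketched with the standard sample coherence estimator over an ensemble of statistically similar pixels; the function and the synthetic data below are illustrative stand-ins, not the published RapidSAR implementation:

```python
import cmath

def ensemble_coherence(pixels_a, pixels_b):
    """Sample coherence magnitude between two SLC images over an ensemble
    of statistically similar pixels (a stand-in for RapidSAR's
    amplitude-based neighbour sets):
    |sum(a * conj(b))| / sqrt(sum |a|^2 * sum |b|^2)."""
    num = sum(a * b.conjugate() for a, b in zip(pixels_a, pixels_b))
    den = (sum(abs(a) ** 2 for a in pixels_a) *
           sum(abs(b) ** 2 for b in pixels_b)) ** 0.5
    return abs(num) / den

# A perfectly correlated ensemble (constant interferometric phase offset)
# should yield coherence 1.
a = [cmath.exp(1j * 0.1 * k) for k in range(20)]
b = [cmath.exp(1j * (0.1 * k + 0.7)) for k in range(20)]
gamma = ensemble_coherence(a, b)
```

Because the ensembles are identified once on an initial set of interferograms, each newly acquired image only requires evaluating this estimator for the new pairs, which is what makes the update fast.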
Multidimensional stock network analysis: An Escoufier's RV coefficient approach
NASA Astrophysics Data System (ADS)
Lee, Gan Siew; Djauhari, Maman A.
2013-09-01
The current practice of stock network analysis is based on the assumption that the time series of closing stock prices can represent the behaviour of each stock. This assumption leads to considering the minimal spanning tree (MST) and sub-dominant ultrametric (SDU) as indispensable tools to filter the economic information contained in the network. Recently, researchers have attempted to represent a stock not only as a univariate time series of closing prices but as a bivariate time series of closing price and volume, and developed the so-called multidimensional MST to filter the important economic information. However, in this paper, we show that their approach is applicable only to that bivariate case. This leads us to introduce a new methodology for constructing the MST in which each stock is represented by a multivariate time series. An example from the Malaysian stock exchange is presented and discussed to illustrate the advantages of the method.
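Escoufier's RV coefficient between two multivariate samples, the similarity measure the title refers to, can be sketched as follows; the helper functions and the small (price, volume) series are illustrative pure-Python stand-ins, not the paper's data:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def centered(M):
    """Subtract each column mean (rows = time points, columns = variables)."""
    n = len(M)
    means = [sum(row[j] for row in M) / n for j in range(len(M[0]))]
    return [[row[j] - means[j] for j in range(len(row))] for row in M]

def rv_coefficient(X, Y):
    """Escoufier's RV coefficient:
    RV = tr(Sxy Syx) / sqrt(tr(Sxx^2) * tr(Syy^2)),
    with S the column-centered cross-product matrices. RV lies in [0, 1],
    and RV(X, X) = 1."""
    Xc, Yc = centered(X), centered(Y)
    Sxy = matmul(transpose(Xc), Yc)
    Sxx = matmul(transpose(Xc), Xc)
    Syy = matmul(transpose(Yc), Yc)
    num = trace(matmul(Sxy, transpose(Sxy)))
    return num / (trace(matmul(Sxx, Sxx)) * trace(matmul(Syy, Syy))) ** 0.5

# Two stocks, each a bivariate (price, volume) series over five days.
X = [[10.0, 100.0], [10.5, 120.0], [10.2, 90.0], [10.8, 150.0], [11.0, 130.0]]
Y = [[20.0, 200.0], [20.9, 230.0], [20.3, 180.0], [21.5, 290.0], [22.1, 260.0]]
rv = rv_coefficient(X, Y)
dist = (2 * (1 - rv)) ** 0.5  # a distance usable as an MST edge weight
```

Turning the similarity into a distance, as in the last line, is one common way to feed a similarity coefficient into an MST construction.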
Zhou, Wei; Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian
2018-01-01
Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is therefore of great significance for maintaining their fairness and sustainability. Current studies suffer from the poor universality of algorithms, difficulty in selecting user profile attributes, and the lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user discovery and rating time series analysis is proposed. This paper adds to the current understanding in the field by studying a credibility evaluation model in depth, based on the rating prediction model, to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed: suspicious rating time segments are determined by constructing a time series, and the data streams of the rated items are examined for suspicious rating segments. To analyse features of shilling attacks through group user credibility, an abnormal group user discovery method based on time series and time windows is proposed. Standard testing datasets are used to verify the effect of the proposed method.
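One building block of such a pipeline, flagging suspicious time windows from an item's rating-volume time series, can be sketched as follows; the threshold rule (mean + k standard deviations over window counts) is an illustrative assumption, not the paper's exact statistic:

```python
def suspicious_windows(timestamps, window, k=2.0):
    """Flag time windows with anomalously high rating volume: count
    ratings per fixed-width window and mark windows whose count exceeds
    mean + k * std across all windows. A burst of ratings concentrated
    in one window is a typical shilling signature."""
    if not timestamps:
        return []
    t0 = min(timestamps)
    counts = {}
    for t in timestamps:
        w = (t - t0) // window
        counts[w] = counts.get(w, 0) + 1
    vals = list(counts.values())
    mean = sum(vals) / len(vals)
    std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    thresh = mean + k * std
    return [w for w, c in counts.items() if c > thresh]

# A normal trickle of ratings plus a burst (possible shilling) at t = 55.
times = list(range(0, 100, 10)) + [55] * 30
flagged = suspicious_windows(times, window=10)
```

Ratings inside flagged windows would then be handed to the target-item and group-credibility analyses for closer inspection.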
The Chern-Simons Current in Systems of DNA-RNA Transcriptions
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; Pincak, Richard; Kanjamapornkul, Kabin; Saridakis, Emmanuel N.
2018-04-01
A Chern-Simons current, coming from the ghost and anti-ghost fields of supersymmetry theory, can be used to define a spectrum of gene expression in new time series data, where a spinor field is adopted as an alternative representation of a gene instead of the standard alphabet sequence of bases $A, T, C, G, U$. After a general discussion on the use of supersymmetry in biological systems, we give examples of the use of supersymmetry for living organisms, discuss the codon and anti-codon ghost fields, and develop an algebraic construction for trash DNA, the region of DNA which does not seem active in biological systems. As a general result, all hidden states of codons can be computed by Chern-Simons 3-forms. Finally, we plot a time series of genetic variations of a viral glycoprotein gene and a host T-cell receptor gene by using a gene tensor correlation network related to the Chern-Simons current. An empirical analysis of genetic shift in host cell receptor genes, with a separated cluster of genes, and of genetic drift in the viral gene is obtained by using a tensor correlation plot over time series data derived as the empirical mode decomposition of the Chern-Simons current.
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction is comprised of a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer to very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits, expected during normal operation, are exceeded, or the relationship defined by the calibration curve is invalidated.
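The two-point linear calibration and drift check described above can be sketched as follows; the variable names and voltage values are illustrative, not the patented circuit's actual parameters:

```python
def two_point_calibration(v_out1, v_in1, v_out2, v_in2):
    """Derive gain and offset of the linear calibration curve from two
    accurately known input signals (developed from predetermined current
    levels across the series resistances) and the corresponding
    multiplexer outputs."""
    gain = (v_in2 - v_in1) / (v_out2 - v_out1)
    offset = v_in1 - gain * v_out1
    return gain, offset

def correct(v_out, gain, offset):
    """Map a raw multiplexer output back to the true input voltage."""
    return gain * v_out + offset

def drift_detected(v_out, v_min, v_max):
    """Flag drift when the output leaves its expected operating limits."""
    return not (v_min <= v_out <= v_max)

# Known currents across the series resistances give known inputs of
# 0.1 V and 1.0 V; the multiplexer reports 0.12 V and 1.08 V.
gain, offset = two_point_calibration(0.12, 0.1, 1.08, 1.0)
```

Invalidating the calibration curve (the corrected value no longer matching a known input) is the second drift indicator the abstract mentions, alongside the output-limit check.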
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... survey catch shows no trend over the full survey time series and is currently at about 40 cm TL (16 in... highest value in the time series and the autumn survey nearing the peak values found in the 1960s. In 2007... not initiate a review of the status of these species at this time. FOR FURTHER INFORMATION CONTACT...
Recurrent Neural Network Applications for Astronomical Time Series
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2017-06-01
The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize to irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering combined with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations generated from the error estimates of astronomical light curves. In addition, we propose a new neural network architecture that removes correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution: tuning ESNs is typically difficult, and we circumvent this obstacle by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of it.
NASA Astrophysics Data System (ADS)
Sánchez-Alzola, A.; Martí, J.; García-Yeguas, A.; Gil, A. J.
2016-11-01
In this paper we present the current crustal deformation model of Tenerife Island derived from daily CGPS time series processing (2008-2015). Our results include the position time series, a global velocity estimation, and the current crustal deformation on the island in terms of strain tensors. We detect a measurable subsidence of 1.5-2 mm/yr in the proximity of the Cañadas-Teide-Pico Viejo (CTPV) complex. These values are higher in the central part of the complex and could be explained by lateral spreading of the elastic lithosphere combined with the effect of the drastic descent of the water table experienced on the island during recent decades. The results show that the Anaga massif is stable in both its horizontal and vertical components. The strain tensor analysis shows a 70 nstrain/yr E-W compression in the central complex, perpendicular to the 2004 seismo-volcanic area, and a 50 nstrain/yr SW-NE extension towards the Northeast ridge. The residual velocity and strain patterns coincide with a decline in volcanic activity since the 2004 unrest.
26 CFR 1.454-1 - Obligations issued at discount.
Code of Federal Regulations, 2010 CFR
2010-04-01
... series E bond, at which time the stated redemption value was $674.60. A never elected under section 454(a... obligation, in which he retains his investment in a matured series E U.S. savings bond, or (iii) A nontransferable obligation (whether or not a current income obligation) of the United States for which a series E...
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then use the resulting set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Because SSA and DWT use 'future' values in their calculations, the series generated by SSA reconstruction or DWT decomposition contain information from 'future' values. Such hybrid models therefore report spuriously high prediction performance and may cause large errors in practice.
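The leakage mechanism is easy to demonstrate with any centered (non-causal) filter, used here as a simple stand-in for SSA reconstruction or DWT decomposition: the preprocessed value at time t changes when only a future sample changes, so a predictor fed this series has already seen the future.

```python
def centered_smooth(series, half_width=2):
    """A centered moving average: like SSA reconstruction or DWT
    decomposition applied to the whole series, the value at time t
    uses samples AFTER t."""
    n = len(series)
    out = []
    for t in range(n):
        lo, hi = max(0, t - half_width), min(n, t + half_width + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
smoothed = centered_smooth(series)

# Change only a FUTURE value; the "preprocessed input" at t = 2 changes too.
tampered = series[:]
tampered[4] = 100.0
leak = centered_smooth(tampered)[2] != smoothed[2]  # True: future info leaked into t=2
```

The correct procedure is to recompute the decomposition at each forecast origin using only the data observed up to that point.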
GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns
Senin, Pavel; Lin, Jessica; Wang, Xing; ...
2018-02-23
The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and that their length be known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure, but of different lengths, may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference, two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0, a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids the time series recurrent and anomalous pattern discovery.
Computerized Interpretation of Dynamic Breast MRI
2005-05-01
Breast MRI has emerged as a promising modality for the ... Despite its well-recognized advantages, the interpretation criteria in the current literature fall into two major categories ... current interpretation schemes might not be sufficiently robust. ... Postcontrast series were then taken with a time interval of 60 s. For the manual delineation, a radiologist (U.B.) was blinded to the histological ...
Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.
Major, G
1993-07-01
Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.
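Fitting a single exponential to the decay phase of a clamped synaptic current, as discussed above, can be sketched with a log-linear least-squares fit; the data below are synthetic, and with a genuinely multi-exponential decay the fitted tau would depend on the fit interval, as the abstract notes:

```python
import math

def fit_decay_tau(times, currents):
    """Fit I(t) = I0 * exp(-t / tau) by least squares on log(I):
    log I = log I0 - t / tau, so tau = -1 / slope."""
    logs = [math.log(i) for i in currents]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) /
             sum((t - tbar) ** 2 for t in times))
    return -1.0 / slope

# Synthetic single-exponential decay with tau = 5 ms, sampled every 0.5 ms.
ts = [i * 0.5 for i in range(20)]
cur = [200.0 * math.exp(-t / 5.0) for t in ts]
tau = fit_decay_tau(ts, cur)
```

Restricting `ts`/`cur` to different sub-intervals of a multi-component decay is the simplest way to see the effective-time-constant dependence the paper analyses.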
NASA Astrophysics Data System (ADS)
Teng, W. L.; Shannon, H.
2010-12-01
The USDA World Agricultural Outlook Board (WAOB) coordinates the development of the monthly World Agricultural Supply and Demand Estimates (WASDE) for the U.S. and major foreign producing countries. Given the significant effect of weather on crop progress, conditions, and production, WAOB prepares frequent agricultural weather assessments in the Global Agricultural Decision Support Environment (GLADSE). Because the timing of precipitation is often as important as the amount in its effect on crop production, WAOB frequently examines precipitation time series to estimate crop productivity. An effective method for such assessment is the use of analog year comparisons, where precipitation time series, based on surface weather stations, from several historical years are compared with the time series from the current year. Once analog years are identified, crop yields can be estimated for the current season based on observed yields from the analog years, because of the similarities in the precipitation patterns. In this study, NASA satellite precipitation and soil moisture time series are used to identify analog years. Given that soil moisture often has a more direct effect than precipitation on crop water availability, the time series of soil moisture could be more effective than that of precipitation in identifying years with similar crop yields. Retrospective analyses of analogs will be conducted to determine any reduction in the level of uncertainty in identifying analog years, and any reduction in false negatives or false positives. The comparison of analog years could potentially be improved by quantifying the selection of analogs, instead of the current visual inspection method. Various approaches to quantifying this selection are currently being evaluated.
This study is part of a larger effort to improve WAOB estimates by integrating NASA remote sensing soil moisture observations and research results into GLADSE, including (1) the integration of the Land Parameter Retrieval Model (LPRM) soil moisture algorithm for operational production and (2) the assimilation of LPRM soil moisture into the USDA Environmental Policy Integrated Climate (EPIC) crop model.
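A quantified analog-year selection, replacing the visual inspection mentioned above, might look like the following sketch; the root-mean-square distance metric, the years, and the precipitation values are illustrative assumptions, not WAOB's actual procedure:

```python
def analog_years(current, history, k=2):
    """Rank historical years by root-mean-square distance between their
    precipitation (or soil moisture) time series and the current season,
    and return the k closest as analog years."""
    def rms(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    ranked = sorted(history, key=lambda yr: rms(current, history[yr]))
    return ranked[:k]

# Hypothetical monthly precipitation totals (mm) for three past seasons.
history = {
    2005: [10, 30, 60, 40],
    2006: [5, 10, 15, 10],
    2007: [12, 28, 55, 45],
}
best = analog_years([11, 29, 58, 42], history)
```

Observed yields from the returned analog years would then feed the current-season yield estimate.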
Modern trends in Class III orthognathic treatment: A time series analysis.
Lee, Chang-Hoon; Park, Hyun-Hee; Seo, Byoung-Moo; Lee, Shin-Jae
2017-03-01
To examine the current trends in surgical-orthodontic treatment for patients with Class III malocclusion using time-series analysis. The records of 2994 consecutive patients who underwent orthognathic surgery from January 1, 2004, through December 31, 2015, at Seoul National University Dental Hospital, Seoul, Korea, were reviewed. Clinical data from each surgical and orthodontic treatment record included patient's sex, age at the time of surgery, malocclusion classification, type of orthognathic surgical procedure, place where the orthodontic treatment was performed, orthodontic treatment modality, and time elapsed for pre- and postoperative orthodontic treatment. Out of the orthognathic surgery patients, 86% had Class III malocclusion. Among them, two-jaw surgeries have become by far the most common orthognathic surgical treatment these days. The age at the time of surgery and the number of new patients had seasonal variations, which demonstrated opposing patterns. There was neither positive nor negative correlation between pre- and postoperative orthodontic treatment time. Elapsed orthodontic treatment time for both before and after Class III orthognathic surgeries has been decreasing over the years. Results of the time series analysis might provide clinicians with some insights into current surgical and orthodontic management.
Barr, Margo L; Ferguson, Raymond A; Steel, David G
2014-08-12
Since 1997, the NSW Population Health Survey (NSWPHS) had selected its sample using random digit dialing of landline telephone numbers. When the survey began, coverage of the population by landline phone frames was high (96%). As landline coverage in Australia has declined and continues to do so, in 2012 a sample of mobile telephone numbers was added to the survey using an overlapping dual-frame design. Details of the methodology are published elsewhere. This paper discusses the impacts of the sampling frame change on the time series and provides possible approaches to handling these impacts. Prevalence estimates were calculated for type of phone use and for a range of health indicators. Prevalence ratios (PR) for each of the health indicators were also calculated by type of phone use, using Poisson regression analysis with robust variance estimation. Health estimates for 2012 were compared to 2011, and the full time series was examined for selected health indicators. It was estimated from the 2012 NSWPHS that 20.0% of the NSW population were mobile-only phone users. Considering the full time series for overweight or obese and for current smoking: if the NSWPHS had continued to use only a landline frame, overweight or obese would have been shown to continue to increase and current smoking to continue to decrease. With the introduction of the overlapping dual-frame design in 2012, however, overweight or obese increased until 2011 and then decreased in 2012, and current smoking decreased until 2011 and then increased in 2012. Our examination of these time series showed that the changes were a consequence of the sampling frame change and were not real changes. Both the backcasting method and the minimal coverage method could adequately adjust for the design change and allow for the continuation of the time series.
The inclusion of the mobile telephone numbers, through an overlapping dual-frame design, did impact on the time series for some of the health indicators collected through the NSWPHS, but only in that it corrected the estimates that were being calculated from a sample frame that was progressively covering less of the population.
NASA Astrophysics Data System (ADS)
Harris, Courtney K.; Wiberg, Patricia L.
1997-09-01
Modeling shelf sediment transport rates and bed reworking depths is problematic when the wave and current forcing conditions are not precisely known, as is usually the case when long-term sedimentation patterns are of interest. Two approaches to modeling sediment transport under such circumstances are considered. The first relies on measured or simulated time series of flow conditions to drive model calculations. The second approach uses as model input probability distribution functions of bottom boundary layer flow conditions developed from wave and current measurements. Sediment transport rates, frequency of bed resuspension by waves and currents, and bed reworking calculated using the two methods are compared at the mid-shelf STRESS (Sediment TRansport on Shelves and Slopes) site on the northern California continental shelf. Current, wave and resuspension measurements at the site are used to generate model inputs and test model results. An 11-year record of bottom wave orbital velocity, calculated from surface wave spectra measured by the National Data Buoy Center (NDBC) Buoy 46013 and verified against bottom tripod measurements, is used to characterize the frequency and duration of wave-driven transport events and to estimate the joint probability distribution of wave orbital velocity and period. A 109-day record of hourly current measurements 10 m above bottom is used to estimate the probability distribution of bottom boundary layer current velocity at this site and to develop an auto-regressive model to simulate current velocities for times when direct measurements of currents are not available. Frequency of transport, the maximum volume of suspended sediment, and average flux calculated using measured wave and simulated current time series agree well with values calculated using measured time series. 
A probabilistic approach is more amenable to calculations over time scales longer than existing wave records, but it tends to underestimate net transport because it does not capture the episodic nature of transport events. Both methods enable estimates to be made of the uncertainty in transport quantities that arise from an incomplete knowledge of the specific timing of wave and current conditions.
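The auto-regressive simulation of current velocities mentioned above can be sketched with a first-order model; the parameters and seed below are illustrative, not the values fitted to the 109-day STRESS current record:

```python
import random

def simulate_ar1(n, mu, phi, sigma, seed=42):
    """Simulate bottom-current speeds with a first-order auto-regressive
    model, x_t = mu + phi * (x_{t-1} - mu) + e_t, e_t ~ N(0, sigma^2):
    a simple stand-in for the auto-regressive simulator used when direct
    current measurements are unavailable."""
    rng = random.Random(seed)
    x = mu
    out = []
    for _ in range(n):
        x = mu + phi * (x - mu) + rng.gauss(0.0, sigma)
        out.append(x)
    return out

# 500 hourly current speeds (cm/s) with persistence phi = 0.8.
speeds = simulate_ar1(n=500, mu=10.0, phi=0.8, sigma=1.0)
```

The persistence parameter phi is what lets the simulated series reproduce the duration statistics of transport events, which a draw from a static probability distribution cannot.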
26 CFR 1.454-1 - Obligations issued at discount.
Code of Federal Regulations, 2013 CFR
2013-04-01
... series E bond, at which time the stated redemption value was $674.60. A never elected under section 454(a... current income obligation, in which he retains his investment in a matured series E U.S. savings bond, or... for which a series E U.S. savings bond was exchanged (whether or not at final maturity) in an exchange...
26 CFR 1.454-1 - Obligations issued at discount.
Code of Federal Regulations, 2011 CFR
2011-04-01
... series E bond, at which time the stated redemption value was $674.60. A never elected under section 454(a... current income obligation, in which he retains his investment in a matured series E U.S. savings bond, or... for which a series E U.S. savings bond was exchanged (whether or not at final maturity) in an exchange...
26 CFR 1.454-1 - Obligations issued at discount.
Code of Federal Regulations, 2012 CFR
2012-04-01
... series E bond, at which time the stated redemption value was $674.60. A never elected under section 454(a... current income obligation, in which he retains his investment in a matured series E U.S. savings bond, or... for which a series E U.S. savings bond was exchanged (whether or not at final maturity) in an exchange...
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and the consequent invalidation of currently used standard methods. In this paper a new concept, called correlation-dependent balanced estimation of diffusion entropy, is developed to evaluate scale invariance in very short time series, with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and a narrow confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. Our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.
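The underlying diffusion entropy idea can be sketched, without the paper's correlation-dependent balancing or de-trending, as the slope of the displacement-histogram Shannon entropy against ln(scale); for uncorrelated noise the exponent should be near 0.5. The bin width, scales, and noise data are illustrative assumptions:

```python
import math
import random

def diffusion_entropy_exponent(series, scales=(8, 64), bin_width=1.0):
    """Plain diffusion entropy analysis sketch: build diffusion
    displacements over windows of length l, estimate the Shannon entropy
    S(l) of their fixed-bin histogram, and read the scaling exponent
    delta from the slope of S against ln(l)."""
    def entropy_at(l):
        disp = [sum(series[i:i + l]) for i in range(len(series) - l)]
        counts = {}
        for d in disp:
            b = int(d // bin_width)
            counts[b] = counts.get(b, 0) + 1
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total)
                    for c in counts.values())
    l1, l2 = scales
    return (entropy_at(l2) - entropy_at(l1)) / (math.log(l2) - math.log(l1))

rng = random.Random(7)
noise = [rng.gauss(0.0, 1.0) for _ in range(20000)]
delta = diffusion_entropy_exponent(noise)  # should be close to 0.5
```

With only ~10^2 samples this plain estimator becomes strongly biased, which is exactly the regime the paper's balanced estimator is designed to handle.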
Nonlinear time-series analysis of current signal in cathodic contact glow discharge electrolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allagui, Anis, E-mail: aallagui@sharjah.ac.ae; Abdelkareem, Mohammad Ali; Rojas, Andrea Espinel
In the standard two-electrode configuration employed in electrolytic processes, when the control dc voltage is brought to a critical value, the system undergoes a transition from conventional electrolysis to contact glow discharge electrolysis (CGDE), which has also been referred to as liquid-submerged micro-plasma, glow discharge plasma electrolysis, electrode effect, electrolytic plasma, etc. The light-emitting process is associated with the development of an irregular and erratic current time-series which has been arbitrarily labelled as "random," and has thus dissuaded further research in this direction. Here, we examine the current time-series signals measured in cathodic CGDE configuration in a concentrated KOH solution at different dc bias voltages greater than the critical voltage. We show that the signals are, in fact, not random according to the NIST SP. 800-22 test suite definition. We also demonstrate that post-processing low-pass filtered sequences requires less time than the native as-measured sequences, suggesting a superposition of low frequency chaotic fluctuations and high frequency behaviors (which may be produced by more than one possible source of entropy). Using an array of nonlinear time-series analyses for dynamical systems, i.e., the computation of largest Lyapunov exponents and correlation dimensions, and re-construction of phase portraits, we found that low-pass filtered datasets undergo a transition from quasi-periodic to chaotic to quasi-hyper-chaotic behavior, and back again to chaos when the voltage controlling-parameter is increased. The high frequency part of the signals is discussed in terms of highly nonlinear turbulent motion developed around the working electrode.
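Phase-portrait reconstruction of the kind mentioned above typically starts from a Takens time-delay embedding. A minimal generic sketch (not the authors' pipeline; in practice `dim` and `tau` would be chosen via average mutual information and false-nearest-neighbor criteria):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens time-delay embedding: map a scalar series x into
    dim-dimensional vectors [x(i), x(i+tau), ..., x(i+(dim-1)*tau)]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# usage: embed a clean sine; in 2-D the orbit traces a closed loop,
# the signature of (quasi-)periodic rather than chaotic dynamics
t = np.linspace(0, 20 * np.pi, 2000)
emb = delay_embed(np.sin(t), dim=2, tau=25)
```

Largest Lyapunov exponents and correlation dimensions are then computed on the embedded vectors `emb`.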
Models for forecasting hospital bed requirements in the acute sector.
Farmer, R D; Emami, J
1990-01-01
STUDY OBJECTIVE--The aim was to evaluate the current approach to forecasting hospital bed requirements. DESIGN--The study was a time series and regression analysis. The time series for mean duration of stay for general surgery in the age group 15-44 years (1969-1982) was used in the evaluation of different methods of forecasting future values of mean duration of stay and its subsequent use in the formation of hospital bed requirements. RESULTS--It has been suggested that the simple trend fitting approach suffers from model specification error and imposes unjustified restrictions on the data. The time series approach (Box-Jenkins method) was shown to be a more appropriate way of modelling the data. CONCLUSION--The simple trend fitting approach is inferior to the time series approach in modelling hospital bed requirements. PMID:2277253
NASA Astrophysics Data System (ADS)
Jayawardena, Adikaramge Asiri
The goal of this dissertation is to identify electrical and thermal parameters of an LED package that can be used to predict catastrophic failure in real time in an application. Through an experimental study, the series electrical resistance and thermal resistance were identified as good indicators of contact failure of LED packages. This study investigated the long-term changes in series electrical resistance and thermal resistance of LED packages at three different current and junction temperature stress conditions. Experiment results showed that the series electrical resistance went through four phases of change, including periods of latency, rapid increase, saturation, and finally a sharp decline just before failure. Formation of voids in the contact metallization was identified as the underlying mechanism for series resistance increase. The rate of series resistance change was linked to void growth using the theory of electromigration. The rate of increase of series resistance is dependent on temperature and current density. The results indicate that void growth occurred in the cap (Au) layer and was constrained by the contact metal (Ni) layer, preventing open-circuit failure of the contact metal layer. Short-circuit failure occurred due to electromigration-induced metal diffusion along dislocations in GaN. The increase in ideality factor and reverse leakage current with time provided further evidence of the presence of metal in the semiconductor. An empirical model was derived for estimation of LED package failure time due to metal diffusion. The model is based on the experimental results and theories of electromigration and diffusion. Furthermore, the experimental results showed that the thermal resistance of LED packages increased with aging time. A relationship between the thermal resistance change rate and the case temperature and temperature gradient within the LED package was developed.
The results showed that dislocation creep is responsible for creep induced plastic deformation in the die-attach solder. The temperatures inside the LED package reached the melting point of die-attach solder due to delamination just before catastrophic open circuit failure. A combined model that could estimate life of LED packages based on catastrophic failure of thermal and electrical contacts is presented for the first time. This model can be used to make a-priori or real-time estimation of LED package life based on catastrophic failure. Finally, to illustrate the usefulness of the findings from this thesis, two different implementations of real-time life prediction using prognostics and health monitoring techniques are discussed.
NASA Technical Reports Server (NTRS)
Mlynczak, Martin G.; Martin-Torres, F. Javier; Mertens, Christopher J.; Marshall, B. Thomas; Thompson, R. Earl; Kozyra, Janet U.; Remsberg, Ellis E.; Gordley, Larry L.; Russell, James M.; Woods, Thomas
2008-01-01
We examine time series of the daily global power (W) radiated by carbon dioxide (at 15 microns) and by nitric oxide (at 5.3 microns) from the Earth's thermosphere between 100 km and 200 km altitude. Also examined is a time series of the daily absorbed solar ultraviolet power in the same altitude region in the wavelength span 0 to 175 nm. The infrared data are derived from the SABER instrument and the solar data are derived from the SEE instrument, both on the NASA TIMED satellite. The time series cover nearly 5 years from 2002 through 2006. The infrared and solar time series exhibit a decrease in radiated and absorbed power consistent with the declining phase of the current 11-year solar cycle. The infrared time series also exhibits high frequency variations that are not evident in the solar power time series. Spectral analysis shows a statistically significant 9-day periodicity in the infrared data but not in the solar data. A very strong 9-day periodicity is also found to exist in the time series of the daily Ap and Kp geomagnetic indexes. These 9-day periodicities are linked to the recurrence of coronal holes on the Sun. These results demonstrate a direct coupling between the upper atmosphere of the Sun and the infrared energy budget of the thermosphere.
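The 9-day periodicity above was found by spectral analysis; a generic FFT-periodogram sketch (not the SABER pipeline itself), applied to a hypothetical daily series with a built-in 9-day oscillation riding on a slow solar-cycle-like decline:

```python
import numpy as np

def dominant_period(x, dt=1.0):
    """Dominant period via the FFT periodogram of a mean-removed series;
    dt is the sampling interval (1 day here)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = 1 + int(np.argmax(power[1:]))   # skip the zero-frequency bin
    return 1.0 / freqs[k]

# hypothetical ~5-year daily series: 9-day oscillation plus slow decline
t = np.arange(1800, dtype=float)
x = np.sin(2 * np.pi * t / 9.0) + 0.0005 * (1800 - t)
```

A significance test (e.g. against a red-noise background) would be needed before claiming the peak is real, as the abstract's "statistically significant" wording implies.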
78 FR 31386 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-24
... Airworthiness Directives; Airbus Airplanes AGENCY: Federal Aviation Administration (FAA), Department of...) for all Airbus Model A330-200 and -300 series airplanes; and Model A340-200, -300, -500, and -600 series airplanes. That AD currently requires a one-time detailed inspection of both main landing gear...
Coherence and Chaos Phenomena in Josephson Oscillators for Superconducting Electronics.
1989-01-25
...represents dissipation due to the surface resistance of the superconducting films; y is the uniform bias current normalized to the... represents series loss due to surface resistance of the superconducting films... the solution is expanded in a series of time-dependent Fourier spatial components; this approach provides... the simplest case is that in which there is no external magnetic field applied to the junction. In this
Evaluating disease management program effectiveness: an introduction to time-series analysis.
Linden, Ariel; Adams, John L; Roberts, Nancy
2003-01-01
Currently, the most widely used method in the disease management (DM) industry for evaluating program effectiveness is referred to as the "total population approach." This model is a pretest-posttest design, whose most basic limitation is that, without a control group, there may be sources of bias and/or competing extraneous confounding factors that offer a plausible rationale explaining the change from baseline. Furthermore, with the current inclination of DM programs to use financial indicators rather than program-specific utilization indicators as the principal measure of program success, additional biases are introduced that may cloud evaluation results. This paper presents a non-technical introduction to time-series analysis (using disease-specific utilization measures) as an alternative, and more appropriate, approach to evaluating DM program effectiveness than the current total population approach.
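A common concrete form of the time-series alternative is segmented (interrupted) time-series regression, which estimates level and slope changes at the program start instead of a bare pre/post difference. A minimal sketch with hypothetical utilization data (not the authors' exact model):

```python
import numpy as np

def interrupted_ts_fit(y, t0):
    """Segmented (interrupted) time-series OLS:
    y ~ b0 + b1*t + b2*step(t>=t0) + b3*(t-t0)*step(t>=t0),
    where b2 is the level change and b3 the slope change at t0."""
    n = len(y)
    t = np.arange(n, dtype=float)
    step = (t >= t0).astype(float)
    X = np.column_stack([np.ones(n), t, step, (t - t0) * step])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta  # [intercept, baseline slope, level change, slope change]

# hypothetical monthly utilization: flat baseline, drop of 5 at month 24
y = np.concatenate([np.full(24, 20.0), np.full(24, 15.0)])
b = interrupted_ts_fit(y, t0=24)
```

With real data one would also model autocorrelation of the residuals (e.g. ARIMA errors), which plain OLS ignores.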
Headlines: Planet Earth: Improving Climate Literacy with Short Format News Videos
NASA Astrophysics Data System (ADS)
Tenenbaum, L. F.; Kulikov, A.; Jackson, R.
2012-12-01
One of the challenges of communicating climate science is the sense that climate change is remote and unconnected to daily life--something that's happening to someone else or in the future. To help face this challenge, NASA's Global Climate Change website http://climate.nasa.gov has launched a new video series, "Headlines: Planet Earth," which focuses on current climate news events. This rapid-response video series uses 3D video visualization technology combined with real-time satellite data and images to throw a spotlight on real-world events. The "Headlines: Planet Earth" news video products will be deployed frequently, ensuring timeliness. NASA's Global Climate Change website makes extensive use of interactive media, immersive visualizations, ground-based and remote images, narrated and time-lapse videos, time-series animations, and real-time scientific data, plus maps and user-friendly graphics that make the scientific content both accessible and engaging to the public. The site has also won two consecutive Webby Awards for Best Science Website. Connecting climate science to current real-world events will contribute to improving climate literacy by making climate science relevant to everyday life.
Progress Report on the Airborne Metadata and Time Series Working Groups of the 2016 ESDSWG
NASA Astrophysics Data System (ADS)
Evans, K. D.; Northup, E. A.; Chen, G.; Conover, H.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.
2016-12-01
NASA's Earth Science Data Systems Working Groups (ESDSWG) were created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems from users' experiences. Each group works independently, focusing on a unique topic. Participation in ESDSWG groups comes from a variety of NASA-funded science and technology projects, including MEaSUREs and ROSS. Participants include NASA information technology experts, affiliated contractor staff and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long-term data products. The Airborne Metadata Working Group is evaluating the suitability of the current Common Metadata Repository (CMR) and Unified Metadata Model (UMM) for airborne data sets and developing new recommendations as necessary. The overarching goal is to enhance the usability, interoperability, discovery and distribution of airborne observational data sets. This will be done by assessing the suitability (gaps) of the current UMM model for airborne data using lessons learned from current and past field campaigns, listening to user needs and community recommendations, and assessing the suitability of ISO metadata and other standards to fill the gaps. The Time Series Working Group (TSWG) is a continuation of the 2015 Time Series/WaterML2 Working Group. The TSWG is using a case study-driven approach to test the new Open Geospatial Consortium (OGC) TimeseriesML standard to determine any deficiencies with respect to its ability to fully describe and encode NASA Earth observation-derived time series data. To do this, the time series working group is engaging with the OGC TimeseriesML Standards Working Group (SWG) regarding unsatisfied needs and possible solutions. The effort will end with the drafting of an OGC Engineering Report based on the use cases and interactions with the OGC TimeseriesML SWG.
Progress towards finalizing recommendations will be presented at the meeting.
Multivariate multiscale entropy of financial markets
NASA Astrophysics Data System (ADS)
Lu, Yunfan; Wang, Jun
2017-11-01
In the current process of quantifying the dynamical properties of complex phenomena in the financial market system, multivariate financial time series are widely studied. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages have been demonstrated with numerical simulations on two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in the Chinese stock markets is quantified thanks to the interdisciplinary application of this method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as the stock trading time progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, quantifying the complexity of global stock markets (Asia, Europe and America) is carried out by analyzing their multivariate returns. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
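MMSE builds on coarse-graining plus sample entropy. A univariate sketch of those two ingredients (the multivariate extension used in the paper is omitted); the tolerance is held fixed from the original series across scales, following the usual multiscale-entropy convention, and the simplified pair-counting here is an assumption of the sketch:

```python
import numpy as np

def sample_entropy(x, m=2, tol=None):
    """Sample entropy SampEn = -ln(A/B): B counts template pairs within
    tolerance at length m, A at length m+1 (simplified variant)."""
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = 0.2 * x.std()
    def pair_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return ((d <= tol).sum() - len(templ)) / 2.0  # drop self-matches
    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B)

def coarse_grain(x, scale):
    """Non-overlapping window averages of length `scale`."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], dtype=float).reshape(-1, scale).mean(axis=1)

rng = np.random.default_rng(1)
wn = rng.standard_normal(1000)
tol = 0.2 * wn.std()      # fixed tolerance across scales
e1 = sample_entropy(coarse_grain(wn, 1), tol=tol)
e4 = sample_entropy(coarse_grain(wn, 4), tol=tol)
# white noise: entropy falls with scale, i.e. no sustained complexity
```

Genuinely complex signals (e.g. 1/f noise, or the return series above) keep their entropy high across scales, which is what the multiscale profile diagnoses.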
NASA Astrophysics Data System (ADS)
Honegger, D. A.; Haller, M. C.; Diaz Mendez, G. M.; Pittman, R.; Catalan, P. A.
2012-12-01
Land-based X-band marine radar observations were collected as part of the month-long DARLA-MURI / RIVET-DRI field experiment at New River Inlet, NC in May 2012. Here we present a synopsis of preliminary results utilizing microwave radar backscatter time series collected from an antenna located 400 m inside the inlet mouth and with a footprint spanning 1000 m beyond the ebb shoals. Two crucial factors in the forcing and constraining of nearshore numerical models are accurate bathymetry and offshore variability in the wave field. Image time series of radar backscatter from surface gravity waves can be utilized to infer these parameters over a large swath and during times of poor optical visibility. Presented are radar-derived wavenumber vector maps obtained from the Plant et al. (2008) algorithm and bathymetric estimates as calculated using Holman et al. (JGR, in review). We also evaluate the effects of tidal currents on the wave directions and depth inversion accuracy. In addition, shifts in the average wave breaking patterns at tidal frequencies shed light on depth- (and possibly current-) induced breaking as a function of tide level and tidal current velocity, while shifts over longer timescales imply bedform movement during the course of the experiment. Lastly, lowpass filtered radar image time series of backscatter intensity are shown to identify the structure and propagation of tidal plume fronts and multiscale ebb jets at the offshore shoal boundary.
Series transistors isolate amplifier from flyback voltage
NASA Technical Reports Server (NTRS)
Banks, W.
1967-01-01
Circuit enables high sawtooth currents to be passed through a deflection coil and isolate the coil driving amplifier from the flyback voltage. It incorporates a switch consisting of transistors in series with the driving amplifier and deflection coil. The switch disconnects the deflection coil from the amplifier during the retrace time.
Time series smoother for effect detection.
You, Cheng; Lin, Dennis K J; Young, S Stanley
2018-01-01
In environmental epidemiology, it is often encountered that multiple time series data with a long-term trend, including seasonality, cannot be fully adjusted by the observed covariates. The long-term trend is difficult to separate from abnormal short-term signals of interest. This paper addresses how to estimate the long-term trend in order to recover short-term signals. Our case study demonstrates that the current spline smoothing methods can result in significant positive and negative cross-correlations from the same dataset, depending on how the smoothing parameters are chosen. To circumvent this dilemma, three classes of time series smoothers are proposed to detrend time series data. These smoothers do not require fine tuning of parameters and can be applied to recover short-term signals. The properties of these smoothers are shown with both a case study using a factorial design and a simulation study using datasets generated from the original dataset. General guidelines are provided on how to discover short-term signals from time series with a long-term trend. The benefit of this research is that a problem is identified and characteristics of possible solutions are determined.
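One of the simplest detrending smoothers of this kind is a centered moving average whose residual retains the short-term signal. A sketch on a hypothetical daily series (illustrative only, not one of the paper's three proposed smoother classes):

```python
import numpy as np

def moving_average_detrend(y, window):
    """Remove a slowly varying trend with a centered moving average
    (odd window); the residual retains short-term signals."""
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode="same")
    return y - trend, trend

# hypothetical daily series: linear trend + annual seasonality + a
# one-week abnormal event of size 3 starting at day 180
t = np.arange(365, dtype=float)
slow = 0.01 * t + np.sin(2 * np.pi * t / 365)
spike = np.where((t >= 180) & (t < 187), 3.0, 0.0)
resid, _ = moving_average_detrend(slow + spike, window=61)
# the event survives detrending: the residual peaks near day 183
```

The window length plays the role of the smoothing parameter whose arbitrary choice the paper criticizes in spline methods; here it only needs to be long relative to the signal and short relative to the trend.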
On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.
Thompson, William Hedley; Fransson, Peter
2016-12-01
Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adheres to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
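The combined transformation described above can be sketched as a Fisher z-step followed by a Box-Cox step. A minimal illustration on a hypothetical sliding-window correlation series; the shift to positive support and the fixed λ = 0.5 are assumptions of this sketch (in practice λ is fitted per series), not the authors' values:

```python
import numpy as np

def fisher_z(r):
    """Fisher transformation z = arctanh(r), applied to windowed
    correlation values; clipping guards against |r| = 1."""
    return np.arctanh(np.clip(r, -0.999999, 0.999999))

def box_cox(x, lam):
    """Box-Cox transform for positive x; lam = 0 gives log."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

# hypothetical dynamic-connectivity series of correlations in (-1, 1)
rng = np.random.default_rng(2)
r = np.tanh(0.5 + 0.2 * rng.standard_normal(500))
z = fisher_z(r)
# shift to positive support before the additional Box-Cox step
zb = box_cox(z - z.min() + 1e-3, lam=0.5)
```

The paper's point is that `z` alone can remain skewed when the true correlation fluctuates; the extra Box-Cox step pushes the series closer to Gaussian before clustering or similar analyses.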
Permutation entropy of finite-length white-noise time series.
Little, Douglas J; Kane, Deb M
2016-08-01
Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ² distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions about whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
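Permutation entropy itself is straightforward to compute: count the ordinal patterns of D consecutive samples and take the Shannon entropy of their distribution, optionally normalized by ln D!. A sketch contrasting white noise (PE near the maximum) with a monotone series (a single pattern, PE = 0):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, D=3, normalize=True):
    """Permutation entropy: Shannon entropy of the distribution of
    ordinal patterns of D consecutive samples (N - D + 1 trials)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - D + 1
    # ordinal pattern of each window, encoded via argsort
    patterns = np.array([np.argsort(x[i:i + D]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    H = -np.sum(p * np.log(p))
    return H / log(factorial(D)) if normalize else H

rng = np.random.default_rng(3)
pe_noise = permutation_entropy(rng.standard_normal(2000), D=3)
pe_trend = permutation_entropy(np.arange(2000, dtype=float), D=3)
```

The paper's χ² result concerns exactly the small downward bias of `pe_noise` from 1 at finite N, which this naive estimator exhibits.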
Cycles in oceanic teleconnections and global temperature change
NASA Astrophysics Data System (ADS)
Seip, Knut L.; Grøn, Øyvind
2018-06-01
Three large ocean currents are represented by proxy time series: the North Atlantic Oscillation (NAO), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). We here show how proxies for the currents interact with each other and with the global temperature anomaly (GTA). Our results are obtained by a novel method, which identifies running average leading-lagging (LL) relations between paired series. We find common cycle times for the paired series of 6-7 and 25-28 years and identify years when the LL relations switch. Switching occurs at 18.4 ± 14.3-year intervals for the short 6-7-year cycles and at 27 ± 15-year intervals for the 25-28-year cycles. During the period 1940-1950, the LL relations for the long cycles were circular (nomenclature x leads y: x → y): GTA → NAO → SOI → PDO → GTA. However, after 1960, the LL relations become more complex and there are indications that GTA leads both NAO and PDO. The switching years are related to ocean current tie points and reversals reported in the literature.
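The running-average LL method is the paper's own construction; a cruder, generic way to decide which of two paired series leads is lagged cross-correlation. A sketch on synthetic series where the lag is known by construction:

```python
import numpy as np

def best_lag(x, y, max_lag):
    """Lag (in samples) at which x best predicts y, via normalized
    cross-correlation; positive lag means x leads y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    def corr(k):
        if k >= 0:
            a, b = x[: len(x) - k], y[k:]
        else:
            a, b = x[-k:], y[: len(y) + k]
        return float(np.mean(a * b))
    return max(range(-max_lag, max_lag + 1), key=corr)

# synthetic check: y is x delayed by 5 samples, so x leads y by 5
t = np.arange(400, dtype=float)
x = np.sin(2 * np.pi * t / 80)
y = np.sin(2 * np.pi * (t - 5) / 80)
```

A single whole-series lag like this cannot capture the switching the paper reports; their running-window variant re-estimates the relation through time for exactly that reason.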
NASA Astrophysics Data System (ADS)
Phillips, D. A.; Herring, T.; Melbourne, T. I.; Murray, M. H.; Szeliga, W. M.; Floyd, M.; Puskas, C. M.; King, R. W.; Boler, F. M.; Meertens, C. M.; Mattioli, G. S.
2017-12-01
The Geodesy Advancing Geosciences and EarthScope (GAGE) Facility, operated by UNAVCO, provides a diverse suite of geodetic data, derived products and cyberinfrastructure services to support community Earth science research and education. GPS data and products including decadal station position time series and velocities are provided for 2000+ continuous GPS stations from the Plate Boundary Observatory (PBO) and other networks distributed throughout the high Arctic, North America, and Caribbean regions. The position time series contain a multitude of signals in addition to the secular motions, including coseismic and postseismic displacements, interseismic strain accumulation, and transient signals associated with hydrologic and other processes. We present our latest velocity field solutions, new time series offset estimate products, and new time series examples associated with various phenomena. Position time series, and the signals they contain, are inherently dependent upon analysis parameters such as network scaling and reference frame realization. The estimation of scale changes for example, a common practice, has large impacts on vertical motion estimates. GAGE/PBO velocities and time series are currently provided in IGS (IGb08) and North America (NAM08, IGb08 rotated to a fixed North America Plate) reference frames. We are reprocessing all data (1996 to present) as part of the transition from IGb08 to IGS14 that began in 2017. New NAM14 and IGS14 data products are discussed. GAGE/PBO GPS data products are currently generated using onsite computing clusters. As part of an NSF funded EarthCube Building Blocks project called "Deploying MultiFacility Cyberinfrastructure in Commercial and Private Cloud-based Systems (GeoSciCloud)", we are investigating performance, cost, and efficiency differences between local computing resources and cloud based resources. 
Test environments include a commercial cloud provider (Amazon/AWS), NSF cloud-like infrastructures within XSEDE (TACC, the Texas Advanced Computing Center), and in-house cyberinfrastructures. Preliminary findings from this effort are presented. Web services developed by UNAVCO to facilitate the discovery, customization and dissemination of GPS data and products are also presented.
Hydrodynamic and suspended-solids concentration measurements in Suisun Bay, California, 1995
Cuetara, Jay I.; Burau, Jon R.; Schoellhamer, David H.
2001-01-01
Sea level, current velocity, water temperature, salinity (computed from conductivity and temperature), and suspended-solids data collected in Suisun Bay, California, from May 30, 1995, through October 27, 1995, by the U.S. Geological Survey are documented in this report. Data were collected concurrently at 21 sites. Various parameters were measured at each site. Velocity-profile data were collected at 6 sites, single-point velocity measurements were made at 9 sites, salinity data were collected at 20 sites, and suspended-solids concentrations were measured at 10 sites. Sea-level and velocity data are presented in three forms: harmonic analysis results; time-series plots (sea level, current speed, and current direction versus time); and plots of low-pass-filtered time series. Temperature, salinity, and suspended-solids data are presented as plots of raw and low-pass-filtered time series. The velocity and salinity data presented in this report document a period when the residual current patterns and salt field were transitioning from a freshwater-inflow-dominated condition towards a quasi steady-state summer condition when density-driven circulation and tidal nonlinearities became relatively more important as long-term transport mechanisms. Sacramento-San Joaquin River Delta outflow was high prior to and during this study, so the tidally averaged salinities were abnormally low for this time of year. For example, the tidally averaged salinities varied from 0-12 at Martinez, the western border of Suisun Bay, to a maximum of 2 at Mallard Island, the eastern border of Suisun Bay. Even though salinities increased overall in Suisun Bay during the study period, the near-bed residual currents primarily were directed seaward.
Therefore, salinity intrusion through Suisun Bay towards the Delta primarily was accomplished in the absence of the tidally averaged, two-layer flow known as gravitational circulation where, by definition, the net currents are landward at the bed. The Folsom Dam spillway gate failure on July 17, 1995, was analyzed to determine the effect on the hydrodynamics of Suisun Bay. The peak flow of the American River reached roughly 1,000 cubic meters per second as a result of the failure, which is relatively small. This was roughly 15 percent of the approximate 7,000 cubic meters per second tidal flows that occur daily in Suisun Bay and was likely attenuated greatly. Based on analysis of tidally averaged near-bed salinity and depth-averaged currents after the failure, the effect was essentially nonexistent and is indistinguishable from the natural variability.
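Low-pass filtering of tidal records like those above is often done with a Godin-style filter: successive centered running means of 24, 24, and 25 hours on hourly data, which strongly damp diurnal and semidiurnal tides while passing subtidal variability. A sketch on a synthetic hourly series (an illustration of the filter family, not the USGS processing itself):

```python
import numpy as np

def godin_filter(x):
    """Godin-style tidal low-pass: successive centered running means of
    24, 24, and 25 samples applied to an hourly series."""
    def runmean(v, w):
        return np.convolve(v, np.ones(w) / w, mode="same")
    return runmean(runmean(runmean(np.asarray(x, float), 24), 24), 25)

t = np.arange(24 * 30, dtype=float)                # 30 days, hourly
tide = np.sin(2 * np.pi * t / 12.42)               # M2 semidiurnal
subtidal = 0.5 * np.sin(2 * np.pi * t / (24 * 15)) # fortnightly signal
filt = godin_filter(tide + subtidal)
# away from the edges, filt recovers the subtidal signal and the
# semidiurnal tide is attenuated by several orders of magnitude
```

Edge samples (roughly the filter half-width at each end) are contaminated by the zero padding of `mode="same"` and are normally discarded.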
Evolution of the Sunspot Number and Solar Wind B Time Series
NASA Astrophysics Data System (ADS)
Cliver, Edward W.; Herbst, Konstantin
2018-03-01
The past two decades have witnessed significant changes in our knowledge of long-term solar and solar wind activity. The sunspot number time series (1700-present) developed by Rudolf Wolf during the second half of the 19th century was revised and extended by the group sunspot number series (1610-1995) of Hoyt and Schatten during the 1990s. The group sunspot number is significantly lower than the Wolf series before ˜1885. An effort from 2011-2015 to understand and remove differences between these two series via a series of workshops had the unintended consequence of prompting several alternative constructions of the sunspot number. Thus it has been necessary to expand and extend the sunspot number reconciliation process. On the solar wind side, after a decade of controversy, an ISSI International Team used geomagnetic and sunspot data to obtain a high-confidence time series of the solar wind magnetic field strength (B) from 1750-present that can be compared with two independent long-term (> ˜600 year) series of annual B-values based on cosmogenic nuclides. In this paper, we trace the twists and turns leading to our current understanding of long-term solar and solar wind activity.
State Energy Price and Expenditure Estimates
2017-01-01
The State Energy Price and Expenditure Estimates provide data on energy prices in current dollars per million Btu and expenditures in current dollars, by state and for the United States, by energy source and by sector, in annual time series back to 1970.
78 FR 17297 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-21
... Airworthiness Directives; Rolls-Royce plc Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT... (AD) for all Rolls-Royce plc (RR) RB211 Trent 500 series turbofan engines. That AD currently requires... 9, 2012), for all RR RB211 Trent 500 series turbofan engines. That AD requires a one-time inspection...
78 FR 1991 - Major Capital Investment Projects
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-09
...) published on June 3, 2010 (75 FR 31383), which posed a series of questions about the current regulation and... system in which well- justified projects are funded. At the same time, FTA seeks to ensure that it does...; to use a series of standard factors in a simple spreadsheet to calculate vehicle miles traveled (VMT...
Using satellite laser ranging to measure ice mass change in Greenland and Antarctica
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Chambers, Don P.; Cheng, Minkang
2018-01-01
A least squares inversion of satellite laser ranging (SLR) data over Greenland and Antarctica could extend gravimetry-based estimates of mass loss back to the early 1990s and fill any future gap between the current Gravity Recovery and Climate Experiment (GRACE) and the future GRACE Follow-On mission. The results of a simulation suggest that, while separating the mass change between Greenland and Antarctica is not possible at the limited spatial resolution of the SLR data, estimating the total combined mass change of the two areas is feasible. When the method is applied to real SLR and GRACE gravity series, we find significantly different estimates of inverted mass loss. There are large, unpredictable, interannual differences between the two inverted data types, making us conclude that the current 5×5 spherical harmonic SLR series cannot be used to stand in for GRACE. However, a comparison with the longer IMBIE time series suggests that on a 20-year time frame, the inverted SLR series' interannual excursions may average out, and the long-term mass loss estimate may be reasonable.
Atmospheric turbulence simulation for Shuttle orbiter
NASA Technical Reports Server (NTRS)
Tatom, F. B.; Smith, S. R.
1979-01-01
An improved non-recursive model for atmospheric turbulence along the flight path of the Shuttle Orbiter is developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients are generated and stored on a series of magnetic tapes. Section 2 provides a description of the various technical considerations associated with the turbulence simulation model, including the digital filter simulation model, the von Karman spectra with finite upper limits, and the final non-recursive turbulence simulation model which was used to generate the time series. Section 3 provides a description of the time series as currently recorded on magnetic tape. Conclusions and recommendations are presented in Section 4.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.
NASA Astrophysics Data System (ADS)
Alken, P.; Chulliat, A.; Maus, S.
2012-12-01
The day-time eastward equatorial electric field (EEF) in the ionospheric E-region plays an important role in equatorial ionospheric dynamics. It is responsible for driving the equatorial electrojet (EEJ) current system, equatorial vertical ion drifts, and the equatorial ionization anomaly (EIA). Due to its importance, there is much interest in accurately measuring and modeling the EEF. However, there are limited sources of direct EEF measurements with full temporal and spatial coverage of the equatorial ionosphere. In this work, we propose a method of estimating a continuous day-time time series of the EEF at any longitude, provided there is a pair of ground magnetic observatories in the region which can accurately track changes in the strength of the EEJ. First, we derive a climatological unit latitudinal current profile from direct overflights of the CHAMP satellite and use delta H measurements from the ground observatory pair to determine the magnitude of the current. The time series of current profiles is then inverted for the EEF by solving the governing electrodynamic equations. While this method has previously been applied and validated in the Peruvian sector, in this work we demonstrate the method using a pair of magnetometers in Africa (Samogossoni, SAM, 0.18 degrees magnetic latitude and Tamanrasset, TAM, 11.5 degrees magnetic latitude) and validate the resulting EEF values against the CINDI ion velocity meter (IVM) instrument on the C/NOFS satellite. We find a very good 80% correlation with C/NOFS IVM measurements and a root-mean-square difference of 9 m/s in vertical drift velocity. This technique can be extended to any pair of ground observatories which can capture the day-time strength of the EEJ. We plan to apply this work to more observatory pairs around the globe and distribute real-time equatorial electric field values to the community.
Series resonant converter with auxiliary winding turns: analysis, design and implementation
NASA Astrophysics Data System (ADS)
Lin, Bor-Ren
2018-05-01
Conventional series resonant converters have been researched and applied in high-efficiency power units owing to their low switching losses. Their main problems are wide frequency variation and high circulating current. Thus, the resonant converter is limited to a narrow input voltage range, and a large input capacitor is normally adopted in commercial power units to meet the minimum hold-up time requirement when AC power is off. To overcome these problems, a resonant converter with auxiliary secondary windings is presented in this paper to achieve high voltage gain at low input voltage, such as during the hold-up time when utility power is off. Since the high voltage gain is used only in the low-input-voltage case, the frequency variation of the proposed converter is reduced compared to the conventional resonant converter. Compared to the conventional resonant converter, the hold-up time of the proposed converter is more than 40 ms. A larger magnetising inductance of the transformer is used to reduce the circulating current losses. Finally, a laboratory prototype is constructed and experiments are provided to verify the converter performance.
CI2 for creating and comparing confidence-intervals for time-series bivariate plots.
Mullineaux, David R
2017-02-01
Currently no method exists for calculating and comparing the confidence-intervals (CI) for the time-series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify if the CI between two bivariate time-series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard-treadmill and non-motorised curved-treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence-ellipses at each time-point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time-points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals at the same time or ±1 frame as a time-lag calculated using cross-correlations, indicated where the two time-series differed. CI2 showed no group differences between left and right legs on both treadmills, but the same legs between treadmills for all participants showed differences of less knee extension on the curved-treadmill before heel-strike. To improve and standardise the use of CI2 it is recommended to remove outlier time-series, use 95% confidence-ellipses, and scale the ellipse by the fixed Chi-square value as opposed to the sample-size dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time-series.
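The ellipse-and-perpendicular construction described above can be sketched numerically. The following is a hypothetical illustration (the function name and details are assumed, not the authors' Matlab code), using the fixed chi-square scaling the paper recommends over the sample-size-dependent F-value:

```python
import numpy as np

CHI2_95_2DOF = 5.991  # fixed 95% chi-square value for 2 degrees of freedom

def ci2_points(samples_t, mean_prev, mean_next):
    """One CI2 step: the two CI points at a single time-point.

    samples_t : (n, 2) array of bivariate observations (e.g. knee/ankle angle).
    mean_prev, mean_next : bivariate means of the adjacent time-points,
    defining the direction of travel along the bivariate curve.
    """
    mu = samples_t.mean(axis=0)
    cov = np.cov(samples_t, rowvar=False)
    # Unit vector perpendicular to the direction between adjacent means
    d = np.asarray(mean_next, float) - np.asarray(mean_prev, float)
    d = d / np.linalg.norm(d)
    n = np.array([-d[1], d[0]])
    # Distance from the mean to the 95% confidence ellipse along n:
    # the ellipse is {x : x^T cov^-1 x = chi2}, so r = sqrt(chi2 / (n^T cov^-1 n))
    r = np.sqrt(CHI2_95_2DOF / (n @ np.linalg.solve(cov, n)))
    return mu - r * n, mu + r * n
```

Consecutive pairs of such point pairs would then form the convex quadrilaterals whose overlap is tested between the two time-series.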
Time is an affliction: Why ecology cannot be as predictive as physics and why it needs time series
NASA Astrophysics Data System (ADS)
Boero, F.; Kraberg, A. C.; Krause, G.; Wiltshire, K. H.
2015-07-01
Ecological systems depend on both constraints and historical contingencies, both of which shape their present observable system state. In contrast to ahistorical systems, which are governed solely by constraints (i.e. laws), historical systems and their dynamics can be understood only if properly described in the course of time. Describing these dynamics and understanding long-term variability can be seen as the mission of long time series, measuring not only simple abiotic features but also complex biological variables, such as species diversity and abundances, allowing deep insights into the functioning of food webs and ecosystems in general. Long time series are irreplaceable for understanding change and, crucially, inherent system variability, and thus for envisaging future scenarios. This notwithstanding, current policies in funding and evaluating scientific research discourage the maintenance of long-term series, despite a clear need for long-term strategies to cope with climate change. Time series are crucial to the pursuit of the much-invoked Ecosystem Approach and to the passage from simple monitoring programs to large-scale and long-term Earth observatories - thus promoting a better understanding of the causes and effects of change in ecosystems. The few ongoing long time series in European waters must be integrated and networked so as to facilitate the formation of nodes of a series of observatories which, together, should allow the long-term management of the features and characteristics of European waters. Human capacity building in this area of expertise and a stronger societal involvement are also urgently needed, since the expertise in recognizing and describing species, and therefore recording them reliably in the context of time series, is rapidly vanishing from the European scientific community.
Superconducting fault current limiter for railway transport
NASA Astrophysics Data System (ADS)
Fisher, L. M.; Alferov, D. F.; Akhmetgareev, M. R.; Budovskii, A. I.; Evsin, D. V.; Voloshin, I. F.; Kalinov, A. V.
2015-12-01
A resistive switching superconducting fault current limiter (SFCL) for DC networks with voltage of 3.5 kV and nominal current of 2 kA is developed. The SFCL consists of two series-connected units: block of superconducting modules and high-speed vacuum breaker with total disconnection time not more than 8 ms. The results of laboratory tests of superconducting SFCL modules in current limiting mode are presented. The recovery time of superconductivity is experimentally determined. The possibility of application of SFCL on traction substations of Russian Railways is considered.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-30
..., and (ii) terminate the opening process when away markets become crossed during the opening process. A new opening process for the affected series would commence at the time the Away Best Bid/Offer (``ABBO... PHLX XL system currently calculates the OQR without regard to away market(s) in the affected series...
Wavelet analysis in ecology and epidemiology: impact of statistical tests
Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario
2014-01-01
Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892
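As a concrete reference point, a red-noise (AR(1)) surrogate - one of the null models the authors criticise as often inadequate - can be generated as follows. This is an illustrative sketch under assumed conventions, not the authors' code:

```python
import numpy as np

def red_noise_surrogate(x, rng):
    """One AR(1) ('red noise') surrogate of series x.

    The surrogate matches the lag-1 autocorrelation and variance of the
    original series but is otherwise random - the classic null model for
    wavelet significance testing.
    """
    x = np.asarray(x, float)
    xc = x - x.mean()
    # lag-1 autocorrelation of the original series
    a = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)
    # innovation std chosen so the surrogate variance matches the series
    sigma = np.sqrt(xc.var() * (1.0 - a**2))
    s = np.empty_like(xc)
    s[0] = rng.normal(0.0, xc.std())
    for t in range(1, len(s)):
        s[t] = a * s[t - 1] + rng.normal(0.0, sigma)
    return s + x.mean()
```

Repeating this many times and recomputing the wavelet statistic on each surrogate yields the null distribution against which the observed statistic is compared.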
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and remains under active research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages over the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction. The approach has been validated with simulated and real datasets.
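For context, the iterated strategy that MISMO competes with can be sketched in a few lines with a least-squares AR model. This is an illustrative baseline with assumed details, not the paper's PSO-MISMO implementation:

```python
import numpy as np

def fit_linear_ar(x, p):
    """Least-squares AR(p) coefficients: x[t] ~ coef . [x[t-p], ..., x[t-1]]."""
    x = np.asarray(x, float)
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def iterated_forecast(x, coef, h):
    """Iterated strategy: feed one-step predictions back in, h times."""
    hist = list(np.asarray(x, float)[-len(coef):])
    out = []
    for _ in range(h):
        yhat = float(np.dot(coef, hist))
        out.append(yhat)
        hist = hist[1:] + [yhat]  # slide the window over the prediction
    return np.array(out)
```

The direct strategy would instead fit one model per horizon step, and MISMO sits in between by fitting one multi-output model per group ("divide") of horizon steps.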
Time series analysis for psychological research: examining and forecasting change
Jebb, Andrew T.; Tay, Louis; Wang, Wei; Huang, Qiming
2015-01-01
Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341
Variations in Stratospheric Inorganic Chlorine Between 1991 and 2006
NASA Technical Reports Server (NTRS)
Lary, D. J.; Waugh, D. W.; Douglass, A. R.; Stolarski, R. S.; Newman, P. A.; Mussa, H.
2007-01-01
So how quickly will the ozone hole recover? This depends on how quickly the inorganic chlorine content (Cly) of the atmosphere will decline. The ozone hole forms over the Antarctic each southern spring (September and October). The extremely small ozone amounts in the ozone hole are there because of chemical reactions of ozone with chlorine. This chlorine originates largely from industrially produced chlorofluorocarbon (CFC) compounds. An international agreement, the Montreal Protocol, is drastically reducing the amount of chlorine-containing compounds that we are releasing into the atmosphere. To be able to attribute changes in stratospheric ozone to changes in chlorine we need to know the distribution of atmospheric chlorine. However, due to a lack of continuous observations of all the key chlorine gases, producing a continuous time series of stratospheric chlorine has not been achieved to date. We have for the first time devised a technique to make a 17-year time series for stratospheric chlorine that uses the long time series of HCl observations made from several spaceborne instruments and a neural network. The neural networks allow us both to inter-calibrate the various HCl instruments and to infer the total amount of atmospheric chlorine from HCl. These new estimates of Cly provide a much-needed critical test for global models that currently predict significant differences in both Cly and ozone recovery. These models exhibit differences in their projections of the recovery time, and our chlorine time series will help separate the good from the bad in these projections.
Design and Development of a Series Switch for High Voltage in RF Heating
NASA Astrophysics Data System (ADS)
Patel, Himanshu K.; Shah, Deep; Thacker, Mauli; Shah, Atman
2013-02-01
Plasma is the fourth state of matter. To sustain plasma in its ionic form, a very high temperature is essential. RF heating systems are used to provide the required temperature. Arcing in these systems can cause enormous damage to the RF tube. Heavy current flows across the anode-cathode junction, which needs to be suppressed in minimal time for its protection. Fast-switching circuit breakers are used to cut off the load from the supply in cases of arcing. The crowbar interrupts the connection between the high voltage power supply (HVPS) and the RF tube for a temporary period, during which the series switch has to open. The crowbar shunts the current across the load but in the process short-circuits the HVPS. Thus, to protect the load as well as the HVPS, a series switch is necessary. This paper presents the design and development of a high voltage series switch for high power switching applications. A fiber-optic-based optimum triggering scheme is designed and tested to restrict the time delay well within the stipulated limits. The design is supported by experimental results for the whole set-up, including the series switch, at various voltage levels before its approval for operation at 5.2 kV.
The Wonders of Physics Outreach Program
NASA Astrophysics Data System (ADS)
Sprott, J. C.; Mirus, K. A.; Newman, D. E.; Watts, C.; Feeley, R. E.; Fernandez, E.; Fontana, P. W.; Krajewski, T.; Lovell, T. W.; Oliva, S.; Stoneking, M. R.; Thomas, M. A.; Jaimison, W.; Maas, K.; Milbrandt, R.; Mullman, K.; Narf, S.; Nesnidal, R.; Nonn, P.
1996-11-01
One important step toward public education about fusion energy is to first elevate the public's appreciation of science in general. Toward this end, the Wonders of Physics program was started at the University of Wisconsin-Madison in 1984 as a public lecture and demonstration series in an attempt to stem a growing tide of science illiteracy and to bolster the public's perception of the scientific enterprise. Since that time, it has grown into a public outreach endeavor which consists of a traveling demonstration show, educational pamphlets, videos, software, a website (http://sprott.physics.wisc.edu/wop.htm), and the annual public lecture demonstration series including tours highlighting the Madison Symmetric Torus and departmental facilities. The presentation has been made about 400 times to a total audience in excess of 50,000. Sample educational materials and Lecture Kits will be available at the poster session.
SaaS Platform for Time Series Data Handling
NASA Astrophysics Data System (ADS)
Oplachko, Ekaterina; Rykunov, Stanislav; Ustinin, Mikhail
2018-02-01
The paper is devoted to the description of MathBrain, a cloud-based resource, which works as a "Software as a Service" model. It is designed to maximize the efficiency of the current technology and to provide a tool for time series data handling. The resource provides access to the following analysis methods: direct and inverse Fourier transforms, Principal component analysis and Independent component analysis decompositions, quantitative analysis, magnetoencephalography inverse problem solution in a single dipole model based on multichannel spectral data.
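Of the analysis methods listed, the PCA decomposition can be sketched via the SVD. This is a generic illustration, not MathBrain's implementation:

```python
import numpy as np

def pca_decompose(data):
    """PCA of multichannel time-series data via SVD.

    data : (n_samples, n_channels) array.
    Returns component time courses (scores), spatial components (rows of Vt),
    and the variance explained by each component.
    """
    X = data - data.mean(axis=0)          # centre each channel
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U * s                        # component time courses
    variances = s**2 / (len(X) - 1)       # variance per component
    return scores, Vt, variances
```

The inverse step (reconstruction) is simply `scores @ Vt` plus the channel means, which is the kind of round trip a multichannel analysis service would expose alongside the forward and inverse Fourier transforms.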
Status of CSR RL06 GRACE reprocessing and preliminary results
NASA Astrophysics Data System (ADS)
Save, H.
2017-12-01
The GRACE project plans to reprocess the GRACE mission data in order to be consistent with the first gravity products released by the GRACE-FO project. The RL06 reprocessing will harmonize the GRACE time series with the first release of GRACE-FO. This paper catalogues the changes in the upcoming RL06 release and discusses the quality improvements over the current RL05 release. The processing and parameterization changes relative to the current release are also discussed. This paper discusses the evolution of the quality of the GRACE solutions and characterizes the errors over the past few years. The possible challenges associated with connecting the GRACE time series with that from GRACE-FO are also discussed.
NASA Astrophysics Data System (ADS)
Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.
2017-12-01
The Surface Water and Ocean Topography (SWOT) satellite mission is jointly developed by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements, and new applications that are not currently possible or perhaps even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represent a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches), and spatial interpolation concepts are used to quantify the fraction of river reaches effectively sampled on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to those from continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics into equivalent continuous daily discharge statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.
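The flow-duration statistics mentioned can be computed from any discharge series. A minimal sketch (Weibull plotting positions assumed; not the study's code):

```python
import numpy as np

def flow_duration_curve(q):
    """Exceedance probability versus discharge for a discharge series.

    Returns (p, q_sorted): p[i] is the (Weibull) probability that discharge
    equals or exceeds q_sorted[i].
    """
    q_sorted = np.sort(np.asarray(q, float))[::-1]        # descending
    p = np.arange(1, len(q_sorted) + 1) / (len(q_sorted) + 1.0)
    return p, q_sorted
```

Comparing this curve computed from a sparse, irregularly sampled series against the same curve from the continuous daily record is one way to quantify what the satellite's sampling preserves.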
NASA Astrophysics Data System (ADS)
Reinovsky, R. E.; Levi, P. S.; Bueck, J. C.; Goforth, J. H.
The Air Force Weapons Laboratory, working jointly with Los Alamos National Laboratory, has conducted a series of experiments directed at exploring composite, or staged, switching techniques for use in opening switches in applications which require the conduction of very high currents (or current densities) with very low losses for relatively long times (several tens of microseconds), and the interruption of these currents in much shorter times (ultimately a few hundred nanoseconds). The results of those experiments are reported.
NASA Astrophysics Data System (ADS)
Wu, Xiaoping; Abbondanza, Claudio; Altamimi, Zuheir; Chin, T. Mike; Collilieux, Xavier; Gross, Richard S.; Heflin, Michael B.; Jiang, Yan; Parker, Jay W.
2015-05-01
The current International Terrestrial Reference Frame is based on a piecewise linear site motion model and realized by reference epoch coordinates and velocities for a global set of stations. Although linear motions due to tectonic plates and glacial isostatic adjustment dominate geodetic signals, at today's millimeter precisions, nonlinear motions due to earthquakes, volcanic activities, ice mass losses, sea level rise, hydrological changes, and other processes become significant. Monitoring these (sometimes rapid) changes requires consistent and precise realization of the terrestrial reference frame (TRF) quasi-instantaneously. Here, we use a Kalman filter and smoother approach to combine time series from four space geodetic techniques to realize an experimental TRF through weekly time series of geocentric coordinates. In addition to secular, periodic, and stochastic components for station coordinates, the Kalman filter state variables also include daily Earth orientation parameters and transformation parameters from input data frames to the combined TRF. Local tie measurements among colocated stations are used at their known or nominal epochs of observation, with comotion constraints applied to almost all colocated stations. The filter/smoother approach unifies different geodetic time series in a single geocentric frame. Fragmented and multitechnique tracking records at colocation sites are bridged together to form longer and coherent motion time series. While the time series approach to TRF reflects the reality of a changing Earth more closely than the linear approximation model, the filter/smoother is computationally powerful and flexible enough to facilitate incorporation of other data types and more advanced characterization of the stochastic behavior of geodetic time series.
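The filter at the heart of this approach can be illustrated for a single coordinate with a constant-velocity model. This is a toy sketch with assumed noise parameters, far simpler than the multi-technique combination described in the abstract:

```python
import numpy as np

def kalman_filter(obs, dt, q, r):
    """Constant-velocity Kalman filter for one coordinate time series.

    State is [position, velocity]; q is the process-noise spectral density,
    r the observation variance. Returns the filtered positions.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])                      # state transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                        # process noise
    H = np.array([[1.0, 0.0]])                                 # observe position
    x = np.array([obs[0], 0.0])
    P = np.eye(2) * 1e4                                        # diffuse initial covariance
    out = []
    for z in obs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + K[:, 0] * (z - x[0])
        P = P - K @ H @ P
        out.append(x[0])
    return np.array(out)
```

In the TRF setting the state would additionally carry periodic terms, Earth orientation parameters, and frame transformation parameters, and a backward smoothing pass would refine the weekly estimates.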
Semi-autonomous remote sensing time series generation tool
NASA Astrophysics Data System (ADS)
Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher
2017-10-01
High spatial and temporal resolution data is vital for crop monitoring and phenology change detection. Due to the lack of suitable satellite architecture and frequent cloud cover, availability of daily high-spatial-resolution data is still far from reality. Remote sensing time series generation of high spatial and temporal resolution data by data fusion seems to be a practical alternative. However, it is not an easy process, since it involves multiple steps and also requires multiple tools. In this paper, a framework for a Geographic Information System (GIS) based tool is presented for semi-autonomous time series generation. This tool eliminates the difficulties by automating all the steps and enables users to generate synthetic time series data with ease. Firstly, all the steps required for the time series generation process are identified and grouped into blocks based on their functionalities. Later, two main frameworks are created: one to perform all the pre-processing steps on various satellite data, and the other to perform data fusion to generate time series. The two frameworks can be used individually to perform specific tasks, or they can be combined to perform both processes in one go. This tool can handle most of the known geo-data formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. The tool is developed as a common platform with a good interface and provides a lot of functionality to enable further development of more remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.
Mobile Visualization and Analysis Tools for Spatial Time-Series Data
NASA Astrophysics Data System (ADS)
Eberle, J.; Hüttich, C.; Schmullius, C.
2013-12-01
The Siberian Earth System Science Cluster (SIB-ESS-C) provides access and analysis services for spatial time-series data built on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and climate data from meteorological stations. To date, a web portal for data access, visualization, and analysis with standards-compliant web services has been developed for SIB-ESS-C. As a further enhancement, a mobile app was developed to provide easy access to these time-series data for field campaigns. The app sends the current position from the GPS receiver and a user-selected dataset (such as land surface temperature or vegetation indices) to our SIB-ESS-C web service and receives the requested time-series data for the identified pixel in real time. The data are then plotted directly in the app. Furthermore, the user can analyze the time-series data for breakpoints and other phenological values. These analyses are executed on demand on our SIB-ESS-C web server, and the results are transferred to the app. Any processing can also be done at the SIB-ESS-C web portal. The aim of this work is to make spatial time-series data and analysis functions available to end users without the need for data processing. In this presentation the author gives an overview of this new mobile app, its functionalities, the technical infrastructure, and technological issues (how the app was developed and lessons learned).
Complex effusive events at Kilauea as documented by the GOES satellite and remote video cameras
Harris, A.J.L.; Thornber, C.R.
1999-01-01
GOES provides thermal data for all of the Hawaiian volcanoes once every 15 min. We show how volcanic radiance time series produced from this data stream can be used as a simple measure of effusive activity. Two types of radiance trends in these time series can be used to monitor effusive activity: (a) Gradual variations in radiance reveal steady flow-field extension and tube development. (b) Discrete spikes correlate with short bursts of activity, such as lava fountaining or lava-lake overflows. We are confident that any effusive event covering more than 10,000 m2 of ground in less than 60 min will be unambiguously detectable using this approach. We demonstrate this capability using GOES, video camera and ground-based observational data for the current eruption of Kilauea volcano (Hawai'i). A GOES radiance time series was constructed from 3987 images between 19 June and 12 August 1997. This time series displayed 24 radiance spikes elevated more than two standard deviations above the mean; 19 of these are correlated with video-recorded short-burst effusive events. More ambiguous events are interpreted, assessed and related to specific volcanic events by simultaneous use of permanently recording video camera data and ground-observer reports. The GOES radiance time series are automatically processed on data reception and made available in near-real-time, so such time series can contribute to three main monitoring functions: (a) automatic alerting of major effusive events; (b) event confirmation and assessment; and (c) establishing effusive event chronology.
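The spike criterion described above (radiance elevated more than two standard deviations above the mean) is straightforward to implement; the sketch below uses synthetic numbers, not actual GOES radiances:

```python
import numpy as np

def find_radiance_spikes(radiance, n_sigma=2.0):
    """Flag samples more than n_sigma standard deviations above the mean.

    Returns the indices of flagged samples, mirroring the simple
    thresholding described for the GOES radiance series.
    """
    radiance = np.asarray(radiance, dtype=float)
    mu, sigma = radiance.mean(), radiance.std()
    return np.flatnonzero(radiance > mu + n_sigma * sigma)

# quiet thermal background with two short synthetic bursts
rng = np.random.default_rng(1)
series = rng.normal(10.0, 0.5, 500)
series[120] += 8.0   # simulated lava fountaining event
series[300] += 6.0   # simulated lava-lake overflow
spikes = find_radiance_spikes(series)
```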
Movie-maps of low-latitude magnetic storm disturbance
NASA Astrophysics Data System (ADS)
Love, Jeffrey J.; Gannon, Jennifer L.
2010-06-01
We present 29 movie-maps of low-latitude horizontal-intensity magnetic disturbance for the years 1999-2006: 28 recording magnetic storms and 1 magnetically quiescent period. The movie-maps are derived from magnetic vector time series data collected at up to 25 ground-based observatories. Using a technique similar to that used in the calculation of Dst, a quiet time baseline is subtracted from the time series from each observatory. The remaining disturbance time series are shown in a polar coordinate system that accommodates both Earth rotation and the universal time dependence of magnetospheric disturbance. Each magnetic storm recorded in the movie-maps is different. While some standard interpretations about the storm time equatorial ring current appear to apply to certain moments and certain phases of some storms, the movie-maps also show substantial variety in the local time distribution of low-latitude magnetic disturbance, especially during storm commencements and storm main phases. All movie-maps are available at the U.S. Geological Survey Geomagnetism Program Web site (http://geomag.usgs.gov).
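The baseline-removal step is conceptually simple; a minimal sketch (using a plain quiet-time mean rather than the full Dst-style secular-variation and solar-quiet corrections) might look like:

```python
import numpy as np

def disturbance_series(h, quiet_mask):
    """Remove a quiet-time baseline from a horizontal-intensity record.

    h          : magnetometer H-component time series (nT)
    quiet_mask : boolean array marking magnetically quiet samples
    The baseline here is simply the mean over quiet times; operational
    Dst-style processing also removes secular variation and solar-quiet
    daily curves, which this sketch omits.
    """
    h = np.asarray(h, dtype=float)
    baseline = h[quiet_mask].mean()
    return h - baseline

# synthetic record: steady field with a storm main-phase depression
h = np.full(100, 30000.0)
h[50:60] -= 150.0                 # main-phase depression
quiet = np.ones(100, dtype=bool)
quiet[40:70] = False              # exclude the disturbed interval
d = disturbance_series(h, quiet)
```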
Altiparmak, Fatih; Ferhatosmanoglu, Hakan; Erdal, Selnur; Trost, Donald C
2006-04-01
An effective analysis of clinical trials data involves analyzing different types of data, such as heterogeneous and high dimensional time series data. Current time series analysis methods generally assume that the series at hand are long enough for statistical techniques to be applied. Other ideal-case assumptions are that data are collected at equal-length intervals, and that series being compared have equal lengths. However, these assumptions do not hold for many real data sets, especially clinical trials data sets. In addition, the data sources differ from each other, the data are heterogeneous, and the sensitivity of the experiments varies by source. Approaches for mining time series data need to be revisited, keeping this wide range of requirements in mind. In this paper, we propose a novel approach for information mining that involves two major steps: applying a data mining algorithm over homogeneous subsets of data, and identifying common or distinct patterns over the information gathered in the first step. Our approach is implemented specifically for heterogeneous and high dimensional time series clinical trials data. Using this framework, we propose a new way of utilizing frequent itemset mining, as well as clustering and declustering techniques with novel distance metrics for measuring similarity between time series data. By clustering the data, we find groups of analytes (substances in blood) that are most strongly correlated. Most of the relationships that are already known are verified by the clinical panels, and, in addition, we identify novel groups that need further biomedical analysis. A slight modification to our algorithm results in an effective declustering of high dimensional time series data, which is then used for "feature selection." Using industry-sponsored clinical trials data sets, we are able to identify a small set of analytes that effectively models the state of normal health.
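As a rough illustration of the clustering step, the sketch below groups synthetic "analyte" series by thresholded Pearson correlation; the paper's actual method uses novel distance metrics and a declustering step that this toy version does not reproduce:

```python
import numpy as np

def cluster_analytes(series, r_threshold=0.8):
    """Group analyte time series whose |Pearson r| exceeds a threshold,
    via connected components of the correlation graph (a minimal
    stand-in for the paper's clustering with custom distance metrics).

    series : (n_analytes, n_timepoints) array
    Returns an integer cluster label per analyte.
    """
    corr = np.abs(np.corrcoef(series))
    n = corr.shape[0]
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], current
        while stack:                       # flood-fill one component
            k = stack.pop()
            for j in range(n):
                if labels[j] < 0 and corr[k, j] > r_threshold:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels

rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 200)
a = np.sin(t) + 0.1 * rng.normal(size=t.size)   # correlated pair
b = np.sin(t) + 0.1 * rng.normal(size=t.size)
c = rng.normal(size=t.size)                      # unrelated analyte
labels = cluster_analytes(np.vstack([a, b, c]))
```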
Legacies of precipitation fluctuations on primary production: theory and data synthesis.
Sala, Osvaldo E; Gherardi, Laureano A; Reichmann, Lara; Jobbágy, Esteban; Peters, Debra
2012-11-19
Variability of above-ground net primary production (ANPP) of arid to sub-humid ecosystems displays a closer association with precipitation when considered across space (based on multiyear averages for different locations) than through time (based on year-to-year change at single locations). Here, we propose a theory of controls of ANPP based on four hypotheses about legacies of wet and dry years that explains the space versus time differences in ANPP-precipitation relationships. We tested the hypotheses using 16 long-term series of ANPP. We found that legacies revealed by the association of current- versus previous-year conditions through the temporal series occur across all ecosystem types from deserts to mesic grasslands. Therefore, previous-year precipitation and ANPP control a significant fraction of current-year production. We developed unified models for the controls of ANPP through space and time. The relative importance of current- versus previous-year precipitation changes along a gradient of mean annual precipitation: the importance of current-year PPT decreases, whereas the importance of previous-year PPT remains constant, as mean annual precipitation increases. Finally, our results suggest that ANPP will respond to climate-change-driven alterations in water availability and, more importantly, that the magnitude of the response will increase with time.
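The legacy idea can be illustrated with a simple regression of ANPP on current- and previous-year precipitation; the data and coefficients below are synthetic, purely for illustration, and the paper's unified models are richer than this:

```python
import numpy as np

def fit_legacy_model(anpp, ppt):
    """Least-squares fit of ANPP on current- and previous-year PPT.

    anpp, ppt : aligned annual series; the first year is dropped
    because it has no previous-year precipitation.
    Returns (intercept, b_current, b_previous).
    """
    y = np.asarray(anpp, float)[1:]
    X = np.column_stack([
        np.ones(y.size),
        np.asarray(ppt, float)[1:],    # current-year PPT
        np.asarray(ppt, float)[:-1],   # previous-year PPT (legacy term)
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# synthetic site with a built-in previous-year legacy effect
rng = np.random.default_rng(3)
ppt = rng.uniform(200, 800, 40)
anpp = 50 + 0.6 * ppt + 0.2 * np.roll(ppt, 1) + rng.normal(0, 5, 40)
anpp[0] = np.nan  # first year has no legacy; excluded by the fit
b0, b_cur, b_prev = fit_legacy_model(anpp, ppt)
```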
NASA Astrophysics Data System (ADS)
Lakshmi, K.; Rama Mohan Rao, A.
2014-10-01
In this paper, a novel output-only damage-detection technique based on time-series models for structural health monitoring in the presence of environmental variability and measurement noise is presented. The large amount of data obtained in the form of time-history response is transformed using principal component analysis, in order to reduce the data size and thereby improve the computational efficiency of the proposed algorithm. The time instant of damage is obtained by fitting the acceleration time-history data from the structure using autoregressive (AR) and AR with exogenous inputs time-series prediction models. The probability density functions (PDFs) of damage features obtained from the variances of prediction errors corresponding to reference and current data are found to shift away from each other due to the presence of various uncertainties such as environmental variability and measurement noise. Control limits based on a novelty index are obtained using the distances between the peaks of the PDF curves in the healthy condition and used later for determining the current condition of the structure. Numerical simulation studies have been carried out using a simply supported beam and also validated using experimental benchmark data corresponding to a three-storey framed bookshelf structure proposed by Los Alamos National Laboratory. The studies carried out in this paper clearly indicate the efficiency of the proposed algorithm for damage detection in the presence of measurement noise and environmental variability.
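The AR-based damage feature can be sketched as follows: fit AR coefficients to a reference (healthy) record, then compare prediction-error variances on new records. This toy version on synthetic sinusoidal "responses" omits the PCA, PDF, and novelty-index machinery of the paper:

```python
import numpy as np

def ar_coeffs(x, p=4):
    """Least-squares AR(p) fit: x[t] ~ sum_i a[i] * x[t-1-i]."""
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def ar_residual_variance(x, coeffs):
    """One-step AR prediction-error variance (the damage feature)."""
    p = len(coeffs)
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    resid = x[p:] - X @ coeffs
    return resid.var()

rng = np.random.default_rng(4)

def response(freq, n=2000):
    """Synthetic acceleration response dominated by one mode."""
    t = np.arange(n) * 0.01
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.normal(size=n)

healthy = response(5.0)
damaged = response(4.0)          # stiffness loss shifts the frequency
a = ar_coeffs(healthy)
feature_healthy = ar_residual_variance(response(5.0), a)
feature_damaged = ar_residual_variance(damaged, a)
```

The damaged record yields a larger prediction-error variance because the reference AR model no longer whitens it.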
Multi-Model Validation of Currents in the Chesapeake Bay Region in June 2010
2012-01-01
... host “DaVinci” at the Naval Oceanographic Office (NAVOCEANO). The same model configuration also took approximately 1 hr of wall-clock time for a 72-hr forecast ... comparable to the performance on the Navy DSRC host DaVinci. Products of water level and horizontal current maps as well as station time series, identical to ... The Chesapeake Bay Delft3D configuration ran on the DSRC host DaVinci and required approximately 5 hrs of wall-clock time for 72-hr forecasts ...
A robust interrupted time series model for analyzing complex health care intervention data.
Cruz, Maricela; Bender, Miriam; Ombao, Hernando
2017-12-20
Current health policy calls for greater use of evidence-based care delivery services to improve patient quality and safety outcomes. Care delivery is complex, with interacting and interdependent components that challenge traditional statistical analytic techniques, in particular, when modeling a time series of outcomes data that might be "interrupted" by a change in a particular method of health care delivery. Interrupted time series (ITS) is a robust quasi-experimental design with the ability to infer the effectiveness of an intervention that accounts for data dependency. Current standardized methods for analyzing ITS data do not model changes in variation and correlation following the intervention. This is a key limitation since it is plausible for data variability and dependency to change because of the intervention. Moreover, present methodology either assumes a prespecified interruption time point with an instantaneous effect or removes data for which the effect of intervention is not fully realized. In this paper, we describe and develop a novel robust interrupted time series (robust-ITS) model that overcomes these omissions and limitations. The robust-ITS model formally performs inference on (1) identifying the change point; (2) differences in preintervention and postintervention correlation; (3) differences in the outcome variance preintervention and postintervention; and (4) differences in the mean preintervention and postintervention. We illustrate the proposed method by analyzing patient satisfaction data from a hospital that implemented and evaluated a new nursing care delivery model as the intervention of interest. The robust-ITS model is implemented in an R Shiny toolbox, which is freely available to the community. Copyright © 2017 John Wiley & Sons, Ltd.
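A stripped-down sketch of the change-point component of such a model scans candidate interruption times and compares pre/post means and variances; the full robust-ITS model additionally handles autocorrelation changes and formal inference, which this toy version omits:

```python
import numpy as np

def fit_change_point(y):
    """Scan candidate interruption times, picking the split that
    minimizes total squared error around the two segment means.

    Returns (tau, pre_mean, post_mean, pre_var, post_var).
    """
    y = np.asarray(y, float)
    best = None
    for tau in range(5, len(y) - 5):        # keep segments non-trivial
        pre, post = y[:tau], y[tau:]
        sse = ((pre - pre.mean()) ** 2).sum() + ((post - post.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, tau)
    tau = best[1]
    pre, post = y[:tau], y[tau:]
    return tau, pre.mean(), post.mean(), pre.var(), post.var()

# synthetic satisfaction scores: mean and variance both change at t=60
rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(70, 2, 60),     # before the intervention
                    rng.normal(78, 4, 60)])    # after the intervention
tau, m0, m1, v0, v1 = fit_change_point(y)
```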
Gartner, J.W.; Yost, B.T.
1988-01-01
Current meter data collected at 11 stations and water level data collected at one station in Suisun and San Pablo Bays, California, in 1986 are compiled in this report. Current-meter measurements include current speed and direction, and water temperature and salinity (computed from temperature and conductivity). For each of the 19 current-meter records, data are presented in two forms: (1) results of harmonic analysis; and (2) plots of tidal current speed and direction versus time and plots of temperature and salinity versus time. The spatial distribution of the properties of tidal currents is given in graphic form. In addition, Eulerian residual currents have been compiled by using a vector-averaging technique. Water level data are presented in the form of a time-series plot and the results of harmonic analysis. (USGS)
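Harmonic analysis of a current record amounts to least-squares fitting of sinusoids at known tidal frequencies. The sketch below fits two major constituents (M2, K1) to a synthetic record; operational analyses use many more constituents plus nodal corrections:

```python
import numpy as np

# Angular speeds (degrees per hour) of two major tidal constituents;
# a full harmonic analysis would include many more.
CONSTITUENTS = {"M2": 28.9841042, "K1": 15.0410686}

def harmonic_analysis(t_hours, u):
    """Least-squares amplitudes and phases of tidal constituents in a
    current-speed record u sampled at times t_hours."""
    cols = [np.ones_like(t_hours)]
    for omega in CONSTITUENTS.values():
        w = np.deg2rad(omega) * t_hours
        cols += [np.cos(w), np.sin(w)]
    X = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(X, u, rcond=None)
    result = {"mean": c[0]}
    for i, name in enumerate(CONSTITUENTS):
        a, b = c[1 + 2 * i], c[2 + 2 * i]
        result[name] = (np.hypot(a, b), np.degrees(np.arctan2(b, a)))
    return result

# synthetic M2-dominated current record (cm/s), 30 days at half-hourly steps
t = np.arange(0, 24 * 30, 0.5)
w_m2 = np.deg2rad(CONSTITUENTS["M2"]) * t
u = 10 + 40 * np.cos(w_m2 - np.deg2rad(30))
res = harmonic_analysis(t, u)
```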
Dynamical Networks Characterization of Space Weather Events
NASA Astrophysics Data System (ADS)
Orr, L.; Chapman, S. C.; Dods, J.; Gjerloev, J. W.
2017-12-01
Space weather can cause disturbances to satellite systems, impacting navigation technology and telecommunications; it can cause power loss and aviation disruption. A central aspect of the earth's magnetospheric response to space weather events is large-scale and rapid change in ionospheric current patterns. Space weather is highly dynamic and there are still many controversies about how the current system evolves in time. The recent SuperMAG initiative collates ground-based vector magnetic field time series from over 200 magnetometers with 1-minute temporal resolution. In principle this combined dataset is an ideal candidate for quantification using dynamical networks. Network properties and parameters allow us to characterize the time dynamics of the full spatiotemporal pattern of the ionospheric current system. However, applying network methodologies to physical data presents new challenges. We establish whether a given pair of magnetometers is connected in the network by calculating their canonical cross correlation. The magnetometers are connected if their cross correlation exceeds a threshold. In our physical time series this threshold needs to be both station specific, as it varies with (non-linear) individual station sensitivity and location, and able to vary with season, which affects ground conductivity. Additionally, the earth rotates and therefore the ground stations move significantly on the timescales of geomagnetic disturbances. The magnetometers are non-uniformly spatially distributed. We will present new methodology which addresses these problems and in particular achieves dynamic normalization of the physical time series in order to form the network. Correlated disturbances across the magnetometers capture transient currents. Once the dynamical network has been obtained [1][2] from the full magnetometer data set it can be used to directly identify detailed inferred transient ionospheric current patterns and track their dynamics.
We will show our first results that use network properties such as cliques and clustering coefficients to map these highly dynamic changes in ionospheric current patterns. [1] Dods et al., J. Geophys. Res. 120, doi:10.1002/2015JA02 (2015). [2] Dods et al., J. Geophys. Res. 122, doi:10.1002/2016JA02 (2017).
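The network construction can be sketched as thresholded cross-correlation between station records followed by standard graph measures; the single global threshold here is a stand-in for the station- and season-specific normalization the authors describe:

```python
import numpy as np

def correlation_network(records, threshold=0.7):
    """Adjacency matrix: stations i, j are connected when the peak of
    their normalized cross-correlation exceeds a threshold."""
    n = len(records)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            a = (records[i] - records[i].mean()) / records[i].std()
            b = (records[j] - records[j].mean()) / records[j].std()
            cc = np.correlate(a, b, mode="full") / len(a)
            if np.abs(cc).max() > threshold:
                A[i, j] = A[j, i] = 1
    return A

def clustering_coefficient(A):
    """Global clustering coefficient: 3 * triangles / connected triplets."""
    A = np.asarray(A)
    triangles = np.trace(A @ A @ A) / 6.0
    deg = A.sum(axis=1)
    triplets = (deg * (deg - 1)).sum() / 2.0
    return 0.0 if triplets == 0 else 3.0 * triangles / triplets

# three stations seeing the same synthetic disturbance, one seeing noise
rng = np.random.default_rng(6)
base = np.sin(np.linspace(0, 20, 400))
stations = [base + 0.1 * rng.normal(size=400) for _ in range(3)]
stations.append(rng.normal(size=400))          # uncorrelated station
A = correlation_network(stations)
C = clustering_coefficient(A)
```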
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets, and it can provide market information for different economic entities who focus on different periods. By constructing multi-scale networks of price-fluctuation correlation in the stock market, we can detect the topological relationships between the time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. By combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
Tuarob, Suppawong; Tucker, Conrad S; Kumara, Soundar; Giles, C Lee; Pincus, Aaron L; Conroy, David E; Ram, Nilam
2017-04-01
It is believed that anomalous mental states such as stress and anxiety not only cause suffering for individuals, but also lead to tragedies in some extreme cases. The ability to predict the mental state of an individual at both current and future time periods could prove critical to healthcare practitioners. Currently, the practical way to predict an individual's mental state is through mental examinations that involve psychological experts performing the evaluations. However, such methods can be time- and resource-consuming, mitigating their broad applicability to a wide population. Furthermore, some individuals may also be unaware of their mental states or may feel uncomfortable expressing themselves during the evaluations. Hence, their anomalous mental states could remain undetected for a prolonged period of time. The objective of this work is to demonstrate the ability of advanced machine learning based approaches to generate mathematical models that predict the current and future mental states of an individual. The problem of mental state prediction is transformed into a time series forecasting problem, where an individual is represented as a multivariate time series stream of monitored physical and behavioral attributes. A personalized mathematical model is then automatically generated to capture the dependencies among these attributes, and is used for prediction of mental states for each individual. In particular, we first illustrate the drawbacks of traditional multivariate time series forecasting methodologies such as vector autoregression. Then, we show that such issues can be mitigated by using machine learning regression techniques which are modified for capturing temporal dependencies in time series data.
A case study using the data from 150 human participants illustrates that the proposed machine learning based forecasting methods are more suitable for high-dimensional psychological data than the traditional vector autoregressive model in terms of both magnitude of error and directional accuracy. These results not only present a successful usage of machine learning techniques in psychological studies, but also serve as a building block for multiple medical applications that could rely on an automated system to gauge individuals' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.
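The lag-based forecasting setup can be sketched as a ridge-regularized regression on stacked lags, a simple stand-in for the modified ML regressors in the paper (a VAR corresponds to the unregularized linear case):

```python
import numpy as np

def lag_matrix(series, p):
    """Stack p lags of a multivariate series into a design matrix.

    series : (T, d) array of monitored attributes
    Returns X of shape (T - p, p * d) and aligned targets y = series[p:].
    """
    T, d = series.shape
    X = np.column_stack([series[p - k - 1 : T - k - 1] for k in range(p)])
    return X, series[p:]

def ridge_fit(X, y, lam=1.0):
    """Ridge regression as a regularized stand-in for the paper's
    ML regressors; lam = 0 recovers ordinary least squares."""
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

# synthetic multivariate "attribute" stream with simple AR(1) dynamics
rng = np.random.default_rng(7)
T, d, p = 300, 3, 2
s = np.zeros((T, d))
for t in range(1, T):
    s[t] = 0.5 * s[t - 1] + rng.normal(0, 0.1, d)
X, y = lag_matrix(s, p)
B = ridge_fit(X, y, lam=0.1)
pred = X @ B
```

One-step predictions from the fitted model should beat naive persistence (carrying the last observation forward) on such data.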
Single-operator real-time ultrasound-guided spinal injection using SonixGPS™: a case series.
Brinkmann, Silke; Tang, Raymond; Sawka, Andrew; Vaghadia, Himat
2013-09-01
The SonixGPS™ is a novel needle tracking system that has recently been approved in Canada for ultrasound-guided needle interventions. It allows optimization of needle-beam alignment by providing a real-time display of current and predicted needle tip position. Currently, there is limited evidence on the effectiveness of this technique for performance of real-time spinal anesthesia. This case series reports performance of the SonixGPS system for real-time ultrasound-guided spinal anesthesia in elective patients scheduled for joint arthroplasty. In this single-centre case series, 20 American Society of Anesthesiologists' class I-II patients scheduled for lower limb joint arthroplasty were recruited to undergo real-time ultrasound-guided spinal anesthesia with the SonixGPS after written informed consent. The primary outcome for this clinical case series was the success rate of spinal anesthesia, and the main secondary outcome was the time required to perform spinal anesthesia. Successful spinal anesthesia for joint arthroplasty was achieved in 18/20 patients, and 17 of these required only a single skin puncture. In 7/20 (35%) patients, dural puncture was achieved on the first needle pass, and in 11/20 (55%) patients, dural puncture was achieved with two or three needle redirections. Median (range) time taken to perform the block was 8 (5-14) min. The study procedure was aborted in two cases because our clinical protocol dictated using a standard approach if spinal anesthesia was unsuccessful after three ultrasound-guided insertion attempts. These two cases were classified as failures. No complications, including paresthesia, were observed during the procedure. All patients with successful spinal anesthesia found the technique acceptable and were willing to undergo a repeat procedure if deemed necessary. This case series shows that real-time ultrasound-guided spinal anesthesia with the SonixGPS system is possible within an acceptable time frame.
It proved effective with a low rate of failure and a low rate of complications. Our clinical experience suggests that a randomized trial is warranted to compare the SonixGPS with a standard block technique.
Phenology cameras observing boreal ecosystems of Finland
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali
2016-04-01
Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extraction of some key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras in monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.
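A standard camera-derived color signal in phenology work is the green chromatic coordinate (GCC); whether this exact index was used here is not stated in the abstract, but it illustrates the kind of signal such networks monitor:

```python
import numpy as np

def green_chromatic_coordinate(image):
    """GCC = G / (R + G + B), a common greenness index for phenology
    cameras, averaged over the image (or a region of interest).

    image : (H, W, 3) array with RGB digital numbers
    """
    img = image.astype(float)
    total = img.sum(axis=2)
    total[total == 0] = np.nan            # avoid division by zero
    return np.nanmean(img[..., 1] / total)

# a synthetic "canopy" frame: green-dominated pixels
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[..., 0] = 60   # R
frame[..., 1] = 120  # G
frame[..., 2] = 60   # B
gcc = green_chromatic_coordinate(frame)   # 120 / 240 = 0.5
```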
21 CFR 1020.31 - Radiographic equipment.
Code of Federal Regulations, 2011 CFR
2011-04-01
... time, but means may be provided to permit completion of any single exposure of the series in process.... Means shall be provided to terminate the exposure at a preset time interval, a preset product of current and time, a preset number of pulses, or a preset radiation exposure to the image receptor. (i) Except...
The Chern-Simons current in time series of knots and links in proteins
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; Pincak, Richard
2018-06-01
A superspace model of knots and links for DNA time-series data is proposed to take into account the feedback loop from the docking to the undocking state of protein-protein interactions. In particular, the direction of interactions between the 8 hidden states of DNA is considered. It is an E8 × E8 unified spin model in which the genotype, from the active and inactive sides of the DNA time-series data, can be considered for any living organism. The mathematical model is borrowed from loop quantum gravity and adapted to biology. It is used to derive equations for gene expression describing transitions from ground to excited states, and for the 8 coupling states between geneon and anti-geneon transposon and retrotransposon in trash DNA. Specifically, we adopt a modified Grothendieck cohomology and a modified Khovanov cohomology for biology. The result is a Chern-Simons current in (8 + 3) extra dimensions of a given unoriented supermanifold with ghost fields of protein structures. The 8 dimensions come from the 8 hidden states of the spinor field of the genetic code. The extra dimensions come from the 3 types of principal fiber bundle in the secondary protein.
Production and Uses of Multi-Decade Geodetic Earth Science Data Records
NASA Astrophysics Data System (ADS)
Bock, Y.; Kedar, S.; Moore, A. W.; Fang, P.; Liu, Z.; Sullivan, A.; Argus, D. F.; Jiang, S.; Marshall, S. T.
2017-12-01
The Solid Earth Science ESDR System (SESES) project funded under the NASA MEaSUREs program produces and disseminates mature, long-term, calibrated and validated, GNSS-based Earth Science Data Records (ESDRs) that encompass multiple diverse areas of interest in Earth Science, such as tectonic motion, transient slip and earthquake dynamics, as well as meteorology, climate, and hydrology. The ESDRs now span twenty-five years for the earliest stations and today are available for thousands of global and regional stations. Using a unified metadata database and a combination of GNSS solutions generated by two independent analysis centers, the project currently produces four long-term ESDRs: Geodetic Displacement Time Series: daily, combined, cleaned and filtered GIPSY and GAMIT long-term time series of continuous GPS station positions (global and regional) in the latest version of ITRF, automatically updated weekly. Geodetic Velocities: weekly updated velocity field and velocity field histories in various reference frames; a compendium of all model parameters including earthquake catalog, coseismic offsets, and postseismic model parameters (exponential or logarithmic). Troposphere Delay Time Series: long-term time series of troposphere delay (30-min resolution) at geodetic stations, necessarily estimated during position time series production and automatically updated weekly. Seismogeodetic Records for Historic Earthquakes: high-rate broadband displacement and seismic velocity time series combining 1 Hz GPS displacements and 100 Hz accelerometer data for select large earthquakes and collocated cGPS and seismic instruments from regional networks. We present several recent notable examples of usage of these ESDRs: a transient slip study that uses the combined position time series to unravel "tremor-less" slow tectonic transient events. Fault geometry determination from geodetic slip rates.
Changes in water resources across California's physiographic provinces at a spatial resolution of 75 km. Retrospective study of a southern California summer monsoon event.
NASA Satellite Data for Seagrass Health Modeling and Monitoring
NASA Technical Reports Server (NTRS)
Spiering, Bruce A.; Underwood, Lauren; Ross, Kenton
2011-01-01
Time-series-derived information for coastal waters will be used to provide input data for the Fong and Harwell model. The current MODIS land mask limits where the model can be applied; this project will: a) apply MODIS data with resolution higher than the standard products (250-m vs. 1-km); b) seek to refine the land mask; and c) explore nearby areas to use as proxies for time series directly over the beds. Novel processing approaches will be leveraged from other NASA projects and customized as inputs for seagrass productivity modeling.
Introduction to Time Series Analysis
NASA Technical Reports Server (NTRS)
Hardin, J. C.
1986-01-01
The field of time series analysis is explored from its logical foundations to the most modern data analysis techniques. The presentation is developed, as far as possible, for continuous data, so that the inevitable use of discrete mathematics is postponed until the reader has gained some familiarity with the concepts. The monograph seeks to provide the reader with both the theoretical overview and the practical details necessary to correctly apply the full range of these powerful techniques. In addition, the last chapter introduces many specialized areas where research is currently in progress.
National Health Expenditures, 1996
Levit, Katharine R.; Lazenby, Helen C.; Braden, Bradley R.; Cowan, Cathy A.; Sensenig, Arthur L.; McDonnell, Patricia A.; Stiller, Jean M.; Won, Darleen K.; Martin, Anne B.; Sivarajan, Lekha; Donham, Carolyn S.; Long, Anna M.; Stewart, Madie W.
1997-01-01
The national health expenditures (NHE) series presented in this report for 1960-96 provides a view of the economic history of health care in the United States through spending for health care services and the sources financing that care. In 1996 NHE topped $1 trillion. At the same time, spending grew at the slowest rate, 4.4 percent, ever recorded in the current series. For the first time, this article presents estimates of Medicare managed care payments by type of service, as well as nursing home and home health spending in hospital-based facilities. PMID:10179997
STEM connections to the GOES-R Satellite Series
NASA Astrophysics Data System (ADS)
Mooney, M. E.; Schmit, T.
2015-12-01
GOES-R, a new Geostationary Operational Environmental Satellite (GOES), is scheduled to be launched in October of 2016. Its role is to continue western hemisphere satellite coverage while the existing GOES series winds down its 20-year operation. However, instruments on the next generation GOES-R satellite series will provide major improvements over the current GOES, both in the frequency of images acquired and the spectral and spatial resolution of the images, providing a perfect conduit for STEM education. Most of these improvements will be provided by the Advanced Baseline Imager (ABI). ABI will provide three times more spectral information, four times the spatial resolution, and more than five times faster temporal coverage than the current GOES. Another exciting addition to the GOES-R satellite series will be the Geostationary Lightning Mapper (GLM). The all new GLM on GOES-R will measure total lightning activity continuously over the Americas and adjacent ocean regions with near uniform spatial resolution of approximately 10 km! Due to ABI, GLM and improved spacecraft calibration and navigation, the next generation GOES-R satellite series will usher in an exciting era of satellite applications and opportunities for STEM education. This session will present and demonstrate exciting next-gen imagery advancements and new HTML5 WebApps that demonstrate STEM connections to these improvements. Participants will also be invited to join the GOES-R Education Proving Ground, a national network of educators who will receive stipends to attend 4 webinars during the spring of 2016, pilot a STEM lesson plan, and organize a school-wide launch awareness event.
Recognition of predictors for mid-long term runoff prediction based on lasso
NASA Astrophysics Data System (ADS)
Xie, S.; Huang, Y.
2017-12-01
Reliable and accurate mid-to-long-term runoff prediction is of great importance in the integrated management of reservoirs, and many methods have been proposed to model runoff time series. Almost all of these models use a forecast lead time (LT) of 1 month, with previous runoff at different time lags as predictors. However, runoff prediction with increased LT, which is more beneficial, is not common in current research, because the connection between previous and current runoff weakens as LT increases. Therefore, 74 atmospheric circulation factors (ACFs) together with pre-runoff are used as candidate predictors for mid-to-long-term runoff prediction of the Longyangxia reservoir in this study. Because the candidate predictors (pre-runoff and the 74 ACFs at different time lags) are numerous and most are uninformative, the lasso ('least absolute shrinkage and selection operator') is used to recognize predictors. The results demonstrate that the 74 ACFs are beneficial for runoff prediction in both the validation and test sets when LT is greater than 6. Six factors other than pre-runoff, most with large time lags, are frequently selected as predictors. In order to verify the effect of the 74 ACFs, 74 stochastic time series generated from the normalized 74 ACFs are used as model input. The result shows that these 74 stochastic time series are useless, which confirms the effect of the 74 ACFs on mid-to-long-term runoff prediction.
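Lasso-based predictor recognition can be sketched with a small ISTA (proximal gradient) solver; the data, dimensions, and regularization weight below are synthetic and purely illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=5000):
    """Minimal ISTA solver for the lasso:
        min_b 0.5/n * ||y - X b||^2 + lam * ||b||_1
    Columns of X should be standardized so lam penalizes them equally."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

# synthetic setup: 20 lagged candidate predictors, only 3 informative
rng = np.random.default_rng(8)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[0, 3, 7]] = [1.5, -2.0, 1.0]
y = X @ beta + rng.normal(0, 0.5, n)
b_hat = lasso_ista(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(b_hat) > 1e-3)
```

The L1 penalty drives the coefficients of uninformative lags to exactly zero, which is why the lasso doubles as a predictor-selection step here.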
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-09
... Airworthiness Directives; The Boeing Company Model 737-100, -200, -200C, -300, -400, and -500 Series Airplanes..., -200, -200C, -300, -400, and - 500 series airplanes. That AD currently requires a one-time inspection... 16211, March 31, 2006). The existing AD applies to all Model 737-100, -200, -200C, -300, -400, and -500...
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Stirling, J. R.
2007-01-01
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
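The core ALOPEX idea, moving each parameter against the correlation between its most recent change and the most recent change in the cost, can be sketched as follows. This is a simplified sign-based variant applied to an invented quadratic cost, not the ALOPEX IV algorithm or the heart-rate model of the paper:

```python
import math
import random

def alopex_minimize(cost, p0, delta=0.01, sigma=0.01, steps=6000, seed=1):
    """Sign-based ALOPEX sketch: each parameter steps against the sign of
    (its last change) * (last change in cost), plus exploratory noise."""
    rng = random.Random(seed)
    prev_p = list(p0)
    p = [x + rng.gauss(0.0, sigma) for x in p0]
    prev_c, c = cost(prev_p), cost(p)
    for _ in range(steps):
        new_p = []
        for i in range(len(p)):
            corr = (p[i] - prev_p[i]) * (c - prev_c)
            direction = 0.0 if corr == 0.0 else math.copysign(1.0, corr)
            # descend: step opposite to the parameter/cost correlation
            new_p.append(p[i] - delta * direction + rng.gauss(0.0, sigma))
        prev_p, prev_c = p, c
        p, c = new_p, cost(new_p)
    return p, c

# Hypothetical "model fit" cost: distance of two parameters from (1, -2)
cost = lambda q: (q[0] - 1.0) ** 2 + (q[1] + 2.0) ** 2
p_opt, c_opt = alopex_minimize(cost, [4.0, 3.0])
```

The update needs only cost evaluations, no gradients, which is why ALOPEX-type methods suit fitting dynamical-system models to noisy physiological data.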
NASA Astrophysics Data System (ADS)
Kim, Youngsun
2017-05-01
The most common structure for current transformers (CTs) consists of secondary windings on a ferromagnetic core through which the primary current being measured passes. A CT used in a surge protection device (SPD) may experience large current inrushes, such as surges. However, when a large current flows in the primary winding, measuring its magnitude is difficult because the ferromagnetic core becomes magnetically saturated. Several approaches to reduce the saturation effect are described in the literature. A Rogowski coil is representative of devices that measure large currents: it is an electrical device that measures alternating current (AC) or high-frequency current. However, such devices are expensive in practice. In addition, the volume of a CT must be increased to measure sufficiently large currents, and where the installation space is too small, other methods must be used. To address this problem, it is necessary to analyze the magnetic field and electromotive force (EMF) characteristics when designing a CT. Thus, we propose an analysis method for the CT under an inrush current using the time-domain finite element method (TDFEM). The input surge-waveform source current is expanded in a Fourier series to obtain instantaneous values. An FEM model of the device is derived in a two-dimensional system and coupled with EMF circuits. The time-derivative term in the differential equation is solved at each time step by the finite difference method. It is concluded that the proposed algorithm is useful for analyzing CT characteristics, including the field distribution. Consequently, the proposed algorithm provides a reference for assessing the effects of design parameters and magnetic materials for special shapes and sizes before the CT is designed and manufactured.
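The Fourier-series expansion of a surge waveform mentioned above can be illustrated numerically. The double-exponential waveform and its constants below are assumptions chosen for illustration, not values from the paper:

```python
import math

def surge(t, i0=1.0, a=5e4, b=6e5):
    # Hypothetical double-exponential surge waveform (constants assumed)
    return i0 * (math.exp(-a * t) - math.exp(-b * t))

T = 2e-4                      # expansion period [s]; surge has decayed by t = T
N = 2000                      # samples per period
ts = [k * T / N for k in range(N)]
samples = [surge(t) for t in ts]

def fourier_coefficients(samples, n_harm):
    """Real Fourier coefficients of one period, estimated from samples."""
    N = len(samples)
    a0 = 2.0 / N * sum(samples)
    coeffs = []
    for n in range(1, n_harm + 1):
        an = 2.0 / N * sum(s * math.cos(2 * math.pi * n * k / N)
                           for k, s in enumerate(samples))
        bn = 2.0 / N * sum(s * math.sin(2 * math.pi * n * k / N)
                           for k, s in enumerate(samples))
        coeffs.append((an, bn))
    return a0, coeffs

def reconstruct(t, a0, coeffs):
    # Instantaneous value synthesized from the truncated Fourier series
    y = a0 / 2.0
    for n, (an, bn) in enumerate(coeffs, start=1):
        w = 2 * math.pi * n * t / T
        y += an * math.cos(w) + bn * math.sin(w)
    return y

a0, coeffs = fourier_coefficients(samples, n_harm=120)
errors = [abs(reconstruct(ts[k], a0, coeffs) - samples[k]) for k in range(0, N, 50)]
```

A modest number of harmonics reproduces the surge closely, so each harmonic can then drive a time-step of the coupled field solution, which is the role the expansion plays in the TDFEM scheme.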
NASA Astrophysics Data System (ADS)
Olafsdottir, Kristin B.; Mudelsee, Manfred
2013-04-01
Estimating Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate science. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that performs a second bootstrap loop, that is, it resamples from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of the confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20.
One form of climate time series is output from numerical models that simulate the climate system. The method is applied to data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a 10-year lag, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
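The building block that PearsonT3 calibrates, a pairwise moving block bootstrap for the correlation coefficient, might look like the following sketch. This computes a plain percentile interval on synthetic serially dependent data; it is an illustration of the resampling idea, not the calibrated Student's t interval of the program:

```python
import numpy as np

def mbb_corr_ci(x, y, block_len, n_boot=2000, alpha=0.05, seed=0):
    """Pairwise moving block bootstrap CI for Pearson's correlation.
    Blocks of (x, y) pairs are resampled jointly to preserve serial dependence."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    corrs = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        corrs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.quantile(corrs, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Two AR(1) series driven partly by a shared signal: serially dependent, correlated
rng = np.random.default_rng(1)
n = 300
eps = rng.standard_normal((3, n))
common, u, v = np.zeros(n), np.zeros(n), np.zeros(n)
for t in range(1, n):
    common[t] = 0.7 * common[t - 1] + eps[0, t]
    u[t] = 0.7 * u[t - 1] + eps[1, t]
    v[t] = 0.7 * v[t - 1] + eps[2, t]
x, y = u + common, v + common
r_hat = float(np.corrcoef(x, y)[0, 1])
lo, hi = mbb_corr_ci(x, y, block_len=20)
```

Resampling blocks rather than individual pairs is what keeps the bootstrap distribution honest about the reduced effective sample size of autocorrelated series.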
Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014
NASA Astrophysics Data System (ADS)
Junod, R.; Christy, J. R.
2016-12-01
Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively little-used parameter in understanding the current state of the climate, but it is useful as an independent temperature metric over the oceans and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (e.g., HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early twentieth century to the present. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT (20°S-70°N). This time series was homogenized to correct for biases such as increasing ship height and solar deck heating. Once adjusted to a standard reference height, the UAHNMAT time series is gridded to 1.25° pentad grid boxes and interpolated using the kriging technique. This study will present results that quantify the variability and trends and compare them with current trends in other related datasets, including HadNMAT2 and sea-surface temperatures (HadISST and ERSSTv4).
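The kriging step can be illustrated with a minimal ordinary-kriging solver. The exponential variogram, its parameters, and the station values below are invented for illustration and are not ICOADS data or the study's configuration:

```python
import numpy as np

def exp_variogram(h, sill=1.0, corr_len=2.0):
    # Assumed exponential variogram model with zero nugget
    return sill * (1.0 - np.exp(-h / corr_len))

def ordinary_kriging(xy_obs, z_obs, xy_target, variogram=exp_variogram):
    """Ordinary kriging: solve the variogram system bordered by the
    unbiasedness constraint (Lagrange multiplier in the last row/column)."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = variogram(d)
    K[n, n] = 0.0
    k = np.append(variogram(np.linalg.norm(xy_obs - xy_target, axis=-1)), 1.0)
    w = np.linalg.solve(K, k)
    return float(w[:n] @ z_obs)

# Toy set of "observation" anomalies (hypothetical values)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 1.5]])
vals = np.array([0.2, 0.5, 0.1, 0.4, 0.3])
z_mid = ordinary_kriging(pts, vals, np.array([0.5, 0.5]))
z_at_station = ordinary_kriging(pts, vals, pts[0])
```

With a zero nugget, kriging is an exact interpolator: predicting at an observation point returns the observed value, while interior points receive a distance-weighted estimate.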
PEPSI-feed: linking PEPSI to the Vatican Advanced Technology Telescope using a 450m long fibre
NASA Astrophysics Data System (ADS)
Sablowski, D. P.; Weber, M.; Woche, M.; Ilyin, I.; Järvinen, A.; Strassmeier, K. G.; Gabor, P.
2016-07-01
Limited observing time at large telescopes equipped with the most powerful spectrographs makes it almost impossible to obtain long and well-sampled time-series observations. Likewise, high-time-resolution observations of bright targets with high signal-to-noise ratio are rare. Pulling a 450-m-long optical fibre from the Vatican Advanced Technology Telescope (VATT) to the Large Binocular Telescope (LBT) connects the Potsdam Echelle Polarimetric and Spectroscopic Instrument (PEPSI) to the VATT and allows ultra-high-resolution time-series measurements of bright targets. This article presents the fibre link in detail from the technical point of view, demonstrates its performance with first observations, and sketches current applications.
Mayaud, C; Wagner, T; Benischke, R; Birk, S
2014-04-16
The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low permeable schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation was focused on single events. In order to support the interpretation of the results from the time series analysis a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer for which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to what has been observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify effects of allogenic recharge and aquifer properties in the results from the time series analysis. Comparing the results from the time series analysis of the observed data with those of the synthetic data a good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model. 
The comparable behavior of the real and the synthetic systems suggests that similar aquifer properties are relevant in both. In particular, the heterogeneity of aquifer parameters appears to be a controlling factor. Moreover, the location of the overflow connecting the sub-catchments of the two springs is found to be of primary importance with regard to the occurrence of inter-catchment flow. This further supports our current understanding of an overflow zone located in the upper part of the Lurbach karst aquifer. Thus, time series analysis of single events can potentially be used to characterize the transient inter-catchment flow behavior of karst systems.
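The cross-correlation analysis used above can be sketched for a synthetic input/response pair with a known delay (a generic illustration of the correlogram technique, not the study's code or data):

```python
import numpy as np

def cross_correlogram(x, y, max_lag):
    """Normalized cross-correlation r_xy(k) = corr(x[t], y[t+k]) for k >= 0."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return np.array([np.mean(x[: n - k] * y[k:]) for k in range(max_lag + 1)])

# Synthetic "recharge" input and a lagged, noisy "spring discharge" response
rng = np.random.default_rng(3)
x = np.convolve(rng.standard_normal(520), np.ones(5) / 5, mode="valid")
y = np.empty_like(x)
y[3:] = x[:-3]                      # response delayed by 3 time steps
y[:3] = x[:3]
y = y + 0.1 * rng.standard_normal(len(y))
r = cross_correlogram(x, y, max_lag=10)
best_lag = int(np.argmax(r))
```

The lag of the correlogram maximum recovers the system's response delay, which is the quantity compared between the observed and the MODFLOW-generated discharge series.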
Describing temporal variability of the mean Estonian precipitation series in climate time scale
NASA Astrophysics Data System (ADS)
Post, P.; Kärner, O.
2009-04-01
Applicability of random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modeling is the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates strong short-range non-stationarity from a nearly stationary longer-range variability region. This indicates that several geophysical time series show short-range non-stationary behaviour and stationary behaviour over longer ranges (Davis et al., 1996). In order to model such series, the choice of time step appears to be crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model the long-range tendencies properly. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modeling. The longer scale can be called the climate scale, because such models are applicable on scales beyond some decades. In order to remove the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained daily time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of type (0,1,1). This model is applicable for simulating daily precipitation provided an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A time step considerably longer than one day (30 days) is used in the current paper to model the variability of the precipitation time series.
Each ARIMA (0,1,1) model can be interpreted as consisting of a random walk in a noisy environment (Box and Jenkins, 1976). The fitted model appears to be weakly non-stationary, which gives us the possibility of using a stationary approximation if only the noise component of the sum of white noise and random walk is exploited. We obtain a convenient routine for generating a stationary precipitation climatology with reasonable accuracy, since the noise component variance is much larger than the dispersion of the random walk generator. This interpretation emphasizes the dominating role of a random component in the precipitation series. The result is understandable given the small territory of Estonia, which is situated in the mid-latitude cyclone track. References: Box, G.E.P. and G. Jenkins 1976: Time Series Analysis, Forecasting and Control (revised edn.), Holden-Day, San Francisco, CA, 575 pp. Davis, A., Marshak, A., Wiscombe, W. and R. Cahalan 1996: Multifractal characterizations of intermittency in nonstationary geophysical signals and fields. In G. Trevino et al. (eds), Current Topics in Nonstationarity Analysis, World Scientific, Singapore, 97-158. Kärner, O. 2002: On nonstationarity and antipersistency in global temperature series. J. Geophys. Res. 107(D); doi:10.1029/2001JD002024. Kärner, O. 2005: Some examples of negative feedback in the Earth climate system. Centr. European J. Phys. 3; 190-208. Monin, A.S. and A.M. Yaglom 1975: Statistical Fluid Mechanics, Vol. 2: Mechanics of Turbulence, MIT Press, Cambridge, MA, 886 pp.
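The interpretation above, an ARIMA(0,1,1) process as a random walk observed through noise whose optimal one-step forecast is exponential smoothing with constant alpha = 1 - theta, can be checked numerically. This is a generic sketch with an assumed theta, not the fitted precipitation model:

```python
import random

def simulate_arima_011(n, theta, sigma=1.0, seed=42):
    """x_t = x_{t-1} + e_t - theta*e_{t-1}: random walk in a noisy environment."""
    rng = random.Random(seed)
    e_prev = rng.gauss(0.0, sigma)
    x = [e_prev]
    for _ in range(n - 1):
        e = rng.gauss(0.0, sigma)
        x.append(x[-1] + e - theta * e_prev)
        e_prev = e
    return x

def ewma_forecasts(x, theta):
    """One-step forecasts via exponential smoothing with alpha = 1 - theta
    (theta is reused here rather than estimated, for simplicity)."""
    f = [x[0]]                           # trivial first forecast
    for t in range(1, len(x)):
        f.append((1.0 - theta) * x[t - 1] + theta * f[t - 1])
    return f

x = simulate_arima_011(3000, theta=0.6)
f = ewma_forecasts(x, theta=0.6)
m = len(x) - 1
mse_ewma = sum((a - b) ** 2 for a, b in zip(x[1:], f[1:])) / m
mse_naive = sum((x[t] - x[t - 1]) ** 2 for t in range(1, len(x))) / m
```

For theta = 0.6 the naive last-value forecast has error variance (1 + theta^2) times the innovation variance, while exponential smoothing attains the innovation variance itself, so the smoothed forecast should win clearly.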
NASA Astrophysics Data System (ADS)
Kube, R.; Garcia, O. E.; Theodorsen, A.; Brunner, D.; Kuang, A. Q.; LaBombard, B.; Terry, J. L.
2018-06-01
The Alcator C-Mod mirror Langmuir probe system has been used to sample time series of fluctuating plasma parameters in the outboard mid-plane far scrape-off layer. We present a statistical analysis of one-second-long time series of electron density, temperature, radial electric drift velocity, and the corresponding particle and electron heat fluxes. These are sampled during stationary plasma conditions in an ohmically heated, lower single null diverted discharge. The electron density and temperature are strongly correlated and feature fluctuation statistics similar to the ion saturation current. Both electron density and temperature time series are dominated by intermittent, large-amplitude bursts with an exponential distribution of both burst amplitudes and waiting times between them. The characteristic time scale of the large-amplitude bursts is approximately 15 μs. Large-amplitude velocity fluctuations feature a slightly faster characteristic time scale and appear at a faster rate than electron density and temperature fluctuations. Describing these time series as a superposition of uncorrelated exponential pulses, we find that probability distribution functions, power spectral densities, and auto-correlation functions of the data time series agree well with predictions from the stochastic model. The electron particle and heat fluxes exhibit large-amplitude fluctuations. For this low-density plasma, the radial electron heat flux is dominated by convection, that is, by correlations of fluctuations in the electron density and radial velocity. Hot and dense blobs contribute only a minute fraction of the total fluctuation-driven heat flux.
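The stochastic model referenced above, a superposition of uncorrelated one-sided exponential pulses with exponentially distributed amplitudes and waiting times, can be simulated directly. The pulse parameters below are assumed round numbers, not C-Mod values:

```python
import numpy as np

def shot_noise(duration, rate, tau_d, mean_amp, dt=0.05, seed=7):
    """Superpose exponential pulses: Poisson arrivals (exponential waiting
    times), exponentially distributed amplitudes, decay time tau_d."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    signal = np.zeros_like(t)
    arrival = rng.exponential(1.0 / rate)
    while arrival < duration:
        amp = rng.exponential(mean_amp)
        tail = t >= arrival
        signal[tail] += amp * np.exp(-(t[tail] - arrival) / tau_d)
        arrival += rng.exponential(1.0 / rate)
    return t, signal

t, phi = shot_noise(duration=2000.0, rate=0.5, tau_d=1.0, mean_amp=1.0)
mean_theory = 0.5 * 1.0 * 1.0      # rate * <amplitude> * tau_d
```

The time average converges to rate times mean amplitude times decay time, and for low pulse overlap the signal is strongly positively skewed, reproducing the intermittent, bursty character described in the abstract.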
Simulation of Ground Winds Time Series for the NASA Crew Launch Vehicle (CLV)
NASA Technical Reports Server (NTRS)
Adelfang, Stanley I.
2008-01-01
Simulation of wind time series based on power spectral density (PSD) and spectral coherence models for ground wind turbulence is described. The wind models, originally developed for the Shuttle program, are based on wind measurements at the NASA 150-m meteorological tower at Cape Canaveral, FL. The current application is the design and/or protection of the CLV from wind effects during on-pad exposure, over periods ranging from days before launch to the seconds and minutes just before and after launch. The evaluation of vehicle response to wind will influence the design and operation of constraint systems for support of the on-pad vehicle. Longitudinal and lateral wind component time series are simulated at critical vehicle locations. The PSD model for wind turbulence is a function of mean wind speed, elevation, and temporal frequency. Integration of the PSD equation over a selected frequency range yields the variance of the time series to be simulated. The square root of the PSD defines a low-pass filter that is applied to adjust the components of the Fast Fourier Transform (FFT) of Gaussian white noise. The first simulated time series, near the top of the launch vehicle, is the inverse transform of the adjusted FFT. Simulation of the wind component time series at the nearest adjacent location (and all succeeding next-nearest locations) is based on a model for the coherence between winds at two locations as a function of frequency and separation distance, where the adjacent locations are separated vertically and/or horizontally. The coherence function is used to calculate a coherence-weighted FFT of the wind at the next-nearest location, given the FFT of the simulated time series at the previous location and the essentially incoherent FFT of the wind at the selected location derived a priori from the PSD model. The simulated time series at each adjacent location is the inverse Fourier transform of the coherence-weighted FFT.
For a selected design case, the equations, the process and the simulated time series at multiple vehicle stations are presented.
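The first step of the procedure, shaping the FFT of Gaussian white noise by the square root of a target PSD and inverting, can be sketched as follows. The Lorentzian spectrum is an assumed stand-in for the ground-wind PSD model, and the coherence-weighting step for adjacent locations is omitted:

```python
import numpy as np

def simulate_from_psd(psd_onesided, n, dt, seed=0):
    """Generate a Gaussian series whose one-sided PSD matches psd_onesided(f):
    shape the rFFT of white noise by sqrt(PSD) and invert."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, dt)
    white = np.fft.rfft(rng.standard_normal(n))
    # scaling chosen so that var(x) ~ integral of the one-sided PSD
    shaped = white * np.sqrt(psd_onesided(freqs) / (2.0 * dt))
    return np.fft.irfft(shaped, n)

# Illustrative target: Lorentzian (Ornstein-Uhlenbeck) spectrum, unit variance
tau = 1.0
psd = lambda f: 4.0 * tau / (1.0 + (2.0 * np.pi * f * tau) ** 2)
dt, n = 0.01, 2 ** 18
x = simulate_from_psd(psd, n, dt)
```

Because the chosen PSD integrates to one, the variance of the simulated series should come out near unity, mirroring the statement in the abstract that integrating the PSD over the frequency band yields the variance of the simulated time series.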
Consistent Long-Time Series of GPS Satellite Antenna Phase Center Corrections
NASA Astrophysics Data System (ADS)
Steigenberger, P.; Schmid, R.; Rothacher, M.
2004-12-01
The current IGS processing strategy disregards satellite antenna phase center variations (PCVs), which depend on the nadir angle, and applies block-specific phase center offsets only. However, the transition from relative to absolute receiver antenna corrections presently under discussion necessitates the consideration of satellite antenna PCVs. Moreover, studies by several groups have shown that the offsets are not homogeneous within a satellite block, and manufacturer specifications seem to confirm this. In order to obtain the best possible antenna corrections, consistent ten-year time series (1994-2004) of satellite-specific PCVs and offsets were generated. This challenging effort became possible as part of the reprocessing of a global GPS network currently performed by the Technical Universities of Munich and Dresden. The data of about 160 stations since the official start of the IGS in 1994 have been reprocessed, as today's GPS time series are mostly inhomogeneous and inconsistent due to continuous improvements in the processing strategies and modeling of global GPS solutions. An analysis of the signals contained in the time series of the phase center offsets reveals amplitudes at the decimeter level, at least one order of magnitude worse than the desired accuracy. The periods partly arise from the GPS orbit configuration, as the orientation of the orbit planes with respect to the inertial system repeats after about 350 days due to the rotation of the ascending nodes. In addition, the rms values of the X- and Y-offsets show a high correlation with the angle between the orbit plane and the direction to the Sun. The time series of the PCVs mainly point to a correlation with the global terrestrial scale.
Solutions with relative and absolute phase center corrections, with block- and satellite-specific satellite antenna corrections demonstrate the effect of this parameter group on other global GPS parameters such as the terrestrial scale, station velocities, the geocenter position or the tropospheric delays. Thus, deeper insight into the so-called `Bermuda triangle' of several highly correlated parameters is given.
Luginaah, Isaac N.; Fung, Karen Y.; Gorey, Kevin M.; Webster, Greg; Wills, Chris
2005-01-01
This study is part of a larger research program to examine the relationship between ambient air quality and health in Windsor, Ontario, Canada. We assessed the association between air pollution and daily respiratory hospitalization for different age and sex groups from 1995 to 2000. The pollutants included were nitrogen dioxide, sulfur dioxide, carbon monoxide, ozone, particulate matter ≤10 μm in diameter (PM10), coefficient of haze (COH), and total reduced sulfur (TRS). We calculated relative risk (RR) estimates using both time-series and case-crossover methods after controlling for appropriate confounders (temperature, humidity, and change in barometric pressure). The results of both analyses were consistent. We found associations between NO2, SO2, CO, COH, or PM10 and daily hospital admission of respiratory diseases especially among females. For females 0–14 years of age, there was 1-day delayed effect of NO2 (RR = 1.19, case-crossover method), a current-day SO2 (RR = 1.11, time series), and current-day and 1- and 2-day delayed effects for CO by case crossover (RR = 1.15, 1.19, 1.22, respectively). Time-series analysis showed that 1-day delayed effect of PM10 on respiratory admissions of adult males (15–64 years of age), with an RR of 1.18. COH had significant effects on female respiratory hospitalization, especially for 2-day delayed effects on adult females, with RRs of 1.15 and 1.29 using time-series and case-crossover analysis, respectively. There were no significant associations between O3 and TRS with respiratory admissions. These findings provide policy makers with current risks estimates of respiratory hospitalization as a result of poor ambient air quality in a government designated “area of concern.” PMID:15743717
Dancing sprites: Detailed analysis of two case studies
NASA Astrophysics Data System (ADS)
Soula, Serge; Mlynarczyk, Janusz; Füllekrug, Martin; Pineda, Nicolau; Georgis, Jean-François; van der Velde, Oscar; Montanyà, Joan; Fabró, Ferran
2017-03-01
On 29-30 October 2013, a low-light video camera installed at Pic du Midi (2877 m) recorded transient luminous events above a very active storm over the Mediterranean Sea. The minimum cloud-top temperature reached -73°C, while the cloud-to-ground (CG) flash rate exceeded 30 flashes min-1. Some sprite events have long durations and resemble dancing sprites. We analyze in detail the temporal evolution and estimated location of two series of sprite sequences, as well as the cloud structure, the lightning activity, the electric field radiated over a broad range of low frequencies, and the current moment waveform of the lightning strokes. (i) In each series, successive sprite sequences reflect the times and locations of the corresponding positive lightning strokes across the stratiform region. (ii) The longer time-delayed (>20 ms) sprite elements correspond to the lower impulsive charge moment changes (iCMC) of the parent strokes (<200 C km), and they are shifted a few tens of kilometers from their SP+CG stroke. However, both short and long time-delayed sprite elements also occur after strokes that produce a large iCMC and that are followed by a continuing current. (iii) The long time-delayed sprite elements during the continuing current correspond to surges in the current moment waveform. They sometimes occur at an altitude apparently lower than the previous short time-delayed sprite elements, possibly because of changes in the local conductivity. (iv) The largest and brightest sprite elements produce significant current signatures, visible when their delay is not too short (≈3-5 ms).
Handwriting in 2015: A Main Occupation for Primary School-Aged Children in the Classroom?
ERIC Educational Resources Information Center
McMaster, Emily; Roberts, Tara
2016-01-01
Historically, handwriting is a skill acquired by children in the classroom. The relevance of this skill today is currently debated due to advances in technology. A nonexperimental time-series design investigated how much time Australian primary school children spend on handwriting in the classroom. A second aim investigated how much time was spent…
FINITE ELEMENT MODEL FOR TIDES AND CURRENTS WITH FIELD APPLICATIONS.
Walters, Roy A.
1988-01-01
A finite element model, based upon the shallow water equations, is used to calculate tidal amplitudes and currents for two field-scale test problems. Because tides are characterized by line spectra, the governing equations are subjected to harmonic decomposition. Thus the solution variables are the real and imaginary parts of the amplitude of sea level and velocity rather than a time series of these variables. The time series is recovered through synthesis. This scheme, coupled with a modified form of the governing equations, leads to high computational efficiency and freedom from excessive numerical noise. Two test cases are presented. The first is a solution for eleven tidal constituents in the English Channel and southern North Sea, of which three constituents are discussed. The second is an analysis of the frequency response and tidal harmonics for south San Francisco Bay.
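The decomposition/synthesis idea can be illustrated with a least-squares harmonic analysis at fixed constituent frequencies. The M2 and S2 periods are standard tidal values; the amplitudes, phases, and noise level are invented for the example:

```python
import numpy as np

def harmonic_analysis(t, h, freqs):
    """Least-squares fit of cosine/sine pairs at fixed constituent
    frequencies; returns amplitude and phase per constituent."""
    cols = []
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(A, h, rcond=None)
    amps = np.hypot(c[0::2], c[1::2])
    phases = np.arctan2(c[1::2], c[0::2])
    return amps, phases

# Hourly "sea level" built from two constituents (M2 and S2 periods) plus noise
t = np.arange(0.0, 24.0 * 30, 1.0)            # 30 days of hourly samples
f_m2, f_s2 = 1 / 12.4206, 1 / 12.0            # cycles per hour
rng = np.random.default_rng(5)
h = (1.0 * np.cos(2 * np.pi * f_m2 * t - 0.7)
     + 0.4 * np.cos(2 * np.pi * f_s2 * t - 1.9)
     + 0.1 * rng.standard_normal(t.size))
amps, phases = harmonic_analysis(t, h, [f_m2, f_s2])
```

The fitted cosine/sine coefficients are exactly the real and imaginary amplitude parts the model solves for, and evaluating the fitted sum at any time performs the synthesis step that recovers the time series.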
Asquith, W.H.; Mosier, J. G.; Bush, P.W.
1997-01-01
The watershed simulation model Hydrologic Simulation Program—Fortran (HSPF) was used to generate simulated flow (runoff) from the 13 watersheds to the six bay systems because adequate gaged streamflow data from which to estimate freshwater inflows are not available; only about 23 percent of the adjacent contributing watershed area is gaged. The model was calibrated for the gaged parts of three watersheds—that is, selected input parameters (meteorologic and hydrologic properties and conditions) that control runoff were adjusted in a series of simulations until an adequate match between model-generated flows and a set (time series) of gaged flows was achieved. The primary model input is rainfall and evaporation data and the model output is a time series of runoff volumes. After calibration, simulations driven by daily rainfall for a 26-year period (1968–93) were done for the 13 watersheds to obtain runoff under current (1983–93), predevelopment (pre-1940 streamflow and pre-urbanization), and future (2010) land-use conditions for estimating freshwater inflows and for comparing runoff under the three land-use conditions; and to obtain time series of runoff from which to estimate time series of freshwater inflows for trend analysis.
HydroClimATe: hydrologic and climatic analysis toolkit
Dickinson, Jesse; Hanson, Randall T.; Predmore, Steven K.
2014-01-01
The potential consequences of climate variability and climate change have been identified as major issues for the sustainability and availability of the worldwide water resources. Unlike global climate change, climate variability represents deviations from the long-term state of the climate over periods of a few years to several decades. Currently, rich hydrologic time-series data are available, but the combination of data preparation and statistical methods developed by the U.S. Geological Survey as part of the Groundwater Resources Program is relatively unavailable to hydrologists and engineers who could benefit from estimates of climate variability and its effects on periodic recharge and water-resource availability. This report documents HydroClimATe, a computer program for assessing the relations between variable climatic and hydrologic time-series data. HydroClimATe was developed for a Windows operating system. The software includes statistical tools for (1) time-series preprocessing, (2) spectral analysis, (3) spatial and temporal analysis, (4) correlation analysis, and (5) projections. The time-series preprocessing tools include spline fitting, standardization using a normal or gamma distribution, and transformation by a cumulative departure. The spectral analysis tools include discrete Fourier transform, maximum entropy method, and singular spectrum analysis. The spatial and temporal analysis tool is empirical orthogonal function analysis. The correlation analysis tools are linear regression and lag correlation. The projection tools include autoregressive time-series modeling and generation of many realizations. These tools are demonstrated in four examples that use stream-flow discharge data, groundwater-level records, gridded time series of precipitation data, and the Multivariate ENSO Index.
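Two of the projection tools listed above, autoregressive time-series modeling and the generation of many realizations, can be sketched for an AR(1) process. This is a generic illustration, not the HydroClimATe implementation, and the synthetic "hydrologic index" below is invented:

```python
import numpy as np

def fit_ar1(x):
    """Lag-1 autoregression coefficient and residual variance
    (Yule-Walker-style moment estimate)."""
    x = np.asarray(x, float)
    x = x - x.mean()
    phi = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    resid = x[1:] - phi * x[:-1]
    return float(phi), float(resid.var())

def generate_realizations(phi, resid_var, n, n_real, seed=0):
    """Monte Carlo ensemble of realizations of the fitted AR(1) process."""
    rng = np.random.default_rng(seed)
    out = np.zeros((n_real, n))
    sd = np.sqrt(resid_var)
    for t in range(1, n):
        out[:, t] = phi * out[:, t - 1] + rng.normal(0.0, sd, n_real)
    return out

# Fit a synthetic hydrologic index and spin up an ensemble of projections
rng = np.random.default_rng(11)
x = np.zeros(4000)
for t in range(1, 4000):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
phi_hat, v_hat = fit_ar1(x)
ensemble = generate_realizations(phi_hat, v_hat, n=240, n_real=100)
```

Spread across the ensemble rows then gives an empirical uncertainty band for future values, which is the purpose of the many-realizations projection tool.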
WebCT: A Major Shift of Emphasis
ERIC Educational Resources Information Center
Morningstar, Barbara; Schubert, Jeremy; Thibeault, Kristine
2004-01-01
The evaluation reports in this series usually feature several products at once. The current review, however, comes at a time when one of the most widely used (and expensive) online learning management systems is undergoing a major change in its marketing strategy and corporate focus. "WebCT" is currently evolving to a new version ("WebCT Vista"),…
A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress
Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He
2018-01-01
Financial distress prediction is an important and challenging research topic in the financial field. Currently, there are many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods outperform traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages: (i) unlike previous models, it incorporates the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the model can generate rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method outperforms the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399
Monitoring microbial responses to ocean deoxygenation in a model oxygen minimum zone.
Hallam, Steven J; Torres-Beltrán, Mónica; Hawley, Alyse K
2017-10-31
Today in Scientific Data, two compendia of geochemical and multi-omic sequence information (DNA, RNA, protein) generated over almost a decade of time series monitoring in a seasonally anoxic coastal marine setting are presented to the scientific community. These data descriptors introduce a model ecosystem for the study of microbial responses to ocean deoxygenation, a phenotype that is currently expanding due to climate change. Public access to this time series information is intended to promote scientific collaborations and the generation of new hypotheses relevant to microbial ecology, biogeochemistry and global change issues.
Cloud masking and removal in remote sensing image time series
NASA Astrophysics Data System (ADS)
Gómez-Chova, Luis; Amorós-López, Julia; Mateo-García, Gonzalo; Muñoz-Marí, Jordi; Camps-Valls, Gustau
2017-01-01
Automatic cloud masking of Earth observation images is one of the first required steps in optical remote sensing data processing since the operational use and product generation from satellite image time series might be hampered by undetected clouds. The high temporal revisit of current and forthcoming missions and the scarcity of labeled data force us to cast cloud screening as an unsupervised change detection problem in the temporal domain. We introduce a cloud screening method based on detecting abrupt changes along the time dimension. The main assumption is that image time series follow smooth variations over land (background) and abrupt changes will be mainly due to the presence of clouds. The method estimates the background surface changes using the information in the time series. In particular, we propose linear and nonlinear least squares regression algorithms that minimize both the prediction and the estimation error simultaneously. Then, significant differences in the image of interest with respect to the estimated background are identified as clouds. The use of kernel methods allows the generalization of the algorithm to account for higher-order (nonlinear) feature relations. After the proposed cloud masking and cloud removal, cloud-free time series at high spatial resolution can be used to obtain a better monitoring of land cover dynamics and to generate more elaborated products. The method is tested in a dataset with 5-day revisit time series from SPOT-4 at high resolution and with Landsat-8 time series. Experimental results show that the proposed method yields more accurate cloud masks when confronted with state-of-the-art approaches typically used in operational settings. In addition, the algorithm has been implemented in the Google Earth Engine platform, which allows us to access the full Landsat-8 catalog and work in a parallel distributed platform to extend its applicability to a global planetary scale.
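The paper's actual algorithm fits linear and kernel regressions that jointly minimize prediction and estimation error; the following is only a minimal single-pixel sketch of the underlying idea (a smooth background model plus an outlier test), with the function name, threshold, and data entirely our own:

```python
import numpy as np

def flag_clouds(series, times, threshold=3.0):
    """Flag abrupt departures from a smooth background trend.

    Fits a least-squares line to the pixel's time series, then flags
    points whose residual deviates from the median residual by more
    than `threshold` robust (MAD-based) standard deviations.
    """
    series = np.asarray(series, float)
    times = np.asarray(times, float)
    slope, intercept = np.polyfit(times, series, 1)
    residuals = series - (slope * times + intercept)
    dev = residuals - np.median(residuals)
    sigma = 1.4826 * np.median(np.abs(dev))   # robust sigma from the MAD
    return np.abs(dev) > threshold * sigma

# Smooth land reflectance (slow seasonal drift) plus one sudden cloud spike
t = np.arange(20.0)
reflectance = 0.10 + 0.002 * t
reflectance[7] += 0.5          # undetected cloud brightens this acquisition
flags = flag_clouds(reflectance, t)
```

The land background varies smoothly, so only the abrupt brightening is flagged; the published method generalizes this with kernel (nonlinear) regression and applies it image-wide.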
61. The World-Wide Inaccessible Web, Part 2: Internet Routes
ERIC Educational Resources Information Center
Baggaley, Jon; Batpurev, Batchuluun; Klaas, Jim
2007-01-01
In the previous report in this series, Web browser loading times were measured in 12 Asian countries, and were found to be up to four times slower than commonly prescribed as acceptable. Failure of webpages to load at all was frequent. The current follow-up study compares these loading times with the complexity of the Internet routes linking the…
NASA Astrophysics Data System (ADS)
Fish, C.; Hill, T. M.; Davis, C. V.; Lipski, D.; Jahncke, J.
2017-12-01
Elucidating the surface and bottom water ecosystem impacts of temperature change, acidification, and food web disruption is needed to understand anthropogenic processes in the ocean. The Applied California Current Ecosystem Studies (ACCESS) partnership surveys the California Current within the Greater Farallones and Cordell Bank National Marine Sanctuaries three times annually, sampling water column hydrography and discrete water samples from 0 m and 200 m depth at five stations along three primary transects. The transects span the continental shelf, with stations ranging from 13 km to 65 km from the coastline. This time series extends from 2004 to 2017, integrating information on climate, productivity, zooplankton abundance, oxygenation, and carbonate chemistry. We focus on the interpretation of the 2012-2017 carbonate chemistry data and present both long-term trends over the duration of the time series as well as shorter-term variability (e.g., ENSO, `warm blob' conditions) to investigate the region's changing oceanographic conditions. For example, we document oscillations in carbonate chemistry, oxygenation, and foraminiferal abundance in concert with interannual oceanographic variability and seasonal (upwelling) cycles. We concentrate on results from near Cordell Bank that potentially impact deep sea coral ecosystems.
NASA Astrophysics Data System (ADS)
Betsuin, Toshiki; Tanaka, Yasunori; Arai, T.; Uesugi, Y.; Ishijima, T.
2018-03-01
This paper describes the application of an Ar/CH4/H2 inductively coupled thermal plasma with and without coil current modulation to synthesise diamond films. Induction thermal plasma with coil current modulation is referred to as modulated induction thermal plasma (M-ITP), while that without modulation is referred to as non-modulated ITP (NM-ITP). First, spectroscopic observations of NM-ITP and M-ITP with different modulation waveforms were made to estimate the composition in flux from the thermal plasma by measuring the time evolution in the spectral intensity from the species. Secondly, we studied polycrystalline diamond film deposition tests on a Si substrate, and we studied monocrystalline diamond film growth tests using the irradiation of NM-ITP and M-ITP. From these tests, diamond nucleation effects by M-ITP were found. Finally, following the irradiation results, we attempted to use a time-series irradiation of M-ITP and NM-ITP for polycrystalline diamond film deposition on a Si substrate. The results indicated that numerous larger diamond particles were deposited with a high population density on the Si substrate by time-series irradiation.
Featureless classification of light curves
NASA Astrophysics Data System (ADS)
Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.
2015-08-01
In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features which can be derived directly, e.g. as moments from the density. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less complete description by features. The density representation is an upper boundary in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
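As a rough sketch of the density-model idea (using a single Gaussian per light curve and a closed-form Hellinger distance, far simpler than the paper's actual density models), classification on the resulting distance matrix might look like the following; all names and the synthetic two-class data are our own:

```python
import numpy as np

def gaussian_model(values):
    """Crude density model: fit a single Gaussian to the sample distribution."""
    return np.mean(values), np.std(values)

def hellinger(m1, s1, m2, s2):
    """Closed-form Hellinger distance between two univariate Gaussians."""
    bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * np.exp(
        -((m1 - m2) ** 2) / (4 * (s1**2 + s2**2)))
    return np.sqrt(1 - bc)

def nn_classify(models, labels, query_model):
    """1-nearest-neighbour classification on the model distance matrix."""
    d = [hellinger(*query_model, *m) for m in models]
    return labels[int(np.argmin(d))]

rng = np.random.default_rng(1)
# Two irregularly sampled "light curve" classes: low vs high amplitude,
# with a different number of samples per curve (no fixed-length vector)
curves = [rng.normal(0, 0.2, rng.integers(30, 60)) for _ in range(5)] + \
         [rng.normal(0, 2.0, rng.integers(30, 60)) for _ in range(5)]
labels = ["quiet"] * 5 + ["variable"] * 5
models = [gaussian_model(c) for c in curves]
query = gaussian_model(rng.normal(0, 1.8, 40))
```

Because each series is reduced to a density rather than a fixed-length vector, the irregular sampling never enters the classifier, which is the core appeal of the approach.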
Application of computational mechanics to the analysis of natural data: an example in geomagnetism.
Clarke, Richard W; Freeman, Mervyn P; Watkins, Nicholas W
2003-01-01
We discuss how the ideal formalism of computational mechanics can be adapted to apply to a noninfinite series of corrupted and correlated data, as is typical of most observed natural time series. Specifically, a simple filter that removes the corruption that creates rare unphysical causal states is demonstrated, and the concept of effective soficity is introduced. We believe that computational mechanics cannot be applied to a noisy and finite data series without invoking an argument based upon effective soficity. A related distinction between noise and unresolved structure is also defined: Noise can only be eliminated by increasing the length of the time series, whereas the resolution of previously unresolved structure only requires the finite memory of the analysis to be increased. The benefits of these concepts are demonstrated in a simulated time series by (a) the effective elimination of white noise corruption from a periodic signal using the expletive filter and (b) the appearance of an effectively sofic region in the statistical complexity of a biased Poisson switch time series that is insensitive to changes in the word length (memory) used in the analysis. The new algorithm is then applied to an analysis of a real geomagnetic time series measured at Halley, Antarctica. Two principal components in the structure are detected that are interpreted as the diurnal variation due to the rotation of the Earth-based station under an electrical current pattern that is fixed with respect to the Sun-Earth axis and the random occurrence of a signature likely to be that of the magnetic substorm. In conclusion, some useful terminology for the discussion of model construction in general is introduced.
Rip Current Velocity Structure in Drifter Trajectories and Numerical Simulations
NASA Astrophysics Data System (ADS)
Schmidt, W. E.; Slinn, D. N.
2008-12-01
Estimates of rip current velocity and cross-shore structure were made using surfzone drifters, bathymetric surveys, and rectified video images. Over 60 rip current trajectories were observed during a three year period at a Southern California beach in July 2000, 2001, and 2002. Incident wave heights (Hs) immediately offshore (~7 m depth) were obtained by initializing a refraction model with data from nearby directional wave buoys, and varied from 0.3 to 1.0 m. Tide levels varied over approximately 1 m and winds were light. Numerical simulations using the non-linear shallow water equations and modeled over measured bathymetry also produced similar flows and statistics. Time series of drifter position, sampled at 1 Hz, were first-differenced to produce velocity time series. Maximum observed velocities varied between 25 and 80 cm s-1, whereas model maximum velocities were lower by a factor 2 to 3. When velocity maxima were non-dimensionalized by respective trajectory mean velocity, both observed and modeled values varied between 1.5 and 3.5. Cross-shore location of rip current velocity maxima for both shore-normal and shore-oblique rip currents were strongly coincident with the surfzone edge (Xb), as determined by rectified video (observations) or breakpoint (model). Once outside of the surfzone, observed and modeled rip current velocities decreased to 10% of their peak values within 2 surfzone widths of the shoreline, a useful definition of rip current cross-shore extent.
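The first-differencing of 1 Hz drifter positions into a velocity time series can be sketched as follows; the trajectory below is synthetic, not the observed data:

```python
import numpy as np

def drifter_speed(x, y, dt=1.0):
    """Speed time series from first-differenced drifter positions (m -> m/s)."""
    u = np.diff(x) / dt            # cross-shore velocity component
    v = np.diff(y) / dt            # alongshore velocity component
    return np.hypot(u, v)

# Hypothetical 1 Hz trajectory: steady seaward drift with a weak oscillation
t = np.arange(0.0, 60.0)
x = 0.4 * t + 0.2 * np.sin(0.1 * t)    # cross-shore position (m)
y = 0.05 * t                           # alongshore position (m)
speed = drifter_speed(x, y)
nondim_max = speed.max() / speed.mean()    # peak normalized by trajectory mean
```

Normalizing each trajectory's maximum by its mean speed, as in the study, removes the dependence on incident wave height and allows observed and modeled rips to be compared on the same 1.5-3.5 scale.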
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Sizonenko, A. B.; Zhdanov, A. A.
2018-05-01
Discrete time series or mappings are proposed for describing the dynamics of a nonlinear system. The article considers the problem of forecasting the dynamics of the system from the time series generated by it. In particular, the commercial rate of drilling oil and gas wells can be considered as a series where each next value depends on the previous one. The main parameter here is the technical drilling speed. To eliminate measurement error and represent the commercial speed with good accuracy at the current, a future, or any past time point, the use of the Kalman filter is suggested. For the transition from a deterministic model to a probabilistic one, the use of ensemble modeling is suggested. Ensemble systems can provide a wide range of visual output, which helps the user to evaluate the measure of confidence in the model. In particular, the availability of information on the estimated calendar duration of the construction of oil and gas wells will allow drilling companies to optimize production planning by rationalizing the approach to loading drilling rigs, which ultimately leads to maximization of profit and an increase in their competitiveness.
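A minimal scalar Kalman filter of the kind suggested, assuming a random-walk model for the true drilling rate; the noise variances and the synthetic rate series are illustrative assumptions, not values from the article:

```python
import numpy as np

def kalman_smooth(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter: a random-walk state (the true drilling rate)
    observed with measurement noise variance r; q is the process noise
    variance controlling how fast the true rate is allowed to wander."""
    x, p = measurements[0], r          # initial state estimate and variance
    estimates = []
    for z in measurements:
        p = p + q                      # predict: state variance grows
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(2)
true_rate = 25.0                       # constant commercial drilling rate, m/h
noisy = true_rate + rng.normal(0, 3.0, 100)
filtered = kalman_smooth(noisy, q=0.01, r=9.0)
```

With a small process variance the filter heavily smooths the noisy measurements, giving the "good accuracy at the current time point" the article aims for; running many such filters over perturbed inputs is one simple route to the ensemble modeling it also proposes.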
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldridge, David F.; Bartel, Lewis C.
Program LETS calculates the electric current distribution (in space and time) along an electrically energized steel-cased geologic borehole situated within the subsurface earth. The borehole is modeled as an electrical transmission line that “leaks” current into the surrounding geology. Parameters pertinent to the transmission line current calculation (i.e., series resistance and inductance, shunt capacitance and conductance) are obtained by sampling the electromagnetic (EM) properties of a three-dimensional (3D) geologic earth model along a (possibly deviated) well track.
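LETS solves the full time-dependent problem with depth-varying parameters sampled from a 3D earth model along the well track; as a much-reduced sketch of the "leaky" transmission-line idea only, a constant-parameter DC limit (dV/dz = -R·I, dI/dz = -G·V, giving exponential current decay along the casing) can be written as follows, with every numeric value purely illustrative:

```python
import numpy as np

# DC "leaky" transmission-line model of an energized well casing:
#   dV/dz = -R * I   (series resistance drop, R in ohm/m)
#   dI/dz = -G * V   (shunt conductance leaking current into the formation)
# which gives I(z) = I0 * exp(-sqrt(R*G) * z) for a long uniform casing.
R = 1e-4        # ohm/m, hypothetical steel-casing series resistance
G = 5e-3        # S/m, hypothetical leakage conductance to the formation
I0 = 10.0       # A, current injected at the wellhead
z = np.linspace(0.0, 3000.0, 301)    # depth along the well track, m
current = I0 * np.exp(-np.sqrt(R * G) * z)
leaked_fraction = 1.0 - current[-1] / I0
```

The decaying current distribution along depth is what makes an energized casing act as a distributed subsurface source for EM surveys, which is the use case LETS targets.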
NASA Astrophysics Data System (ADS)
Artana, Camila; Ferrari, Ramiro; Koenig, Zoé; Sennéchael, Nathalie; Saraceno, Martin; Piola, Alberto R.; Provost, Christine
2018-01-01
We combined altimetric data and in situ data sets from three mooring deployments, each about 10 years apart, to compute a coherent and accurate volume transport time series of the Malvinas Current (MC) at 41°S. We used a method developed in Koenig et al. (2014) and explored three types of geostrophic shear to estimate the uncertainty derived from the lack of velocity data in the upper 300 m. The mean MC transport over 24 years in the upper 1,500 m is 37.1 ± 2.6 Sv and the standard deviation 6.6 ± 1 Sv. Since 1993, annual mean transports have varied from 32 to 41 Sv, and the three in situ records corresponded to low annual mean transports. The MC transport time series is not stationary; its spectral content evolves with time, showing significant energy at 30-110 day, semiannual, and annual periods. The distribution of the MC volume transport anomalies is asymmetric, negatively skewed with larger negative anomalies than positive anomalies. Several transport maxima appear to result from cyclonic eddies that propagate northwestward following the 4,000-5,000 m isobaths and locally reinforce the circulation on the slope when they reach 41°S. During transport maxima, the northernmost extension of the Subantarctic Front (SAF) remains at its mean location (39.5°S). During minima, the SAF migrates southward of 41°S as positive anomalies shed by the Brazil Current overshoot move westward onto the slope. Apart from continental trapped waves, changes in the MC volume transport at 41°S show no correlation with upstream conditions on the continental slope.
Characteristics of lightning flashes generating dancing sprites above thunderstorms
NASA Astrophysics Data System (ADS)
Soula, Serge; Mlynarczyk, Janusz; Füllekrug, Martin; Pineda, Nicolau; Georgis, Jean-François; van der Velde, Oscar; Montanyà, Joan; Fabro, Ferran
2017-04-01
During the night of October 29-30, 2013, a low-light video camera at Pic du Midi (2877 m) in the French Pyrénées recorded TLEs above a very active storm over the Mediterranean Sea. The minimum cloud top temperature reached −73 °C at ~1600 UTC, while the storm's cloud-to-ground (CG) flash rate reached ~30 fl min-1. Some sprite events with long duration are classified as dancing sprites. We analyze in detail the temporal evolution and estimated location of sprite elements for two of these events. They consist of series of sprite sequences with a duration that exceeds 1 second. By associating the cloud structure, the lightning activity, the electric field radiated in a broad range of low frequencies, and the current moment waveform of the lightning strokes, several findings are highlighted: (i) In each series, successive sprite sequences reflect the occurrence time and location of individual positive lightning strokes across the stratiform region. (ii) The longer time-delayed (> 20 ms) sprite elements correspond to the lower impulsive charge moment changes (iCMC) of the parent stroke (< 200 C km), and they are shifted a few tens of kilometres from their SP+CG stroke. However, both short and long time-delayed sprite elements also occur after strokes that produce a large iCMC and that are followed by a continuing current. (iii) The long time-delayed sprite elements produced during the continuing current correspond to surges in the current moment waveform. They sometimes occur at an altitude apparently lower than the previous short time-delayed sprite elements, possibly because of the lowered altitude of the ionosphere potential. (iv) The largest and brightest sprite elements produce significant current signatures, visible when their delay is not too short (~3-5 ms).
NASA Astrophysics Data System (ADS)
Jose, L.; Bennett, R. A.; Harig, C.
2017-12-01
Currently, cGPS data is well suited to track vertical changes in the Earth's surface. However, there are annual, semi-annual, and interannual signals within cGPS time series that are not well constrained. We hypothesize that these signals are primarily due to water loading. If this is the case, the conventional method of modeling cGPS data as an annual or semiannual sinusoid falls short, as such models cannot accurately capture all variations in surface displacement, especially those due to extreme hydrologic events. We believe that we can better correct the cGPS time series with a method we are developing, wherein a time series of surface displacement derived from the GRACE geopotential field, rather than a sinusoidal model, is used to correct the data. Currently, our analysis is constrained to the Amazon Basin, where the signal due to water loading is large enough to appear in both the GRACE and cGPS measurements. The vertical signal from cGPS stations across the Amazon Basin shows an apparent spatial correlation, which further supports our idea that these signals are due to a regional water loading signal. In our preliminary research, we used tsview for Matlab to find that the WRMS of the corrected cGPS time series can be reduced as much as 30% from the model-corrected data to the GRACE-corrected data. The Amazon, like many places around the world, has experienced extreme drought, in 2005, 2010, and most recently in 2015. In addition to making the cGPS vertical signal more robust, the method we are developing has the potential to help us understand the effects of these weather events and track trends in water loading.
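The conventional sinusoidal correction referred to above can be sketched as a least-squares fit of annual and semiannual terms, with the WRMS reduction computed the same way whether the correction comes from the fit or from a GRACE-derived displacement series. The data below are synthetic and the code is our own illustration, not the tsview implementation:

```python
import numpy as np

def seasonal_fit(t_years, disp):
    """Least-squares trend + annual + semiannual sinusoid (the conventional model)."""
    A = np.column_stack([
        np.ones_like(t_years), t_years,
        np.sin(2 * np.pi * t_years), np.cos(2 * np.pi * t_years),
        np.sin(4 * np.pi * t_years), np.cos(4 * np.pi * t_years),
    ])
    coef, *_ = np.linalg.lstsq(A, disp, rcond=None)
    return A @ coef

def wrms(residual, sigma=1.0):
    """Weighted RMS of a residual series (uniform weights here)."""
    return np.sqrt(np.mean((residual / sigma) ** 2))

rng = np.random.default_rng(3)
t = np.arange(0, 6, 1 / 52)                    # weekly solutions over 6 years
loading = 8 * np.sin(2 * np.pi * t + 0.3)      # mm, dominant annual loading
disp = loading + rng.normal(0, 2.0, t.size)    # cGPS vertical with noise
reduction = 1 - wrms(disp - seasonal_fit(t, disp)) / wrms(disp)
```

For a purely sinusoidal loading signal the fit removes most of the variance; the abstract's point is that real hydrologic loading (droughts, floods) is not sinusoidal, which is where a GRACE-derived correction can do better.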
Wood construction codes issues in the United States
Douglas R. Rammer
2006-01-01
The current wood construction codes find their origin in the 1935 Wood Handbook: Wood as an Engineering Material published by the USDA Forest Service. Many of the current design recommendations can be traced back to statements from this book. Since that time, a series of developments, both historical and recent, has led to a multi-layered system for use of wood products in...
Spheromak Formation and Current Sustainment Using a Repetitively Pulsed Source
NASA Astrophysics Data System (ADS)
Woodruff, S.; Macnab, A. I. D.; Ziemba, T. M.; Miller, K. E.
2009-06-01
By repeated injection of magnetic helicity (K = 2φψ) on time-scales short compared with the dissipation time (τinj ≪ τK), it is possible to produce toroidal currents relevant to POP-level experiments. Here we discuss an effective injection rate, due to the expansion of a series of current sheets and their subsequent reconnection to form spheromaks and compression into a copper flux-conserving chamber. The benefits of repeated injection are that the usual limits to current amplification can be exceeded, and an efficient quasi-steady sustainment scenario is possible (with minimal impact on confinement). A new experiment designed to address the physics of pulsed formation and sustainment is described.
The Sunspot Number and beyond : reconstructing detailed solar information over centuries
NASA Astrophysics Data System (ADS)
Lefevre, L.
2014-12-01
With four centuries of solar evolution, the International Sunspot Number (SSN) forms the longest solar time series currently available. It provides an essential reference for understanding and quantifying how the solar output has varied over decades and centuries, and thus for assessing the variations of the main natural forcing on the Earth's climate. Because of its importance, this unique time series must be closely monitored for any possible biases and drifts. Here, we report on recent disagreements between solar indices, for example the sunspot number and the 10.7 cm radio flux. Recent analyses indicate that while part of this divergence may be due to a calibration drift in the SSN, it also results from an intrinsic change in the global magnetic parameters of sunspots and solar active regions, suggesting a possible transition to a new activity regime. Going beyond the SSN series, in the framework of the TOSCA (www.cost-tosca.eu/) and SOLID (projects.pmodwrc.ch/solid/) projects, we produced a survey of all existing catalogs providing detailed sunspot information (Lefevre & Clette, 2014: 10.1007/s11207-012-0184-5) and we also located different primary solar image and drawing collections that can be exploited to complement the existing catalogs. These are first steps towards the construction of a multi-parametric time series of multiple sunspot and sunspot group properties over more than a century, allowing the current 1-D SSN series to be reconstructed and extended. By bringing new spatial, morphological and evolutionary information, such a data set should bring major advances for the modeling of the solar dynamo and solar irradiance. We will present here the current status of this work. The preliminary version of the catalog now extends over the last 150 years.
It makes use of data from DPD (http://fenyi.solarobs.unideb.hu/DPD/index.html), from the Uccle Solar Equatorial Table (USET: http://sidc.oma.be/uset/) operated by the Royal Observatory of Belgium, and from the Greenwich Catalog (RGO: http://www.ngdc.noaa.gov/), as well as the Kodaikanal white light data.
Kennedy, Curtis E; Turley, James P
2011-10-24
Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions: We have proposed a ten-step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
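Step 6 in the list above (calculating time series features as latent variables) might be sketched as follows; the window length, the choice of heart rate as the vital sign, and the three features are illustrative assumptions, not the study's actual choices:

```python
import numpy as np

def time_series_features(window):
    """Latent features summarizing recent deterioration in one vital sign:
    level (mean), trend (least-squares slope), and variability (std)."""
    t = np.arange(len(window), dtype=float)
    slope = np.polyfit(t, window, 1)[0]
    return {"mean": float(np.mean(window)),
            "slope": float(slope),
            "std": float(np.std(window))}

# Hypothetical heart-rate samples over a 10-minute window (1-minute resolution)
stable = [92, 91, 93, 92, 90, 92, 91, 93, 92, 91]
deteriorating = [92, 95, 97, 101, 104, 109, 113, 118, 124, 131]
f_stable = time_series_features(stable)
f_deter = time_series_features(deteriorating)
```

A multivariable tool using only the latest value would score both windows similarly (92 vs 131 at the end differ, but mid-window they do not); the slope and variability features are what capture the deterioration trajectory.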
NASA Astrophysics Data System (ADS)
Lyons, Mitchell B.; Roelfsema, Chris M.; Phinn, Stuart R.
2013-03-01
The spatial and temporal dynamics of seagrasses have been well studied at the leaf to patch scales, however, the link to large spatial extent landscape and population dynamics is still unresolved in seagrass ecology. Traditional remote sensing approaches have lacked the temporal resolution and consistency to appropriately address this issue. This study uses two high temporal resolution time-series of thematic seagrass cover maps to examine the spatial and temporal dynamics of seagrass at both an inter- and intra-annual time scales, one of the first globally to do so at this scale. Previous work by the authors developed an object-based approach to map seagrass cover level distribution from a long term archive of Landsat TM and ETM+ images on the Eastern Banks (≈200 km2), Moreton Bay, Australia. In this work a range of trend and time-series analysis methods are demonstrated for a time-series of 23 annual maps from 1988 to 2010 and a time-series of 16 monthly maps during 2008-2010. Significant new insight was presented regarding the inter- and intra-annual dynamics of seagrass persistence over time, seagrass cover level variability, seagrass cover level trajectory, and change in area of seagrass and cover levels over time. Overall we found that there was no significant decline in total seagrass area on the Eastern Banks, but there was a significant decline in seagrass cover level condition. A case study of two smaller communities within the Eastern Banks that experienced a decline in both overall seagrass area and condition are examined in detail, highlighting possible differences in environmental and process drivers. We demonstrate how trend and time-series analysis enabled seagrass distribution to be appropriately assessed in context of its spatial and temporal history and provides the ability to not only quantify change, but also describe the type of change. 
We also demonstrate the potential use of time-series analysis products to investigate seagrass growth and decline as well as the processes that drive it. This study demonstrates clear benefits over traditional seagrass mapping and monitoring approaches, and provides a proof of concept for the use of trend and time-series analysis of remotely sensed seagrass products to benefit current endeavours in seagrass ecology.
NASA Astrophysics Data System (ADS)
Watkins, Nicholas; Clarke, Richard; Freeman, Mervyn
2002-11-01
We discuss how the ideal formalism of Computational Mechanics can be adapted to apply to a finite series of corrupted and correlated data, as is typical of most observed natural time series. Specifically, a simple filter that removes the corruption creating rare unphysical causal states is demonstrated, and the new concept of effective soficity is introduced. The benefits of these new concepts are demonstrated on simulated time series by (a) the effective elimination of white noise corruption from a periodic signal using the expletive filter and (b) the appearance of an effectively sofic region in the statistical complexity of a biased Poisson switch time series that is insensitive to changes in the word length (memory) used in the analysis. The new algorithm is then applied to the analysis of a real geomagnetic time series measured at Halley, Antarctica. Two principal components in the structure are detected, interpreted as the diurnal variation due to the rotation of the Earth-based station under an electrical current pattern that is fixed with respect to the Sun-Earth axis, and the random occurrence of a signature likely to be that of the magnetic substorm. In conclusion, a hypothesis is advanced about model construction in general (see also Clarke et al., arXiv:cond-mat/0110228).
Energy breakdown in capacitive deionization.
Hemmatifar, Ali; Palko, James W; Stadermann, Michael; Santiago, Juan G
2016-11-01
We explored the energy loss mechanisms in capacitive deionization (CDI). We hypothesize that resistive and parasitic losses are the two main sources of energy loss. We measured the contribution of each loss mechanism in water desalination with constant current (CC) charge/discharge cycling. Resistive energy loss is expected to dominate in high current charging cases, as it increases approximately linearly with current for fixed charge transfer (resistive power loss scales as the square of current, and charging time scales as the inverse of current). On the other hand, parasitic loss is dominant in low current cases, as the electrodes spend more time at higher voltages. We built a CDI cell with five electrode pairs and a standard flow-between architecture. We performed a series of experiments with various cycling currents and cut-off voltages (the voltage at which current is reversed) and studied these energy losses. To this end, we measured the series resistance of the cell (contact resistances, resistance of wires, and resistance of solution in spacers) during charging and discharging from the voltage response to a small-amplitude AC current signal added to the underlying cycling current. We performed a separate set of experiments to quantify the parasitic (or leakage) current of the cell versus cell voltage. We then used these data to estimate parasitic losses under the assumption that leakage current is primarily voltage (and not current) dependent. Our results confirmed that resistive and parasitic losses respectively dominate in the limits of high and low currents. We also measured salt adsorption and report the energy-normalized adsorbed salt (ENAS, energy loss per ion removed) and average salt adsorption rate (ASAR). We show a clear tradeoff between ASAR and ENAS and show that balancing these losses leads to optimal energy efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
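The scaling argument in the abstract can be sketched numerically. The parameters below are illustrative assumptions, not the paper's measured values: for a fixed transferred charge Q, the resistive term I²R·(Q/I) = IRQ grows linearly with current, while the parasitic term V·i_leak·(Q/I) grows as 1/I.

```python
import numpy as np

# Illustrative (assumed) cell parameters, not the paper's measured values
R = 2.0        # series resistance, ohm
Q = 50.0       # charge transferred per half-cycle, coulomb
V = 1.0        # mean cell voltage while charged, volt
i_leak = 0.02  # parasitic (leakage) current at V, ampere

I = np.linspace(0.05, 5.0, 500)     # cycling current, ampere
t_charge = Q / I                     # fixed charge => time scales as 1/I
E_res = I**2 * R * t_charge          # = I*R*Q: grows linearly with current
E_par = V * i_leak * t_charge        # grows as 1/I: more time spent at voltage
E_tot = E_res + E_par

I_opt = I[np.argmin(E_tot)]
print(f"energy-optimal cycling current ~ {I_opt:.2f} A")
```

Balancing the two terms analytically gives I* = sqrt(V·i_leak/R), which the grid search reproduces; this is the tradeoff behind the ASAR/ENAS optimum described above.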
Arctic Acoustic Workshop Proceedings, 14-15 February 1989.
1989-06-01
measurements. The measurements reported by Levine et al. (1987) were taken from current and temperature sensors moored in two triangular grids. The internal...requires a resampling of the data series on a uniform depth-time grid. Statistics calculated from the resampled series will be used to test numerical...from an isolated keel. Figure 2: 2-D Modeling Geometry - The model is based on a 2-D Cartesian grid with an axis of symmetry on the left. A pulsed
A Smoothing Technique for the Multifractal Analysis of a Medium Voltage Feeders Electric Current
NASA Astrophysics Data System (ADS)
de Santis, Enrico; Sadeghian, Alireza; Rizzi, Antonello
2017-12-01
The current paper presents a data-driven detrending technique for smoothing complex sinusoidal trends out of a real-world electric load time series before applying Multifractal Detrended Fluctuation Analysis (MFDFA). The algorithm, which we call Smoothed Sort and Cut Fourier Detrending (SSC-FD), is based on a suitable smoothing of high-power periodicities operating directly on the Fourier spectrum, through a polynomial fitting technique applied to the DFT. The main aim is to disambiguate the characteristic slowly varying periodicities, which can impair the MFDFA analysis, from the residual signal, in order to study its correlation properties. The algorithm's performance is evaluated on a simple benchmark test consisting of a persistent series with known Hurst exponent, with ten sinusoidal harmonics superimposed. Moreover, the behavior of the algorithm parameters is assessed by computing the MFDFA on the well-known sunspot data, whose correlation characteristics are reported in the literature. In both cases, the SSC-FD method eliminates the apparent crossover induced by the synthetic and natural periodicities. Results are compared with some existing detrending methods within the MFDFA paradigm. Finally, a study of the multifractal characteristics of the electric load time series detrended by the SSC-FD algorithm is provided, showing strongly persistent behavior and an appreciable width of the multifractal spectrum, which leads to the conclusion that the series at hand has multifractal characteristics.
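The Fourier-domain detrending idea can be illustrated with a much cruder rule than SSC-FD's polynomial spectrum fit: simply suppress the strongest nonzero-frequency DFT bins and keep the residual for subsequent fluctuation analysis. This is a sketch of the general idea on synthetic data, not the SSC-FD algorithm itself.

```python
import numpy as np

def fourier_detrend(x, n_remove=3):
    """Suppress the n_remove highest-power nonzero-frequency DFT bins and
    return the residual series (a crude stand-in for the SSC-FD smoothing)."""
    X = np.fft.rfft(x)
    power = np.abs(X)**2
    power[0] = 0.0                            # leave the mean out of the ranking
    strongest = np.argsort(power)[-n_remove:]
    X[strongest] = 0.0
    return np.fft.irfft(X, n=len(x))

# Noise with two strong superimposed harmonics, as in the benchmark test
rng = np.random.default_rng(1)
n = 2048
t = np.arange(n)
x = 3*np.sin(2*np.pi*t/128) + 2*np.sin(2*np.pi*t/64) + rng.normal(size=n)
resid = fourier_detrend(x, n_remove=2)
print("std before:", round(float(x.std()), 2), "after:", round(float(resid.std()), 2))
```

After removal, the residual's fluctuation statistics can be analyzed without the crossover that the slow periodicities would otherwise induce.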
78 FR 28729 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... Boeing Company Model 757-200 and -200PF series airplanes. That AD currently requires modifying the... specifies a maximum compliance time limit that overrides the optional threshold formula results. This AD was... analytical loads that...
NASA Astrophysics Data System (ADS)
Lindholm, D. M.; Wilson, A.
2010-12-01
The Laboratory for Atmospheric and Space Physics at the University of Colorado has developed an open-source, OPeNDAP-compliant, Java-servlet-based, RESTful web service to serve time series data. In addition to handling OPeNDAP-style requests and returning standard responses, existing modules for alternate output formats can be reused or customized. It is also simple to reuse or customize modules to directly read various native data sources, and even to perform some processing on the server. The server is built around a common data model based on the Unidata Common Data Model (CDM), which merges the NetCDF, HDF, and OPeNDAP data models. The server framework features a modular architecture that supports pluggable Readers, Writers, and Filters via the common interface to the data, enabling a workflow that reads data from their native form, performs some processing on the server, and presents the results to the client in its preferred form. The service is currently used operationally to serve time series data for the LASP Interactive Solar Irradiance Data Center (LISIRD, http://lasp.colorado.edu/lisird/) and as part of the Time Series Data Server (TSDS, http://tsds.net/). I will present the data model and how it enables reading, writing, and processing concerns to be separated into loosely coupled components. I will also share thoughts on evolving beyond the time series abstraction toward a general-purpose data service that can be orchestrated into larger workflows.
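The pluggable Reader → Filter → Writer idea around a common data model can be sketched in miniature. All names here are hypothetical stand-ins, not the server's actual API; the "common data model" is reduced to (time, value) tuples for illustration.

```python
import json
from typing import Callable, Iterable

Sample = tuple[float, float]        # toy common data model: (time, value)

def csv_reader(lines: Iterable[str]) -> Iterable[Sample]:
    """Reader: native form -> common model."""
    for line in lines:
        t, v = line.split(",")
        yield float(t), float(v)

def time_range_filter(lo: float, hi: float) -> Callable:
    """Filter: server-side processing on the common model."""
    def apply(samples: Iterable[Sample]) -> Iterable[Sample]:
        return (s for s in samples if lo <= s[0] <= hi)
    return apply

def json_writer(samples: Iterable[Sample]) -> str:
    """Writer: common model -> client's preferred form."""
    return json.dumps([{"time": t, "value": v} for t, v in samples])

raw = ["0,1.5", "1,2.5", "2,3.5", "3,4.5"]
out = json_writer(time_range_filter(1, 2)(csv_reader(raw)))
print(out)
```

Because each stage touches only the common model, readers, filters, and writers can be swapped independently, which is the loose coupling the abstract describes.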
Model for the respiratory modulation of the heart beat-to-beat time interval series
NASA Astrophysics Data System (ADS)
Capurro, Alberto; Diambra, Luis; Malta, C. P.
2005-09-01
In this study we present a model for the respiratory modulation of the heart beat-to-beat interval series. The model consists of a set of differential equations used to simulate the membrane potential of a single rabbit sinoatrial node cell, excited with a periodic input signal with added correlated noise. This signal, which simulates the input from the autonomic nervous system to the sinoatrial node, was included in the pacemaker equations as a modulation of the iNaK current pump and the potassium current iK. We focus on modeling the heart beat-to-beat time interval series from normal subjects during meditation with the Kundalini Yoga and Chi techniques. The analysis of the experimental data indicates that while the embedding of the pre-meditation and control cases has a roughly circular shape, it acquires a polygonal shape during meditation: triangular for the Kundalini Yoga data and quadrangular for the Chi data. The model was used to assess the waveshape of the respiratory signals needed to reproduce the trajectory of the experimental data in phase space. The embedding of the Chi data could be reproduced using a periodic signal obtained by smoothing a square wave. In the case of the Kundalini Yoga data, the embedding was reproduced with a periodic signal obtained by smoothing a triangular wave whose rising branch is of longer duration than its decreasing branch. Our study provides an estimation of the respiratory signal using only the heart beat-to-beat time interval series.
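The two input waveshapes described above (a smoothed square wave for Chi, a smoothed asymmetric triangular wave for Kundalini Yoga) are easy to generate. Amplitudes, cycle length, asymmetry, and the smoothing width below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def smooth(x, k=25):
    """Moving-average smoothing of a waveshape."""
    return np.convolve(x, np.ones(k) / k, mode="same")

n = 1000                                   # samples per respiratory cycle
phase = np.arange(n) / n

square = np.where(phase < 0.5, 1.0, -1.0)  # Chi-style input after smoothing
rise = 0.7                                 # Kundalini-style: rising branch
tri = np.where(phase < rise, phase / rise, #   longer than the falling branch
               1 - (phase - rise) / (1 - rise))

chi_signal = smooth(square)
yoga_signal = smooth(2 * tri - 1)
print("chi range:", round(float(chi_signal.min()), 2),
      round(float(chi_signal.max()), 2))
```

Either signal would then drive the pacemaker-cell equations as the autonomic modulation term.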
On the estimation of the current density in space plasmas: Multi- versus single-point techniques
NASA Astrophysics Data System (ADS)
Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco
2017-06-01
Thanks to multi-spacecraft missions, it has recently become possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separations. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separations up to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, the latter to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft, as compared to multi-spacecraft missions, in the evaluation of the current density.
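The four-point estimate can be sketched by fitting a linear field to the four measurements and taking J = ∇×B/μ₀, which is the same linear-variation assumption the curlometer relies on. Positions and field below are synthetic; this is an illustration, not the paper's implementation.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def curlometer(positions, B):
    """Estimate current density from four-point magnetic field data:
    fit a linear field B(r) = B0 + G.(r - r0), then J = curl(B) / mu0.
    positions: (4, 3) in m; B: (4, 3) in T."""
    dr = positions - positions.mean(axis=0)
    dB = B - B.mean(axis=0)
    GT, *_ = np.linalg.lstsq(dr, dB, rcond=None)  # solves dr @ G.T = dB
    G = GT.T                                      # G[i, j] = dB_i / dx_j
    curl = np.array([G[2, 1] - G[1, 2], G[0, 2] - G[2, 0], G[1, 0] - G[0, 1]])
    return curl / MU0

# Synthetic linear field B = (-y, x, 0) * b/2, whose curl is exactly (0, 0, b)
b = 1e-8                                          # field gradient, T/m
pos = np.array([[0.0, 0, 0], [100e3, 0, 0], [0, 100e3, 0], [0, 0, 100e3]])
Bf = np.array([[-y * b / 2, x * b / 2, 0.0] for x, y, z in pos])
J = curlometer(pos, Bf)
print("J =", J, "A/m^2")
```

For a truly linear field the recovery is exact; in a turbulent field with structure below the spacecraft separation, the same fit underestimates the current, which is the effect the paper quantifies.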
ARIMA representation for daily solar irradiance and surface air temperature time series
NASA Astrophysics Data System (ADS)
Kärner, Olavi
2009-06-01
Autoregressive integrated moving average (ARIMA) models are used to compare the long-range temporal variability of the total solar irradiance (TSI) at the top of the atmosphere (TOA) and surface air temperature series. The comparison shows that one and the same type of model is applicable to both the TSI and air temperature series. In terms of model type, surface air temperature closely imitates the TSI. This may mean that currently no other forcing of the climate system is capable of changing the random-walk-type variability established by the varying activity of the rotating Sun. The result should inspire more detailed examination of the dependence of various climate series on short-range fluctuations of the TSI.
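The "random walk type variability" corresponds to an integrated (d = 1) ARIMA process. A minimal sketch on synthetic data (not TSI) simulates an ARIMA(0,1,1) series and checks that differencing recovers the MA(1) autocorrelation signature θ/(1+θ²); the coefficient θ is an arbitrary choice for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n = 0.6, 20000
eps = rng.normal(size=n + 1)
w = eps[1:] + theta * eps[:-1]   # MA(1) innovations
x = np.cumsum(w)                 # ARIMA(0,1,1): "random walk type" variability

d = np.diff(x)                   # differencing removes the unit root
acf1 = np.corrcoef(d[:-1], d[1:])[0, 1]
theory = theta / (1 + theta**2)  # lag-1 autocorrelation of an MA(1)
print("lag-1 ACF:", round(float(acf1), 3), "theory:", round(theory, 3))
```

Fitting the same model family to two observed series and comparing the estimated orders and coefficients is the kind of comparison the abstract describes.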
NASA Astrophysics Data System (ADS)
Ritzberger, D.; Jakubek, S.
2017-09-01
In this work, a data-driven identification method, based on polynomial nonlinear autoregressive models with exogenous inputs (NARX) and the Volterra series, is proposed to describe the dynamic and nonlinear voltage and current characteristics of polymer electrolyte membrane fuel cells (PEMFCs). The structure selection and parameter estimation of the NARX model is performed on broad-band voltage/current data. By transforming the time-domain NARX model into a Volterra series representation using the harmonic probing algorithm, a frequency-domain description of the linear and nonlinear dynamics is obtained. With the Volterra kernels corresponding to different operating conditions, information from existing diagnostic tools in the frequency domain, such as electrochemical impedance spectroscopy (EIS) and total harmonic distortion analysis (THDA), is effectively combined. Additionally, the time-domain NARX model can be utilized for fault detection by evaluating the difference between measured and simulated output. To increase the fault detectability, an optimization problem is introduced which maximizes this output residual to obtain proper excitation frequencies. As a possible extension, it is shown that optimizing the periodic signal shape itself further increases the fault detectability.
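The identification step can be sketched as a least-squares fit of a polynomial NARX model on broad-band input/output data. The "true" system, model orders, and noise level below are assumptions for the demo, not a fuel-cell model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
u = rng.uniform(-1, 1, n)                  # broad-band excitation
y = np.zeros(n)
for k in range(1, n):                      # assumed "true" system for the demo
    y[k] = 0.8 * y[k-1] + 0.5 * u[k-1] + 0.3 * u[k-1]**2 + 0.01 * rng.normal()

# Polynomial NARX regressors: y[k-1], u[k-1], u[k-1]^2
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2])
coef, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated parameters:", np.round(coef, 3))
```

Once the polynomial coefficients are identified, harmonic probing of the model yields the Volterra kernels that link it to EIS- and THDA-style frequency-domain diagnostics.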
Rigler, E. Joshua
2017-04-26
A theoretical basis and prototype numerical algorithm are provided that decompose regular time series of geomagnetic observations into three components: secular variation, solar quiet, and disturbance. Respectively, these three components correspond roughly to slow changes in the Earth's internal magnetic field, periodic daily variations caused by quasi-stationary (with respect to the Sun) electrical current systems in the Earth's magnetosphere, and episodic perturbations to the geomagnetic baseline that are typically driven by fluctuations in the solar wind, which interacts electromagnetically with the Earth's magnetosphere. In contrast to similar algorithms applied to geomagnetic data in the past, this one addresses the issue of real-time data acquisition directly by applying a time-causal, exponential smoother with "seasonal corrections" to the data as soon as they become available.
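A time-causal exponential smoother with a daily "seasonal correction" can be sketched in a few lines. This is a simplified additive smoother on synthetic hourly data, not the prototype algorithm itself; the gains and the storm-like spike are illustrative assumptions.

```python
import math

def decompose_stream(obs, period=24, alpha=0.01, gamma=0.05):
    """Time-causal decomposition of a regular (hourly) series into a slow
    baseline (secular variation), a periodic daily correction (solar quiet),
    and the residual (disturbance). Simplified additive exponential smoother;
    the gains alpha/gamma are illustrative assumptions."""
    baseline = obs[0]
    seasonal = [0.0] * period
    parts = []
    for k, x in enumerate(obs):
        s = seasonal[k % period]
        resid = x - baseline - s                    # disturbance estimate
        baseline += alpha * resid                   # slow secular update
        seasonal[k % period] = s + gamma * resid    # daily correction update
        parts.append((baseline, seasonal[k % period], resid))
    return parts

# 30 days of synthetic hourly data: drift + daily cycle + one storm-like spike
obs = [0.001 * k + 5 * math.sin(2 * math.pi * k / 24)
       + (30.0 if k == 500 else 0.0) for k in range(720)]
parts = decompose_stream(obs)
disturbance = [r for _, _, r in parts]
peak = max(range(len(disturbance)), key=lambda i: abs(disturbance[i]))
print("largest disturbance at hour", peak)
```

Because every update uses only past data, the decomposition can run as each observation arrives, which is the real-time property emphasized above.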
Ensemble Deep Learning for Biomedical Time Series Classification
2016-01-01
Ensemble learning has been proven to improve generalization ability effectively in both theory and practice. In this paper, we first briefly outline the current status of research on it. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828
Indicator saturation: a novel approach to detect multiple breaks in geodetic time series.
NASA Astrophysics Data System (ADS)
Jackson, L. P.; Pretis, F.; Williams, S. D. P.
2016-12-01
Geodetic time series can record long-term trends, quasi-periodic signals at a variety of time scales from days to decades, and sudden breaks due to natural or anthropogenic causes. The causes of breaks range from instrument replacement to earthquakes to unknown (i.e., no attributable cause). Furthermore, breaks can be permanent or short-lived and range over at least two orders of magnitude in size (mm to 100s of mm). Accounting for this range of possible signal characteristics requires a flexible time series method that can distinguish between true and false breaks, outliers, and time-varying trends. One such method, Indicator Saturation (IS), comes from the field of econometrics, where analysing stochastic signals in these terms is a common problem. The IS approach differs from alternative break detection methods by considering every point in the time series as a break until it is demonstrated statistically that it is not. A linear model is constructed with a break function at every point in time, and all but statistically significant breaks are removed through a general-to-specific model selection algorithm for more variables than observations. The IS method is flexible because it allows multiple breaks of different forms (e.g. impulses, shifts in the mean, and changing trends) to be detected, while simultaneously modelling any underlying variation driven by additional covariates. We apply the IS method to identify breaks in a suite of synthetic GPS time series used for the Detection of Offsets in GPS Experiments (DOGEX). We optimise the method to maximise the ratio of true-positive to false-positive detections, which improves estimates of errors in the long-term rates of land motion currently required by the GPS community.
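A much-simplified sketch of the "every point is a break until shown otherwise" idea: try a mean-shift indicator at every point and keep those that survive a significance test. This one-at-a-time scan only illustrates the concept; the real IS method performs split-sample general-to-specific selection with more variables than observations.

```python
import numpy as np

def step_indicator_scan(y, t_crit=4.0):
    """Try a mean-shift (step) indicator at every point; return the
    strongest candidate break and all points whose Welch t-statistic
    exceeds t_crit. A one-at-a-time illustration, not the
    general-to-specific IS selection algorithm."""
    n = len(y)
    best_k, best_t, significant = None, 0.0, []
    for k in range(5, n - 5):
        m1, m2 = y[:k].mean(), y[k:].mean()
        se = np.sqrt(y[:k].var(ddof=1) / k + y[k:].var(ddof=1) / (n - k))
        t = abs(m2 - m1) / se
        if t > t_crit:
            significant.append(k)
        if t > best_t:
            best_t, best_k = t, k
    return best_k, significant

rng = np.random.default_rng(4)
y = rng.normal(size=300)
y[120:] += 3.0                      # one true permanent mean shift
best_k, sig = step_indicator_scan(y)
print("strongest break candidate at index", best_k)
```

The scan localizes the synthetic offset well; distinguishing multiple breaks, impulses, and trend changes is where the full IS machinery is needed.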
NASA Astrophysics Data System (ADS)
Pardo-Iguzquiza, Eulogio; Rodríguez-Tovar, Francisco J.
2011-12-01
One important handicap when working with stratigraphic sequences is the discontinuous character of the sedimentary record, which is especially relevant in cyclostratigraphic analysis. Uneven palaeoclimatic/palaeoceanographic time series are common, and their cyclostratigraphic analysis is comparatively difficult because most spectral methodologies are appropriate only for even sampling. As a means to solve this problem, a program for calculating the smoothed Lomb-Scargle periodogram and cross-periodogram, which additionally evaluates the statistical confidence of the estimated power spectrum through a Monte Carlo procedure (the permutation test), has been developed. The spectral analysis of a short uneven time series calls for assessment of the statistical significance of the spectral peaks, since a periodogram can always be calculated but the main challenge resides in identifying true spectral features. To demonstrate the effectiveness of this program, two case studies are presented: one deals with synthetic data and the other with palaeoceanographic/palaeoclimatic proxies. On a simulated time series of 500 data points, two uneven time series (with 100 and 25 points) were generated by selecting data at random. Comparative analysis between the power spectra from the simulated series and from the two uneven time series demonstrates the usefulness of the smoothed Lomb-Scargle periodogram for uneven sequences, making it possible to distinguish between statistically significant and spurious spectral peaks. Fragmentary time series of Cd/Ca ratios and δ18O from core AII107-131 of SPECMAP were analysed as a real case study. The efficiency of the direct and cross Lomb-Scargle periodograms in recognizing Milankovitch and sub-Milankovitch signals related to palaeoclimatic/palaeoceanographic changes is demonstrated.
As implemented, the Lomb-Scargle periodogram may be applied to any palaeoclimatic/palaeoceanographic proxies, including those usually recovered from contourites, and it holds special interest in the context of centennial- to millennial-scale climatic changes affecting contouritic currents.
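The combination of a Lomb-Scargle periodogram on uneven samples with a permutation significance test can be sketched with SciPy. The signal, sampling, and number of permutations below are synthetic assumptions, not the program described above.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 100, 120))      # 120 unevenly spaced sample times
y = np.sin(2 * np.pi * t / 10) + 0.5 * rng.normal(size=t.size)
y -= y.mean()

freqs = np.linspace(0.01, 2.0, 2000)       # angular frequencies to scan
p = lombscargle(t, y, freqs)
period = 2 * np.pi / freqs[np.argmax(p)]
print("detected period:", round(float(period), 2))

# Permutation test of peak significance: shuffle y, re-evaluate the peak
n_perm = 200
count = sum(lombscargle(t, rng.permutation(y), freqs).max() >= p.max()
            for _ in range(n_perm))
print("permutation p-value <", (count + 1) / n_perm)
```

Shuffling destroys the phase relation between times and values, so the fraction of shuffled peaks exceeding the observed one estimates the probability of a spurious peak of that height.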
Variability of the Denmark Strait overflow: Moored time series from 1996-2011
NASA Astrophysics Data System (ADS)
Jochumsen, Kerstin; Quadfasel, Detlef; Valdimarsson, Héðinn; Jónsson, Steingrímur
2012-12-01
The Denmark Strait overflow provides about half of the total dense water overflow from the Nordic Seas into the North Atlantic Ocean. The velocity of the overflow has been monitored in the Strait with two moored Acoustic Doppler Current Profilers since 1996, with several interruptions due to mooring losses or instrument failure. So far, overflow transports were only calculated when data from both moorings were available. In this work, we introduce a linear model to fill gaps in the time series when data from only one instrument are available. The mean overflow transport is 3.4 Sv and exhibits a variance of 2.0 Sv². No significant trend was detected in the time series. The highest variability in the transport is associated with the passage of mesoscale eddies with time scales of 2-10 days (associated with a variance of 1.5 Sv²). Seasonal variability is weak and explains less than 5% of the variance in all time series, which is in contrast to the strong seasonal cycle found in high resolution model simulations. Interannual variability is on the order of 10% of the mean. A relation to atmospheric forcing such as the local wind stress curl, as well as to larger scale phenomena, e.g. the North Atlantic Oscillation, is not detected. Since 2005 data from moored temperature, conductivity and pressure recorders have been available as well, monitoring the hydrographic variability at the bottom of Denmark Strait. In recent years the temperature time series of the Denmark Strait overflow revealed a cooling, while the salinity stayed nearly constant.
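The gap-filling idea, calibrating a linear model between the single-mooring record and the two-mooring transport on the overlap, then predicting across gaps, can be sketched on synthetic numbers (the coefficients and noise levels are assumptions, not the observed mooring statistics):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
u1 = 1.0 + 0.3 * rng.normal(size=n)                    # mooring-1 velocity proxy
transport = 2.0 * u1 + 0.4 + 0.1 * rng.normal(size=n)  # "true" two-mooring transport

# Calibrate on the overlap, where both records exist (first 800 samples)
A = np.column_stack([u1[:800], np.ones(800)])
coef, *_ = np.linalg.lstsq(A, transport[:800], rcond=None)

# Fill a gap where only mooring 1 reported
filled = coef[0] * u1[800:] + coef[1]
mae = float(np.abs(filled - transport[800:]).mean())
print("slope, intercept:", np.round(coef, 2), " mean abs fill error:", round(mae, 3))
```

The residual error of the calibration on the overlap gives an honest uncertainty estimate for the filled segments.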
NASA Astrophysics Data System (ADS)
Tudino, T.; Bortoluzzi, G.; Aliani, S.
2014-03-01
Marine water dynamics in the near field of a massive gas eruption near Panarea (Aeolian Islands volcanic arc, SE Tyrrhenian Sea) is described. ADCP current-meters were deployed during the paroxysmal phase in 2002 and 2003 a few meters from the degassing vent, recording day-long time series. Datasets were sorted to remove errors and select good quality ensembles over the entire water column. Standard deviation of error velocity was considered a proxy for inhomogeneous velocity fields over beams. Time series intervals had been selected when the basic ADCP assumptions were fulfilled and random errors minimized. Backscatter data were also processed to identify bubbles in the water column with the aim of locating bubble-free ensembles. Reliable time series are selected combining these data. Two possible scenarios have been described: firstly, a highly dynamic situation with visible surface diverging rings of waves, entrainment on the lower part of the gas column, detrainment in the upper part and a stagnation line (SL) at mid depth where currents were close to zero and most of the gas bubbles spread laterally; secondly, a less dynamic situation with water entraining into the gas plume at all depths and no surface rings of diverging waves. Reasons for these different dynamics may be ascribed to changes in gas fluxes (one order of magnitude higher in 2002). Description of SL is important to quantify its position in the water column and timing for entrainment-detrainment, and it can be measured by ADCP and calculated from models.
NASA Astrophysics Data System (ADS)
Barraza Bernadas, V.; Grings, F.; Roitberg, E.; Perna, P.; Karszenbaum, H.
2017-12-01
The Dry Chaco region (DCF) has the highest absolute deforestation rates of all Argentinian forests. The most recent report indicates a current deforestation rate of 200,000 ha/year. In order to better monitor this process, the DCF was chosen to implement an early warning program for illegal deforestation. Although the area is intensively studied using medium resolution imagery (Landsat), the products obtained have a yearly pace and are therefore unsuited to an early warning program. In this paper, we evaluated the performance of an online Bayesian change-point detection algorithm on MODIS Enhanced Vegetation Index (EVI) and Land Surface Temperature (LST) datasets. The goal was to monitor the abrupt changes in vegetation dynamics associated with deforestation events. We tested this model by simulating 16-day EVI and 8-day LST time series with varying amounts of seasonality, noise, and length, and by adding abrupt changes with different magnitudes. The model was then tested on real satellite time series available through Google Earth Engine, over a pilot area in the DCF where deforestation was common in the 2004-2016 period. A comparison with yearly benchmark products based on Landsat images is also presented (the REDAF dataset). The results show the advantages of using an automatic model to detect changepoints in the time series over using only visual inspection techniques. Simulating time series with varying amounts of seasonality and noise, and adding abrupt changes at different times and magnitudes, revealed that the model is robust against noise and is not influenced by changes in the amplitude of the seasonal component. Furthermore, the results compared favorably with the REDAF dataset (nearly 65% agreement). These results show the potential of combining LST and EVI to identify deforestation events.
This work is being developed within the frame of the national Forest Law for the protection and sustainable development of Native Forest in Argentina in agreement with international legislation (REDD+).
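Online Bayesian change-point detection can be sketched in its simplest form (Adams-MacKay style run-length filtering for Gaussian data with known variance). This is a generic illustration on synthetic data, not the specific algorithm or priors used in the study; hazard rate and priors are assumptions.

```python
import numpy as np

def bocpd_gaussian(xs, hazard=0.01, mu0=0.0, v0=10.0, sig2=1.0):
    """Minimal online Bayesian change-point detection (Adams-MacKay style)
    for Gaussian data with known variance sig2 and a Normal prior on the
    mean. Returns the MAP run length after each observation. All
    parameters here are illustrative assumptions."""
    R = np.array([1.0])          # run-length posterior
    mu = np.array([mu0])         # posterior mean for each run length
    v = np.array([v0])           # posterior variance for each run length
    map_run = []
    for x in xs:
        var = v + sig2           # predictive variance per run length
        pred = np.exp(-0.5 * (x - mu)**2 / var) / np.sqrt(2 * np.pi * var)
        growth = R * pred * (1 - hazard)
        cp = (R * pred).sum() * hazard
        R = np.concatenate(([cp], growth))
        R /= R.sum()
        v_new = 1.0 / (1.0 / v + 1.0 / sig2)   # conjugate Normal update
        mu_new = v_new * (mu / v + x / sig2)
        mu = np.concatenate(([mu0], mu_new))
        v = np.concatenate(([v0], v_new))
        map_run.append(int(np.argmax(R)))
    return map_run

rng = np.random.default_rng(7)
xs = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])
runs = bocpd_gaussian(xs)
drops = [i for i in range(1, len(runs)) if runs[i] < runs[i - 1] - 10]
print("run length resets near t =", drops)
```

The MAP run length grows while the level is stable and collapses within a few observations of the simulated abrupt change, which is the behavior that makes the approach suitable for an early warning setting.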
Practical analysis of tide gauges records from Antarctica
NASA Astrophysics Data System (ADS)
Galassi, Gaia; Spada, Giorgio
2015-04-01
We have collected and analyzed in a basic way the currently available time series from tide gauges deployed along the coasts of Antarctica. The database of the Permanent Service for Mean Sea Level (PSMSL) holds relative sea level information for 17 stations, which are mostly concentrated in the Antarctic Peninsula (8 out of 17). For 7 of the PSMSL stations, Revised Local Reference (RLR) monthly and yearly observations are available, spanning from year 1957.79 (Almirante Brown) to 2013.95 (Argentine Islands). For the remaining 11 stations, only metric monthly data can be obtained during the time window 1957-2013. The record length of the available time series generally does not exceed 20 years. Remarkable exceptions are the RLR station of Argentine Islands, located in the Antarctic Peninsula (AP) (time span: 1958-2013, record length: 54 years, completeness 98%), and the metric station of Syowa in East Antarctica (1975-2012, 37 years, 92%). The general quality (geographical coverage and length of record) of the time series hinders a coherent geophysical interpretation of the relative sea-level data along the coasts of Antarctica. However, in an attempt to characterize the relative sea level signals available, we have stacked (i.e., averaged) the RLR time series for the AP and for the whole of Antarctica. The time series so obtained were analyzed using simple regression in order to estimate a trend and a possible sea-level acceleration. For the AP, the trend is 1.8 ± 0.2 mm/yr, and for the whole of Antarctica it is 2.1 ± 0.1 mm/yr (both during 1957-2013). The modeled values of Glacial Isostatic Adjustment (GIA), obtained with ICE-5G(VM2) using the program SELEN, range between -0.7 and -1.6 mm/yr, showing that the sea-level trend recorded by tide gauges is strongly influenced by GIA.
Subtracting the average GIA contribution (-1.1 mm/yr) from the observed sea-level trends of the two stacks, we obtain 3.2 and 2.9 mm/yr for Antarctica and the AP, respectively, which we interpret as the effect of current ice melting and steric ocean contributions. Using the Ensemble Empirical Mode Decomposition method, we have detected different oscillations embedded in the sea-level signals for Antarctica and the AP. This confirms previously recognized connections between sea-level variations in Antarctica and ocean modes such as ENSO.
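The regression and GIA-correction arithmetic can be sketched on a synthetic stack (the noise level is an assumption; only the 2.1 mm/yr trend and the -1.1 mm/yr GIA value are taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(8)
years = np.arange(1957, 2014)
t = years - years.mean()
# Synthetic stack with a 2.1 mm/yr trend (illustrative, not the PSMSL data)
sl = 2.1 * t + 15.0 * rng.normal(size=t.size)

# Fit sl = a + b*t + 0.5*c*t^2: b is the trend, c the acceleration
A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
(a, b, c), *_ = np.linalg.lstsq(A, sl, rcond=None)

gia = -1.1   # modeled average GIA contribution, mm/yr
print(f"trend {b:.2f} mm/yr, acceleration {c:.3f} mm/yr^2, "
      f"GIA-corrected {b - gia:.2f} mm/yr")
```

Centering the time axis makes the trend and acceleration terms nearly orthogonal, so the quadratic term does not bias the estimated trend.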
Land science with Sentinel-2 and Sentinel-3 data series synergy
NASA Astrophysics Data System (ADS)
Moreno, Jose; Guanter, Luis; Alonso, Luis; Gomez, Luis; Amoros, Julia; Camps, Gustavo; Delegido, Jesus
2010-05-01
Although the GMES/Sentinel satellite series were primarily designed to provide observations for operational services and routine applications, there is a growing interest in the scientific community in using Sentinel data for more advanced and innovative science. Apart from the improved spatial and spectral capabilities, the availability of consistent time series covering a period of over 20 years opens possibilities never explored before, such as systematic data assimilation approaches exploiting the time-series concept, or the incorporation into the modelling approaches of processes covering time scales from weeks to decades. Sentinel-3 will provide continuity to current ENVISAT MERIS/AATSR capabilities. The results already derived from MERIS/AATSR will be more systematically exploited by using OLCI in synergy with SLSTR. Particularly innovative is the case of Sentinel-2, which is specifically designed for land applications. Built on a constellation of two satellites operating simultaneously to provide a 5-day geometric revisit time, the Sentinel-2 system will provide global and systematic acquisitions at high spatial resolution and with a high revisit time tailored to the needs of land monitoring. Apart from providing continuity to the Landsat and SPOT time series, the Sentinel-2 Multi-Spectral Instrument (MSI) incorporates new narrow bands around the red edge for improved retrievals of biophysical parameters. The limitations imposed by the need for proper cloud screening and atmospheric corrections have represented a serious constraint for optical data in the past. The fact that both Sentinel-2 and -3 have dedicated bands to allow such corrections represents an important step towards proper exploitation, guaranteeing consistent time series that show actual variability in land surface conditions without the artefacts introduced by the atmosphere.
Expected operational products (such as Land Cover maps, Leaf Area Index, Fractional Vegetation Cover, Fraction of Absorbed Photosynthetically Active Radiation, and Leaf Chlorophyll and Water Contents) will be enhanced with new scientific applications. Higher level products will also be provided, by means of mosaicking, averaging, synthesising or compositing of spatially and temporally resampled data. A key element in the exploitation of the Sentinel series will be the adequate use of data synergy, which will open new possibilities for improved land models. This paper analyses in particular the possibilities offered by mosaicking and compositing information derived from Sentinel-2 observations at high spatial resolution to complement dense time series derived from Sentinel-3 data with more frequent coverage. Interpolation of gaps in high spatial resolution time series (from Sentinel-2 data) using medium/low resolution data from Sentinel-3 (OLCI and SLSTR) is also a way of making series more temporally consistent at high spatial resolution. The primary goal of such temporal interpolation / spatial mosaicking techniques is to derive consistent surface reflectance data for virtually every date and geographical location, no matter the initial spatial/temporal coverage of the original data used to produce the composite. As a result, biophysical products can be derived in a more consistent way from the spectral information of Sentinel-3 data by making use of a description of surface heterogeneity derived from Sentinel-2 data. Using data from dedicated experiments (SEN2FLEX, CEFLES2, SEN3EXP), which include a large dataset of satellite and airborne data and of ground-based measurements of atmospheric and vegetation parameters, different techniques are tested, from empirical/statistical approaches that build nonlinear regressions by mapping spectra to a high-dimensional space, up to model inversion / data assimilation scenarios.
Exploitation of the temporal domain and the spatial multi-scale domain then becomes a driver for the systematic exploitation of GMES/Sentinel data time series. This paper reviews the current status and identifies research priorities in this direction.
MODIS Interactive Subsetting Tool (MIST)
NASA Astrophysics Data System (ADS)
McAllister, M.; Duerr, R.; Haran, T.; Khalsa, S. S.; Miller, D.
2008-12-01
In response to requests from the user community, NSIDC has teamed with the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) and the Moderate Resolution Data Center (MrDC) to provide time series subsets of satellite data covering stations in the Greenland Climate Network (GC-Net) and the International Arctic Systems for Observing the Atmosphere (IASOA) network. To serve these data NSIDC created the MODIS Interactive Subsetting Tool (MIST). MIST works with 7 km by 7 km subset time series of certain Version 5 (V005) MODIS products over GC-Net and IASOA stations. User-selected data are delivered in a text Comma Separated Value (CSV) file format. MIST also provides online analysis capabilities that include generating time series and scatter plots. Currently, MIST is a Beta prototype and NSIDC intends that user requests will drive future development of the tool. The intent of this poster is to introduce MIST to the MODIS data user audience and illustrate some of the online analysis capabilities.
Potential and Pitfalls of High-Rate GPS
NASA Astrophysics Data System (ADS)
Smalley, R.
2008-12-01
With completion of the Plate Boundary Observatory (PBO), we are poised to capture a dense sampling of strong motion displacement time series from significant earthquakes in western North America with High-Rate GPS (HRGPS) data collected at 1 and 5 Hz. These data will provide displacement time series at potentially zero epicentral distance that, if valid, have great potential to contribute to understanding earthquake rupture processes. The caveat relates to whether or not the data are aliased: is the sampling rate fast enough to accurately capture the displacement's temporal history? Strong motion recordings from the immediate epicentral area of several M 6.7-7.5 events, of the kind that can reasonably be expected in the PBO footprint, indicate that even the 5 Hz data may be aliased. Some sort of anti-alias processing, currently not applied, will therefore be necessary at the closest stations to guarantee the veracity of the displacement time series. We discuss several solutions based on a priori knowledge of the expected ground motion and on practicality of implementation.
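The anti-alias step the abstract calls for is, at its simplest, a low-pass filter applied before decimation. A minimal sketch with synthetic data (the signal, sampling rates, and crude moving-average filter are illustrative assumptions, not a processing scheme from this work):

```python
import math

def decimate(x, factor, antialias=True):
    """Downsample by `factor`; optionally apply a crude moving-average
    low-pass filter (window = factor samples) first to suppress aliasing."""
    if antialias:
        x = [sum(x[max(0, i - factor + 1):i + 1]) / (i - max(0, i - factor + 1) + 1)
             for i in range(len(x))]
    return x[::factor]

# Synthetic ground motion sampled at 20 Hz: a 0.5 Hz pulse plus a 9 Hz
# component lying above the 2.5 Hz Nyquist frequency of 5 Hz data.
fs = 20
t = [i / fs for i in range(200)]
motion = [math.sin(2 * math.pi * 0.5 * ti) + 0.5 * math.sin(2 * math.pi * 9 * ti)
          for ti in t]

raw_5hz = decimate(motion, 4, antialias=False)   # aliased: 9 Hz folds to 1 Hz
filt_5hz = decimate(motion, 4, antialias=True)   # high-frequency energy suppressed
```

A production scheme would use a properly designed FIR/IIR low-pass filter rather than a moving average, but the principle is the same: remove energy above the target Nyquist frequency before resampling.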
NASA Technical Reports Server (NTRS)
Spruce, Joseph P.; Hargrove, William; Gasser, Jerry; Smoot, James; Kuper, Philip D.
2014-01-01
This presentation discusses MODIS NDVI change detection methods and products used in the ForWarn Early Warning System (EWS) for near real time (NRT) recognition and tracking of regionally evident forest disturbances throughout the conterminous US (CONUS). ForWarn has provided NRT forest change products to the forest health protection community since 2010, using temporally processed MODIS Aqua and Terra NDVI time series data to compute and post six different forest change products for CONUS every 8 days. Multiple change products are required to improve detectability and to more fully assess the nature of apparent disturbances. Each type of forest change product reports per-pixel percent change in NDVI for a given 24-day interval, comparing current NDVI against a given historical baseline. eMODIS 7-day expedited data and MODIS MOD13 data are used to obtain current and historical NDVIs, respectively. Historical NDVI data are processed with 1) the Time Series Product Tool (TSPT) and 2) the Phenological Parameters Estimation Tool (PPET) software. While all change products employ maximum value compositing (MVC) of NDVI, specific products differ primarily in the historical baseline used. The three main change products use either 1, 3, or all previous years of MVC NDVI as a baseline. Another product uses an Adaptive Length Compositing (ALC) version of MVC to derive an alternative current NDVI that is the freshest good-quality NDVI, as opposed to merely the MVC NDVI, across a 24-day time frame. The ALC approach can improve detection speed by 8 to 16 days. ForWarn also includes two change products that improve detectability of forest disturbances amid climatic fluctuations, especially in the spring and fall. One compares current MVC NDVI to the zonal maximum under-the-curve NDVI per pheno-region cluster class, considering all previous years in the MODIS record. The other compares current maximum NDVI to the mean of maximum NDVI for all previous MODIS years.
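The per-pixel comparison described above reduces to a small computation. A minimal sketch (hypothetical NDVI values; the compositing window and baseline choices follow the product descriptions, not ForWarn's actual code):

```python
def max_value_composite(ndvi_series):
    """Maximum-value composite: the largest NDVI across a compositing
    window, which suppresses cloud-contaminated (low) values."""
    return max(ndvi_series)

def percent_change(current_ndvi, baseline_ndvi):
    """Per-pixel change: percent departure of current NDVI from a
    historical baseline (strongly negative = apparent disturbance)."""
    return 100.0 * (current_ndvi - baseline_ndvi) / baseline_ndvi

# One pixel: three historical years of 24-day MVC NDVI vs. the current window.
historical = [0.81, 0.78, 0.80]                     # per-year MVC baselines
baseline = max(historical)                          # all-years-maximum variant
current = max_value_composite([0.35, 0.52, 0.49])   # current 24-day window
change = percent_change(current, baseline)          # strongly negative -> flagged
```

The different ForWarn products then correspond to different choices of `baseline` (1 year, 3 years, or all years) and of the current composite (MVC versus the ALC "freshest good value" variant).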
NASA Astrophysics Data System (ADS)
Lavin, Alicia; Somavilla, Raquel; Cano, Daniel; Rodriguez, Carmen; Gonzalez-Pola, Cesar; Viloria, Amaia; Tel, Elena; Ruiz-Villareal, Manuel
2017-04-01
Long-term time series stations have been developed in order to document seasonal to decadal scale variations in key physical and biogeochemical parameters. Long-term time series measurements are crucial for determining the physical and biological mechanisms controlling the system. The Science and Technology Ministers of the G7 in their Tsukuba Communiqué stated that 'many parts of the ocean interior are not sufficiently observed' and that 'it is crucial to develop far stronger scientific knowledge necessary to assess the ongoing changes in the ocean and their impact on economies.' Time series have classically been obtained by oceanographic ships that regularly cover standard sections and stations. Since 1991, shelf and slope waters of the southern Bay of Biscay have been sampled regularly in a monthly hydrographic line north of Santander, to a depth of 1000 m in the early stages and over the whole water column down to 2580 m in recent times. Nearby, in June 2007, the IEO deployed an oceanic-meteorological buoy (AGL Buoy, 43° 50.67'N, 3° 46.20'W, 40 km offshore, www.boya-agl.st.ieo.es). The Santander Atlantic Time Series Station is integrated in the Spanish Institute of Oceanography Observing System (IEOOS). The long-term hydrographic monitoring has made it possible to define the seasonality of the main oceanographic features, such as upwelling, the Iberian Poleward Current, and low-salinity incursions, as well as trends and interannual variability in the mixing layer and in the main water masses, North Atlantic Central Water and Mediterranean Water. The relation of these changes to the high-frequency surface conditions recorded by the Biscay AGL has been examined, also using satellite and reanalysis data. During the FIXO3 project (Fixed-point Open Ocean Observatories), and using these combined sources, products and quality-controlled series of high interest and utility for scientific purposes have been developed.
Hourly products include sea surface temperature and salinity anomalies, significant wave height relative to the monthly average, and currents relative to seasonal averages. Ocean-atmosphere heat fluxes (latent and sensible) are computed from the buoy's atmospheric and oceanic measurements. Estimates of the mixed layer depth and bulk series at different water levels are provided on a monthly basis. Quality-controlled series are distributed for sea surface salinity, oxygen, and chlorophyll data. Some sensors are particularly affected by biofouling, and monthly visits to the buoy make it possible to track these sensors' behaviour. The chlorophyll-fluorescence sensor is the main concern, but the dissolved oxygen sensor is also problematic: periods of realistically smooth variation exhibit strong offsets, which are corrected based on Winkler analysis of water samples. Buoy wind, air temperature, and humidity sensors are also compared monthly with research vessel data. The next step will be a more thorough validation of the data, mainly the ten-year record from the Biscay AGL buoy, but also the 25-year record from station 7, close to the buoy. The data will be cleaned and analyzed, and the final products will be published and disseminated to broaden their use.
Electronic Properties of DNA-Based Schottky Barrier Diodes in Response to Alpha Particles.
Al-Ta'ii, Hassan Maktuff Jaber; Periasamy, Vengadesh; Amin, Yusoff Mohd
2015-05-21
Detection of nuclear radiation such as alpha particles has become an important field of research in recent history due to nuclear threats and accidents. In this context, deoxyribonucleic acid (DNA), acting as an organic semiconducting material, can be utilized in a metal/semiconductor Schottky junction for detecting alpha particles. In this work we demonstrate for the first time the effect of alpha irradiation on an Al/DNA/p-Si/Al Schottky diode by investigating its current-voltage characteristics. The diodes were exposed for different periods (0-20 min) of irradiation. Various diode parameters such as ideality factor, barrier height, series resistance, Richardson constant, and saturation current were then determined using the conventional, Cheung and Cheung's, and Norde methods. Generally, ideality factor (n) values were observed to be greater than unity, which indicates the influence of some other current transport mechanism besides the thermionic process. Results indicated that the ideality factor varied between 9.97 and 9.57 for irradiation times in the range 0 to 20 min. An increase in series resistance with increasing irradiation time was also observed when calculated using the conventional and Cheung and Cheung's methods. These responses demonstrate that changes in the electrical characteristics of the metal-semiconductor-metal diode could be utilized as sensing elements to detect alpha particles.
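For context, the conventional extraction of the ideality factor assumes thermionic emission, I = I0·exp(qV/(nkT)) for V >> kT/q, so n follows from the slope of ln I versus V. A minimal sketch with a synthetic diode (the diode parameters are illustrative, not values from this study):

```python
import math

Q_OVER_KT = 1.0 / 0.02585  # q/kT at room temperature (kT ~ 25.85 meV), in 1/V

def ideality_factor(v1, i1, v2, i2):
    """Ideality factor n from two forward-bias I-V points, assuming the
    thermionic-emission form I = I0 * exp(q*V / (n*k*T)) with V >> kT/q."""
    slope = (math.log(i2) - math.log(i1)) / (v2 - v1)  # d(ln I)/dV
    return Q_OVER_KT / slope

# Synthetic diode with n = 2.0 and saturation current I0 = 1e-9 A:
n_true, i0 = 2.0, 1e-9
iv = lambda v: i0 * math.exp(v * Q_OVER_KT / n_true)
n_est = ideality_factor(0.20, iv(0.20), 0.40, iv(0.40))  # recovers n_true
```

In practice the slope is taken from a least-squares fit over the linear region of the semi-log I-V curve rather than from two points, and Cheung and Cheung's method extracts n and the series resistance together from dV/d(ln I).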
Toward automatic time-series forecasting using neural networks.
Yan, Weizhong
2012-07-01
Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, consistent ANN performance across different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, which does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition and was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
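A GRNN prediction is a kernel-weighted average of the training targets, with the kernel bandwidth sigma as the network's single design parameter. A minimal one-step-ahead forecasting sketch (toy series and lag embedding; the paper's multi-GRNN fusion scheme is not reproduced):

```python
import math

def grnn_predict(train_x, train_y, x, sigma=0.5):
    """Generalized regression neural network (Nadaraya-Watson form):
    a Gaussian-kernel-weighted average of training targets. There is no
    iterative training; the training set itself is the model."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x)) / (2 * sigma ** 2))
               for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# One-step-ahead forecasting: embed the series as (lagged window -> next value).
series = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0, 0.5, 1.0, 0.5]
lag = 3
X = [tuple(series[i:i + lag]) for i in range(len(series) - lag)]
y = [series[i + lag] for i in range(len(series) - lag)]
forecast = grnn_predict(X, y, tuple(series[-lag:]), sigma=0.3)  # near 0.0
```

Because the model stores the training set directly, an input that matches a training window dominates the kernel weights and reproduces the corresponding target, which is the "fast learning" property the abstract exploits.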
Dakos, Vasilis; Carpenter, Stephen R.; Brock, William A.; Ellison, Aaron M.; Guttal, Vishwesha; Ives, Anthony R.; Kéfi, Sonia; Livina, Valerie; Seekell, David A.; van Nes, Egbert H.; Scheffer, Marten
2012-01-01
Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called ‘early warning signals’, and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data. PMID:22815897
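Two of the most widely used indicators in that toolbox, rolling-window variance and lag-1 autocorrelation, can be sketched directly (toy deterministic signals stand in for real time series; window sizes are illustrative):

```python
import math
from statistics import mean, pvariance

def lag1_autocorr(window):
    """Lag-1 autocorrelation: rising values indicate the 'critical slowing
    down' expected as a system approaches a tipping point."""
    m = mean(window)
    den = sum((a - m) ** 2 for a in window)
    num = sum((a - m) * (b - m) for a, b in zip(window, window[1:]))
    return num / den if den else 0.0

def rolling_indicators(series, width):
    """Rolling-window variance and lag-1 autocorrelation over a series."""
    return [(pvariance(series[i:i + width]), lag1_autocorr(series[i:i + width]))
            for i in range(len(series) - width + 1)]

# Two toy regimes: a slowly varying signal has the high memory expected
# near a transition; a flickering signal does not.
smooth = [math.sin(0.2 * k) for k in range(50)]
jagged = [(-1) ** k * math.sin(0.2 * k) for k in range(50)]
```

Applied over a sliding window to a real series, a sustained upward trend in both indicators before a shift is the early-warning pattern the paper evaluates.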
A Continuous Long-Term Record of Magnetic-Storm Occurrence and Intensity
NASA Astrophysics Data System (ADS)
Love, J. J.
2007-05-01
Hourly magnetometer data have been produced by ground-based magnetic observatories for over a century. These data are used for a wide variety of applications, including many in space physics. In particular, hourly data from a longitudinal necklace of mid-latitude observatories can be used to construct a time series recording the storm-time disturbance index Dst, one of the most useful scalar summaries of magnetic storm intensity, which is generally interpreted in terms of an equivalent equatorial magnetospheric ring current. Dst has been routinely calculated in a temporally piece-wise fashion since the IGY using a subset of the available observatories: four or five stations, typically including Honolulu (HON), San Juan (SJG), Kakioka, Japan (KAK), Hermanus, South Africa (HER), and Alibag, India (ABG). In this presentation we discuss a single continuous Dst time series made using a denser and more uniform distribution of observatories than the standard set, additionally including Watheroo, Australia (WAT), Apia, Samoa (API), and Vassouras, Brazil (VSS). Starting with the data from each individual observatory, we subtract the geomagnetic secular variation, caused primarily by the core dynamo, and the solar-quiet (Sq) variation, caused primarily by the ionospheric dynamo. The latter requires careful spectral analysis, and those intermediate results are themselves of scientific interest. Following this, we combine the disturbance residuals from each station to form the continuous Dst time series. Statistics deduced from this model allow us to quantify the likelihood of storm occurrence and intensity, both of which are modulated in time by the solar cycle. This analysis is accomplished using a 50-year Dst time series. The prospects for constructing a longer continuous Dst time series are discussed.
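The final combination step reduces, in its textbook form, to averaging each station's disturbance residual normalized by the cosine of its dipole latitude, so that stations at different latitudes weigh the equatorial ring current equally. A sketch with hypothetical inputs (the residuals and latitudes are illustrative, not data from these observatories):

```python
import math

def dst_index(residuals_nT, dipole_lat_deg):
    """One hourly Dst value: the average over stations of the disturbance
    residual (after secular-variation and Sq removal, in nT), each divided
    by the cosine of the station's dipole latitude."""
    norm = [d / math.cos(math.radians(lat))
            for d, lat in zip(residuals_nT, dipole_lat_deg)]
    return sum(norm) / len(norm)

# One storm hour seen by four hypothetical mid/low-latitude observatories:
residuals_nT = [-120.0, -95.0, -110.0, -102.0]
dipole_lat = [21.0, 18.1, 26.0, -33.3]
dst = dst_index(residuals_nT, dipole_lat)  # a moderately strong storm value
```

The hard part of the construction described above is not this average but the careful removal of the secular and Sq variations that precedes it.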
The Impact of the Revised Sunspot Record on Solar Irradiance Reconstructions
NASA Astrophysics Data System (ADS)
Kopp, G.; Krivova, N.; Lean, J.; Wu, C. J.
2015-12-01
We describe the expected effects of the new sunspot number time series on proxy model based reconstructions of the total solar irradiance (TSI), which is largely explained by the opposing effects of dark sunspots and bright faculae. Regressions of indices for facular brightening and sunspot darkening with time series of direct TSI observations during the recent 37-year spacecraft TSI measurement era determine the relative contributions from each. Historical TSI reconstructions are enabled by extending these proxy models back in time prior to the start of the measurement record using a variety of solar activity indices including the sunspot number time series alone prior to 1882. Such reconstructions are critical for Earth climate research, which requires knowledge of the incident energy from the Sun to assess climate sensitivity to the natural influence of solar variability. Two prominent TSI reconstructions that utilize the sunspot record starting in 1610 are the NRLTSI and the SATIRE models. We review the indices that each currently uses and estimate the effects the revised sunspot record has on these reconstructions.
Berris, Steven N.; Hess, Glen W.; Bohman, Larry R.
2000-01-01
Title II of Public Law 101-618, the Truckee-Carson-Pyramid Lake Water Rights Settlement Act of 1990, provides direction, authority, and a mechanism for resolving conflicts over water rights in the Truckee and Carson River Basins. The Truckee Carson Program of the U.S. Geological Survey, to support implementation of Public Law 101-618, has developed an operations model to simulate lake/reservoir and river operations for the Truckee River Basin, including diversion of Truckee River water to the Truckee Canal for transport to the Carson River Basin. Several types of hydrologic data, formatted in chronological order with a daily time interval and called 'time series,' are described in this report. Time series from water years 1933 to 1997 can be used to run the operations model. Auxiliary hydrologic data not currently used by the model are also described. The time series of hydrologic data consist of flow, lake/reservoir elevation and storage, precipitation, evaporation, evapotranspiration, municipal and industrial (M&I) demand, and streamflow and lake/reservoir level forecast data.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
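The univariate correction applied in both steps of the scheme can be as simple as matching the first two moments of the observed series. A minimal sketch of that univariate step only (toy numbers; the ICA transform itself, and any quantile-mapping alternative, are omitted for brevity):

```python
from statistics import mean, pstdev

def scale_correct(model, obs):
    """Mean-variance bias correction: rescale a model time series so its
    mean and standard deviation match the observed series. In the two-step
    scheme this univariate step is applied to each independent (ICA)
    signal, and then again to the back-transformed series at each grid cell."""
    mm, om = mean(model), mean(obs)
    ms, osd = pstdev(model), pstdev(obs)
    return [om + (x - mm) * osd / ms for x in model]

model = [10.0, 12.0, 11.0, 14.0, 13.0]  # biased simulation (too warm, too flat)
obs = [6.0, 9.0, 7.0, 12.0, 10.0]       # observations at the same grid cell
corrected = scale_correct(model, obs)
```

The point of the paper's first step is that applying such a correction to statistically independent signals, rather than grid by grid, also repairs the spatial dependence structure that grid-wise correction leaves untouched.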
NASA Astrophysics Data System (ADS)
Spellman, P.; Griffis, V. W.; LaFond, K.
2013-12-01
A changing climate brings about new challenges for flood risk analysis and water resources planning and management. Current methods for estimating flood risk in the US involve fitting the Pearson Type III (P3) probability distribution to the logarithms of the annual maximum flood (AMF) series using the method of moments. These methods are employed under the premise of stationarity, which assumes that the fitted distribution is time-invariant and that variables affecting streamflow, such as climate, do not fluctuate. However, climate change would bring about shifts in meteorological forcings that can alter the summary statistics (mean, variance, skew) of flood series used for P3 parameter estimation, resulting in erroneous flood risk projections. To ascertain the degree to which future risk may be misrepresented by current techniques, we use climate scenarios generated from global climate models (GCMs) as input to a hydrological model to explore how relative changes to current climate affect flood response for watersheds in the northeastern United States. The watersheds were calibrated and run on a daily time step using the continuous, semi-distributed, process-based Soil and Water Assessment Tool (SWAT). Nash-Sutcliffe Efficiency (NSE), RMSE to standard deviation ratio (RSR), and percent bias (PBIAS) were all used to assess model performance. Eight climate scenarios were chosen from GCM output based on relative precipitation and temperature changes from the current climate of the watershed and then further bias-corrected. Four of the scenarios were selected to represent warm-wet, warm-dry, cool-wet and cool-dry future climates, and the other four were chosen to represent more extreme, albeit possible, changes in precipitation and temperature. We quantify changes in response by comparing the differences in total mass balance and summary statistics of the logarithms of the AMF series from historical baseline values.
We then compare forecasts of flood quantiles obtained by fitting a P3 distribution to the logs of the historical AMF data with those obtained from the generated AMF series.
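The method-of-moments P3 fit referred to above can be sketched as follows (hypothetical flood series; station skew only, without Bulletin 17B's regional skew weighting or low-outlier tests, and using the Wilson-Hilferty frequency-factor approximation):

```python
import math
from statistics import mean, stdev, NormalDist

def lp3_quantile(amf, aep):
    """Flood quantile from a log-Pearson Type III fit to annual maxima by
    the method of moments: mean, standard deviation, and skew of the log
    flows, then Q = 10**(m + K*s) with the Wilson-Hilferty frequency
    factor K for the given annual exceedance probability (aep)."""
    logs = [math.log10(q) for q in amf]
    n, m, s = len(logs), mean(logs), stdev(logs)
    g = n * sum((x - m) ** 3 for x in logs) / ((n - 1) * (n - 2) * s ** 3)
    z = NormalDist().inv_cdf(1 - aep)  # standard normal deviate
    k = z if abs(g) < 1e-9 else (2 / g) * ((1 + g * z / 6 - g * g / 36) ** 3 - 1)
    return 10 ** (m + k * s)

# Hypothetical annual maximum flood series (m^3/s):
amf = [310, 450, 280, 620, 390, 510, 700, 330, 580, 420, 900, 360]
q100 = lp3_quantile(amf, 0.01)  # 1%-annual-exceedance-probability flood
```

The study's point is visible in the formula itself: any shift in the mean, variance, or skew of the log AMF series under a changed climate propagates directly into the estimated quantiles.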
System and method for charging electrochemical cells in series
DeLuca, William H.; Hornstra, Jr, Fred; Gelb, George H.; Berman, Baruch; Moede, Larry W.
1980-01-01
A battery charging system capable of equalizing the charge of each individual cell at a selected full charge voltage includes means for regulating charger current to first increase current at a constant rate until a bulk charging level is achieved or until any cell reaches a safe reference voltage. A system controller then begins to decrease the charging rate as long as any cell exceeds the reference voltage until an equalization current level is reached. At this point, the system controller activates a plurality of shunt modules to permit shunting of current around any cell having a voltage exceeding the reference voltage. Leads extending between the battery of cells and shunt modules are time shared to permit alternate shunting of current and voltage monitoring without the voltage drop caused by the shunt current. After each cell has at one time exceeded the reference voltage, the charging current is terminated.
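The control sequence described in the abstract amounts to a three-state controller. A schematic sketch (the threshold values and the step/return interface are illustrative, not the patent's specification):

```python
def charger_step(state, current, cell_volts, *, v_ref=2.45,
                 i_bulk=50.0, i_eq=5.0, ramp=1.0, taper=1.0):
    """One control step of the series-string charger described above.
    'ramp':     raise current until the bulk level, or until any cell
                reaches v_ref;
    'taper':    lower current toward the equalization level while any
                cell exceeds v_ref;
    'equalize': hold i_eq and shunt every cell above v_ref.
    Returns (state, current, indices_of_shunted_cells)."""
    shunts = []
    if state == "ramp":
        if current >= i_bulk or any(v >= v_ref for v in cell_volts):
            state = "taper"
        else:
            current = min(current + ramp, i_bulk)
    if state == "taper":
        if current <= i_eq:
            state = "equalize"
        elif any(v > v_ref for v in cell_volts):
            current = max(current - taper, i_eq)
    if state == "equalize":
        current = i_eq
        shunts = [k for k, v in enumerate(cell_volts) if v > v_ref]
    return state, current, shunts
```

The lead-sharing and charge-termination logic (ending once every cell has at some time exceeded the reference voltage) would sit in the outer loop driving this step function.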
Principal Investigator in a Box Technical Description Document. 2.0
NASA Technical Reports Server (NTRS)
Groleau, Nick; Frainier, Richard
1994-01-01
This document provides a brief overview of the PI-in-a-Box system, which can be used for automatic real-time reaction to incoming data. We will therefore outline the current system's capabilities and limitations, and hint at how best to think about PI-in-a-Box as a tool for real-time analysis and reaction in section two, below. We also believe that the solution to many commercial real-time process problems requires data acquisition and analysis combined with rule-based reasoning and/or an intuitive user interface. We will develop the technology reuse potential in section three. Currently, the system runs only on Apple Computer's Macintosh series.
NASA Technical Reports Server (NTRS)
Beckley, B. D.; Lemoine, F. G.; Zelensky, N. P.; Yang, X.; Holmes, S.; Ray, R. D.; Mitchum, G. T.; Desai, S.; Brown, S.; Haines, B.
2011-01-01
Recent developments in Precise Orbit Determination (POD), due in particular to revisions to the terrestrial reference frame realization and to time-variable gravity (TVG), continue to improve the accuracy and stability of the orbits, directly affecting mean sea level (MSL) estimates. Long-term credible MSL estimates require the development and continued maintenance of a stable reference frame, along with vigilant monitoring of the performance of the independent tracking systems used to calculate the orbits for altimeter spacecraft. The stringent MSL accuracy requirements of a few tenths of a mm/yr are particularly essential for mass budget closure analysis over the relatively short time period of coincident Jason-1 and -2, GRACE, and Argo measurements. In an effort to adhere to cross-mission consistency, we have generated a full time series of experimental orbits (GSFC stdlllO) for TOPEX/Poseidon (TP), Jason-1, and OSTM based on an improved terrestrial reference frame (TRF) realization (ITRF2008), a revised static gravity field (GGM03s), and a time-variable gravity field (Eigen6s). In this presentation we assess the impact of the revised precision orbits on inter-mission bias estimates and the resultant global and regional MSL trends. Tide gauge verification results are shown to assess the current stability of the Jason-2 sea surface height time series, which suggests a possible discontinuity initiated in early 2010. Although the Jason-2 time series is relatively short (approximately 3 years), a thorough review of the entire suite of geophysical and environmental range corrections is warranted and is underway to maintain the fidelity of the record.
Fiber optic current monitor for high-voltage applications
Renda, G.F.
1992-04-21
A current monitor which derives its power from the conductor being measured, for bidirectionally measuring the magnitude of current (from DC to above 50 kHz) flowing through a conductor across which a relatively high DC voltage is applied. It includes a pair of identical transmitter modules connected in opposite polarity to one another in series with the conductor being monitored, producing from one module a first light signal whose intensity is directly proportional to the magnitude of current flowing in one direction through the conductor during one period of time, and from the other module a second light signal whose intensity is directly proportional to the magnitude of current flowing in the opposite direction during another period of time; and a receiver, located in a safe area remote from the high-voltage area, for receiving the first and second light signals and converting them to first and second voltage signals whose levels indicate the magnitude of current being measured at a given time. 6 figs.
Numerical modeling of high-voltage circuit breaker arcs and their interaction with the power system
NASA Astrophysics Data System (ADS)
Orama, Lionel R.
In this work the interaction between series-connected gas and vacuum circuit breaker arcs has been studied. The breakdown phenomena in vacuum interrupters during the post-arc current period have been of special interest. Numerical models of gas and vacuum arcs were developed in the form of black-box models. In particular, the vacuum post-arc model was implemented by combining the existing transition model with an ion density function and expressions for the breakdown mechanisms. The test series studied show that for electric fields on the order of 10^7 V/m over the anode, the breakdown of the vacuum gap can result from a combination of both thermal and electrical stresses. For a particular vacuum device, the vacuum model helps to find the interruption limits of the electric field and power density over the anode. The series connection of gas and vacuum interrupters always performs better than the single gas device. Moreover, to take advantage of the good characteristics of both devices, the time between the current zero crossings in each interrupter can be changed. This current-zero synchronization is controlled by changing the capacitance in parallel with the gas device. This gas/vacuum interrupter is suitable for interrupting very stressful short circuits in which the product of the dI/dt before current zero and the dV/dt after current zero is very high. Also, a single SF6 interrupter can be replaced by an air circuit breaker of the same voltage rating in series with a vacuum device without compromising the good performance of the SF6 device. Conceptually, a series-connected vacuum device can be used for high-voltage applications with equal distribution of electrical stresses between the individual interrupters. The equalization can be achieved by sequential opening of the individual contact pairs, beginning with the interrupters that are closer to ground potential. This could eliminate the use of grading capacitors.
Energy conservation indicators. 1982 annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belzer, D.B.
A series of Energy Conservation Indicators were developed for the Department of Energy to assist in the evaluation of current and proposed conservation strategies. As descriptive statistics that signify current conditions and trends related to efficiency of energy use, indicators provide a way of measuring, monitoring, or inferring actual responses by consumers in markets for energy services. Related sets of indicators are presented in some 40 one-page indicator summaries. Indicators are shown graphically, followed by several paragraphs that explain their derivation and highlight key findings. Indicators are classified according to broad end-use sectors: Aggregate (economy), Residential, Commercial, Industrial, Transportation, and Electric Utilities. In most cases annual time series information is presented covering the period 1960 through 1981.
Time-dependent interstellar chemistry
NASA Technical Reports Server (NTRS)
Glassgold, A. E.
1985-01-01
Some current problems in interstellar chemistry are considered in the context of time-dependent calculations. The limitations of steady-state models of interstellar gas-phase chemistry are discussed, and attempts to chemically date interstellar clouds are reviewed. The importance of studying the physical and chemical properties of interstellar dust is emphasized. Finally, the results of a series of studies of collapsing clouds are described.
Modelling spatiotemporal change using multidimensional arrays
NASA Astrophysics Data System (ADS)
Lu, Meng; Appel, Marius; Pebesma, Edzer
2017-04-01
The large variety of remote sensors, model simulations, and in-situ records provides great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods to integrate data from various sources and to analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions, which naturally represents spatiotemporal phenomena identified by their geographic locations and recording time. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable, and reproducible change modelling process that is communicable between mathematicians, computer scientists, Earth system scientists, and stakeholders? This study emphasises detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping the time series analysis results; this does not account for spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual-based time series structural change modelling.
We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.
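The PCA step described above — pooling all pixels and times and projecting the spectral bands onto their leading components — can be sketched briefly. This is a minimal NumPy illustration, not the authors' SciDB/R implementation; the (time, band, pixel) cube layout and the function name are illustrative assumptions.

```python
import numpy as np

def pca_bands(cube, n_components=2):
    """Project a (time, band, pixel) image cube onto its leading
    spectral principal components, pooling all times and pixels.
    Returns an array of shape (time, n_components, pixel)."""
    t, b, p = cube.shape
    # Flatten to (samples, bands): every pixel at every time is one sample.
    X = cube.transpose(0, 2, 1).reshape(-1, b)
    X = X - X.mean(axis=0)
    # Eigen-decomposition of the band covariance matrix.
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    scores = X @ vecs[:, order]
    return scores.reshape(t, p, n_components).transpose(0, 2, 1)
```

Time series structural change analysis would then be run on the component scores rather than on a single vegetation index, retaining more of the spectral information.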
Monitoring Forest and Rangeland Change in the United States Using Landsat Time Series Data
NASA Astrophysics Data System (ADS)
Vogelmann, J.; Tolk, B.; Xian, G. Z.; Homer, C.
2011-12-01
The LANDFIRE project produces spatial data layers for fire management applications. As part of the project, 2000 vintage Landsat Thematic Mapper and Enhanced Thematic Mapper Plus data sets were used to generate detailed vegetation type data sets for the entire United States. We are currently using several approaches to update this information, including incorporation of (1) Landsat-derived historic fire burn information, (2) forest harvest information from Landsat time series data using the Vegetation Change Tracker, and (3) data sets that capture subtle and gradual intra-state disturbances, such as those related to insects and disease as well as succession. The primary focus of this presentation will be on the detection and characterization of gradual change occurring in forest and rangeland ecosystems, and how to incorporate this information in the LANDFIRE updating process. Landsat data acquired over the previous 25+ years are being used to assess status and trends of forest and rangeland condition. Current study areas are located in the southwestern US, western Nebraska, western Wyoming, western South Dakota, northeastern US and the central Appalachian Mountains. Trends of changing vegetation index values derived from Landsat time series data stacks are the foundation for the gradual change information being developed. Thus far we have found evidence of gradual systematic change in all areas that we have examined. Many of the conifer forests in the southwestern US are showing declining conditions related to insects and drought, and very few of the examined areas are showing evidence of increased canopy cover or greenness. While sagebrush communities are showing decreases in greenness related to fire, mining, and drought, few of these communities are showing evidence of increased greenness or "improving" conditions. However, there is evidence that some forest communities are expanding and that canopy cover density is increasing at some locations. 
In Nebraska, increases in canopy cover appear to be mostly related to expansion of eastern red cedar. In the White Mountains of New Hampshire, observed increases in forest canopy appear to be related to understory balsam fir expansion, most likely related to release of forest suppression resulting from the thinning of the upper forest canopy. Continued analyses of time series data using multi-spatial scenes and covering multiple years are required in order to develop accurate impressions and representations of the changing ecosystem patterns and trends that are occurring. The approach demonstrates that Landsat time series data can be used operationally for assessing gradual ecosystem change across large areas. This information complements the information derived from other time-series change detection used for LANDFIRE.
Dynamic Black-Level Correction and Artifact Flagging for Kepler Pixel Time Series
NASA Technical Reports Server (NTRS)
Kolodziejczak, J. J.; Clarke, B. D.; Caldwell, D. A.
2011-01-01
Methods applied to the calibration stage of Kepler pipeline data processing [1] (CAL) do not currently use all of the information available to identify and correct several instrument-induced artifacts. These include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré patterns as locally correlated nonstationary noise, and rolling bands in the images which find their way into the time series [2], [3]. As the Kepler Mission continues to improve the fidelity of its science data products, we are evaluating the benefits of adding pipeline steps to more completely model and dynamically correct the FGS crosstalk, then use the residuals from these model fits to detect and flag spatial regions and time intervals of strong time-varying black-level which may complicate later processing or lead to misinterpretation of instrument behavior as stellar activity.
ERIC Educational Resources Information Center
Pfister, Hans
2014-01-01
Physics students encountering electric circuits for the first time often ask why adding more resistors to a circuit sometimes increases and sometimes decreases the resulting total resistance. It appears that these students have an inadequate understanding of current flow and resistance. Students who do not adopt a model of current, voltage, and…
NASA Astrophysics Data System (ADS)
Harte, Philip T.; Smith, Thor E.; Williams, John H.; Degnan, James R.
2012-05-01
In situ chemical oxidation (ISCO) treatment with sodium permanganate, an electrically conductive oxidant, provides a strong electrical signal for tracking of injectate transport using time series geophysical surveys including direct current (DC) resistivity and electromagnetic (EM) methods. Effective remediation is dependent upon placing the oxidant in close contact with the contaminated aquifer. Therefore, monitoring tools that provide enhanced tracking capability of the injectate offer considerable benefit to guide subsequent ISCO injections. Time-series geophysical surveys were performed at a superfund site in New Hampshire, USA over a one-year period to identify temporal changes in the bulk electrical conductivity of a tetrachloroethylene (PCE; also called tetrachloroethene) contaminated, glacially deposited aquifer due to the injection of sodium permanganate. The ISCO treatment involved a series of pulse injections of sodium permanganate from multiple injection wells within a contained area of the aquifer. After the initial injection, the permanganate was allowed to disperse under ambient groundwater velocities. Time series geophysical surveys identified the downward sinking and pooling of the sodium permanganate atop of the underlying till or bedrock surface caused by density-driven flow, and the limited horizontal spread of the sodium permanganate in the shallow parts of the aquifer during this injection period. When coupled with conventional monitoring, the surveys allowed for an assessment of ISCO treatment effectiveness in targeting the PCE plume and helped target areas for subsequent treatment.
NASA Astrophysics Data System (ADS)
Mau, S.; Reed, J.; Clark, J.; Valentine, D.
2006-12-01
Large quantities of natural gas are emitted from the seafloor into the coastal ocean near Coal Oil Point, Santa Barbara Channel (SBC), California. Methane, ethane, and propane were quantified in the surface water at 79 stations in a 270 km2 area in order to map the surficial hydrocarbon plume and to quantify air-sea exchange of these gases. A time series was initiated for 14 stations to identify the variability of the mapped plume, and biologically-mediated oxidation rates of methane were measured to quantify the loss of methane in surface water. The hydrocarbon plume was found to cover ~70 km2 and extended beyond the study area. The plume width narrowed from 3 km near the source to 0.7 km further from the source, and then expanded to 6.7 km at the edge of the study area. This pattern matches the cyclonic gyre which is the normal current flow in this part of the Santa Barbara Channel - pushing water to the shore near the seep field and then broadening the plume while the water turns offshore further from the source. Concentrations of gaseous hydrocarbons decrease as the plume migrates. Time series sampling shows similar plume width and hydrocarbon concentrations when normal current conditions prevail. In contrast, smaller plume width and low hydrocarbon concentrations were observed when an additional anticyclonic eddy reversed the normal current flow, and a much broader plume with higher hydrocarbon concentrations was observed during a time of diminished speed within the current gyre. These results demonstrate that surface currents control hydrocarbon plume dynamics in the SBC, though hydrocarbon flux to the atmosphere is likely less dependent on currents. Estimates of air-sea hydrocarbon flux and biological oxidation rates will also be presented.
Time Series Analysis of Photovoltaic Soiling Station Data: Version 1.0, August 2017
DOE Office of Scientific and Technical Information (OSTI.GOV)
Micheli, Leonardo; Muller, Matthew T.; Deceglie, Michael G.
Time series data from PV soiling stations operating in the USA over different time periods are analyzed and presented. The current version of the paper includes twenty stations operating between 2013 and 2016, but the paper is intended to be periodically updated as more stations and more data become available. The challenges in working with soiling station data are discussed, including measurement methodology, quality controls, and measurement uncertainty. The soiling profiles of the soiling stations are made available so that the PV community can make use of this data to guide operations and maintenance decisions, estimate soiling derate in performance models, and more generally come to a better understanding of the challenges associated with the variability of PV soiling.
Estimation of Hurst Exponent for the Financial Time Series
NASA Astrophysics Data System (ADS)
Kumar, J.; Manchanda, P.
2009-07-01
Until recently, statistical methods and Fourier analysis were employed to study fluctuations in stock markets in general and the Indian stock market in particular. However, the current trend is to apply the concepts of wavelet methodology and the Hurst exponent; see, for example, the work of Manchanda, J. Kumar and Siddiqi, Journal of the Franklin Institute 144 (2007), 613-636, and the work of Cajueiro and Tabak, Physica A, 2003, who checked the efficiency of emerging markets by computing the Hurst exponent over a time window of 4 years of data. Our goal in the present paper is to understand the dynamics of the Indian stock market. We look for persistency in the stock market through the Hurst exponent and the fractal dimension of time series data of BSE 100 and NIFTY 50.
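A common way to estimate the Hurst exponent is rescaled-range (R/S) analysis: compute the range of the cumulative mean-adjusted series divided by its standard deviation over windows of increasing size, then take the slope of log(R/S) against log(window size). The sketch below is a minimal stdlib illustration of that classic procedure, not the estimator used in the paper; window sizes and averaging choices are assumptions.

```python
import math
from statistics import mean, pstdev

def rescaled_range(x):
    """R/S statistic for one window: range of the cumulative
    mean-adjusted series divided by its standard deviation."""
    m = mean(x)
    dev, cum = 0.0, []
    for v in x:
        dev += v - m
        cum.append(dev)
    r = max(cum) - min(cum)
    s = pstdev(x)
    return r / s if s > 0 else 0.0

def hurst_exponent(series, window_sizes=(8, 16, 32, 64)):
    """Estimate H as the least-squares slope of log(R/S) vs. log(n),
    averaging R/S over non-overlapping windows of each size n."""
    xs, ys = [], []
    for n in window_sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs = mean(rescaled_range(c) for c in chunks)
        if rs > 0:
            xs.append(math.log(n))
            ys.append(math.log(rs))
    mx, my = mean(xs), mean(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
        sum((a - mx) ** 2 for a in xs)
```

For uncorrelated returns H is near 0.5; persistent (trending) series give H > 0.5 and anti-persistent series H < 0.5, which is the sense in which the exponent measures market persistency.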
Investigation on Law and Economics Based on Complex Network and Time Series Analysis.
Yang, Jian; Qu, Zhao; Chang, Hui
2015-01-01
The research focuses on the cooperative relationship and the strategy tendency among three mutually interactive parties in financing: small enterprises, commercial banks and micro-credit companies. Complex network theory and time series analysis were applied to figure out the quantitative evidence. Moreover, this paper built up a fundamental model describing the particular interaction among them through evolutionary game. Combining the results of data analysis and current situation, it is justifiable to put forward reasonable legislative recommendations for regulations on lending activities among small enterprises, commercial banks and micro-credit companies. The approach in this research provides a framework for constructing mathematical models and applying econometrics and evolutionary game in the issue of corporation financing.
NASA Astrophysics Data System (ADS)
Mosier, T. M.; Hill, D. F.; Sharp, K. V.
2013-12-01
High spatial resolution time-series data are critical for many hydrological and earth science studies. Multiple groups have developed historical and forecast datasets of high-resolution monthly time-series for regions of the world such as the United States (e.g. PRISM for hindcast data and MACA for long-term forecasts); however, analogous datasets have not been available for most data scarce regions. The current work fills this data need by producing and freely distributing hindcast and forecast time-series datasets of monthly precipitation and mean temperature for all global land surfaces, gridded at a 30 arc-second resolution. The hindcast data are constructed through a Delta downscaling method, using as inputs 0.5 degree monthly time-series and 30 arc-second climatology global weather datasets developed by Willmott & Matsuura and WorldClim, respectively. The forecast data are formulated using a similar downscaling method, but with an additional step to remove bias from the climate variable's probability distribution over each region of interest. The downscaling package is designed to be compatible with a number of general circulation models (GCMs) (e.g. with GCMs developed for the IPCC AR4 report and CMIP5), and is presently implemented using time-series data from the NCAR CESM1 model in conjunction with 30 arc-second future decadal climatologies distributed by the Consultative Group on International Agricultural Research. The resulting downscaled datasets are 30 arc-second time-series forecasts of monthly precipitation and mean temperature available for all global land areas. As an example of these data, historical and forecast 30 arc-second monthly time-series from 1950 through 2070 are created and analyzed for the region encompassing Pakistan. For this case study, forecast datasets corresponding to the future Representative Concentration Pathway (RCP) 4.5 and 8.5 scenarios developed by the IPCC are presented and compared. 
This exercise highlights a range of potential meteorological trends for the Pakistan region and more broadly serves to demonstrate the utility of the presented 30 arc-second monthly precipitation and mean temperature datasets for use in data scarce regions.
Mobile electric power. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloomfield, D.P.; Bloomfield, V.J.; Grosjean, P.D.
1995-12-01
The objective of this program was to develop a mobile fuel cell power supply for use by soldiers. The Century Series of 100 through 500 watt fuel cell power supplies was developed. The Century Series fuel cell power supplies are made up of a fuel cell stack, chemical hydride hydrogen supply, a fan and a controller. The FC-200, the 200 watt Century Series power supply, weighs 8.8 lb. and has a volume of 322 cu in. The operating point is 0.7 volt/cell at 125 ASF; a power density of 22.7 watts/lb. or 0.62 watts/cu in. and an energy density of 110 whr/lb. The prototype 750 whr hydrogen supply weighs 7 lbs. and has a volume of 193 cu in. The fuel elements weigh 0.45 lb. and require 0.79 lbs. of water. The FC-200 has powered a scooter requiring a starting current of three times the rated current of the stack. It has also powered a microclimate cooler.
NASA Astrophysics Data System (ADS)
Bloomfield, David P.; Bloomfield, Valerie J.; Grosjean, Paul D.; Kelland, James W.
1995-02-01
The objective of this program was to develop a mobile fuel cell power supply for use by soldiers. The Century Series of 100 through 500 watt fuel cell power supplies was developed. The Century Series fuel cell power supplies are made up of a fuel cell stack, chemical hydride hydrogen supply, a fan and a controller. The FC-200, the 200 watt Century Series power supply, weighs 8.8 lb. and has a volume of 322 cu in. The operating point is 0.7 volt/cell at 125 ASF; a power density of 22.7 watts/lb. or 0.62 watts/cu in. and an energy density of 110 whr/lb. The prototype 750 whr hydrogen supply weighs 7 lbs. and has a volume of 193 cu in. The fuel elements weigh 0.45 lb. and require 0.79 lbs. of water. The FC-200 has powered a scooter requiring a starting current of three times the rated current of the stack. It has also powered a microclimate cooler.
Predicting Information Flows in Network Traffic.
ERIC Educational Resources Information Center
Hinich, Melvin J.; Molyneux, Robert E.
2003-01-01
Discusses information flow in networks and predicting network traffic and describes a study that uses time series analysis on a day's worth of Internet log data. Examines nonlinearity and traffic invariants, and suggests that prediction of network traffic may not be possible with current techniques. (Author/LRW)
The role of flight progress strips in en route air traffic control : a time-series analysis.
DOT National Transportation Integrated Search
1995-01-01
Paper flight progress strips (FPSs) are currently used in the United States en route air traffic control system to document flight information. Impending automation will replace these paper strips with electronic flight data entries. In this observat...
Current testing is limited by traditional testing models and regulatory systems. An overview is given of high throughput screening approaches to provide broader chemical and biological coverage, toxicokinetics and molecular pathway data and tools to facilitate utilization for reg...
An investigation of fMRI time series stationarity during motor sequence learning foot tapping tasks.
Muhei-aldin, Othman; VanSwearingen, Jessie; Karim, Helmet; Huppert, Theodore; Sparto, Patrick J; Erickson, Kirk I; Sejdić, Ervin
2014-04-30
Understanding complex brain networks using functional magnetic resonance imaging (fMRI) is of great interest to clinical and scientific communities. To utilize advanced analysis methods such as graph theory for these investigations, the stationarity of fMRI time series needs to be understood as it has important implications on the choice of appropriate approaches for the analysis of complex brain networks. In this paper, we investigated the stationarity of fMRI time series acquired from twelve healthy participants while they performed a motor (foot tapping sequence) learning task. Since prior studies have documented that learning is associated with systematic changes in brain activation, a sequence learning task is an optimal paradigm to assess the degree of non-stationarity in fMRI time-series in clinically relevant brain areas. We predicted that brain regions involved in a "learning network" would demonstrate non-stationarity and may violate assumptions associated with some advanced analysis approaches. Six blocks of learning, and six control blocks of a foot tapping sequence were performed in a fixed order. The reverse arrangement test was utilized to investigate the time series stationarity. Our analysis showed some non-stationary signals with a time varying first moment as a major source of non-stationarity. We also demonstrated a decreased number of non-stationarities in the third block as a result of priming and repetition. Most of the current literature does not examine stationarity prior to processing. The implication of our findings is that future investigations analyzing complex brain networks should utilize approaches robust to non-stationarities, as graph-theoretical approaches can be sensitive to non-stationarities present in data.
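The reverse arrangement test used above counts, for a sequence, the pairs where an earlier value exceeds a later one; under stationarity this count has a known mean and variance, so a z-score flags trend-like non-stationarity. A minimal stdlib sketch of the standard test statistic (using the textbook mean n(n-1)/4 and variance n(2n+5)(n-1)/72), not the authors' exact code:

```python
from math import sqrt

def reverse_arrangements(x):
    """Count pairs (i, j) with i < j and x[i] > x[j]."""
    return sum(1 for i in range(len(x))
               for j in range(i + 1, len(x)) if x[i] > x[j])

def reverse_arrangement_z(x):
    """z-score of the reverse-arrangement count against the null
    hypothesis of a stationary, trend-free random sequence."""
    n = len(x)
    a = reverse_arrangements(x)
    mu = n * (n - 1) / 4.0                 # expected count
    var = n * (2 * n + 5) * (n - 1) / 72.0  # variance under the null
    return (a - mu) / sqrt(var)
```

A strongly negative z indicates an upward trend (few reversals), a strongly positive z a downward trend; |z| > 1.96 rejects stationarity at the 5% level.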
Flow networks for ocean currents
NASA Astrophysics Data System (ADS)
Tupikina, Liubov; Molkenthin, Nora; Marwan, Norbert; Kurths, Jürgen
2014-05-01
Complex networks have been successfully applied to various systems such as society, technology, and recently climate. Links in a climate network are defined between two geographical locations if the correlation between the time series of some climate variable is higher than a threshold. Therefore, network links are considered to imply heat exchange. However, the relationship between the oceanic and atmospheric flows and the climate network's structure is still unclear. Recently, a theoretical approach verifying the correlation between ocean currents and surface air temperature networks has been introduced, where Pearson correlation networks were constructed from advection-diffusion dynamics on an underlying flow. Since the continuous approach has its limitations, e.g., its high computational complexity, we here introduce a new, discrete construction of flow-networks, which is then applied to static and dynamic velocity fields. Analyzing the flow-networks of prototypical flows, we find that our approach can highlight the zones of high velocity by degree and transition zones by betweenness, while the combination of these network measures can uncover how the flow propagates over time. We also apply the method to time series data of the Equatorial Pacific Ocean Current and the Gulf Stream ocean current for changing velocity fields, which could not be done before, and analyse the properties of the dynamical system. Flow-networks are powerful tools for theoretically understanding the step from a system's dynamics to a network's topology, which can be analyzed using network measures and used to shed light on different climatic phenomena.
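The abstract does not spell out its discrete flow-network construction, but one loose stdlib sketch of the idea is to link each grid cell to the neighbour best aligned with its local velocity vector and then read off node degrees; the grid layout, the 4-neighbour rule, and the function name are all assumptions for illustration only.

```python
def flow_network(u, v):
    """Build a directed graph on a 2-D grid: each cell points to the
    4-neighbour most aligned with its local velocity (u eastward along
    columns, v northward along rows). Returns the edge map
    {(i, j): (i2, j2)} and the in-degree of every cell."""
    rows, cols = len(u), len(u[0])
    edges, indeg = {}, {}
    for i in range(rows):
        for j in range(cols):
            best, score = None, 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                i2, j2 = i + di, j + dj
                if 0 <= i2 < rows and 0 <= j2 < cols:
                    s = u[i][j] * dj + v[i][j] * di  # alignment with step
                    if s > score:
                        best, score = (i2, j2), s
            if best is not None:
                edges[(i, j)] = best
                indeg[best] = indeg.get(best, 0) + 1
    return edges, indeg
```

On such a graph, degree-like measures pick out cells that many trajectories funnel through, loosely mirroring the paper's use of degree and betweenness to flag high-velocity and transition zones.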
Geomagnetic storms, the Dst ring-current myth and lognormal distributions
Campbell, W.H.
1996-01-01
The definition of geomagnetic storms dates back to the turn of the century when researchers recognized the unique shape of the H-component field change upon averaging storms recorded at low latitude observatories. A generally accepted modeling of the storm field sources as a magnetospheric ring current was settled about 30 years ago at the start of space exploration and the discovery of the Van Allen belt of particles encircling the Earth. The Dst global 'ring-current' index of geomagnetic disturbances, formulated in that period, is still taken to be the definitive representation for geomagnetic storms. Dst indices, or data from many world observatories processed in a fashion paralleling the index, are used widely by researchers relying on the assumption of such a magnetospheric current-ring depiction. Recent in situ measurements by satellites passing through the ring-current region and computations with disturbed magnetosphere models show that the Dst storm is not solely a main-phase to decay-phase, growth to disintegration, of a massive current encircling the Earth. Although a ring current certainly exists during a storm, there are many other field contributions at the middle- and low-latitude observatories that are summed to show the 'storm' characteristic behavior in Dst at these observatories. One characteristic of the storm field form at middle and low latitudes is that Dst exhibits a lognormal distribution shape when plotted as the hourly value amplitude in each time range. Such distributions, common in nature, arise when there are many contributors to a measurement or when the measurement is a result of a connected series of statistical processes. The amplitude-time displays of Dst are thought to occur because the many time-series processes that are added to form Dst all have their own characteristic distribution in time. 
By transforming the Dst time display into the equivalent normal distribution, it is shown that a storm recovery can be predicted with remarkable accuracy from measurements made during the Dst growth phase. In the lognormal formulation, the mean, standard deviation and field count within standard deviation limits become definitive Dst storm parameters.
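The lognormal description above rests on the fact that a lognormal variable is simply one whose logarithm is normally distributed, so its parameters can be estimated by fitting a normal to the log-amplitudes. A minimal stdlib sketch of that fit (moment estimates, not the authors' prediction procedure; function names are illustrative):

```python
from math import exp, log
from statistics import mean, stdev

def fit_lognormal(amplitudes):
    """Moment estimates (mu, sigma) of a lognormal distribution:
    fit a normal to the log of the (positive) amplitude samples."""
    logs = [log(a) for a in amplitudes]
    return mean(logs), stdev(logs)

def lognormal_median(mu, sigma):
    """Median of the fitted lognormal distribution, exp(mu);
    sigma is accepted for interface symmetry but unused."""
    return exp(mu)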
Robust, automatic GPS station velocities and velocity time series
NASA Astrophysics Data System (ADS)
Blewitt, G.; Kreemer, C.; Hammond, W. C.
2014-12-01
Automation in GPS coordinate time series analysis makes results more objective and reproducible, but not necessarily as robust as the human eye to detect problems. Moreover, it is not a realistic option to manually scan our current load of >20,000 time series per day. This motivates us to find an automatic way to estimate station velocities that is robust to outliers, discontinuities, seasonality, and noise characteristics (e.g., heteroscedasticity). Here we present a non-parametric method based on the Theil-Sen estimator, defined as the median of velocities vij=(xj-xi)/(tj-ti) computed between all pairs (i, j). Theil-Sen estimators produce statistically identical solutions to ordinary least squares for normally distributed data, but they can tolerate up to 29% of data being problematic. To mitigate seasonality, our proposed estimator only uses pairs approximately separated by an integer number of years, (N-δt) < (tj-ti) < (N+δt), where δt is chosen to be small enough to capture seasonality, yet large enough to reduce random error. We fix N=1 to maximally protect against discontinuities. In addition to estimating an overall velocity, we also use these pairs to estimate velocity time series. To test our methods, we process real data sets that have already been used with velocities published in the NA12 reference frame. Accuracy can be tested by the scatter of horizontal velocities in the North American plate interior, which is known to be stable to ~0.3 mm/yr. This presents new opportunities for time series interpretation. For example, the pattern of velocity variations at the interannual scale can help separate tectonic from hydrological processes. Without any step detection, velocity estimates prove to be robust for stations affected by the Mw7.2 2010 El Mayor-Cucapah earthquake, and velocity time series show a clear change after the earthquake, without any of the usual parametric constraints, such as relaxation of postseismic velocities to their preseismic values.
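The core of the estimator described above — the median of pairwise velocities restricted to pairs separated by roughly one year — fits in a few lines. The following stdlib sketch illustrates the N=1 case under stated assumptions (decimal-year times, a tolerance δt of 0.1 yr, a hypothetical function name); it is not the published implementation.

```python
from statistics import median

def annual_pair_velocity(times, coords, dt=0.1, n_years=1.0):
    """Theil-Sen-style velocity: median of (coords[j]-coords[i])/(tj-ti)
    over all pairs whose separation lies within dt of n_years.
    times are in decimal years; the result shares coords' units per year.
    Pairs separated by ~1 yr make annual seasonal terms nearly cancel."""
    rates = []
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            sep = times[j] - times[i]
            if abs(sep - n_years) < dt:
                rates.append((coords[j] - coords[i]) / sep)
    return median(rates) if rates else None
```

Because each one-year pair differences out the annual cycle, a station with a strong seasonal signal still yields a median rate close to its secular trend, without fitting any seasonal model.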
Review of current GPS methodologies for producing accurate time series and their error sources
NASA Astrophysics Data System (ADS)
He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping
2017-05-01
The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand on detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional scale geodetic phenomena hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step and mainly with three different strategies in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of GPS time series analysis. 
This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e.g., subsidence of the highway bridge) to the detection of particular geophysical signals.
Time-series analysis of delta13C from tree rings. I. Time trends and autocorrelation.
Monserud, R A; Marshall, J D
2001-09-01
Univariate time-series analyses were conducted on stable carbon isotope ratios obtained from tree-ring cellulose. We looked for the presence and structure of autocorrelation. Significant autocorrelation violates the statistical independence assumption and biases hypothesis tests. Its presence would indicate the existence of lagged physiological effects that persist for longer than the current year. We analyzed data from 28 trees (60-85 years old; mean = 73 years) of western white pine (Pinus monticola Dougl.), ponderosa pine (Pinus ponderosa Laws.), and Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco var. glauca) growing in northern Idaho. Material was obtained by the stem analysis method from rings laid down in the upper portion of the crown throughout each tree's life. The sampling protocol minimized variation caused by changing light regimes within each tree. Autoregressive moving average (ARMA) models were used to describe the autocorrelation structure over time. Three time series were analyzed for each tree: the stable carbon isotope ratio (delta(13)C); discrimination (delta); and the difference between ambient and internal CO(2) concentrations (c(a) - c(i)). The effect of converting from ring cellulose to whole-leaf tissue did not affect the analysis because it was almost completely removed by the detrending that precedes time-series analysis. A simple linear or quadratic model adequately described the time trend. The residuals from the trend had a constant mean and variance, thus ensuring stationarity, a requirement for autocorrelation analysis. The trend over time for c(a) - c(i) was particularly strong (R(2) = 0.29-0.84). Autoregressive moving average analyses of the residuals from these trends indicated that two-thirds of the individual tree series contained significant autocorrelation, whereas the remaining third were random (white noise) over time. 
We were unable to distinguish beforehand between individuals with and without significant autocorrelation. Significant ARMA models were all of low order, with either first- or second-order (i.e., lagged 1 or 2 years, respectively) models performing well. A simple autoregressive model (AR(1)) was the most common. The most useful generalization was that the same ARMA model holds for each of the three series (delta(13)C, delta, c(a) - c(i)) for an individual tree, if the time trend has been properly removed for each series. The mean series for the two pine species were described by first-order ARMA models (1-year lags), whereas the Douglas-fir mean series were described by second-order models (2-year lags) with negligible first-order effects. Apparently, the process of constructing a mean time series for a species preserves an underlying signal related to delta(13)C while canceling some of the random individual tree variation. Furthermore, the best model for the overall mean series (e.g., for a species) cannot be inferred from a consensus of the individual tree model forms, nor can its parameters be estimated reliably from the mean of the individual tree parameters. Because two-thirds of the individual tree time series contained significant autocorrelation, the normal assumption of a random structure over time is unwarranted, even after accounting for the time trend. The residuals of an appropriate ARMA model satisfy the independence assumption, and can be used to make hypothesis tests.
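The AR(1) structure described above can be illustrated with a short, self-contained sketch (synthetic data only, not the authors' tree-ring series): for an AR(1) process, the Yule-Walker equations reduce to estimating the autoregressive coefficient as the lag-1 sample autocorrelation of the detrended series.

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation of x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# Simulate an AR(1) process x_t = phi * x_{t-1} + e_t (hypothetical, not tree-ring data)
random.seed(42)
phi_true = 0.5
x = [0.0]
for _ in range(2000):
    x.append(phi_true * x[-1] + random.gauss(0, 1))

# Yule-Walker for AR(1): the estimated coefficient equals the lag-1 autocorrelation
phi_hat = autocorr(x, 1)
print(round(phi_hat, 2))  # should be close to 0.5
```

With a series this long the estimate recovers the true coefficient to within a few hundredths; for short tree-ring series the sampling error of the estimate is correspondingly larger.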
The Modern School of Francisco Ferrer i Guàrdia (1859-1909), an International and Current Figure
ERIC Educational Resources Information Center
Garcia-Yeste, Carme; Redondo-Sama, Gisela; Padrós, Maria; Melgar, Patricia
2016-01-01
Background/Context: Throughout history, a country's economic and military strength has influenced its times of cultural splendor and the rise of famous intellectuals and artists. Spain has been an exception to this. At the turn of the 20th century, a surprising series of events that no one could have predicted occurred. At the time, Spain had…
Swetapadma, Aleena; Yadav, Anamika
2015-01-01
Many schemes have been reported for shunt fault location estimation, but fault location estimation of series (open conductor) faults has not been dealt with so far. Existing numerical relays only detect the open conductor (series) fault and indicate the faulty phase(s); they are unable to locate the series fault, so the repair crew must patrol the complete line to find its location. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of the voltage and current signals from both ends of the line are used as input to the proposed fuzzy-based fault location scheme. The percentage error in fault location is within 1% for series faults and within 5% for shunt faults for all tested fault cases. The percentage error in location estimation is validated using the Chi-square test at both the 1% and 5% significance levels. PMID:26413088
Funk, Sebastian; Bogich, Tiffany L; Jones, Kate E; Kilpatrick, A Marm; Daszak, Peter
2013-01-01
The proper allocation of public health resources for research and control requires quantification of both a disease's current burden and the trend in its impact. Infectious diseases that have been labeled as "emerging infectious diseases" (EIDs) have received heightened scientific and public attention and resources. However, the label 'emerging' is rarely backed by quantitative analysis and is often used subjectively. This can lead to over-allocation of resources to diseases that are incorrectly labelled "emerging," and insufficient allocation of resources to diseases for which evidence of an increasing or high sustained impact is strong. We suggest a simple quantitative approach, segmented regression, to characterize the trends and emergence of diseases. Segmented regression identifies one or more trends in a time series and determines the most statistically parsimonious split(s) (or joinpoints) in the time series. These joinpoints in the time series indicate time points when a change in trend occurred and may identify periods in which drivers of disease impact change. We illustrate the method by analyzing temporal patterns in incidence data for twelve diseases. This approach provides a way to classify a disease as currently emerging, re-emerging, receding, or stable based on temporal trends, as well as to pinpoint the time when the change in these trends happened. We argue that quantitative approaches to defining emergence based on the trend in impact of a disease can, with appropriate context, be used to prioritize resources for research and control. Implementing this more rigorous definition of an EID will require buy-in and enforcement from scientists, policy makers, peer reviewers and journal editors, but has the potential to improve resource allocation for global health.
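The joinpoint idea can be sketched in a few lines (hypothetical incidence values, not the twelve-disease dataset): scan candidate split points, fit an ordinary least-squares line to each segment, and keep the split with the lowest combined residual sum of squares.

```python
def fit_line(t, y):
    """Ordinary least-squares slope, intercept, and residual sum of squares."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    sxx = sum((a - tm) ** 2 for a in t)
    sxy = sum((a - tm) * (b - ym) for a, b in zip(t, y))
    slope = sxy / sxx
    intercept = ym - slope * tm
    rss = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(t, y))
    return slope, intercept, rss

def best_joinpoint(t, y, min_seg=3):
    """Return the split index that minimizes the combined RSS of two fitted lines."""
    best = None
    for k in range(min_seg, len(t) - min_seg):
        rss = fit_line(t[:k], y[:k])[2] + fit_line(t[k:], y[k:])[2]
        if best is None or rss < best[1]:
            best = (k, rss)
    return best[0]

# Hypothetical annual incidence: stable for 10 years, then a rising ("emerging") trend
t = list(range(20))
y = [5.0] * 10 + [5.0 + 2.0 * i for i in range(1, 11)]
k = best_joinpoint(t, y)
print(t[k])  # joinpoint detected at the trend change (the kink point fits both lines)
```

Real joinpoint software additionally penalizes the number of joinpoints (e.g., via permutation tests or information criteria) so that noise is not over-segmented; this sketch shows only the core search.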
Cloessner, Emily A.; Stokley, Shannon; Yankey, David; Markowitz, Lauri E.
2016-01-01
The current recommendation for human papillomavirus (HPV) vaccination in the United States is for 3 doses to be administered over a 6 month period. In April 2014, the World Health Organization (WHO) recommended adoption of a 2-dose schedule, with doses spaced a minimum of 6 months apart, for teens who begin the series before age 15. We analyzed data from the 2013 National Immunization Survey-Teen to examine the timing of second and third dose receipt among US adolescents. All analyses were restricted to adolescents age 13–17 y who had adequate provider data. The Wilcoxon–Mann–Whitney test measured differences in time to receive vaccine doses among demographic and socioeconomic groups. Logistic regression identified socioeconomic characteristics associated with receiving the second dose of HPV vaccine at least 6 months after the first dose. The median time for teens to receive the second dose of HPV vaccine was 2.6 months after the first dose, and the median time to receive the third dose was 4.9 months after the second dose. Minority teens and teens living below the poverty level took significantly longer to receive doses. Among teens that initiated the HPV vaccine series before age 15 y, 28.6% received the second dose at least 6 months after the first dose. If these teens, who met the WHO criteria for up-to-date HPV vaccination, were classified as having completed the vaccination series, overall coverage in the US would increase 3.9 percentage points, with African American and Hispanic teens having the greatest increases in coverage. PMID:26587886
Structural Equation Modeling of Multivariate Time Series
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Browne, Michael W.
2007-01-01
The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…
7 CFR 4279.2 - Definitions and abbreviations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... from a series of financial statements of a business over a period of time. Financial statement analysis... appreciation. The difference between the current net book value recorded on the financial statements (original... financial accounting records located in St. Louis, Missouri. High-impact business. A business that offers...
7 CFR 4279.2 - Definitions and abbreviations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... from a series of financial statements of a business over a period of time. Financial statement analysis... appreciation. The difference between the current net book value recorded on the financial statements (original... financial accounting records located in St. Louis, Missouri. High-impact business. A business that offers...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Ravindra; Uluski, Robert; Reilly, James T.
The objective of this survey is to benchmark current practices for DMS implementation to serve as a guide for future system implementations. The survey sought information on current plans to implement DMS, DMS functions of interest, implementation challenges, functional benefits achieved, and other relevant information. These survey results were combined (where possible) with results of similar surveys conducted in the previous four years to observe trends over time.
Bidirectional dc-to-dc Power Converter
NASA Technical Reports Server (NTRS)
Griesbach, C. R.
1986-01-01
Solid-state, series-resonant converter uses high-voltage thyristors. Converter used either to convert high-voltage, low-current dc power to low-voltage, high-current power or the reverse. Takes advantage of newly available high-voltage thyristors to provide better reliability and efficiency than traditional converters that use vacuum tubes as power switches. New converter essentially maintenance free and provides greatly increased mean time between failures. Attractive in industrial applications whether or not bidirectional capability is required.
Preparations for the IGS realization of ITRF2014
NASA Astrophysics Data System (ADS)
Rebischung, Paul; Schmid, Ralf
2016-04-01
The International GNSS Service (IGS) is currently preparing its own realization, called IGS14, of the latest release of the International Terrestrial Reference Frame (ITRF2014). This preparation involves:
- a selection of the most suitable reference frame (RF) stations from the complete set of GNSS stations in ITRF2014;
- the design of a well-distributed core network of RF stations for the purpose of aligning global GNSS solutions;
- a re-evaluation of the GPS and GLONASS satellite antenna phase center offsets (PCOs), based on the SINEX files provided by the IGS Analysis Centers (ACs) in the framework of the second IGS reprocessing campaign, repro2.
This presentation will first cover the criteria used for the selection of the IGS14 and IGS14 core RF stations as well as preliminary station selection results. We will then use the preliminary IGS14 RF to re-align the daily IGS combined repro2 SINEX solutions and study the impact of the RF change on GNSS-derived geodetic parameter time series. In a second part, we will focus on the re-evaluation of the GNSS satellite antenna PCOs. A re-evaluation of at least their radial (z) components is required, despite the negligible scale difference between ITRF2008 and ITRF2014, because of modeling changes recently introduced within the IGS that affect the scale of GNSS terrestrial frames (Earth radiation pressure, antenna thrust). Moreover, the 13 GPS and GLONASS satellites launched since September 2012 are currently assigned preliminary block-specific mean PCO values that need to be updated. From the daily AC repro2 SINEX files, we will therefore derive and analyze time series of satellite z-PCO estimates. Since several ACs provided all three components of the satellite PCOs in their SINEX files, we will additionally derive similar x- and y-PCO time series and discuss the relevance of their potential re-evaluation.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
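The key scaling behind such variance models, that the variance of a mean estimate from effectively uncorrelated samples falls as 1/n with increasing sampling time, can be sketched with a Monte Carlo toy (synthetic Gaussian "velocity" samples, not ADCP data):

```python
import random

def mean_estimates(sigma, n_samples, trials, seed=1):
    """Monte Carlo: repeated mean estimates, each from n uncorrelated samples."""
    rng = random.Random(seed)
    return [sum(rng.gauss(1.0, sigma) for _ in range(n_samples)) / n_samples
            for _ in range(trials)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Doubling the exposure time (twice as many uncorrelated samples) roughly halves
# the variance of the estimated mean: var(mean) ~ sigma^2 / n
v_short = variance(mean_estimates(0.3, 50, 2000))
v_long = variance(mean_estimates(0.3, 100, 2000))
print(v_long < v_short)  # True
```

When samples are correlated, n is replaced by an effective number of independent samples set by the integral time scale, which is the complication the paper's model addresses.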
Centrality measures in temporal networks with time series analysis
NASA Astrophysics Data System (ADS)
Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Wang, Xiaojie; Yi, Dongyun
2017-05-01
The study of identifying important nodes in networks has wide application in different fields. However, current research is mostly based on static or aggregated networks. Recently, increasing attention to networks with time-varying structure has promoted the study of node centrality in temporal networks. In this paper, we define a supra-evolution matrix to depict the temporal network structure. Using time series analysis, the relationships between different time layers can be learned automatically. Based on the special form of the supra-evolution matrix, the eigenvector centrality calculation is turned into the calculation of eigenvectors of several low-dimensional matrices through iteration, which effectively reduces the computational complexity. Experiments are carried out on two real-world temporal networks, the Enron email communication network and the DBLP co-authorship network; the results show that our method is more efficient at discovering important nodes than the common aggregating method.
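The eigenvector-centrality computation that the method reduces to can be sketched with plain power iteration on a toy static graph (a hypothetical four-node example, not the supra-evolution matrix of the paper):

```python
def eigenvector_centrality(adj, iters=200, tol=1e-10):
    """Power iteration: repeatedly apply the adjacency matrix and renormalize."""
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) for x in w)
        w = [x / norm for x in w]
        if max(abs(a - b) for a, b in zip(w, v)) < tol:
            return w
        v = w
    return v

# Toy undirected graph: hub node 0 linked to 1, 2, 3, plus an edge 1-2
# (the triangle makes the graph non-bipartite, so power iteration converges)
adj = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
c = eigenvector_centrality(adj)
print(c.index(max(c)))  # the hub, node 0, is the most central
```

The paper's contribution is to obtain the same kind of dominant eigenvector for a time-layered (supra-)matrix by iterating over several low-dimensional blocks instead of one large matrix.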
Time resolved EUV spectra from Zpinching capillary discharge plasma
NASA Astrophysics Data System (ADS)
Jancarek, Alexandr; Nevrkla, Michal; Nawaz, Fahad
2015-09-01
We developed a symmetrically charged driver to obtain a high-voltage, high-current Z-pinching capillary discharge. Plasma is created by an up to 70 kA, 29 ns risetime current pulse passing through a 5 mm inner diameter, 224 mm long capillary filled with gas to an initial pressure in the range of 1 kPa. Due to the low-inductance design of the driver, the pinch is observable directly from the measured current curve. Time-integrated and time-resolved spectra of the discharge plasma radiation are recorded together with the capillary current and analyzed. The most encouraging spectra were captured in the wavelength range 8.3-14 nm. This spectral region contains nitrogen Balmer-series lines, including the potentially lasing NVII 2-3 transition. Spectral lines are identified in the NIST database using the FLY kinetic code. The line at 13.38 nm wavelength, the NVII 2-3 transition, was observed in gated and also in time-integrated spectra for currents >60 kA. This work has been supported by the Ministry of Education, Youth and Sports of the Czech Republic grant LG13029.
Wavelet analysis of near-resonant series RLC circuit with time-dependent forcing frequency
NASA Astrophysics Data System (ADS)
Caccamo, M. T.; Cannuli, A.; Magazù, S.
2018-07-01
In this work, the results of an analysis of the response of a near-resonant series resistance‑inductance‑capacitance (RLC) electric circuit with time-dependent forcing frequency by means of a wavelet cross-correlation approach are reported. In particular, it is shown how the wavelet approach enables frequency and time analysis of the circuit response to be carried out simultaneously—this procedure not being possible by Fourier transform, since the frequency is not stationary in time. A series RLC circuit simulation is performed by using the Simulation Program with Integrated Circuits Emphasis (SPICE), in which an oscillatory sinusoidal voltage drive signal of constant amplitude is swept through the resonant condition by progressively increasing the frequency over a 20-second time window, linearly, from 0.32 Hz to 6.69 Hz. It is shown that the wavelet cross-correlation procedure quantifies the common power between the input signal (represented by the electromotive force) and the output signal, which in the present case is a current, highlighting not only which frequencies are present but also when they occur, i.e. providing a simultaneous time-frequency analysis. The work is directed toward graduate Physics, Engineering and Mathematics students, with the main intention of introducing wavelet analysis into their data analysis toolkit.
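A minimal sketch of the underlying idea, using a single Morlet-style Gaussian-windowed complex sinusoid rather than the authors' full wavelet cross-correlation procedure (the function name and parameters here are illustrative assumptions), shows how a swept-frequency drive signal yields time-dependent spectral power:

```python
import cmath
import math

def morlet_coeff(sig, rate, t0, freq, cycles=6.0):
    """Morlet-like wavelet coefficient: Gaussian-windowed complex sinusoid
    centred at time t0 (s) and frequency freq (Hz). Illustrative sketch only."""
    s = cycles / (2 * math.pi * freq)          # Gaussian width in seconds
    total = 0.0 + 0.0j
    for i, x in enumerate(sig):
        t = i / rate - t0
        total += x * math.exp(-t * t / (2 * s * s)) * cmath.exp(-2j * math.pi * freq * t)
    return abs(total) / rate

# Linear chirp swept from 0.32 Hz to 6.69 Hz over a 20 s window, as for the RLC drive
rate, dur, f0, f1 = 100, 20.0, 0.32, 6.69
sig = [math.sin(2 * math.pi * (f0 + (f1 - f0) * (i / rate) / (2 * dur)) * (i / rate))
       for i in range(int(rate * dur))]

# Late in the sweep (t = 18 s) the instantaneous frequency is ~6.1 Hz, so the
# wavelet power there exceeds the power at 1 Hz; early in the sweep the reverse holds
hi = morlet_coeff(sig, rate, 18.0, 6.1)
lo = morlet_coeff(sig, rate, 18.0, 1.0)
print(hi > lo)  # True
```

This is exactly the property a plain Fourier transform cannot expose: the coefficient depends on both when (t0) and at what frequency (freq) the signal is examined.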
ERP-Variations on Time Scales Between Hours and Months Derived From GNSS Observations
NASA Astrophysics Data System (ADS)
Weber, R.; Englich, S.; Mendes Cerveira, P.
2007-05-01
Current observations gained by the space geodetic techniques, especially VLBI, GPS and SLR, allow for the determination of Earth Rotation Parameters (ERPs - polar motion, UT1/LOD) with unprecedented accuracy and temporal resolution. This presentation focuses on contributions to ERP recovery provided by satellite navigation systems (primarily GPS). The IGS (International GNSS Service), for example, currently provides daily polar motion with an accuracy of better than 0.1 mas and LOD estimates with an accuracy of a few microseconds. To study more rapid variations in polar motion and LOD, we first established an ERP time series with hourly resolution from GPS observation data of the IGS network covering the year 2005. The calculations were carried out by means of the Bernese GPS Software V5.0, considering observations from a subset of 113 fairly stable stations out of the IGS05 reference frame sites. From these ERP time series, the amplitudes of the major diurnal and semidiurnal variations caused by ocean tides are estimated. After correcting the series for ocean tides, the remaining geodetically observed excitation is compared with variations in atmospheric excitation (AAM). To study the sensitivity of the estimates with respect to the applied mapping function, we applied both the widely used NMF (Niell Mapping Function) and the VMF1 (Vienna Mapping Function 1). In addition, based on computations covering two months in 2005, the potential improvement due to the use of additional GLONASS data will be discussed.
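Estimating the amplitude of a tidal line of known period from an hourly series can be sketched by least-squares projection onto sine and cosine terms (synthetic data and a simplified setup; the actual IGS processing is far more involved):

```python
import math

def tidal_amplitude(y, dt_hours, period_hours):
    """Least-squares amplitude of a sinusoid of known period in series y,
    assuming uniform sampling over an integer number of cycles."""
    w = 2 * math.pi / period_hours
    n = len(y)
    a = 2.0 / n * sum(v * math.sin(w * i * dt_hours) for i, v in enumerate(y))
    b = 2.0 / n * sum(v * math.cos(w * i * dt_hours) for i, v in enumerate(y))
    return math.hypot(a, b)

# Hypothetical hourly polar-motion-like series: a diurnal (24 h) component of
# amplitude 0.30 plus a semidiurnal (12 h) component of amplitude 0.10
n = 24 * 30                                   # 30 days of hourly values
y = [0.30 * math.sin(2 * math.pi * i / 24.0 + 0.4)
     + 0.10 * math.sin(2 * math.pi * i / 12.0) for i in range(n)]

print(round(tidal_amplitude(y, 1.0, 24.0), 2),
      round(tidal_amplitude(y, 1.0, 12.0), 2))  # 0.3 0.1
```

Because the two periods are orthogonal over an integer number of cycles, each projection recovers its own amplitude without contamination from the other line.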
Human Migration and Agricultural Expansion: An Impending Threat to the Maya Biosphere Reserve
NASA Technical Reports Server (NTRS)
Sader, Steven; Reining, Conard; Sever, Thomas L.; Soza, Carlos
1997-01-01
Evidence is presented of the current threats to the Maya Biosphere Reserve in northern Guatemala as derived through time-series Landsat Thematic Mapper observations and analysis. Estimates of deforestation rates and trends are examined for different management units within the reserve and buffer zones. The satellite imagery was used to quantify and monitor rates, patterns, and trends of forest clearing during a time period corresponding to new road construction and significant human migration into the newly accessible forest region. Satellite imagery is appropriate technology in a vast and remote tropical region where aerial photography and extensive field-based methods are not cost-effective and current, timely data is essential for establishing conservation priorities.
Phenological Parameters Estimation Tool
NASA Technical Reports Server (NTRS)
McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.
2010-01-01
The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE.
Information Foraging Theory in Software Maintenance
2012-09-30
CHRONOS: a time-varying method for microRNA-mediated subpathway enrichment analysis.
Vrahatis, Aristidis G; Dimitrakopoulou, Konstantina; Balomenos, Panos; Tsakalidis, Athanasios K; Bezerianos, Anastasios
2016-03-15
In the era of network medicine and the rapid growth of paired time series mRNA/microRNA expression experiments, there is an urgent need for pathway enrichment analysis methods able to capture the time- and condition-specific 'active parts' of the biological circuitry as well as the microRNA impact. Current methods ignore the multiple dynamical 'themes', in the form of enriched biologically relevant microRNA-mediated subpathways, that determine the functionality of signaling networks across time. To address these challenges, we developed the time-vaRying enriCHment integrOmics Subpathway aNalysis tOol (CHRONOS) by integrating time series mRNA/microRNA expression data with KEGG pathway maps and microRNA-target interactions. Specifically, microRNA-mediated subpathway topologies are extracted and evaluated based on the temporal transition and the fold change activity of the linked genes/microRNAs. Further, we provide measures that capture the structural and functional features of subpathways in relation to the complete organism pathway atlas. Our application to synthetic and real data shows that CHRONOS outperforms current subpathway-based methods in unraveling the inherent dynamic properties of pathways. CHRONOS is freely available at http://biosignal.med.upatras.gr/chronos/. Contact: tassos.bezerianos@nus.edu.sg. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
"Batch" kinetics in flow: online IR analysis and continuous control.
Moore, Jason S; Jensen, Klavs F
2014-01-07
Currently, kinetic data is either collected under steady-state conditions in flow or by generating time-series data in batch. Batch experiments are generally considered to be more suitable for the generation of kinetic data because of the ability to collect data from many time points in a single experiment. Now, a method that rapidly generates time-series reaction data from flow reactors by continuously manipulating the flow rate and reaction temperature has been developed. This approach makes use of inline IR analysis and an automated microreactor system, which allowed for rapid and tight control of the operating conditions. The conversion/residence time profiles at several temperatures were used to fit parameters to a kinetic model. This method requires significantly less time and a smaller amount of starting material compared to one-at-a-time flow experiments, and thus allows for the rapid generation of kinetic data. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Is a specific eyelid patch test series useful? Results of a French prospective study.
Assier, Haudrey; Tetart, Florence; Avenel-Audran, Martine; Barbaud, Annick; Ferrier-le Bouëdec, Marie-Christine; Giordano-Labadie, Françoise; Milpied, Brigitte; Amsler, Emmanuelle; Collet, Evelyne; Girardin, Pascal; Soria, Angèle; Waton, Julie; Truchetet, François; Bourrain, Jean-Luc; Gener, Gwendeline; Bernier, Claire; Raison-Peyron, Nadia
2018-06-08
Eyelids are frequent sites of contact dermatitis. No prospective study focused on eyelid allergic contact dermatitis (EACD) has yet been published, and this topic has never been studied in French patients. To prospectively evaluate the usefulness of an eyelid series in French patients patch tested because of EACD, and to describe these patients. We prospectively analysed standardized data for all patients referred to our departments between September 2014 and August 2016 for patch testing for suspected EACD as the main reason. All patients were patch tested with an eyelid series, the European baseline series (EBS), the French additional series, and their personal products. Patch testing with additional series and repeated open application tests (ROATs) or open tests were performed if necessary. A standardized assessment of the relevance was used, and the analysis of the results was focused on patients having positive test results with a current certain relevance. Two-hundred and sixty-four patients (238 women and 26 men) were included. Three-hundred and twenty-two tests gave positive results in 167 patients, 84 of whom had currently relevant reactions: 56 had currently relevant positive test reactions to the EBS, 16 had currently relevant positive test reactions to their personal products, 8 had currently relevant positive test reactions to the French additional series, and 4 had currently relevant positive test reactions to the eyelid series. Sixty-seven per cent of all relevant cases were related to cosmetic products. The most frequent allergens with current relevance were methylisothiazolinone (10.2%), fragrance mix I (3%), nickel (2.7%), hydroxyperoxides of linalool (2.7%) and limonene (2.3%), and Myroxylon pereirae (2.3%). Current atopic dermatitis was found in 9.5% of patients. The duration of dermatitis was shorter (23.2 vs 34.2 months; P = .035) in patients with currently relevant test reactions. 
The percentage of currently relevant tests remained the same when atopic patients or dermatitis localized only on the eyelids were taken into account. In French patients, testing for EACD with the extended baseline series and personal products, also including ROATs and use tests, appears to be adequate, considering the currently relevant positive test reactions. The regular addition of an eyelid series does not seem to be necessary. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Hydropy: Python package for hydrological time series handling based on Python Pandas
NASA Astrophysics Data System (ADS)
Van Hoey, Stijn; Balemans, Sophie; Nopens, Ingmar; Seuntjens, Piet
2015-04-01
Most hydrologists are dealing with time series frequently. Reading in time series, transforming them and extracting specific periods for visualisation are part of the daily work. Spreadsheet software is used a lot for these operations, but has some major drawbacks. It is mostly not reproducible, it is prone to errors and not easy to automate, which results in repetitive work when dealing with large amounts of data. Scripting languages like R and Python on the other hand, provide flexibility, enable automation and reproducibility and, hence, increase efficiency. Python has gained popularity over the last years and currently, tools for many aspects of scientific computing are readily available in Python. An increased support in controlling and managing the dependencies between packages (e.g. the Anaconda environment) allows for a wide audience to use the huge variety of available packages. Pandas is a powerful Python package for data analysis and has a lot of functionalities related to time series. As such, the package is of special interest to hydrologists. Some other packages, focussing on hydrology (e.g. Hydroclimpy by Pierre Gerard-Marchant and Hydropy by Javier Rovegno Campos), stopped active development, mainly due to the superior implementation of Pandas. We present a (revised) version of the Hydropy package that is inspired by the aforementioned packages and builds on the power of Pandas. The main idea is to add hydrological domain knowledge to the already existing Pandas functionalities. Besides, the package attempts to make the time series handling intuitive and easy to perform, thus with a clear syntax. 
Some illustrative examples of the current implementation, starting from a Pandas DataFrame named flowdata:
- Create the object flow to work with: flow = HydroAnalysis(flowdata)
- Retrieve only the data during winter (across all years): flow.get_season('winter')
- Retrieve only the data during summer of 2010: flow.get_season('summer').get_year('2010'), which is equivalent to flow.get_year('2010').get_season('summer')
- Retrieve only the data of July and get the peak values above the 95th percentile: flow.get_season('july').get_highpeaks(above_percentile=0.95)
- Retrieve only the data between two specified days, selecting only the rising limbs: flow.get_date_range('01/10/2008', '15/2/2014').get_climbing()
- Calculate the annual sum and make a plot of it: flow.frequency_resample('A', 'sum').plot()
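The method-chaining design behind those examples can be sketched with a minimal stand-in class (hypothetical and stdlib-only; this is not the actual Hydropy API, which builds on Pandas objects):

```python
from datetime import date

class FlowSeries:
    """Minimal sketch of method chaining on a daily (date, value) series.
    Hypothetical stand-in for a HydroAnalysis-style object."""

    SEASONS = {'winter': (12, 1, 2), 'spring': (3, 4, 5),
               'summer': (6, 7, 8), 'autumn': (9, 10, 11)}

    def __init__(self, records):
        self.records = list(records)          # list of (datetime.date, float)

    def get_season(self, name):
        months = self.SEASONS[name]
        return FlowSeries((d, v) for d, v in self.records if d.month in months)

    def get_year(self, year):
        return FlowSeries((d, v) for d, v in self.records if d.year == int(year))

    def mean(self):
        vals = [v for _, v in self.records]
        return sum(vals) / len(vals)

# Two years of toy data: flow 1.0 throughout 2010, 2.0 throughout 2011
records = [(date(y, m, 15), 1.0 if y == 2010 else 2.0)
           for y in (2010, 2011) for m in range(1, 13)]
flow = FlowSeries(records)

# Because each selector returns a new FlowSeries, chained selections commute
a = flow.get_season('summer').get_year('2010').mean()
b = flow.get_year('2010').get_season('summer').mean()
print(a, b)  # 1.0 1.0
```

Returning a fresh object from every selector is what makes the chained, order-independent syntax shown in the abstract possible.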
Surface electric fields for North America during historical geomagnetic storms
Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.
2013-01-01
To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.
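The plane-wave relation behind such calculations, E(ω) = Z(ω)·H(ω) with Z(ω) = sqrt(iωμ0ρ) for a uniform half-space of resistivity ρ, can be sketched in the frequency domain (a toy single-resistivity model with a naive DFT, not the 25-region USGS surface impedances):

```python
import cmath
import math

MU0 = 4e-7 * math.pi

def halfspace_e_field(b_field, dt, resistivity):
    """Plane-wave estimate of the horizontal E field (V/m) from a magnetic
    time series (T) over a uniform half-space: E(w) = Z(w) * B(w) / mu0,
    with Z(w) = sqrt(i * w * mu0 * resistivity). Naive O(n^2) DFT sketch."""
    n = len(b_field)
    spec = [sum(b_field[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]
    out = [0j] * n
    for k in range(1, n):
        freq = k if k <= n // 2 else k - n     # signed frequency index
        w = 2 * math.pi * freq / (n * dt)
        z = cmath.sqrt(1j * w * MU0 * resistivity)
        out[k] = z * spec[k] / MU0
    return [(sum(out[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n).real
            for t in range(n)]

# Hypothetical 1 nT sinusoidal variation over resistive versus conductive ground:
# |Z| grows as sqrt(resistivity), so resistive regions see stronger E fields
n, dt = 128, 1.0
b = [1e-9 * math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
e_resistive = max(abs(v) for v in halfspace_e_field(b, dt, 1000.0))
e_conductive = max(abs(v) for v in halfspace_e_field(b, dt, 10.0))
print(e_resistive > e_conductive)  # True
```

This sqrt(ρ) scaling is the mechanism behind the paper's finding that high-resistivity regions at lower magnetic latitudes can still experience strong induced fields.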
NASA Astrophysics Data System (ADS)
Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.
2012-12-01
In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite-state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, meaning that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment, whereby the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions between hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series, along with a periodic component, can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSARM framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km2 experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. 
MSARMs showed that the dependence of current observations on past inputs, which transport models often represent as long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity of the system input (climatic variability) and/or by the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short-term (event) and longer-term (inter-event), wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool for analyzing the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
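The model class itself is easy to illustrate by forward simulation: a hidden two-state Markov chain switches the autoregressive coefficient and noise scale of the observed series. This sketch only shows the generative structure; the paper fits such models with MCMC rather than simulating them, and the parameter values below are arbitrary.

```python
import numpy as np

def simulate_msar(n, trans, phi, sigma, seed=0):
    """Simulate a two-state Markov-switching AR(1) process: a hidden chain
    s_t with transition matrix `trans` selects the AR coefficient phi[s]
    and noise scale sigma[s] of the observed series y_t."""
    rng = np.random.default_rng(seed)
    states = np.zeros(n, dtype=int)
    y = np.zeros(n)
    for t in range(1, n):
        # Draw the next hidden state from the row of the transition matrix
        states[t] = rng.choice(2, p=trans[states[t - 1]])
        s = states[t]
        # Conditionally autoregressive observation given the hidden state
        y[t] = phi[s] * y[t - 1] + sigma[s] * rng.normal()
    return y, states

trans = np.array([[0.95, 0.05],   # persistent "dry/baseflow" regime
                  [0.10, 0.90]])  # persistent "wet/event" regime
y, states = simulate_msar(500, trans, phi=[0.9, 0.3], sigma=[0.2, 1.0])
```

The persistent regimes produce exactly the kind of alternating event and inter-event "hydrological states" the abstract describes.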
Quantifying the behavior of price dynamics at opening time in stock market
NASA Astrophysics Data System (ADS)
Ochiai, Tomoshiro; Takada, Hideyuki; Nacher, Jose C.
2014-11-01
The availability of huge volumes of financial data has offered the possibility of understanding markets as complex systems characterized by several stylized facts. Here we first show that the time evolution of Japan’s Nikkei stock average index (Nikkei 225) futures follows the resistance and breaking-acceleration effects when the complete time series data are analyzed. However, in stock markets there are periods where no regular trades occur between the close of the market on one day and the next day’s open. To examine these time gaps we decompose the time series data into opening time and intermediate time. Our analysis indicates that for the intermediate time, both the resistance and the breaking-acceleration effects are still observed. However, for the opening time there are almost no resistance and breaking-acceleration effects, and volatility is consistently high. These findings highlight unique dynamic differences between stock markets and the forex market and suggest that current risk management strategies may need to be revised to address the absence of these dynamic effects at the opening time.
The potential of using Landsat time-series to extract tropical dry forest phenology
NASA Astrophysics Data System (ADS)
Zhu, X.; Helmer, E.
2016-12-01
Vegetation phenology is the timing of seasonal developmental stages in plant life cycles. Due to the persistent cloud cover in tropical regions, current studies often use high-frequency satellite data, such as AVHRR and MODIS, to detect vegetation phenology. However, the spatial resolution of these data ranges from 250 m to 1 km, which lacks sufficient spatial detail and is difficult to relate to field observations. To produce maps of phenology at a finer spatial resolution, this study explores the feasibility of using Landsat images to detect tropical forest phenology by reconstructing a high-quality, seasonal time-series of images, tested on Mona Island, Puerto Rico. First, an automatic method was applied to detect cloud and cloud shadow, and a spatial interpolator was used to recover pixels covered by clouds, shadows, and SLC-off gaps. Second, an enhanced vegetation index time-series derived from the reconstructed Landsat images was used to detect 11 phenology variables. Detected phenology is consistent with field investigations, and its spatial pattern is consistent with the rainfall distribution on the island. In addition, because phenology can be expected to correlate with forest biophysical attributes, 47 plots with field measurements of biophysical attributes were used to indirectly validate the phenology product. Results show that the phenology variables explain much of the variation in biophysical attributes. This study suggests that Landsat time-series have great potential for detecting phenology in tropical areas.
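One common phenology variable of the kind the study extracts is the start of season (SOS): the day the vegetation-index curve first rises through half of its seasonal amplitude. The synthetic EVI curve and the half-amplitude rule below are an illustrative assumption; the paper's 11 variables come from reconstructed Landsat series, not this toy curve.

```python
import numpy as np

# Synthetic annual EVI curve with a logistic green-up centered on day 120
doy = np.arange(1, 366)
evi = 0.2 + 0.4 / (1.0 + np.exp(-0.08 * (doy - 120)))

# Start of season: first day EVI crosses half of the seasonal amplitude
amplitude_half = evi.min() + 0.5 * (evi.max() - evi.min())
sos = doy[np.argmax(evi >= amplitude_half)]
```

For a logistic green-up, the half-amplitude crossing sits at the curve's inflection point, so the detected SOS lands near day 120 here.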
ERIC Educational Resources Information Center
Bureau of Naval Personnel, Washington, DC.
This module covers the relationships between current and voltage; resistance in a series circuit; how to determine the values of current, voltage, resistance, and power in resistive series circuits; the effects of source internal resistance; and an introduction to the troubleshooting of series circuits. This module is divided into five lessons:…
Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks
Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav
2017-01-01
Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed will vary depending on the desired spatio-temporal resolution. Selecting an optimal number, position and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most of the current solutions are either theoretical or simulation-based where the problems are tackled using random field theory, computational geometry or computer simulations, limiting their specificity to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach where co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show 75% of sensors are co-integrated. Using only 25% of the original nodes can generate a complete dataset within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable for spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880
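The intuition behind co-integration-based node selection can be shown with a toy example: two sensor series that share one stochastic trend are co-integrated, so a static linear fit lets one reconstruct the other with a small, bounded error. This is an illustrative analogue only; real analyses use formal tests (e.g. Engle-Granger) rather than this shortcut, and the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two temperature-like series sharing a common stochastic trend
n = 2000
trend = np.cumsum(rng.normal(size=n))                   # shared random walk
x = trend + rng.normal(scale=0.3, size=n)               # kept sensor
y = 2.0 * trend + 1.0 + rng.normal(scale=0.3, size=n)   # dropped sensor

# Static least-squares fit y ~ a*x + b, then reconstruct y from x alone
a, b = np.polyfit(x, y, 1)
y_hat = a * x + b
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

Because the non-stationary trend is common to both sensors, the reconstruction error stays bounded even though each series wanders, which is why a co-integrated subset of nodes can stand in for the full deployment.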
A Study on Carbon Fiber (Long Fiber) Reinforced Copper Matrix Composite
1990-03-30
complex may be coke phosphate, EDTA, NTA, tartrate, citrate, etc., added with a proper amount of an OP-series activating agent. It has the following property...significantly improved. Discharge with a large current is appropriate in order to reduce the electrodeposition time, so an acidic cupric sulphate solution is
78 FR 28152 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-14
... series airplanes. The existing AD currently requires repetitive inspections of the 80VU rack lower lateral fittings for damage; repetitive inspections of the 80VU rack lower central support for cracking... fittings of the 80VU rack. This proposed AD would reduce the inspection compliance time, add an inspection...
Modeling Noisy Data with Differential Equations Using Observed and Expected Matrices
ERIC Educational Resources Information Center
Deboeck, Pascal R.; Boker, Steven M.
2010-01-01
Complex intraindividual variability observed in psychology may be well described using differential equations. It is difficult, however, to apply differential equation models in psychological contexts, as time series are frequently short, poorly sampled, and have large proportions of measurement and dynamic error. Furthermore, current methods for…
NASA Astrophysics Data System (ADS)
Chu, Enhui; Gamage, Laknath; Ishitobi, Manabu; Hiraki, Eiji; Nakaoka, Mutsuo
A variety of switched-mode high-voltage DC power supplies for medical X-ray generators have been developed using voltage-fed or current-fed high-frequency transformer resonant inverters built around insulated-gate bipolar transistors (IGBTs). In general, a high-voltage, high-power X-ray generator based on a voltage-fed high-frequency inverter with a high-voltage transformer link must meet several performance requirements: (i) a short rise time of the X-ray tube voltage during the start transient; (ii) no overshoot in the tube-voltage transient response; and (iii) minimal voltage ripple in the periodic steady state under extremely wide load variations and filament-heater current fluctuations of the X-ray tube. This paper presents a lossless-inductor snubber-assisted, series-resonant, zero-current soft-switching high-frequency inverter using a diode-capacitor ladder voltage multiplier (Cockcroft-Walton circuit), implemented for a high-voltage DC X-ray power generator. The generator, which incorporates a pulse-frequency-modulated (PFM) series-resonant inverter built from IGBT power modules, operates on a zero-current soft-switching commutation scheme under both discontinuous and continuous resonant-current transition modes. The series-capacitor-compensated transformer resonant converter, with its high-frequency transformer-linked voltage-boost multiplier, can efficiently employ a selectively-changed dual-mode PFM control scheme to improve the start-transient and steady-state response characteristics; with the aid of two lossless inductor snubbers, it achieves stable zero-current soft-switching commutation over the wide range of load parameter settings set by the tube filament current. 
Simulation and experimental results demonstrate that this simple, low-cost control implementation based on selectively-changed dual-mode PFM for a high-voltage X-ray DC-DC power converter with a voltage multiplier achieves the specified voltage-tracking response: a rapid rise time and no overshoot in the start-transient tube voltage, as well as minimized steady-state ripple in the tube voltage.
Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong
2015-07-01
The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
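The feedback principle described above can be illustrated on a simpler chaotic system. In this sketch the reference signal is the known unstable fixed point of the logistic map rather than a NURBS fit of a sampled orbit, and the blending gain is an arbitrary choice, so this is an analogue of the idea, not the paper's method.

```python
import numpy as np

def control_logistic(r=3.9, k=0.9, n=200, switch_on=100):
    """Target-signal feedback on the chaotic logistic map: after step
    `switch_on`, blend the free-running output toward the target with a
    feedback term k*(x_target - f(x)), driving the system onto the
    otherwise-unstable fixed point x* = 1 - 1/r."""
    x = 0.3
    x_star = 1.0 - 1.0 / r
    traj = np.empty(n)
    for t in range(n):
        fx = r * x * (1.0 - x)                       # free-running map
        u = k * (x_star - fx) if t >= switch_on else 0.0  # feedback signal
        x = fx + u
        traj[t] = x
    return traj, x_star

traj, x_star = control_logistic()
```

Before the switch the orbit wanders chaotically; afterwards the feedback difference signal contracts it onto the target point, mirroring how the SNURBS reference signal steers the system toward a chosen orbit.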
Investigation on Law and Economics Based on Complex Network and Time Series Analysis
Yang, Jian; Qu, Zhao; Chang, Hui
2015-01-01
The research focuses on the cooperative relationship and the strategy tendency among three mutually interactive parties in financing: small enterprises, commercial banks and micro-credit companies. Complex network theory and time series analysis were applied to obtain quantitative evidence. Moreover, this paper built up a fundamental model describing the particular interaction among them through an evolutionary game. Combining the results of data analysis and the current situation, it is justifiable to put forward reasonable legislative recommendations for regulations on lending activities among small enterprises, commercial banks and micro-credit companies. The approach in this research provides a framework for constructing mathematical models and applying econometrics and evolutionary game theory to the issue of corporate financing. PMID:26076460
Earth Observing System, Conclusions and Recommendations
NASA Technical Reports Server (NTRS)
1984-01-01
The following Earth Observing Systems (E.O.S.) recommendations were suggested: (1) A program must be initiated to ensure that present time series of Earth science data are maintained and continued. (2) A data system that provides easy, integrated, and complete access to past, present, and future data must be developed as soon as possible. (3) A long-term research effort must be sustained to study and understand these time series of Earth observations. (4) The E.O.S. should be established as an information system to carry out those aspects of the above recommendations which go beyond existing and currently planned activities. (5) The scientific direction of the E.O.S. should be established and continued through an international scientific steering committee.
GIAnT - Generic InSAR Analysis Toolbox
NASA Astrophysics Data System (ADS)
Agram, P.; Jolivet, R.; Riel, B. V.; Simons, M.; Doin, M.; Lasserre, C.; Hetland, E. A.
2012-12-01
We present a computing framework for studying the spatio-temporal evolution of ground deformation from interferometric synthetic aperture radar (InSAR) data. Several open-source tools including Repeat Orbit Interferometry PACkage (ROI-PAC) and InSAR Scientific Computing Environment (ISCE) from NASA-JPL, and Delft Object-oriented Repeat Interferometric Software (DORIS), have enabled scientists to generate individual interferograms from raw radar data with relative ease. Numerous computational techniques and algorithms that reduce phase information from multiple interferograms to a deformation time-series have been developed and verified over the past decade. However, the sharing and direct comparison of products from multiple processing approaches has been hindered by: 1) the absence of simple standards for sharing of estimated time-series products, 2) the use of proprietary software tools with license restrictions, and 3) the closed-source nature of the exact implementation of many of these algorithms. We have developed this computing framework to address all of the above issues. We attempt to take the first steps towards creating a community software repository for InSAR time-series analysis. To date, we have implemented the short baseline subset algorithm (SBAS), NSBAS and multi-scale interferometric time-series (MInTS) in this framework and the associated source code is included in the GIAnT distribution. A number of the associated routines have been optimized for performance and scalability with large data sets. Some of the new features in our processing framework are: 1) the use of daily solutions from continuous GPS stations to correct for orbit errors, 2) the use of meteorological data sets to estimate the tropospheric delay screen and 3) a data-driven bootstrapping approach to estimate the uncertainties associated with estimated time-series products. 
We are currently working on incorporating tidal load corrections for individual interferograms and propagation of noise covariance models through the processing chain for robust estimation of uncertainties in the deformation estimates. We will demonstrate the ease of use of our framework with results ranging from regional scale analysis around Long Valley, CA and Parkfield, CA to continental scale analysis in Western South America. We will also present preliminary results from a new time-series approach that simultaneously estimates deformation over the complete spatial domain at all time epochs on a distributed computing platform. GIAnT has been developed entirely using open source tools and uses Python as the underlying platform. We build on the extensive numerical (NumPy) and scientific (SciPy) computing Python libraries to develop an object-oriented, flexible and modular framework for time-series InSAR applications. The toolbox is currently configured to work with outputs from ROI-PAC, ISCE and DORIS, but can easily be extended to support products from other SAR/InSAR processors. The toolbox libraries include support for hierarchical data format (HDF5) memory mapped files, parallel processing with Python's multi-processing module and support for many convex optimization solvers like CSDP, CVXOPT etc. An extensive set of routines to deal with ASCII and XML files has also been included for controlling the processing parameters.
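The core of an SBAS-style reduction is a linear inversion: each interferogram observes the phase (here, displacement) difference between two acquisition dates, and stacking the pairs gives an overdetermined system for the time series. This toy version, with made-up dates and noise-free observations, solves directly for displacements relative to the first date; production SBAS works per pixel and usually parameterizes velocities between dates.

```python
import numpy as np

dates = 5
true_disp = np.array([0.0, 1.0, 2.5, 3.0, 4.5])  # cm, first date is reference

# Small-baseline date pairs; each yields one interferometric observation
pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
G = np.zeros((len(pairs), dates - 1))  # unknowns: d1..d4 (d0 fixed at 0)
obs = np.zeros(len(pairs))
for k, (i, j) in enumerate(pairs):
    if i > 0:
        G[k, i - 1] = -1.0
    G[k, j - 1] = 1.0
    obs[k] = true_disp[j] - true_disp[i]  # interferogram: d_j - d_i

# Least-squares inversion of the stacked pair differences
est, *_ = np.linalg.lstsq(G, obs, rcond=None)
recovered = np.concatenate([[0.0], est])
```

As long as the pair graph connects all dates, the system has full rank and the displacement history is recovered; disconnected subsets of dates are the classic failure mode SBAS baselines are chosen to avoid.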
National Geomagnetism Program: Current Status & Five-Year Plan, 2006-2010
Love, Jeffrey J.
2006-01-01
Executive Summary: The U.S. Geological Survey's Geomagnetism Program serves the scientific community and the broader public by collecting and distributing magnetometer data from an array of ground-based observatories and by conducting scientific analysis on those data. Preliminary, variational time-series can be collected and distributed in near-real time, while fully calibrated, absolute time-series are distributed after processing. The data are used by the civilian and military parts of the Federal Government, by private industry, and by academia, for a wide variety of purposes of both immediately practical importance and long-term scientific interest, including space-weather diagnosis and related hazard mitigation, mapping of the magnetic field and measurement of its activity, and research on the nature of the Earth's interior and the near-Earth space environment. This document reviews the current status of the Program, in terms of its situation within the Government and within the scientific community; summarizes the Program's operations, its staffing situation, and its facilities; describes the diversity of uses of Program magnetometer data; and presents a plan for the next 5 years for enhancing the Program's data-based services, developing products, and conducting scientific research.
Pandit, Jaideep J; Tavare, Aniket
2011-07-01
It is important that a surgical list is planned to utilise as much of the scheduled time as possible while not over-running, because this can lead to cancellation of operations. We wished to assess whether, theoretically, the known duration of individual operations could be used quantitatively to predict the likely duration of the operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning was able to match the scheduled operating list times for 153 consecutive historical lists. Using receiver operating characteristic (ROC) curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same operating lists. This method uses a simple formula: the sum of individual operation times and a pooled standard deviation of these times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run was found useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% were under-booked. The formula predicted the correct outcome for 84% of operating lists. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created on an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
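The kind of formula described can be sketched in a few lines: treat the total list duration as Normal with mean equal to the sum of the individual mean durations and variance equal to the sum of the individual variances (independence assumed), then read off the probability of finishing within the scheduled time. The exact published formula and its pooled-SD estimate may differ; this is the general shape of the calculation.

```python
import math

def prob_list_finishes(op_means, op_sds, scheduled_minutes):
    """Probability that a list of operations finishes within the scheduled
    time, modelling total duration as Normal(sum of means, sum of
    variances). A sketch of the kind of formula the paper describes."""
    mu = sum(op_means)
    sd = math.sqrt(sum(s ** 2 for s in op_sds))
    z = (scheduled_minutes - mu) / sd
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Three operations (mean 60, 90, 45 min; SD 15, 20, 10 min)
# booked into a 240-minute list
p = prob_list_finishes([60, 90, 45], [15, 20, 10], 240)
```

Here the expected total is 195 min against 240 min scheduled, giving roughly a 95% chance of finishing on time; a planner could add or remove cases until this probability crosses a chosen threshold.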
NASA Astrophysics Data System (ADS)
Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.
2010-12-01
The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and client-side interface have been extensively rewritten. The Python Twisted server-side code-base has been fundamentally modified to now present waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single database model. This allows interactive web-based access to high-density (broadband @ 40Hz to strong motion @ 200Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to now incorporate a variety of User Interface (UI) improvements including standardized calendars for defining time ranges, applying on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyber infrastructure challenges we have faced while developing this application, the use-cases currently in existence, and the limitations of web-based application development.
The Current Status and Future of GNSS-Meteorology in Europe
NASA Astrophysics Data System (ADS)
Jones, J.; Guerova, G.; Dousa, J.; Dick, G.; Haan, de, S.; Pottiaux, E.; Bock, O.; Pacione, R.
2017-12-01
GNSS is a well established atmospheric observing system which can accurately sense water vapour, the most abundant greenhouse gas, accounting for 60-70% of atmospheric warming. Water vapour observations are currently under-sampled in operational meteorology, and obtaining and exploiting additional high-quality humidity observations is essential to improve severe weather forecasting and climate monitoring. Inconsistencies introduced into long-term time series by improved GNSS processing algorithms make climate trend analysis challenging. Ongoing re-processing efforts using state-of-the-art models are underway which will provide consistent time series of tropospheric data, using 15+ years of GNSS observations from over 600 stations worldwide. These datasets will enable validation of systematic biases from a range of instrumentation, improve the knowledge of climatic trends of atmospheric water vapour, and will potentially be of great benefit to global and regional NWP reanalyses and climate model simulations (e.g. IPCC AR5). COST Action ES1206 is a 4-year project, running from 2013 to 2017, which has coordinated new and improved capabilities from concurrent developments in the GNSS, meteorological and climate communities. For the first time, the synergy of multi-GNSS constellations has been used to develop new, more advanced tropospheric products, exploiting the full potential of multi-GNSS on a wide range of temporal and spatial scales - from real-time products monitoring and forecasting severe weather, to the highest quality post-processed products suitable for climate research. The Action has also promoted the use of meteorological data as an input to real-time GNSS positioning, navigation, and timing services and has stimulated knowledge and data transfer throughout Europe and beyond. 
This presentation will give an overview of COST Action ES1206 plus an overview of ground-based GNSS-meteorology in Europe in general, including current status and future opportunities.
Tests of a low-pressure switch protected by a saturating inductor
NASA Astrophysics Data System (ADS)
Lauer, E. J.; Birx, D. L.
1981-10-01
A triggered low-pressure switch was tested switching a charged capacitor across a damping resistor simulating a transformer. A series saturating inductor protected the switch from electron beam anode damage. The capacitor was 15 μF and charge voltages up to 50 kV were used. The time to current maximum was 5 to 8 μs. The current terminated at about 50 μs and voltage could be reapplied at about 100 μs.
Human Mars Lander Design for NASA's Evolvable Mars Campaign
NASA Technical Reports Server (NTRS)
Polsgrove, Tara; Chapman, Jack; Sutherlin, Steve; Taylor, Brian; Fabisinski, Leo; Collins, Tim; Cianciolo Dwyer, Alicia; Samareh, Jamshid; Robertson, Ed; Studak, Bill;
2016-01-01
Landing humans on Mars will require entry, descent, and landing capability beyond the current state of the art. Nearly twenty times more delivered payload and an order of magnitude improvement in precision landing capability will be necessary. To better assess entry, descent, and landing technology options and sensitivities to future human mission design variations, a series of design studies on human-class Mars landers has been initiated. This paper describes the results of the first design study in the series of studies to be completed in 2016 and includes configuration, trajectory and subsystem design details for a lander with Hypersonic Inflatable Aerodynamic Decelerator (HIAD) entry technology. Future design activities in this series will focus on other entry technology options.
Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn
2015-01-01
Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body point's time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point's time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point's time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters' walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body point's time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point's time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. 
Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
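The agreement statistics named in the abstract are straightforward to compute. The sketch below implements Bland-Altman bias and 95% limits of agreement between two measurement systems; the step-length numbers are hypothetical, not data from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement systems:
    bias (mean difference) and 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical step lengths (m) measured by the two systems
kinect   = [0.62, 0.64, 0.60, 0.66, 0.63]
optotrak = [0.61, 0.65, 0.60, 0.65, 0.62]
bias, (lo, hi) = bland_altman(kinect, optotrak)
```

A negligible bias with narrow limits of agreement, as reported for most gait parameters in the study, means the two systems can be used interchangeably for those measures.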
Geerse, Daphne J.; Coolen, Bert H.; Roerdink, Melvyn
2015-01-01
Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect’s 3D body point’s time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point’s time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point’s time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters’ walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman’s bias and limits of agreement. Body point’s time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point’s time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. 
Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner. PMID:26461498
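The Bland-Altman bias and limits of agreement used in this study can be computed directly from paired measurements of the two systems. The sketch below uses hypothetical step-length values (the study's actual data are not reproduced here):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two systems."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical step lengths (m) from Kinect and Optotrak for the same strides
kinect   = [0.62, 0.65, 0.59, 0.71, 0.68]
optotrak = [0.63, 0.64, 0.60, 0.70, 0.69]
bias, (lo, hi) = bland_altman(kinect, optotrak)
```

A bias near zero with narrow limits, as reported above for most gait parameters, indicates that the two systems can be used interchangeably.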
NASA Astrophysics Data System (ADS)
Lhermitte, S.; Tips, M.; Verbesselt, J.; Jonckheere, I.; Van Aardt, J.; Coppin, Pol
2005-10-01
Large-scale wildfires have direct impacts on natural ecosystems and play a major role in vegetation ecology and the carbon budget. Accurate methods for describing post-fire development of vegetation are therefore essential for the understanding and monitoring of terrestrial ecosystems. Time series analysis of satellite imagery offers the potential to quantify these parameters with spatial and temporal accuracy. The current research focuses on the potential of time series analysis of SPOT Vegetation S10 data (1999-2001) to quantify the vegetation recovery of large-scale burns detected in the framework of GBA2000. The objective of this study was to provide quantitative estimates of the spatio-temporal variation of vegetation recovery based on remote sensing indicators. Southern Africa was used as a pilot study area, given the availability of ground and satellite data. An automated technique was developed to extract consistent indicators of vegetation recovery from the SPOT-VGT time series. Reference areas were used to quantify the vegetation regrowth by means of Regeneration Indices (RI). Two kinds of recovery indicators (time-based and value-based) were tested for RIs of NDVI, SR, SAVI, NDWI, and pure band information. The effects of vegetation structure and temporal fire regime features on the recovery indicators were subsequently analyzed. Statistical analyses were conducted to assess whether the recovery indicators differed between vegetation types and depended on the timing of the burning season. Results highlighted the importance of appropriate reference areas and of correct normalization of the SPOT-VGT data.
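A Regeneration Index of the kind described above is commonly computed as the ratio of a vegetation index in the burned area to the same index in an unburned reference area. The following is a minimal sketch with hypothetical reflectance values; the exact RI formulation used in the study may differ:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def regeneration_index(burn_series, reference_series):
    """RI: burned-area index relative to an unburned reference area, computed
    per compositing period; values approaching 1 indicate full recovery."""
    return np.asarray(burn_series, float) / np.asarray(reference_series, float)

# Hypothetical 10-day NDVI composites for a burn scar and its reference area
burn = ndvi([0.30, 0.34, 0.40], [0.20, 0.18, 0.15])
ref  = ndvi([0.45, 0.46, 0.44], [0.12, 0.12, 0.11])
ri = regeneration_index(burn, ref)
```

Normalizing by a reference area removes the common seasonal signal, which is why the choice of reference areas matters so much in the results above.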
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Tong, Liqing; Liu, Kefu
2017-06-01
The purpose of impedance matching between a Marx generator and a DBD lamp is to limit the output current of the Marx generator, provide a large discharge current at ignition, and obtain fast voltage rising/falling edges with large overshoot. In this paper, different impedance matching circuits (series inductor, parallel capacitor, and series inductor combined with parallel capacitor) are analyzed. The analysis demonstrates that a series inductor can limit the Marx current; however, the discharge current is also limited. A parallel capacitor can provide a large discharge current, but the Marx current is also enlarged. A series inductor combined with a parallel capacitor takes full advantage of both components while avoiding their shortcomings, and is therefore a good solution. Experimental results match the theoretical analysis well and show that both the series inductor and the parallel capacitor improve the performance of the system, with the combination performing best. Compared with driving the DBD lamp with the Marx generator directly, an increase of 97.3% in radiant power and an increase of 59.3% in system efficiency are achieved with this matching circuit.
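The behavior of the combined L-C matching network can be illustrated with a simple linear impedance calculation, modeling the lamp as a high resistance before ignition and a low resistance during discharge. All component values below are hypothetical and chosen only to show the qualitative effect:

```python
import numpy as np

def z_matched(f, L, C, R_lamp):
    """Impedance seen by the Marx generator: series inductor L feeding the
    DBD lamp (modeled as a resistance R_lamp) in parallel with capacitor C."""
    w = 2 * np.pi * f
    z_lamp = 1 / (1 / R_lamp + 1j * w * C)  # R in parallel with C
    return 1j * w * L + z_lamp

# Hypothetical values: 20 kHz drive, 1 mH series inductor, 1 nF parallel cap
f, L, C = 20e3, 1e-3, 1e-9
z_hi = z_matched(f, L, C, R_lamp=1e6)    # before ignition: lamp ~ open circuit
z_lo = z_matched(f, L, C, R_lamp=100.0)  # during discharge: lamp ~ low resistance
```

Before ignition the network presents a high impedance to the Marx generator (limiting its current), while after ignition the parallel capacitor can supply a large discharge current into the low-resistance lamp, consistent with the qualitative argument above.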
Inter-annual variability and long term predictability of exchanges through the Strait of Gibraltar
NASA Astrophysics Data System (ADS)
Boutov, Dmitri; Peliz, Álvaro; Miranda, Pedro M. A.; Soares, Pedro M. M.; Cardoso, Rita M.; Prieto, Laura; Ruiz, Javier; García-Lafuente, Jesus
2014-03-01
Inter-annual variability of calculated barotropic (netflow) and simulated baroclinic (inflow and outflow) exchanges through the Strait of Gibraltar is analyzed and their response to the main modes of atmospheric variability is investigated. Time series of the outflow obtained by high resolution simulations and estimated from in-situ Acoustic Doppler Current Profiler (ADCP) current measurements are compared. The time coefficients (TC) of the leading empirical orthogonal function (EOF) modes that describe zonal atmospheric circulation in the vicinity of the Strait (1st and 3rd of Sea-Level Pressure (SLP) and 1st of the wind) show significant covariance with the inflow and outflow. Based on these analyses, a regression model between these SLP TCs and the outflow of the Mediterranean Water was developed. This regression outflow time series was compared with estimates based on current meter observations, and the predictability and reconstruction of past exchange variability based on atmospheric pressure fields are discussed. The simple regression model reproduces the outflow evolution reasonably well, with the exception of the year 2008, which appears anomalous and for which no physical explanation is yet available. The exchange time series show a reduced inter-annual variability (less than 1%, 2.6% and 3.1% of total 2-day variability, for netflow, inflow and outflow, respectively). From a statistical point of view no clear long-term tendencies were revealed. Anomalously high baroclinic fluxes are reported for the years 2000-2001, coincident with a strong impact on the Alboran Sea ecosystem. The origin of the anomalous flow is associated with a strong negative anomaly (~ - 9 hPa) in atmospheric pressure fields centered north of the Iberian Peninsula and extending over the central Atlantic, favoring increased zonal circulation in winter 2000/2001. These low pressure fields forced intense and persistent westerly winds in the Gulf of Cadiz-Alboran system. 
The signal of this anomaly is also seen in time coefficients of the most significant EOF modes. The predictability of the exchanges for future climate is discussed.
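The EOF-plus-regression procedure described above can be sketched numerically: derive time coefficients from an SLP anomaly field via SVD, then regress the outflow on the selected modes. The data below are synthetic placeholders, not the study's fields:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly SLP anomaly field: (n_months, n_gridpoints)
n_t, n_x = 240, 50
slp = rng.standard_normal((n_t, n_x))

# EOF analysis via SVD of the centered anomaly matrix: U*S gives the TCs
slp -= slp.mean(axis=0)
u, s, vt = np.linalg.svd(slp, full_matrices=False)
tc = u * s                      # columns: time coefficients of EOF modes 1, 2, ...

# Regress a (synthetic) outflow series on the 1st and 3rd SLP TCs,
# mirroring the modes found significant in the study
outflow = 0.8 * tc[:, 0] - 0.3 * tc[:, 2] + 0.1 * rng.standard_normal(n_t)
X = np.column_stack([np.ones(n_t), tc[:, 0], tc[:, 2]])
coef, *_ = np.linalg.lstsq(X, outflow, rcond=None)
reconstructed = X @ coef
```

Because EOF time coefficients are mutually orthogonal, the regression coefficients of the individual modes can be estimated independently and the fitted series can be used to reconstruct past exchange variability from pressure fields alone.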
NASA Astrophysics Data System (ADS)
Luceri, V.; Sciarretta, C.; Bianco, G.
2012-12-01
The redistribution of mass within the Earth system induces changes in the Earth's gravity field. In particular, the second-degree geopotential coefficients reflect the behaviour of the Earth's inertia tensor of order 2, describing the main mass variations of our planet that impact the EOPs. Thanks to the long record of accurate and continuous laser ranging observations to Lageos and other geodetic satellites, SLR is the only current space technique capable of monitoring the long-term variability of the Earth's gravity field with adequate accuracy. Time series of low-degree geopotential coefficients are estimated from our analysis of SLR data (spanning more than 25 years) from several geodetic satellites in order to detect trends and periodic variations related to tidal effects and atmospheric/oceanic mass variations. This study focuses on the variations of the second-degree Stokes coefficients related to the Earth's principal figure axis and oblateness: C21, S21 and C20. In turn, surface mass load variations induce excitations in the EOPs that are proportional to the same second-degree coefficients. The time series of direct estimates of the low-degree geopotential and those derived from the EOP excitation functions are compared and presented together with their time- and frequency-domain analysis.
A 10-kW series resonant converter design, transistor characterization, and base-drive optimization
NASA Technical Reports Server (NTRS)
Robson, R. R.; Hancock, D. J.
1982-01-01
The development, components, and performance of a transistor-based 10 kW series resonant converter for use in resonant circuits in space applications are described. The transistors serve to switch the converter current, which has a half-sinusoid waveform when the transistor is in saturation. The goal of the program was to handle an input voltage range of 230-270 Vdc, an output voltage range of 200-500 Vdc, and a current limit range of 0-20 A. Testing procedures for the D60T and D7ST transistors are outlined and base drive waveforms are presented. The total device dissipation was minimized and found to be independent of the regenerative feedback ratio at lower current levels. Dissipation was held to within 10% and rise times were found to be acceptable. The finished unit displayed 91% efficiency at the full power level of 500 V and 20 A, and 93.7% at 500 V and 10 A.
NASA Astrophysics Data System (ADS)
Cuansing, Eduardo C.; Liang, Gengchiau
2011-10-01
Time-dependent nonequilibrium Green's functions are used to study electron transport properties in a device consisting of two linear chain leads and a time-dependent interlead coupling that is switched on non-adiabatically. We derive a numerically exact expression for the particle current and examine its characteristics as it evolves in time from the transient regime to the long-time steady-state regime. We find that just after switch-on, the current initially overshoots the expected long-time steady-state value, oscillates and decays as a power law, and eventually settles to a steady-state value consistent with the value calculated using the Landauer formula. The power-law parameters depend on the values of the applied bias voltage, the strength of the couplings, and the speed of the switch-on. In particular, the oscillating transient current persists longer for lower bias voltages. Furthermore, the power-law decay of the current suggests an equivalent series resistor-inductor-capacitor circuit in which all of the components have time-dependent properties. Such dynamical resistive, inductive, and capacitive influences are generic in nano-circuits where dynamical switches are incorporated. We also examine the characteristics of the dynamical current in a nano-oscillator modeled by introducing a sinusoidally modulated interlead coupling between the two leads. We find that the current does not strictly follow the sinusoidal form of the coupling. In particular, the maximum current does not occur at the times when the leads are exactly aligned. Instead, the times when the maximum current occurs depend on the values of the bias potential, the nearest-neighbor coupling, and the interlead coupling.
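The classical analogue invoked above, a series RLC circuit switched on suddenly, shows the same qualitative signature of overshoot followed by oscillatory decay (exponential here, whereas the quantum transient decays as a power law). A minimal sketch with hypothetical component values:

```python
import numpy as np

# Step response of an underdamped series RLC circuit (illustrative analogy only;
# the quantum transient in the abstract decays as a power law, not exponentially)
R, L, C, V = 1.0, 1e-3, 1e-6, 1.0     # hypothetical component values
alpha = R / (2 * L)                   # damping rate
w0 = 1 / np.sqrt(L * C)               # undamped resonance frequency
wd = np.sqrt(w0**2 - alpha**2)        # damped oscillation frequency

t = np.linspace(0, 0.02, 2000)
i_t = (V / (wd * L)) * np.exp(-alpha * t) * np.sin(wd * t)  # circuit current
```

The current overshoots, rings at the damped resonance frequency, and settles, mirroring the transient-to-steady-state evolution computed exactly for the quantum device.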
Predicting disease progression from short biomarker series using expert advice algorithm
NASA Astrophysics Data System (ADS)
Morino, Kai; Hirata, Yoshito; Tomioka, Ryota; Kashima, Hisashi; Yamanishi, Kenji; Hayashi, Norihiro; Egawa, Shin; Aihara, Kazuyuki
2015-05-01
Well-trained clinicians may be able to provide diagnosis and prognosis from very short biomarker series using information and experience gained from previous patients. Although mathematical methods can potentially help clinicians to predict the progression of diseases, there is no method so far that estimates the patient state from very short time-series of a biomarker for making diagnosis and/or prognosis by employing the information of previous patients. Here, we propose a mathematical framework for integrating other patients' datasets to infer and predict the state of the disease in the current patient based on their short history. We extend a machine-learning framework of "prediction with expert advice" to deal with unstable dynamics. We construct this mathematical framework by combining expert advice with a mathematical model of prostate cancer. Our model predicted well the individual biomarker series of patients with prostate cancer that are used as clinical samples.
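The generic "prediction with expert advice" framework referenced above can be sketched with an exponentially weighted average forecaster: each expert (here, hypothetically, a trajectory from a previous patient) is weighted by its cumulative loss. This is the textbook form only; the paper's method additionally incorporates a prostate cancer model and handles unstable dynamics:

```python
import numpy as np

def weighted_forecast(expert_preds, weights):
    """Aggregate the experts' predictions by their current weights."""
    return np.dot(weights, expert_preds) / weights.sum()

def update_weights(weights, expert_preds, outcome, eta=2.0):
    """Exponentially penalize each expert by its squared prediction loss."""
    losses = (np.asarray(expert_preds) - outcome) ** 2
    return weights * np.exp(-eta * losses)

# Three hypothetical "experts" (e.g., biomarker courses of previous patients)
experts = np.array([[1.0, 1.2, 1.5, 1.9],   # expert 0: rising biomarker
                    [1.0, 1.0, 1.1, 1.0],   # expert 1: stable
                    [1.0, 0.8, 0.6, 0.5]])  # expert 2: falling
truth = np.array([1.0, 1.1, 1.4, 1.8])      # new patient's short series

w = np.ones(3)
preds = []
for t in range(truth.size):
    preds.append(weighted_forecast(experts[:, t], w))
    w = update_weights(w, experts[:, t], truth[t])
```

After only a few observations the weights concentrate on the expert whose history best matches the current patient, which is how a short series can still support a prognosis.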
A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series
NASA Astrophysics Data System (ADS)
Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team
2011-01-01
In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal, and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
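For Gaussian prior PDFs the MAP estimate of a linear co-trending model has a closed form: it is the least squares solution regularized toward the prior mean. The sketch below uses hypothetical basis vectors and prior values, not the actual PDC basis or star statistics:

```python
import numpy as np

def map_fit(A, y, mu, Sigma, sigma_n=1.0):
    """Closed-form MAP coefficients for a linear model y ~ A w with a
    Gaussian prior w ~ N(mu, Sigma) and observation noise variance sigma_n**2."""
    Sinv = np.linalg.inv(Sigma)
    lhs = A.T @ A / sigma_n**2 + Sinv
    rhs = A.T @ y / sigma_n**2 + Sinv @ mu
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(1)
n = 200
# Hypothetical co-trending basis: a linear drift and an oscillatory systematic
A = np.column_stack([np.linspace(0, 1, n), np.sin(np.linspace(0, 20, n))])
y = A @ np.array([2.0, 0.5]) + 0.05 * rng.standard_normal(n)

# Prior centered on fit parameters observed in quiet, highly-correlated stars
mu, Sigma = np.array([1.8, 0.4]), np.diag([0.5, 0.5])
w = map_fit(A, y, mu, Sigma, sigma_n=0.05)
```

When the data strongly constrain a star, the result approaches the LS fit; for noisy or intrinsically variable stars, the prior term dominates and "brakes" runaway fitting, exactly the behavior motivated above.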
ERIC Educational Resources Information Center
Mason, Emily
2010-01-01
Research investigating music textbook series is limited and has primarily focused on series no longer in publication, on two grade levels, and/or on limited cultures. The purpose of this study is to examine what countries are and have been represented in current music textbook series. Additional questions in the study pertain to frequency and…
Optimizing Use of Water Management Systems during Changes of Hydrological Conditions
NASA Astrophysics Data System (ADS)
Výleta, Roman; Škrinár, Andrej; Danáčová, Michaela; Valent, Peter
2017-10-01
When designing water management systems and their components, there is a need for more detailed research on the hydrological conditions of the river basin whose runoff is the main source of water in the reservoir. Over the lifetime of a water management system, the hydrological time series that served as the input for the design of the system components are never repeated in the same form. The design assumes the observed time series to be representative for the period of the system's use. This is, however, a rather unrealistic assumption, because the hydrological past will not be exactly repeated over the design lifetime. When designing water management systems, specialists may therefore occasionally face undersized or oversized capacity designs, or wrong specifications of the management rules, which may lead to non-optimal use. It is therefore necessary to establish a comprehensive approach to simulating fluctuations in interannual runoff (taking into account current dry and wet periods) by means of stochastic modelling techniques in water management practice. The paper deals with the methodological procedure of modelling mean monthly flows using the stochastic Thomas-Fiering model, modified by the Wilson-Hilferty transformation of the independent random number. This transformation is usually applied in the event of significant asymmetry (skewness) in the observed time series. The methodological procedure was applied to data acquired at the gauging station of Horné Orešany on the Parná Stream. Observed mean monthly flows for the period 1.11.1980 - 31.10.2012 served as the model input. After estimating the model parameters and the Wilson-Hilferty transformation parameters, synthetic time series of mean monthly flows were simulated. These were compared with the observed hydrological time series using basic statistical characteristics (e.g. 
mean, standard deviation and skewness) to test the quality of the model simulation. The synthetic hydrological series of monthly flows had the same statistical properties as the time series observed in the past. The compiled model was able to take into account the diversity of extreme hydrological situations in the form of synthetic series of mean monthly flows, confirming the occurrence of sets of flows that could occur in the future. The results of stochastic modelling in the form of synthetic time series of mean monthly flows, which take into account the seasonal fluctuation of runoff within the year, could be applicable in engineering hydrology (e.g. for optimal use of an existing water management system in connection with reassessment of the economic risks of the system).
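A minimal sketch of the Thomas-Fiering monthly model with a Wilson-Hilferty transformation of the standard normal deviate follows. The monthly statistics below are hypothetical, not the Horné Orešany values:

```python
import numpy as np

def wilson_hilferty(t, skew):
    """Transform a standard normal deviate t into a skewed (Pearson III) deviate."""
    if abs(skew) < 1e-6:
        return t
    return (2 / skew) * (1 + skew * t / 6 - skew**2 / 36) ** 3 - 2 / skew

def thomas_fiering(qbar, s, r, skew, n_years, seed=0):
    """Synthetic monthly flows: q[j+1] = qbar[j+1] + b_j (q[j] - qbar[j]) + noise,
    with b_j = r_j * s[j+1] / s[j] and skewed noise via Wilson-Hilferty."""
    rng = np.random.default_rng(seed)
    q = [qbar[0]]
    for k in range(12 * n_years - 1):
        j, jn = k % 12, (k + 1) % 12
        b = r[j] * s[jn] / s[j]
        eps = wilson_hilferty(rng.standard_normal(), skew[jn])
        q.append(max(qbar[jn] + b * (q[-1] - qbar[j])
                     + eps * s[jn] * np.sqrt(1 - r[j] ** 2), 0.0))
    return np.array(q)

# Hypothetical monthly statistics: mean flow (m3/s), std, lag-1 r, skewness
qbar = np.array([2.0, 2.5, 3.5, 4.0, 3.0, 2.0, 1.2, 0.8, 0.7, 1.0, 1.5, 1.8])
s    = 0.4 * qbar
r    = np.full(12, 0.5)
skew = np.full(12, 1.0)
flows = thomas_fiering(qbar, s, r, skew, n_years=30)
```

Because the model is parameterized per calendar month, the synthetic series reproduces the seasonal fluctuation of runoff within the year while generating new interannual sequences of dry and wet periods.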
Methods and apparatus for controlling respective load currents of multiple series-connected loads
Datta, Michael; Lys, Ihor
2014-05-27
A lighting apparatus (100) includes one or more first LEDs (202) for generating a first spectrum of radiation (503), and one or more second LEDs (204) for generating a second, different spectrum of radiation (505). The first and second LEDs are electrically connected in series between a first node (516A) and a second node (516B), between which a series current (550) flows with the application of an operating voltage (516) across the nodes. A controllable current path (518) is connected in parallel with one or both of the first and second LEDs so as to at least partially divert the series current, such that a first current (552) through the first LED(s) and a second current (554) through the second LED(s) differ. Such current diversion techniques may be employed to compensate for shifts in the color or color temperature of the generated light during thermal transients, which arise from the different temperature-dependent current-to-flux relationships of different types of LEDs.
Online Time Series Analysis of Land Products over Asia Monsoon Region via Giovanni
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina
2011-01-01
Time series analysis is critical to the study of land cover/land use changes and climate. Time series studies at local-to-regional scales require data at higher spatial resolution, such as 1 km or less. MODIS land products at 250 m to 1 km resolution enable such studies. However, these MODIS land data files are distributed in 10°x10° tiles, due to large data volumes. Conducting a time series study requires downloading all tiles that include the study area for the time period of interest, and mosaicking the tiles spatially. This can be an extremely time-consuming process. In support of the Monsoon Asia Integrated Regional Study (MAIRS) program, NASA GES DISC (Goddard Earth Sciences Data and Information Services Center) has processed MODIS land products at 1 km resolution over the Asia monsoon region (0°-60°N, 60°-150°E) with a common data structure and format. The processed data have been integrated into the Giovanni system (Goddard Interactive Online Visualization ANd aNalysis Infrastructure), which enables users to explore, analyze, and download data over an area and time period of interest easily. Currently, the following regional MODIS land products are available in Giovanni: 8-day 1 km land surface temperature and active fire, monthly 1 km vegetation index, and yearly 0.05°, 500 m land cover types. More data will be added in the near future. By combining atmospheric and oceanic data products in the Giovanni system, it is possible to further analyze environmental and climate changes associated with the land, ocean, and atmosphere. This presentation demonstrates exploring land products in the Giovanni system with sample case scenarios.
What does the structure of its visibility graph tell us about the nature of the time series?
NASA Astrophysics Data System (ADS)
Franke, Jasper G.; Donner, Reik V.
2017-04-01
Visibility graphs are a recently introduced method to construct complex network representations from univariate time series in order to study their dynamical characteristics [1]. In recent years, this approach has been successfully applied to a considerable variety of geoscientific research questions and data sets, including non-trivial temporal patterns in complex earthquake catalogs [2] and time-reversibility in climate time series [3]. It has been shown that several characteristic features of the resulting networks differ between stochastic and deterministic (possibly chaotic) processes, which is, however, relatively hard to exploit in real-world applications. In this study, we propose two new measures related to the network complexity of visibility graphs constructed from time series, one being a special type of network entropy [4] and the other a recently introduced measure of the heterogeneity of the network's degree distribution [5]. For paradigmatic model systems exhibiting bifurcation sequences between regular and chaotic dynamics, both properties clearly trace the transitions between the two types of regimes and exhibit marked quantitative differences for regular and chaotic dynamics. Moreover, for dynamical systems with a small amount of additive noise, the considered properties show gradual changes prior to the bifurcation point. This finding appears closely related to the loss of stability of the current state, which is known to lead to critical slowing down as the transition point is approached. In this spirit, both visibility graph characteristics provide alternative tracers of dynamical early warning signals consistent with classical indicators. 
Our results demonstrate that measures of visibility graph complexity (i) provide a potentially useful means of tracing changes in the dynamical patterns encoded in a univariate time series that originate from increasing autocorrelation, and (ii) allow one to systematically distinguish regular from deterministic-chaotic dynamics. We demonstrate the application of our method for different model systems as well as selected paleoclimate time series from the North Atlantic region. Notably, visibility graph based methods are particularly suited for studying the latter type of geoscientific data, since they do not impose intrinsic restrictions or assumptions on the nature of the time series under investigation in terms of noise process, linearity or sampling homogeneity. [1] Lacasa, Lucas, et al. "From time series to complex networks: The visibility graph." Proceedings of the National Academy of Sciences 105.13 (2008): 4972-4975. [2] Telesca, Luciano, and Michele Lovallo. "Analysis of seismic sequences by using the method of visibility graph." EPL (Europhysics Letters) 97.5 (2012): 50002. [3] Donges, Jonathan F., Reik V. Donner, and Jürgen Kurths. "Testing time series irreversibility using complex network methods." EPL (Europhysics Letters) 102.1 (2013): 10004. [4] Small, Michael. "Complex networks from time series: capturing dynamics." 2013 IEEE International Symposium on Circuits and Systems (ISCAS2013), Beijing (2013): 2509-2512. [5] Jacob, Rinku, K.P. Harikrishnan, Ranjeev Misra, and G. Ambika. "Measure for degree heterogeneity in complex networks and its application to recurrence network analysis." arXiv preprint 1605.06607 (2016).
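The natural visibility graph of Lacasa et al. [1] links two samples whenever every intermediate sample lies below the straight line of sight between them. The sketch below constructs such a graph and computes the Shannon entropy of its degree distribution as one simple complexity proxy (not necessarily the specific entropy measure of [4]):

```python
import numpy as np
from itertools import combinations

def visibility_graph(y):
    """Natural visibility graph: nodes i<j are linked if every intermediate
    sample lies strictly below the line of sight from (i, y_i) to (j, y_j)."""
    y = np.asarray(y, float)
    edges = set()
    for i, j in combinations(range(y.size), 2):
        ks = np.arange(i + 1, j)
        sight = y[j] + (y[i] - y[j]) * (j - ks) / (j - i)
        if np.all(y[ks] < sight):       # vacuously true for adjacent samples
            edges.add((i, j))
    return edges

def degree_entropy(edges, n):
    """Shannon entropy of the degree distribution (a simple complexity proxy)."""
    deg = np.zeros(n, int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    _, counts = np.unique(deg, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

y = [3.0, 1.0, 2.0, 0.5, 4.0, 1.5]      # toy series
E = visibility_graph(y)
H = degree_entropy(E, len(y))
```

Because the construction uses only ordering and relative magnitudes, it imposes no assumptions about noise process, linearity or sampling homogeneity, which is the property highlighted above for paleoclimate applications.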
In this study, the concept of scale analysis is applied to evaluate two state-of-science meteorological models, namely MM5 and RAMS3b, currently being used to drive regional-scale air quality models. To this end, seasonal time series of observations and predictions for temperatur...
77 FR 65810 - Airworthiness Directives; Bombardier, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-31
... directive (AD) for certain Bombardier, Inc. Model CL-600-2B19 (Regional Jet Series 100 & 440) airplanes. That AD currently requires a one-time inspection of the shafts of the main landing gear (MLG) side... MLG side-brace fitting, and replacing the side-brace fitting shaft with the re-designed side-brace...
77 FR 34870 - Airworthiness Directives; Bombardier, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-12
... Bombardier, Inc. Model CL-600-2B19 (Regional Jet Series 100 & 440) airplanes. The existing AD currently requires a one-time inspection of the shafts of the main landing gear (MLG) side-brace fittings to detect...-brace fitting and replacing the side-brace fitting shaft with the re-designed side-brace fitting shaft...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-02
... the SIPP, which is a household-based survey designed as a continuous series of national panels. New... the panel. The core is supplemented with questions designed to address specific needs, such as... be measured over time. The 2008 panel is currently scheduled for approximately 6 years and will...
Women of Spirit: Leaders in the Counseling Profession
ERIC Educational Resources Information Center
Black, Linda L.; Magnuson, Sandy
2005-01-01
The counseling literature documents the profession's inception and growth in a variety of ways. Current examples include Counseling Today's "ACA Turns 50" feature series and "Passing the Tradition: ACES Presidents 1940-97" (Sheeley, 1997). Professional counselors often identify the late 1940s and early 1950s as the time of the profession's…
The Status of Children, Youth and Families 1979.
ERIC Educational Resources Information Center
Administration for Children, Youth, and Families (DHEW), Washington, DC.
This publication, third in a biennial series of reports, focuses on the status of children, youth and families in 1979. Chapter I discusses historical and current demographic and economic trends that have occurred in the characteristics and circumstances of the United States population. Chapter II attempts to capture the factor of time in human…
An Evaluation of a Shared Leadership Training Program
ERIC Educational Resources Information Center
Allen, Lavonda Ann
2010-01-01
The purpose of the quantitative quasi-experimental equivalent time series study was to evaluate the effectiveness of a work based shared leadership training program in a less than 200 bed hospital in rural, south-central United States. Nursing shortages and the current emphasis on quality are factors that make recruitment and retention of nurses…
State Actions to Advance Teacher Evaluation. Educator Effectiveness Series
ERIC Educational Resources Information Center
Gandha, Tysza; Baxter, Andy
2016-01-01
This report offers state leaders key areas for action to continue progress in implementing evaluation systems, even as federal policies on teacher evaluation relax state requirements. The Southern Regional Education Board (SREB) offers its current best thinking for how state agencies can make the smartest use of funds, time and partners to refine…
Evaluation as Story: The Narrative Quality of Educational Evaluation.
ERIC Educational Resources Information Center
Wachtman, Edward L.
The author presents his opinion that educational evaluation has much similarity to the nonfiction narrative (defined as a series of events ordered in time), particularly as it relates a current situation to future possibilities. He refers to Stake's statement that evaluation is concerned not only with outcomes but also with antecedents and with…
So Many Chemicals, So Little Time... Evolution of ...
Current testing is limited by traditional testing models and regulatory systems. An overview is given of high throughput screening approaches to provide broader chemical and biological coverage, toxicokinetics and molecular pathway data and tools to facilitate utilization for regulatory application. Presentation at the NCSU Toxicology lecture series on the Evolution of Computational Toxicology
The Political Economy of Schooling. ESA845, The Economy of Schooling.
ERIC Educational Resources Information Center
Freeland, John
This volume, part of a series of monographs that explore the relationship between the economy and schooling, analyzes the economic influences contributing to current pressures for changes in secondary schooling in Australian society with particular attention to the long-term structural collapse of the full-time teenage labor market. After a brief…
Computer-Aided Discovery Tools for Volcano Deformation Studies with InSAR and GPS
NASA Astrophysics Data System (ADS)
Pankratius, V.; Pilewskie, J.; Rude, C. M.; Li, J. D.; Gowanlock, M.; Bechor, N.; Herring, T.; Wauthier, C.
2016-12-01
We present a Computer-Aided Discovery approach that facilitates the cloud-scalable fusion of different data sources, such as GPS time series and Interferometric Synthetic Aperture Radar (InSAR), for the purpose of identifying the expansion centers and deformation styles of volcanoes. The tools currently developed at MIT allow the definition of alternatives for data processing pipelines that use various analysis algorithms. The Computer-Aided Discovery system automatically generates algorithmic and parameter variants to help researchers explore multidimensional data processing search spaces efficiently. We present first application examples of this technique using GPS data on volcanoes on the Aleutian Islands and work in progress on combined GPS and InSAR data in Hawaii. In the model search context, we also illustrate work in progress combining time series Principal Component Analysis with InSAR augmentation to constrain the space of possible model explanations on current empirical data sets and achieve a better identification of deformation patterns. This work is supported by NASA AIST-NNX15AG84G and NSF ACI-1442997 (PI: V. Pankratius).
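Principal Component Analysis of station time series, as mentioned above for constraining deformation models, can be sketched with SVD: the leading component isolates a deformation signal shared across stations. The data below are synthetic, with a hypothetical inflation ramp and station geometry:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily displacement series for 8 GPS stations sharing one
# common deformation signal (hypothetical geometry and amplitudes)
n_days, n_sta = 365, 8
t = np.arange(n_days) / 365.0
signal = np.where(t > 0.5, t - 0.5, 0.0)        # inflation starts mid-year
gains = rng.uniform(0.5, 2.0, n_sta)            # per-station response amplitude
X = np.outer(signal, gains) + 0.01 * rng.standard_normal((n_days, n_sta))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance fraction per component
pc1 = u[:, 0] * s[0]                            # leading temporal pattern
```

In a discovery-pipeline setting, the number of retained components, the detrending scheme, and the centering choice are exactly the kinds of parameters a search over algorithmic variants would sweep automatically.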
Coral radiocarbon constraints on the source of the Indonesian throughflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, M.D.; Schrag, D.P.; Kashgarian, M.
1997-06-01
Radiocarbon variability in Porites spp. corals from Guam and the Makassar Strait (Indonesian Seaway) was used to identify the source waters contributing to the Indonesian throughflow. Time series with bimonthly resolution were constructed using accelerator mass spectrometry. The seasonal variability ranges from 15 to 60‰, with large interannual variability. Δ¹⁴C values from Indonesia and Guam have a nearly identical range. Annual mean Δ¹⁴C values from Indonesia are 50 to 60‰ higher than in corals from Canton in the South Equatorial Current [Druffel, 1987]. These observations support a year-round North Pacific source for the Indonesian throughflow and imply negligible contribution by South Equatorial Current water. The large seasonality in Δ¹⁴C values from both sites emphasizes the dynamic behavior of radiocarbon in the surface ocean and suggests that Δ¹⁴C time series of similar resolution can help constrain seasonal and interannual changes in ocean circulation in the Pacific over the last several decades. © 1997 American Geophysical Union
Cobb, Stephen C; Joshi, Mukta N; Pomeroy, Robin L
2016-12-01
In-vitro and invasive in-vivo studies have reported relatively independent motion in the medial and lateral forefoot segments during gait. However, most current surface-based models have not defined medial and lateral forefoot or midfoot segments. The purpose of the current study was to determine the reliability of a 7-segment foot model that includes medial and lateral midfoot and forefoot segments during walking gait. Three-dimensional positions of marker clusters located on the leg and 6 foot segments were tracked as 10 participants completed 5 walking trials. To examine the reliability of the foot model, coefficients of multiple correlation (CMC) were calculated across the trials for each participant. Three-dimensional stance time series and range of motion (ROM) during stance were also calculated for each functional articulation. CMCs for all of the functional articulations were ≥ 0.80. Overall, the rearfoot complex (leg-calcaneus segments) was the most reliable articulation and the medial midfoot complex (calcaneus-navicular segments) was the least reliable. With respect to ROM, reliability was greatest for plantarflexion/dorsiflexion and least for abduction/adduction. Further, the stance ROM and time-series pattern results of the current study were generally consistent with those of previous invasive in-vivo studies that assessed actual bone motion.
NASA Astrophysics Data System (ADS)
Balasis, Georgios; Daglis, Ioannis A.; Papadimitriou, Constantinos; Melis, Nikolaos; Giannakis, Omiros; Kontoes, Charalampos
2016-04-01
The HellENIc GeoMagnetic Array (ENIGMA) is a network of 3 ground-based magnetometer stations in the areas of Trikala, Attiki and Lakonia in Greece that provides measurements for the study of geomagnetic pulsations resulting from solar wind - magnetosphere coupling. The ENIGMA magnetometer array enables effective remote sensing of geospace dynamics and the study of space weather effects on the ground (i.e., Geomagnetically Induced Currents - GIC). ENIGMA contributes data to SuperMAG, a worldwide collaboration of organizations and national agencies that currently operate more than 300 ground-based magnetometers. ENIGMA is currently being extended and upgraded with financial support from the national KRIPIS project and the European Commission's BEYOND project. In particular, BEYOND is an FP7 REGPOT project that aims to maintain and expand the existing state-of-the-art interdisciplinary research potential by building a Centre of Excellence for Earth Observation based monitoring of Natural Disasters in south-eastern Europe, with a prospect of extending its access range to the wider Mediterranean region through integrated cooperation with twinning organizations. This study explores the applicability and effectiveness of a variety of computable entropy measures applied to the ENIGMA time series in order to investigate differences in dynamical complexity between pre-storm activity and magnetic storms.
Sinusoidal voltage protocols for rapid characterisation of ion channel kinetics.
Beattie, Kylie A; Hill, Adam P; Bardenet, Rémi; Cui, Yi; Vandenberg, Jamie I; Gavaghan, David J; de Boer, Teun P; Mirams, Gary R
2018-03-24
Ion current kinetics are commonly represented by current-voltage relationships, time constant-voltage relationships and subsequently mathematical models fitted to these. These experiments take substantial time, which means they are rarely performed in the same cell. Rather than traditional square-wave voltage clamps, we fitted a model to the current evoked by a novel sum-of-sinusoids voltage clamp that was only 8 s long. Short protocols that can be performed multiple times within a single cell will offer many new opportunities to measure how ion current kinetics are affected by changing conditions. The new model predicts the current under traditional square-wave protocols well, with better predictions of underlying currents than literature models. The current under a novel physiologically relevant series of action potential clamps is predicted extremely well. The short sinusoidal protocols allow a model to be fully fitted to individual cells, allowing us to examine cell-cell variability in current kinetics for the first time. Understanding the roles of ion currents is crucial to predict the action of pharmaceuticals and mutations in different scenarios, and thereby to guide clinical interventions in the heart, brain and other electrophysiological systems. Our ability to predict how ion currents contribute to cellular electrophysiology is in turn critically dependent on our characterisation of ion channel kinetics - the voltage-dependent rates of transition between open, closed and inactivated channel states. We present a new method for rapidly exploring and characterising ion channel kinetics, applying it to the hERG potassium channel as an example, with the aim of generating a quantitatively predictive representation of the ion current. We fitted a mathematical model to currents evoked by a novel 8 second sinusoidal voltage clamp in CHO cells overexpressing hERG1a. 
The model was then used to predict over 5 minutes of recordings in the same cell in response to further protocols: a series of traditional square step voltage clamps, and also a novel voltage clamp comprising a collection of physiologically relevant action potentials. We demonstrate that we can make predictive cell-specific models that outperform the use of averaged data from a number of different cells, and thereby examine which changes in gating are responsible for cell-cell variability in current kinetics. Our technique allows rapid collection of consistent and high quality data, from single cells, and produces more predictive mathematical ion channel models than traditional approaches. © 2018 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
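The sum-of-sinusoids clamp described above is straightforward to sketch numerically. The amplitudes, frequencies, phases, and 1 kHz sampling below are illustrative placeholders, not the published protocol's parameters:

```python
import math

def sum_of_sinusoids(t, components, holding_mv=-80.0):
    """Clamp voltage (mV) at time t (s): a holding potential plus a sum of
    sinusoids. `components` holds (amplitude_mV, frequency_Hz, phase_rad)
    tuples; the values used below are illustrative, not the paper's."""
    return holding_mv + sum(a * math.sin(2.0 * math.pi * f * t + p)
                            for a, f, p in components)

# An illustrative 8 s protocol sampled at 1 kHz
components = [(54.0, 0.5, 0.0), (26.0, 1.3, 0.0), (10.0, 3.1, 0.0)]
protocol = [sum_of_sinusoids(i * 0.001, components) for i in range(8000)]
```

Superimposing several incommensurate frequencies is what lets one short recording probe voltage-dependent kinetics across a broad dynamic range, which is the design rationale the abstract describes.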
Why didn't Box-Jenkins win (again)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pack, D.J.; Downing, D.J.
This paper focuses on the forecasting performance of the Box-Jenkins methodology applied to the 111 time series of the Makridakis competition. It considers the influence of the following factors: (1) time series length, (2) time-series information (autocorrelation) content, (3) time-series outliers or structural changes, (4) averaging results over time series, and (5) forecast time origin choice. It is found that the 111 time series contain substantial numbers of very short series, series with obvious structural change, and series whose histories are relatively uninformative. If these series are typical of those that one must face in practice, the real message of the competition is that univariate time series extrapolations will frequently fail regardless of the methodology employed to produce them.
Studies on the wintertime current structure and T-S fine-structure in the Taiwan Strait
NASA Astrophysics Data System (ADS)
Hu, Jianyu; Fu, Zilang; Wu, Lianxing
1990-12-01
A cruise through the western sea area of the Taiwan Strait was carried out by the R/V Dong Fang Hong in December 1987. Eight anchored and 10 unanchored stations were set up. Over 25 time-series current observations were made at each station, and CTD (conductivity-temperature-depth) measurements were made at 5 anchored and 10 unanchored stations. Based on the measured data, fine structures and step-like vertical structures of temperature and salinity were analysed, and a tentative wintertime current structure in the Taiwan Strait was described.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure which, in connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive to wide "depressions" in the input time series.
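The connectivity series analysed above comes from the natural visibility graph construction (Lacasa et al.): two samples are linked when the straight line between them passes above every intermediate sample. A minimal sketch of the degree (connectivity) series:

```python
def visibility_degrees(y):
    """Connectivity (degree) series of the natural visibility graph of time
    series y: samples a and c are linked when every intermediate sample b
    lies strictly below the line joining (a, y[a]) and (c, y[c])."""
    n = len(y)
    deg = [0] * n
    for a in range(n):
        for c in range(a + 1, n):
            if all(y[b] < y[c] + (y[a] - y[c]) * (c - b) / (c - a)
                   for b in range(a + 1, c)):
                deg[a] += 1
                deg[c] += 1
    return deg

print(visibility_degrees([1.0, 3.0, 2.0, 4.0]))  # → [1, 3, 2, 2]
```

Multifractal analysis (not shown) would then be run on this degree series rather than on the original samples.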
Skill Testing a Three-Dimensional Global Tide Model to Historical Current Meter Records
2013-12-17
up to 20% weaker skill in the Southern Ocean. Citation: Timko, P. G., B. K. Arbic, J. G. Richman, R. B. Scott, E. J. Metzger, and A. J. Wallcraft (2013). ... model were identified from a current meter archive (CMA) of approximately 9000 unique time series previously used by Scott et al. [2010] and Timko et al. [2012]. The CMA spans 40 years of observations. Some of the velocity records used in this study represent individual depth bins from ADCPs. The
Phytoplankton pigment patterns and wind forcing off central California
NASA Technical Reports Server (NTRS)
Abbott, Mark R.; Barksdale, Brett
1991-01-01
Mesoscale variability in phytoplankton pigment distributions of central California during the spring-summer upwelling season are studied via a 4-yr time series of high-resolution coastal zone color scanner imagery. Empirical orthogonal functions are used to decompose the time series of spatial images into its dominant modes of variability. The coupling between wind forcing of the upper ocean and phytoplankton distribution on mesoscales is investigated. Wind forcing, in particular the curl of the wind stress, was found to play an important role in the distribution of phytoplankton pigment in the California Current. The spring transition varies in timing and intensity from year to year but appears to be a recurrent feature associated with the rapid onset of the upwelling-favorable winds. Although the underlying dynamics may be dominated by processes other than forcing by wind stress curl, it appears that curl may force the variability of the filaments and hence the pigment patterns.
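The empirical orthogonal function (EOF) decomposition used above is, in essence, a singular value decomposition of the demeaned space-time data matrix. A minimal sketch on synthetic data; the 40x100 random field is a stand-in for the stack of coastal zone color scanner images:

```python
import numpy as np

def eof_decompose(data):
    """EOF analysis of `data`, shape (n_times, n_space): remove the time
    mean at each location, then take the SVD. Rows of `eofs` are spatial
    modes; columns of `pcs` are the matching principal-component series."""
    anomalies = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    pcs = u * s                              # temporal coefficients
    eofs = vt                                # spatial patterns
    variance_fraction = s**2 / np.sum(s**2)  # variance explained per mode
    return pcs, eofs, variance_fraction

rng = np.random.default_rng(0)
field = rng.normal(size=(40, 100))  # 40 synthetic "images" of 100 pixels
pcs, eofs, frac = eof_decompose(field)
```

The leading modes (largest `frac`) are the "dominant modes of variability" the abstract refers to; truncating to them filters the pigment imagery onto a few coherent spatial patterns.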
A multiple-fan active control wind tunnel for outdoor wind speed and direction simulation
NASA Astrophysics Data System (ADS)
Wang, Jia-Ying; Meng, Qing-Hao; Luo, Bing; Zeng, Ming
2018-03-01
This article presents a new type of actively controlled multiple-fan wind tunnel. The wind tunnel consists of swivel plates and arrays of direct current fans, and the rotation speed of each fan and the shaft angle of each swivel plate can be controlled independently to simulate different kinds of outdoor wind fields. To measure the similarity between the simulated wind field and the outdoor wind field, wind speed and direction time series of the two kinds of wind fields are recorded by nine two-dimensional ultrasonic anemometers, and the statistical properties of the wind signals on different time scales are then analyzed based on empirical mode decomposition. In addition, the complexity of the wind speed and direction time series is investigated using multiscale entropy and multivariate multiscale entropy. Results suggest that the simulated wind field in the multiple-fan wind tunnel has a high degree of similarity with the outdoor wind field.
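Multiscale entropy, as used above, combines coarse-graining with sample entropy. A compact sketch following the usual Costa et al. construction; the fixed tolerance `r` is a simplification (in practice it is typically scaled to the series' standard deviation):

```python
import math

def coarse_grain(x, scale):
    """Non-overlapping window averages of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): A (B) counts template pairs of length
    m+1 (m) within Chebyshev distance r, over matched template sets."""
    n = len(x)
    def count(mm):
        templates = [x[i:i + mm] for i in range(n - m)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)

def multiscale_entropy(x, scales, m=2, r=0.2):
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

The multivariate variant mentioned in the abstract extends the template matching across the jointly recorded speed and direction channels.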
Global gridded crop specific agricultural areas from 1961-2014
NASA Astrophysics Data System (ADS)
Konar, M.; Jackson, N. D.
2017-12-01
Current global cropland datasets are limited in crop specificity and temporal resolution. Time series maps of crop specific agricultural areas would enable us to better understand the global agricultural geography of the 20th century. To this end, we develop a global gridded dataset of crop specific agricultural areas from 1961-2014. To do this, we downscale national cropland information using a probabilistic approach. Our method relies upon gridded Global Agro-Ecological Zones (GAEZ) maps, the History Database of the Global Environment (HYDE), and crop calendars from Sacks et al. (2010). We estimate crop-specific agricultural areas for a 0.25 degree spatial grid and annual time scale for all major crops. We validate our global estimates for the year 2000 with Monfreda et al. (2008) and our time series estimates within the United States using government data. This database will contribute to our understanding of global agricultural change of the past century.
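The downscaling step can be illustrated with a proportional allocation: a national crop total is spread over grid cells according to suitability weights. This is a deliberately minimal stand-in for the probabilistic GAEZ/HYDE-based method the abstract describes:

```python
def downscale(national_total, weights):
    """Allocate a national crop area across grid cells in proportion to
    suitability weights. A minimal stand-in for the probabilistic
    GAEZ/HYDE-based downscaling described in the abstract."""
    total_w = sum(weights)
    if total_w == 0:
        return [0.0] * len(weights)
    return [national_total * w / total_w for w in weights]

# 1200 kha of a crop spread over four cells; approximately [600, 360, 0, 240]
cells = downscale(1200.0, [0.5, 0.3, 0.0, 0.2])
```

Allocation conserves the national total by construction, which is the property that makes validation against independent gridded products (e.g., Monfreda et al., 2008) meaningful.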
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
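The sliding-regression-window technique favoured above can be sketched as a quadratic fit moved through the record, reading acceleration off twice the second-order coefficient. The window length and units below are illustrative, not those of the study:

```python
import numpy as np

def sliding_acceleration(t, h, window):
    """Fit h ≈ c0 + c1*t + c2*t^2 in a window slid through the record;
    return window-centre times and acceleration estimates a(t) = 2*c2."""
    centres, accel = [], []
    for i in range(len(t) - window + 1):
        tw, hw = t[i:i + window], h[i:i + window]
        c2, c1, c0 = np.polyfit(tw, hw, 2)  # highest degree first
        centres.append(tw[window // 2])
        accel.append(2.0 * c2)              # d^2 h / dt^2
    return np.array(centres), np.array(accel)

# Synthetic sea-level record with constant acceleration 0.02 (units/yr^2)
t = np.arange(0.0, 100.0)
h = 3.0 * t + 0.01 * t**2
centres, accel = sliding_acceleration(t, h, window=30)
```

Unlike a single quadratic fitted to the whole record, the windowed estimate returns a time series of accelerations, so temporally varying acceleration is visible rather than averaged away.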
Further developments of the Neyman-Scott clustered point process for modeling rainfall
NASA Astrophysics Data System (ADS)
Cowpertwait, Paul S. P.
1991-07-01
This paper provides some useful results for modeling rainfall. It extends work on the Neyman-Scott cluster model for simulating rainfall time series. Several important properties have previously been found for the model, for example, the expectation and variance of the amount of rain captured in an arbitrary time interval (Rodriguez-Iturbe et al., 1987a). In this paper additional properties are derived, such as the probability of an arbitrary interval of any chosen length being dry. In applications this is a desirable property to have, and it is often used for fitting stochastic rainfall models to field data. The model is currently being used in rainfall time series research directed toward improving sewage systems in the United Kingdom. To illustrate the model's performance an example is given in which the model is fitted to 10 years of hourly data taken from Blackpool, England.
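The Neyman-Scott rectangular pulses construction can be sketched as follows; all parameter values are illustrative, not the Blackpool fit:

```python
import math
import random

def rng_poisson(mean, rng):
    """Poisson sample via Knuth's method (fine for small means)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def neyman_scott_storms(t_max, lam, mu_cells, beta, eta, mu_x, rng):
    """Neyman-Scott rectangular pulses on [0, t_max] hours.

    Storm origins: Poisson process of rate `lam`/h; cells per storm:
    Poisson(mu_cells); cell start lag after the origin: Exp(beta); cell
    duration: Exp(eta); cell depth: exponential with mean `mu_x` mm/h.
    Returns (start, end, intensity) pulses; rainfall intensity at time t
    is the sum of intensities of all pulses covering t.
    """
    pulses, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > t_max:
            break
        for _ in range(rng_poisson(mu_cells, rng)):
            start = t + rng.expovariate(beta)
            pulses.append((start, start + rng.expovariate(eta),
                           rng.expovariate(1.0 / mu_x)))
    return pulses

pulses = neyman_scott_storms(240.0, 0.1, 3.0, 1.0, 2.0, 5.0, random.Random(42))
```

The dry-probability property derived in the paper is exactly the chance that no pulse covers any part of a chosen interval, which is why it is convenient for fitting the model to observed wet/dry sequences.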
Uneven transitions: Period- and cohort-related changes in gender attitudes in China, 1995-2007.
Shu, Xiaoling; Zhu, Yifei
2012-09-01
This paper analyzes temporal variations in two gender attitudes in China: beliefs about gender equality and perspectives on women's combined work and family roles. It uses the most currently available population series from the 1995, 2001 and 2007 World Value Surveys of 4500 respondents and a series of multilevel cross-classified models to properly estimate period and cohort effects. Attitudes toward women's dual roles manifest neither period nor cohort effects; the population displays a universal high level of acceptance of women's paid employment. Orientations toward gender equality manifest both cohort and period effects: members of the youngest cohort of both sexes hold the most liberal attitudes; the positive effect of college education has increased over time. Attitude toward gender equality in China displays neither a shift toward conservatism nor an over-time trend toward egalitarianism in 1995-2007, a time of rapid economic growth. Copyright © 2012 Elsevier Inc. All rights reserved.
Sentinel 2 products and data quality status
NASA Astrophysics Data System (ADS)
Clerc, Sebastien; Gascon, Ferran; Bouzinac, Catherine; Touli-Lebreton, Dimitra; Francesconi, Benjamin; Lafrance, Bruno; Louis, Jerome; Alhammoud, Bahjat; Massera, Stephane; Pflug, Bringfried; Viallefont, Francoise; Pessiot, Laetitia
2017-04-01
Since July 2015, Sentinel-2A has provided high-quality multi-spectral images with 10 m spatial resolution. With the launch of Sentinel-2B scheduled for early March 2017, the mission will create a consistent time series with a revisit time of 5 days. The consistency of the time series is ensured by specific performance requirements, such as multi-temporal spatial co-registration and radiometric stability, routinely monitored by the Sentinel-2 Mission Performance Centre (S2MPC). The products also provide a rich set of metadata and auxiliary data to support higher-level processing. This presentation will focus on the current status of the Sentinel-2 L1C and L2A products, including dissemination and product format aspects. Up-to-date mission performance estimations will be presented. Finally, we will provide an outlook on future evolutions: commissioning tasks for Sentinel-2B, geometric refinement, and product format and processing improvements.
Blind tests of methods for InSight Mars mission: Open scientific challenge
NASA Astrophysics Data System (ADS)
Clinton, John; Ceylan, Savas; Giardini, Domenico; Khan, Amir; van Driel, Martin; Böse, Maren; Euchner, Fabian; Garcia, Raphael F.; Drilleau, Mélanie; Lognonné, Philippe; Panning, Mark; Banerdt, Bruce
2017-04-01
The Marsquake Service (MQS) will be the ground segment service within the InSight mission to Mars, which will deploy a single seismic station on Elysium Planitia in November 2018. The main tasks of the MQS are the identification and characterisation of seismicity, and managing the Martian seismic event catalogue. In advance of the mission, we have developed a series of single-station event location methods that rely on a priori 1D and 3D structural models. In coordination with the Mars Structural Service, we expect to use iterative inversion techniques to revise these structural models and event locations. In order to seek methodological advancements and test our current approaches, we have designed a blind test case using Martian synthetics combined with realistic noise models for the Martian surface. We invite all scientific parties interested in single-station approaches and in exploring the Martian time series to participate and contribute to our blind test. We anticipate the test will improve currently developed location and structural inversion techniques, and also allow us to explore new single-station techniques for moment tensor and magnitude determination. The waveforms for our test case are computed employing AxiSEM and Instaseis for a randomly selected 1D background model and an event catalogue that is statistically consistent with our current expectation of Martian seismicity. Realistic seismic surface noise is superimposed to generate a continuous time series spanning 6 months. The event catalogue includes impacts as well as Martian quakes. The temporal distribution of the seismicity in the time series, as well as the true structural model, will not be known to any participating party, including MQS, until the end of the competition. We provide our internal tools, such as event location codes, the suite of background models, and seismic phase travel times, to support researchers willing to use or improve our current methods.
Following the deadline of our blind test in late 2017, we plan to combine all outcomes in an article with all participants as co-authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Ranjan; Chelmis, Charalampos; Aman, Saima
The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities is restricted primarily by the huge amount of generated data, which requires considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches, however, do not scale in the face of the "increasing dimensionality" problem, where a cluster point is represented by the entire customer consumption time series. To overcome this, we first rethink the way cluster points are created and designed, and then design an efficient online clustering technique for demand response (DR) in order to analyze high-volume, high-dimensional energy consumption time series data at scale, and on the fly. Our online algorithm is randomized in nature, and provides optimal performance guarantees in a computationally efficient manner. Unlike prior work, we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a 'killer' approach that breaks the "curse of dimensionality" in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles.
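The online-clustering idea can be illustrated with a plain sequential k-means over fixed-length consumption vectors. This toy version omits the randomization and the performance guarantees that are the paper's actual contribution:

```python
def online_kmeans(streams, k):
    """Sequential (online) k-means over fixed-length time-series vectors.

    Each arriving series seeds a centroid until k exist; afterwards it is
    assigned to the nearest centroid, which is nudged toward it with step
    1/n_assigned. A toy stand-in for the paper's randomized algorithm.
    """
    centroids, counts = [], []
    for x in streams:
        if len(centroids) < k:
            centroids.append(list(x))
            counts.append(1)
            continue
        d2 = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in centroids]
        j = d2.index(min(d2))
        counts[j] += 1
        step = 1.0 / counts[j]
        centroids[j] = [c + step * (a - c) for a, c in zip(x, centroids[j])]
    return centroids

# Two well-separated toy "load profiles" (2-point series)
centroids = online_kmeans([[0.0, 0.0], [10.0, 10.0], [0.5, 0.0],
                           [10.0, 10.5], [0.0, 0.5], [9.5, 10.0]], 2)
```

Because each series is processed once and discarded, memory stays bounded regardless of the number of customers, which is the "on the fly" property the abstract emphasizes.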
TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series
NASA Astrophysics Data System (ADS)
Czerwinski, Fabian; Oddershede, Lene B.
2011-02-01
With modern data acquisition devices that work fast and very precisely, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series at MHz sampling rates. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle, and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification.
Program summary
Program title: TimeSeriesStreaming.VI
Catalogue identifier: AEHT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 250
No. of bytes in distributed program, including test data, etc.: 63 259
Distribution format: tar.gz
Programming language: LabVIEW (http://www.ni.com/labview/)
Computer: Any machine running LabVIEW 8.6 or higher
Operating system: Windows XP and Windows 7
RAM: 60-360 Mbyte
Classification: 3
Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods.
Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses.
Restrictions: Only tested in Windows-operating LabVIEW environments; must use TDMS format; acquisition cards must be LabVIEW compatible, with the DAQmx driver installed.
Running time: As desirable: microseconds to hours
Patel, Ameera X; Bullmore, Edward T
2016-11-15
Connectome mapping using techniques such as functional magnetic resonance imaging (fMRI) has become a focus of systems neuroscience. There remain many statistical challenges in analysis of functional connectivity and network architecture from BOLD fMRI multivariate time series. One key statistic for any time series is its (effective) degrees of freedom, df, which will generally be less than the number of time points (or nominal degrees of freedom, N). If we know the df, then probabilistic inference on other fMRI statistics, such as the correlation between two voxel or regional time series, is feasible. However, we currently lack good estimators of df in fMRI time series, especially after the degrees of freedom of the "raw" data have been modified substantially by denoising algorithms for head movement. Here, we used a wavelet-based method both to denoise fMRI data and to estimate the (effective) df of the denoised process. We show that seed voxel correlations corrected for locally variable df could be tested for false positive connectivity with better control over Type I error and greater specificity of anatomical mapping than probabilistic connectivity maps using the nominal degrees of freedom. We also show that wavelet despiked statistics can be used to estimate all pairwise correlations between a set of regional nodes, assign a P value to each edge, and then iteratively add edges to the graph in order of increasing P. These probabilistically thresholded graphs are likely more robust to regional variation in head movement effects than comparable graphs constructed by thresholding correlations. Finally, we show that time-windowed estimates of df can be used for probabilistic connectivity testing or dynamic network analysis so that apparent changes in the functional connectome are appropriately corrected for the effects of transient noise bursts. 
Wavelet despiking is both an algorithm for fMRI time series denoising and an estimator of the (effective) df of denoised fMRI time series. Accurate estimation of df offers many potential advantages for probabilistically thresholding functional connectivity and network statistics tested in the context of spatially variant and non-stationary noise. Code for wavelet despiking, seed correlational testing and probabilistic graph construction is freely available to download as part of the BrainWavelet Toolbox at www.brainwavelet.org. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
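As a much simpler stand-in for the wavelet-based estimator described above, the classical lag-1 autocorrelation correction illustrates why the effective df of an autocorrelated series falls well below the number of time points N:

```python
def effective_df(x):
    """Effective degrees of freedom via the classical lag-1 autocorrelation
    correction N_eff = N * (1 - r1) / (1 + r1).

    A deliberately simple stand-in for the paper's wavelet-based df
    estimator, shown only to illustrate how temporal autocorrelation
    (e.g., in smooth denoised BOLD signal) shrinks df below N.
    """
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    r1 = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / var
    return n * (1 - r1) / (1 + r1)

# A strongly autocorrelated series has far fewer effective df than points
print(effective_df([float(i) for i in range(100)]))
```

With an accurate `df` in hand, a correlation between two time series can be converted to a t-statistic and P value, which is the probabilistic-thresholding step the abstract describes.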
Dionne-Odom, Jodie; Westfall, Andrew O; Nzuobontane, Divine; Vinikoor, Michael J; Halle-Ekane, Gregory; Welty, Thomas; Tita, Alan T N
2018-01-01
Although most African countries offer hepatitis B immunization through a 3-dose vaccine series recommended at 6, 10 and 14 weeks of age, very few provide birth dose vaccination. In support of Cameroon's national plan to implement the birth dose vaccine in 2017, we investigated predictors of infant hepatitis B virus (HBV) vaccination under the current program. Using the 2011 Demographic Health Survey in Cameroon, we identified women with at least one living child (age 12-60 months) and information about the hepatitis B vaccine series. Vaccination rates were calculated, and logistic regression modeling was used to identify factors associated with 3-dose series completion. Changes over time were assessed with a linear logistic model. Among 4594 mothers analyzed, 66.7% (95% confidence interval [CI]: 64.1-69.3) of infants completed the hepatitis B vaccine series; however, an average 4-week delay in series initiation was noted with median dose timing at 10, 14 and 19 weeks of age. Predictors of series completion included facility delivery (adjusted odds ratio [aOR]: 2.1; 95% CI: 1.7-2.6), household wealth (aOR: 1.9; 95% CI: 1.2-3.1 comparing the highest and lowest quintiles), Christian religion (aOR: 1.8; 95% CI: 1.3-2.5 compared with Muslim religion) and older maternal age (aOR: 1.4; 95% CI: 1.2-1.7 per 10-year unit). Birth dose vaccination to reduce vertical and early childhood transmission of hepatitis B may overcome some of the obstacles to timely and complete HBV immunization in Cameroon. Increased awareness of HBV is needed among pregnant women and high-risk groups about vertical transmission, the importance of facility delivery and the effectiveness of prevention beginning with monovalent HBV vaccination at birth.
Jo, Kyuri; Jung, Inuk; Moon, Ji Hwan; Kim, Sun
2016-01-01
Motivation: To understand the dynamic nature of a biological process, it is crucial to identify perturbed pathways in an altered environment and also to infer the regulators that trigger the response. Current time-series analysis methods, however, are not powerful enough to identify perturbed pathways and regulators simultaneously. Widely used methods include methods to determine gene sets, such as differentially expressed genes or gene clusters, and these gene sets need to be further interpreted in terms of biological pathways using other tools. Most pathway analysis methods are not designed for time series data, and they do not consider gene-gene influence along the time dimension. Results: In this article, we propose a novel time-series analysis method, TimeTP, for determining transcription factors (TFs) regulating pathway perturbation, which narrows the focus to perturbed sub-pathways and utilizes the gene regulatory network and protein-protein interaction network to locate TFs triggering the perturbation. TimeTP first identifies perturbed sub-pathways that propagate the expression changes along time. Starting points of the perturbed sub-pathways are mapped into the network, and the most influential TFs are determined by an influence maximization technique. The analysis result is visually summarized in a TF-Pathway map in time clock. TimeTP was applied to a PIK3CA knock-in dataset and found significant sub-pathways and their regulators relevant to the PIP3 signaling pathway. Availability and Implementation: TimeTP is implemented in Python and available at http://biohealth.snu.ac.kr/software/TimeTP/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: sunkim.bioinfo@snu.ac.kr PMID:27307609
Development of web tools to disseminate space geodesy data-related products
NASA Astrophysics Data System (ADS)
Soudarin, L.; Ferrage, P.; Mezerette, A.
2014-12-01
In order to promote the products of the DORIS system, the French Space Agency CNES has developed and implemented on the web site of the International DORIS Service (IDS) a set of plot tools to interactively build and display time series of site positions, orbit residuals and terrestrial parameters (scale, geocenter). An interactive global map is also available to select sites and to access their information. Besides the products provided by the CNES Orbitography Team and the IDS components, these tools allow comparing the time evolution of coordinates for collocated DORIS and GNSS stations, thanks to collaboration with the Terrestrial Frame Combination Center of the International GNSS Service (IGS). The next step, currently in progress, is the creation of a database to improve the robustness and efficiency of the tools, with the objective of offering a complete web service to foster data exchange with the other geodetic services of the International Association of Geodesy (IAG). The possibility to visualize and compare position time series of the four main space geodetic techniques (DORIS, GNSS, SLR and VLBI) is already under way at the French level. A dedicated version of these web tools has been developed for the French Space Geodesy Research Group (GRGS). It will give access to position time series provided by the GRGS Analysis Centers involved in DORIS, GNSS, SLR and VLBI data processing for the realization of the International Terrestrial Reference Frame. In this presentation, we will describe the functionalities of these tools and address some aspects of the time series (content, format).
Observing climate change trends in ocean biogeochemistry: when and where.
Henson, Stephanie A; Beaulieu, Claudie; Lampitt, Richard
2016-04-01
Understanding the influence of anthropogenic forcing on the marine biosphere is a high priority. Climate change-driven trends need to be accurately assessed and detected in a timely manner. As part of the effort towards detection of long-term trends, a network of ocean observatories and time series stations provides high quality data for a number of key parameters, such as pH, oxygen concentration or primary production (PP). Here, we use an ensemble of global coupled climate models to assess the temporal and spatial scales over which observations of eight biogeochemically relevant variables must be made to robustly detect a long-term trend. We find that, as a global average, continuous time series are required for between 14 (pH) and 32 (PP) years to distinguish a climate change trend from natural variability. Regional differences are extensive, with low latitudes and the Arctic generally needing shorter time series (<~30 years) to detect trends than other areas. In addition, we quantify the 'footprint' of existing and planned time series stations, that is, the area over which a station is representative of a broader region. Footprints are generally largest for pH and sea surface temperature, but nevertheless the existing network of observatories represents only 9-15% of the global ocean surface. Our results present a quantitative framework for assessing the adequacy of current and future ocean observing networks for detection and monitoring of climate change-driven responses in the marine ecosystem. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
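The idea of a minimum record length for trend detection can be illustrated with the classical approximation of Weatherhead et al. (1998) for a linear trend in AR(1) noise; the study itself uses a model ensemble, and the numbers below are hypothetical:

```python
import math

def years_to_detect(trend_per_year, noise_std, phi):
    """Approximate years of continuous data needed to detect a linear trend
    (95% confidence, ~90% power) in AR(1) noise, after Weatherhead et al. 1998:
    n* = [3.3 * (sigma_N / |omega|) * sqrt((1 + phi) / (1 - phi))]^(2/3)."""
    return (3.3 * (noise_std / abs(trend_per_year))
            * math.sqrt((1 + phi) / (1 - phi))) ** (2.0 / 3.0)

# Hypothetical values: a pH trend of -0.002 yr^-1, noise sigma 0.01,
# lag-1 autocorrelation 0.3. Stronger autocorrelation -> longer records needed.
print(years_to_detect(0.002, 0.01, 0.3))
```

The formula makes the qualitative result of the abstract concrete: weak trends, noisy records, and persistent (autocorrelated) variability all lengthen the required time series.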
Fernandes, Lohengrin Dias de Almeida; Fagundes Netto, Eduardo Barros; Coutinho, Ricardo
2017-01-01
Currently, spatial and temporal changes in nutrient availability, marine plankton, and fish communities are best described on a shorter than inter-annual (seasonal) scale, primarily because the simultaneous year-to-year variations in physical, chemical, and biological parameters are very complex. The limited availability of time series datasets furnishing simultaneous evaluations of temperature, nutrients, plankton, and fish has limited our ability to describe and predict variability related to short-term processes, such as species-specific phenology and environmental seasonality. In the present study, we combine a computational time series analysis of a 15-year (1995–2009) weekly-sampled time series (high-resolution long-term time series, 780 weeks) with an Autoregressive Distributed Lag Model to track non-seasonal changes in 10 potentially related parameters: sea surface temperature, nutrient concentrations (NO2, NO3, NH4 and PO4), phytoplankton biomass (as in situ chlorophyll a biomass), meroplankton (barnacle and mussel larvae), and fish abundance (Mugil liza and Caranx latus). Our data demonstrate for the first time that highly intense and frequent upwelling years initiate a huge energy flux that is not fully transmitted through the classical size-structured food web by bottom-up stimulus but through additional ontogenetic steps. A delayed inter-annual sequential effect from phytoplankton up to top predators such as carnivorous fishes is expected if most of the energy is trapped in benthic filter-feeding organisms and their larval forms. These sequential events can explain major changes in the ecosystem food web that were not predicted by previous short-term models. PMID:28886162
Current interruption in inductive storage systems with inertial current source
NASA Astrophysics Data System (ADS)
Vitkovitsky, I. M.; Conte, D.; Ford, R. D.; Lupton, W. H.
1980-03-01
Utilization of inertial-current-source inductive storage with high power output requires a switch with a short opening time. This switch must operate as a circuit breaker, i.e., be capable of carrying the current for a time period characteristic of inertial systems, such as homopolar generators. For reasonable efficiency, its opening time must be fast to minimize the energy dissipated in the downstream fuse stages required for any additional pulse compression. A switch that satisfies these criteria, as well as other requirements such as the high voltage operation associated with high power output, is an explosively driven switch consisting of a large number of gaps arranged in series. The performance of this switch in limiting and/or interrupting currents produced by large generators has been studied. Single switch modules were designed and tested for limiting the commutating current output of a 1 MW, 60 Hz generator and 500 kJ capacitor banks. Current limiting and commutation were evaluated, using these sources, for currents ranging up to 0.4 MA. The explosive opening of the switch was found to provide an effective first stage for further pulse compression. It opens in tens of microseconds, commutates current at high efficiency (≈90%), and recovers very rapidly over a wide range of operating conditions.
Regenerating time series from ordinal networks.
McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael
2017-03-01
Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series, using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
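A minimal sketch of the construction, assuming the standard ordinal-partition recipe (embedding windows mapped to rank patterns, edges counted between successive patterns) and a logistic-map series standing in for the paper's flow data:

```python
import random
from collections import defaultdict

def ordinal_patterns(x, m=3):
    """Map each length-m window to its ordinal symbol (positions by rank)."""
    return [tuple(sorted(range(m), key=lambda i: x[t + i]))
            for t in range(len(x) - m + 1)]

def build_network(patterns):
    """Weighted ordinal network: counts of transitions between symbols."""
    edges = defaultdict(lambda: defaultdict(int))
    for a, b in zip(patterns, patterns[1:]):
        edges[a][b] += 1
    return edges

def random_walk(edges, start, steps, rng):
    """Regenerate a symbol sequence by a transition-weighted random walk."""
    seq, node = [start], start
    for _ in range(steps):
        targets = edges[node]
        if not targets:            # dead end (possible for the final pattern)
            break
        node = rng.choices(list(targets), weights=list(targets.values()))[0]
        seq.append(node)
    return seq

# Toy chaotic series: the fully chaotic logistic map.
x, v = [], 0.4
for _ in range(500):
    v = 4 * v * (1 - v)
    x.append(v)

pats = ordinal_patterns(x)
net = build_network(pats)
walk = random_walk(net, pats[0], 100, random.Random(1))
```

The walk can only take transitions observed in the original series, which is why the surrogate inherits ordinal structure (for example, forbidden patterns) from the source dynamics.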
Effects of ageing on the electrical characteristics of Zn/ZnS/n-GaAs/In structure
NASA Astrophysics Data System (ADS)
Güzeldir, B.; Sağlam, M.
2016-04-01
The Zn/ZnS/n-GaAs/In structure was fabricated by the Successive Ionic Layer Adsorption and Reaction (SILAR) method, and the influence of ageing on its characteristic parameters was examined. The current-voltage (I-V) characteristics of the structure were measured immediately after fabrication and then 1, 3, 5, 15, 30, 45, 60, 75, 90, 105, 120, 135, 150 and 165 days later. Characteristic parameters of the structure, such as barrier height, ideality factor and series resistance, were calculated from the I-V measurements. The barrier height, ideality factor and series resistance of the Zn/ZnS/n-GaAs/In structure were found to change only slightly with increasing ageing time.
Science, technology and mission design for LATOR experiment
NASA Astrophysics Data System (ADS)
Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth L.
2017-11-01
The Laser Astrometric Test of Relativity (LATOR) is a Michelson-Morley-type experiment designed to test Einstein's general theory of relativity in the most intense gravitational environment available in the solar system - the close proximity to the Sun. By using independent time series of highly accurate measurements of the Shapiro time delay (laser ranging accurate to 1 cm) and interferometric astrometry (accurate to 0.1 picoradian), LATOR will measure the gravitational deflection of light by solar gravity to an accuracy of 1 part in a billion, a factor of 30,000 better than currently available. LATOR will perform a series of highly accurate tests of gravitation and cosmology in its search for cosmological remnants of a scalar field in the solar system. We present the science, technology and mission design for the LATOR mission.
Directed dynamical influence is more detectable with noise
Jiang, Jun-Jie; Huang, Zi-Gang; Huang, Liang; Liu, Huan; Lai, Ying-Cheng
2016-01-01
Successful identification of directed dynamical influence in complex systems is relevant to significant problems of current interest. Traditional methods based on Granger causality and transfer entropy have issues such as difficulty with nonlinearity and large data requirements. Recently a framework based on nonlinear dynamical analysis was proposed to overcome these difficulties. We find, surprisingly, that noise can counterintuitively enhance the detectability of directed dynamical influence. In fact, intentionally injecting a proper amount of asymmetric noise into the available time series has the unexpected benefit of dramatically increasing confidence in ascertaining the directed dynamical influence in the underlying system. This result is established based on both real data and model time series from nonlinear ecosystems. We develop a physical understanding of the beneficial role of noise in enhancing detection of directed dynamical influence. PMID:27066763
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis at JPL all use the same GPS Precise Point Positioning Analysis raw time series; variations in time series analysis and post-processing are driven by different users:
- JPL Global Time Series/Velocities - researchers studying the reference frame, combining with VLBI/SLR/DORIS.
- JPL/SOPAC Combined Time Series/Velocities - crustal deformation for tectonic, volcanic and ground water studies.
- ARIA Time Series/Coseismic Data Products - hazard monitoring and response focused.
- The ARIA data system is designed to integrate GPS and InSAR - GPS tropospheric delay is used for correcting InSAR, and Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR.
- Zhen Liu is talking tomorrow on InSAR time series analysis.
Towards a first detailed reconstruction of sunspot information over the last 150 years
NASA Astrophysics Data System (ADS)
Lefevre, Laure; Clette, Frédéric
2013-04-01
With four centuries of solar evolution, the International Sunspot Number (SSN) forms the longest solar time series currently available. It provides an essential reference for understanding and quantifying how the solar output has varied over decades and centuries, and thus for assessing the variations of the main natural forcing on the Earth's climate. For such quantitative use, this unique time series must be closely monitored for any possible biases and drifts. This is the main objective of the Sunspot Workshops organized jointly by the National Solar Observatory (NSO) and the Royal Observatory of Belgium (ROB) since 2010. Here, we report on some recent outcomes of past workshops, such as diagnostics of scaling errors and their proposed corrections, and the recent disagreement between the sunspot number and other solar indices like the 10.7 cm radio flux. Our most recent analyses indicate that while part of this divergence may be due to a calibration drift in the SSN, it also results from an intrinsic change in the global magnetic parameters of sunspots and solar active regions, suggesting a possible transition to a new activity regime. Going beyond the SSN series, in the framework of the SOTERIA, TOSCA and SOLID projects, we produced a survey of all existing catalogs providing detailed sunspot information, and we also located different primary solar image and drawing collections that can be exploited to complement the existing catalogs (COMESEP project). These are first steps towards the construction of a multi-parametric time series of multiple sunspot group properties over at least the last 150 years, allowing the current 1-D SSN series to be reconstructed and extended. By bringing new spatial, morphological and evolutionary information, such a data set should enable major advances in the modeling of the solar dynamo and solar irradiance. We will present the current status of this work.
The catalog now extends over the last 3 cycles (Lefevre & Clette 2011, doi:10.1007/s11207-012-0184-5). A partially complete version extends back to 1965 and will soon reach 1940, thanks to data from the Uccle Solar Equatorial Table (USET) operated by the ROB. We will also present initial applications derived from the present version of the catalog, such as new sunspot-based solar fluxes and proxies that should ultimately help refine our knowledge of the influence of the Sun on our environment, now and throughout the ages. This work has received funding from the European Commission FP7 Project COMESEP (263252).
A Combined Length-of-Day Series Spanning 1832-1997
NASA Technical Reports Server (NTRS)
Gross, Richard S.
1999-01-01
The Earth's rotation is not constant but exhibits minute changes on all observable time scales ranging from subdaily to secular. This rich spectrum of observed Earth rotation changes reflects the rich variety of astronomical and geophysical phenomena that are causing the Earth's rotation to change, including, but not limited to, ocean and solid body tides, atmospheric wind and pressure changes, oceanic current and sea level height changes, post-glacial rebound, and torques acting at the core-mantle boundary. In particular, the decadal-scale variations of the Earth's rotation are thought to be largely caused by interactions between the Earth's outer core and mantle. Comparing the inferred Earth rotation variations caused by the various core-mantle interactions to observed variations requires Earth rotation observations spanning decades, if not centuries. During the past century many different techniques have been used to observe the Earth's rotation. By combining the individual Earth rotation series determined by each of these techniques, a series of the Earth's rotation can be obtained that is based upon independent measurements spanning the greatest possible time interval. In this study, independent observations of the Earth's rotation are combined to generate a length-of-day series spanning 1832-1997. The observations combined include lunar occultation measurements spanning 1832-1955, optical astrometric measurements spanning 1956-1982, lunar laser ranging measurements spanning 1970-1997, and very long baseline interferometric measurements spanning 1978-1998. These series are combined using a Kalman filter developed at JPL for just this purpose. The resulting combined length-of-day series will be presented and compared with other available length-of-day series of similar duration.
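The actual combination uses a Kalman filter developed at JPL; as a far simpler illustration of merging overlapping, independently measured series, an inverse-variance weighted average can be sketched (the values and uncertainties below are hypothetical):

```python
from collections import defaultdict

def combine(series):
    """Inverse-variance weighted merge of overlapping measurement series.
    `series` maps technique -> {epoch: (value, sigma)}; returns
    {epoch: (combined value, combined sigma)}."""
    buckets = defaultdict(list)
    for obs in series.values():
        for epoch, pair in obs.items():
            buckets[epoch].append(pair)
    out = {}
    for epoch, vals in sorted(buckets.items()):
        w = [1.0 / s ** 2 for _, s in vals]          # weight = 1/sigma^2
        mean = sum(v * wi for (v, _), wi in zip(vals, w)) / sum(w)
        out[epoch] = (mean, (1.0 / sum(w)) ** 0.5)
    return out

# Hypothetical excess length-of-day values (ms): where techniques overlap,
# the more precise VLBI measurement dominates the combination.
lod = combine({
    "optical": {1980.0: (2.1, 0.5), 1981.0: (2.3, 0.5)},
    "vlbi":    {1981.0: (2.5, 0.1)},
})
```

A Kalman filter generalizes this by also propagating information between epochs through a dynamical model, which matters when the techniques' epochs do not coincide.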
High Frequency Variations in Earth Orientation Derived From GNSS Observations
NASA Astrophysics Data System (ADS)
Weber, R.; Englich, S.; Snajdrova, K.; Boehm, J.
2006-12-01
Current observations gained by the space geodetic techniques, especially VLBI, GPS and SLR, allow for the determination of Earth Orientation Parameters (EOPs - polar motion, UT1/LOD, nutation offsets) with unprecedented accuracy and temporal resolution. This presentation focuses on contributions to EOP recovery provided by satellite navigation systems (primarily GPS). The IGS (International GNSS Service), for example, currently provides daily polar motion with an accuracy of less than 0.1 mas and LOD estimates with an accuracy of a few microseconds. To study more rapid variations in polar motion and LOD, we first established a high-resolution (hourly) ERP time series from GPS observation data of the IGS network covering the period from the beginning of 2005 until March 2006. The calculations were carried out by means of the Bernese GPS Software V5.0, considering observations from a subset of 79 fairly stable stations out of the IGb00 reference frame sites. From these ERP time series, the amplitudes of the major diurnal and semidiurnal variations caused by ocean tides are estimated. After correcting the series for ocean tides, the remaining geodetically observed excitation is compared with variations of atmospheric excitation (AAM). To study the sensitivity of the estimates with respect to the applied mapping function, we applied both the widely used NMF (Niell Mapping Function) and the VMF1 (Vienna Mapping Function 1). In addition, based on computations covering two months in 2005, the potential improvement due to the use of additional GLONASS data will be discussed. Finally, satellite techniques are also able to provide nutation offset rates with respect to the most recent nutation model. Based on GPS observations from 2005, we established nutation rate time series and subsequently derived the amplitudes of several nutation waves with periods of less than 30 days. The results are compared to VLBI estimates processed by means of the OCCAM 6.1 software.
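Estimating diurnal and semidiurnal amplitudes from an hourly ERP series amounts to a least-squares fit of sinusoids at the tidal periods. The sketch below uses synthetic data and illustrative periods (in hours); it is not the Bernese software's estimator:

```python
import math

def fit_harmonics(times, values, periods):
    """Least-squares fit of a mean plus cos/sin pairs at the given periods,
    solved via the normal equations; returns the amplitude at each period."""
    cols = [[1.0] * len(times)]                    # design matrix columns
    for p in periods:
        cols.append([math.cos(2 * math.pi * t / p) for t in times])
        cols.append([math.sin(2 * math.pi * t / p) for t in times])
    n = len(cols)
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(n)]
         for i in range(n)]
    b = [sum(c * v for c, v in zip(cols[i], values)) for i in range(n)]
    for i in range(n):                             # Gaussian elimination
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return {p: math.hypot(x[1 + 2 * k], x[2 + 2 * k])
            for k, p in enumerate(periods)}

# Synthetic hourly series: diurnal (~23.93 h) and semidiurnal (~12.42 h) terms.
times = list(range(24 * 30))
values = [0.2 * math.cos(2 * math.pi * t / 23.93)
          + 0.1 * math.sin(2 * math.pi * t / 12.42) for t in times]
amps = fit_harmonics(times, values, [23.93, 12.42])
```

Over a month of hourly samples the two frequencies are nearly orthogonal, so the fitted amplitudes recover the injected 0.2 and 0.1 almost exactly.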
Mapping the structure of the world economy.
Lenzen, Manfred; Kanemoto, Keiichiro; Moran, Daniel; Geschke, Arne
2012-08-07
We have developed a new series of environmentally extended multi-region input-output (MRIO) tables with applications in carbon, water, and ecological footprinting, and Life-Cycle Assessment, as well as trend and key driver analyses. Such applications have recently been at the forefront of global policy debates, such as those about assigning responsibility for emissions embodied in internationally traded products. The new time series was constructed using advanced parallelized supercomputing resources and significantly advances the previous state of the art through four innovations. First, it is available as a continuous 20-year time series of MRIO tables. Second, it distinguishes 187 individual countries comprising more than 15,000 industry sectors, and hence offers unsurpassed detail. Third, it provides information just 1-3 years delayed, therefore significantly improving timeliness. Fourth, it presents MRIO elements with accompanying standard deviations in order to allow users to understand the reliability of the data. These advances will lead to material improvements in the capability of applications that rely on input-output tables. The timeliness of information means that analyses are more relevant to current policy questions. The continuity of the time series enables the robust identification of key trends and drivers of global environmental change. The high country and sector detail drastically improves the resolution of Life-Cycle Assessments. Finally, the availability of information on uncertainty allows policy-makers to quantitatively judge the level of confidence that can be placed in the results of analyses.
The Scatterometer Climate Record Pathfinder: Tools for Climate Change Studies
NASA Astrophysics Data System (ADS)
Long, D. G.; Jensen, M. A.
2001-12-01
While originally designed for wind measurement over the ocean, scatterometers have proven to be very effective in monitoring land cover and ice conditions as well. Scatterometer data are used operationally for iceberg tracking and sea ice extent mapping. The frequent, global measurements make the instrument particularly well suited for global monitoring, and the long time series of scatterometer measurements dating back to SASS provides a valuable baseline for studies of climate change. For this reason the NASA Scatterometer Climate Record Pathfinder (SCP) project is generating a climate data record from the series of historic, ongoing, and approved scatterometer missions. Selected data are currently available from the SCP at URL http://www.scp.byu.edu/ in the form of resolution-enhanced backscatter image time series. A variety of tools for analyzing the image time series have been developed. The application of QuikSCAT data to climate change in Greenland and sea ice motion in the Arctic is illustrated. By comparing QuikSCAT with NSCAT and SASS data, long-term scatterometer-observed changes in Greenland are related to annual variations in melt extent and snow accumulation. QuikSCAT sampling enables high spatial resolution evaluation of the diurnal melt cycle. We demonstrate the value of scatterometer data for augmenting passive microwave measurements by using PCA. The scatterometer data play a key role in helping to discriminate physical changes in the Greenland firn from surface temperature effects.
Tang, Cui; Yin, Xianggen; Qi, Xuanwei; Zhang, Zhe
2014-01-01
Series capacitor compensation is one of the key technologies in EHV and UHV long-distance power transmission lines. Drawing on engineering practice, this paper analyzes the operating characteristics of the main protection when the transmission line becomes overcompensated after the series compensation system is modified, and analyzes the influence of the transition resistance and the system operation mode on the current differential protection. Based on the simulation results, it presents countermeasures for improving the sensitivity of the current differential protection. PMID:25247206
Lara, Juan A; Lizcano, David; Pérez, Aurora; Valente, Juan P
2014-10-01
There are now domains where information is recorded over a period of time, leading to sequences of data known as time series. In many domains, like medicine, time series analysis requires focusing on certain regions of interest, known as events, rather than analyzing the whole time series. In this paper, we propose a framework for knowledge discovery in both one-dimensional and multidimensional time series containing events. We show how our approach can be used to classify medical time series by means of a process that identifies events in time series, generates time series reference models of representative events and compares two time series by analyzing the events they have in common. We have applied our framework to time series generated in the areas of electroencephalography (EEG) and stabilometry. Framework performance was evaluated in terms of classification accuracy, and the results confirmed that the proposed schema has potential for classifying EEG and stabilometric signals. The proposed framework is useful for discovering knowledge from medical time series containing events, such as stabilometric and electroencephalographic time series. These results would be equally applicable to other medical domains generating iconographic time series, such as electrocardiography (ECG). Copyright © 2014 Elsevier Inc. All rights reserved.
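A minimal sketch of event-based comparison, assuming simple threshold-crossing events and an overlap score; the paper's reference-model machinery is considerably richer:

```python
def detect_events(series, threshold):
    """Flag contiguous runs above a threshold as (start, end) event regions."""
    events, start = [], None
    for i, v in enumerate(series):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            events.append((start, i - 1))
            start = None
    if start is not None:
        events.append((start, len(series) - 1))
    return events

def overlap(a, b):
    """Two index intervals overlap when neither ends before the other starts."""
    return max(a[0], b[0]) <= min(a[1], b[1])

def event_similarity(s1, s2, threshold):
    """Jaccard-style score: fraction of detected events the series share."""
    e1, e2 = detect_events(s1, threshold), detect_events(s2, threshold)
    shared = sum(any(overlap(a, b) for b in e2) for a in e1)
    total = len(e1) + len(e2) - shared
    return shared / total if total else 1.0

s1 = [0, 0, 5, 6, 0, 0, 4, 0]
s2 = [0, 0, 4, 5, 0, 0, 0, 0]
print(event_similarity(s1, s2, 3))  # → 0.5 (one of two events shared)
```

Classification then reduces to scoring an unseen series against each class's reference events and picking the best match.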
Biogeochemical Response to Mesoscale Physical Forcing in the California Current System
NASA Technical Reports Server (NTRS)
Niiler, Pearn P.; Letelier, Ricardo; Moisan, John R.; Marra, John A. (Technical Monitor)
2001-01-01
In the first part of the project, we investigated the local response of coastal ocean ecosystems (changes in chlorophyll concentration and chlorophyll fluorescence quantum yield) to physical forcing by developing and deploying Autonomous Drifting Ocean Stations (ADOS) within several mesoscale features along the U.S. west coast. We also compared the temporal and spatial variability registered by sensors mounted on the drifters to that registered by sensors mounted on the satellites, in order to assess the scales of variability that are not resolved by the ocean color satellite. The second part of the project used the existing WOCE SVP Surface Lagrangian drifters to track individual water parcels through time. The individual drifter tracks were used to generate multivariate time series by interpolating/extracting the biological and physical data fields retrieved by remote sensors (ocean color, SST, wind speed and direction, wind stress curl, and sea level topography). The individual time series of the physical data (AVHRR, TOPEX, NCEP) were analyzed against the ocean color (SeaWiFS) time series to determine the time scale of the biological response to physical forcing. The results from this part of the research are being used to compare the decorrelation scales of chlorophyll in Lagrangian and Eulerian frameworks. The results from both parts of this research augmented the time series data needed to investigate the interactions between ocean mesoscale features, wind, and biogeochemical processes. Using the historical Lagrangian data sets, we have completed a comparison of the decorrelation scales in both the Eulerian and Lagrangian reference frames for the SeaWiFS data set. We are continuing to investigate how these results might be used in objective mapping efforts.
New method for solving inductive electric fields in the non-uniformly conducting ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.; Amm, O.; Viljanen, A.
2006-10-01
We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.
Terminator field-aligned current system: A new finding from model-assimilated data set (MADS)
NASA Astrophysics Data System (ADS)
Zhu, L.; Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Gardner, L. C.; Eccles, J. V.; Rice, D.
2013-12-01
Physics-based data assimilation models have been recognized by the space science community as the most accurate approach to specify and forecast the space weather of the solar-terrestrial environment. The model-assimilated data sets (MADS) produced by these models constitute an internally consistent time series of global three-dimensional fields whose accuracy can be estimated. Because of its internal consistency of physics and completeness of description of the status of global systems, MADS has also been a powerful tool to identify systematic errors in measurements, reveal missing physics in physical models, and discover important dynamical physical processes that are inadequately observed or missed by measurements due to observational limitations. In past years, we developed a data assimilation model for high-latitude ionospheric plasma dynamics and electrodynamics. With a set of physical models, an ensemble Kalman filter, and the ingestion of data from multiple observations, the data assimilation model can produce a self-consistent time series of complete descriptions of the global high-latitude ionosphere, which includes the convection electric field, horizontal and field-aligned currents, conductivity, as well as 3-D plasma densities and temperatures. In this presentation, we will show a new field-aligned current system discovered from the analysis of the MADS produced by our data assimilation model. This new current system appears and develops near the ionospheric terminator. The dynamical features of this current system will be described, and its connection to the active role of the ionosphere in M-I coupling will be discussed.
NASA Satellite Monitoring of Water Clarity in Mobile Bay for Nutrient Criteria Development
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Holekamp, Kara; Spiering, Bruce A.
2009-01-01
This project demonstrated the feasibility of deriving, from daily MODIS measurements, time series of water clarity parameters that provide coverage of a specific location or area of interest on 30-50% of days. Time series derived for estuarine and coastal waters display much higher variability than time series of ecological parameters (such as vegetation indices) derived for land areas, so the temporal filtering often applied in terrestrial studies cannot be used effectively in ocean color processing. IOP-based algorithms for retrieval of the diffuse light attenuation coefficient and TSS concentration perform well for the Mobile Bay environment: only a minor adjustment was needed in the TSS algorithm, despite the generally recognized dependence of such algorithms on local conditions. The current IOP-based algorithm for retrieval of chlorophyll a concentration has not performed as well: a more reliable algorithm is needed, which may be based on IOPs at additional wavelengths or on remote sensing reflectance from multiple spectral bands. The CDOM algorithm also needs improvement to provide better separation between the effects of gilvin (gelbstoff) and detritus. Identification or development of such an algorithm requires more data from in situ measurements of CDOM concentration in Gulf of Mexico coastal waters (an ongoing collaboration with the EPA Gulf Ecology Division).
Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin
2017-03-01
Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series analysis with an intelligent model is put forward. First, through the process of photosynthesis, the main factors that affect the reproduction of algae are analyzed. A compensation prediction method for multivariate time series analysis, based on a neural network and a Support Vector Machine, is put forward and combined with Kernel Principal Component Analysis to reduce the dimension of the factors influencing blooms. Then, a Genetic Algorithm is applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method can better compensate the multivariate time series prediction model and is an effective way to improve the description accuracy of algae growth and the prediction precision of water blooms.
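As a rough sketch of the two-stage idea (dimension reduction, then regression on the reduced factors), the toy example below substitutes plain PCA for KPCA and an ordinary least-squares fit for the LS-SVM/GA-tuned BP predictors; all variables and coefficients are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Synthetic drivers of a bloom proxy (temperature, light, nutrient load),
# plus a redundant channel and a pure-noise channel -- all invented.
temp = rng.normal(20, 2, n)
light = rng.normal(500, 50, n)
nutrient = rng.normal(1.0, 0.2, n)
X = np.column_stack([temp, light, nutrient,
                     temp + rng.normal(0, 0.1, n),   # redundant channel
                     rng.normal(0, 1, n)])           # pure noise
chl = 0.3 * temp + 0.01 * light + 5.0 * nutrient + rng.normal(0, 0.5, n)

# Step 1: dimension reduction (plain PCA standing in for KPCA)
Xc = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:4].T                                    # keep 4 components

# Step 2: fit a predictor on the reduced factors
# (least squares standing in for the LS-SVM / BP network)
design = np.column_stack([Z, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, chl, rcond=None)
pred = design @ coef
r2 = 1 - np.sum((chl - pred) ** 2) / np.sum((chl - chl.mean()) ** 2)
```

The reduction discards the near-duplicate direction while keeping the informative factors, so the simple predictor still fits the bloom proxy well.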
Chaos control by electric current in an enzymatic reaction.
Lekebusch, A; Förster, A; Schneider, F W
1996-09-01
We apply the continuous delayed feedback method of Pyragas to control chaos in the enzymatic peroxidase-oxidase (PO) reaction, using the electric current as the control parameter. At each data point in the time series, a time-delayed feedback function applies a small-amplitude perturbation to inert platinum electrodes, which causes redox processes on the surface of the electrodes. These perturbations are calculated as the difference between the previous (time-delayed) signal and the actual signal. Unstable periodic P1, 1(1), and 1(2) orbits (UPOs) were stabilized in CSTR (continuous stirred tank reactor) experiments. The stabilization is demonstrated by at least three conditions: a minimum in the experimental dispersion function, the equality of the delay time with the period of the stabilized attractor, and the embedding of the stabilized periodic attractor in the chaotic attractor.
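The Pyragas scheme itself — a perturbation proportional to the difference between the delayed and current signal — can be sketched on a toy chaotic system; the logistic map, gain, and activation window below are illustrative stand-ins for the PO reaction and the electrode current, not experimental values:

```python
# Pyragas delayed feedback on the chaotic logistic map.
r, K = 3.9, -0.7
x_star = 1 - 1 / r                     # unstable period-1 orbit (UPO)

def f(x):
    return r * x * (1 - x)

x_prev, x = 0.3, f(0.3)
control = 0.0
for _ in range(5000):
    # Perturbation = gain * (delayed signal - current signal),
    # activated only near the target orbit so it stays non-invasive.
    if abs(x - x_star) < 0.1 and abs(x - x_prev) < 0.1:
        control = K * (x_prev - x)
    else:
        control = 0.0
    x_prev, x = x, f(x) + control
# Once captured, the orbit settles onto the UPO and the control
# signal shrinks toward zero -- the hallmark of delayed feedback control.
```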
NASA Astrophysics Data System (ADS)
Xu, J.; Wang, Z.; Gwiazda, R.; Paull, C. K.; Talling, P.; Parsons, D. R.; Maier, K. L.; Simmons, S.; Cartigny, M.
2017-12-01
During a large turbidity current event observed by seven moorings placed along the axial channel of Monterey Canyon, offshore central California, between 300 and 1900 meters water depth, a conductivity/temperature sensor placed 11 meters above the canyon floor on the mooring at 1500 meters water depth recorded a rapid decrease of conductivity and increase of temperature during the passage of the flow. The conductivity decline is unlikely to have been caused by fresh water input, owing to the lack of precipitation in the region prior to the event. We therefore investigated the mechanism by which the high sediment concentration of a turbidity current reduces the measured conductivity. By conducting a series of laboratory experiments with a range of concentration, grain size, and water temperature combinations, we quantified a relationship between reduced conductivity and elevated sediment concentration. This relationship can be used to estimate very high sediment concentrations in a turbidity current under the assumption of constant salinity of the ambient seawater. The empirical relationship was then applied to the in-situ time series of temperature and conductivity measured during this turbidity current. The highest sediment concentration, in the head of the flow, reached nearly 400 g/L (volume concentration 17%). Such a high value, which has not previously been reported in the literature for an oceanic turbidity current, has significant implications for the dynamics and deposits of such flows.
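The kind of conductivity-to-concentration inversion described can be sketched with an assumed mixing law; Maxwell's relation for a suspension of nonconducting grains is used below in place of the paper's fitted laboratory calibration, and the grain density is an assumed quartz value:

```python
import numpy as np

def rel_conductivity(phi):
    """Maxwell's relation for a suspension of nonconducting grains:
    sigma/sigma0 = 2(1 - phi) / (2 + phi), phi = volume fraction."""
    return 2 * (1 - phi) / (2 + phi)

def phi_from_ratio(ratio):
    """Invert the relation: recover phi from the measured sigma/sigma0."""
    return (2 - 2 * ratio) / (2 + ratio)

rho_grain = 2650.0     # quartz grain density, kg/m^3 (assumed)
phi = 0.17             # 17% volume concentration, as in the abstract
ratio = rel_conductivity(phi)
phi_back = phi_from_ratio(ratio)
mass_conc_g_per_L = phi_back * rho_grain   # kg/m^3 is numerically g/L
```

With the assumed quartz density this gives roughly 450 g/L for a 17% volume fraction, the same order as the reported ~400 g/L; the paper's own empirical fit would replace the assumed mixing law.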
NASA Astrophysics Data System (ADS)
Endo, M.; Hori, T.; Koyama, K.; Yamaguchi, I.; Arai, K.; Kaiho, K.; Yanabu, S.
2008-02-01
Using a high-temperature superconductor, we constructed and tested a model Superconducting Fault Current Limiter (SFCL) that has a vacuum interrupter with an electromagnetic repulsion mechanism. We set out to construct a high-voltage-class SFCL and produced an electromagnetic repulsion switch equipped with a 24 kV vacuum interrupter (VI). A problem is that the opening speed becomes slow, because a larger vacuum interrupter has a heavier contact. For this reason, the current flowing in the superconductor may not be interrupted within a half cycle. To solve this problem, it is necessary to change the design of the coil connected in parallel and to strengthen the electromagnetic repulsion force at the opening of the vacuum interrupter. The coil design was therefore changed, and a current-limiting test was conducted to examine whether the problem was solved. We performed the test using 4 series- and 2 parallel-connected YBCO thin films, each 12 cm long, with a parallel resistance (0.1 Ω) connected across each film. As a result, we succeeded in interrupting the superconductor current within a half cycle, and the series- and parallel-connected YBCO thin films limited the current without failure.
Detection of a sudden change of the field time series based on the Lorenz system.
Da, ChaoJiu; Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan
2017-01-01
We conducted an exploratory study of the detection of a sudden change of the field time series based on the numerical solution of the Lorenz system. First, the time when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system was quantitatively marked, and the sudden change time of the Lorenz system was obtained. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered as a vector time series. We transformed the vector time series into a scalar time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden change of the resulting time series was detected using the sliding t-test method. Comparing the test results with the quantitatively marked times indicated that the method could detect every sudden change of the Lorenz path, indicating that the method is effective. Finally, we used the method to detect sudden changes of the pressure field time series and temperature field time series, and obtained good results for both, which indicates that the method can be applied to high-dimension vector time series. Mathematically, there is no essential difference between the field time series and vector time series; thus, we provide a new method for the detection of the sudden change of the field time series.
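A minimal sketch of the procedure — integrate the Lorenz system, reduce the vector series to a scalar series with an inner product, then apply a sliding t-test — might look as follows; the reference vector, window length, and threshold are arbitrary illustrative choices:

```python
import numpy as np

def lorenz_path(n, dt=0.01, sigma=10.0, rho=28.0, beta=8 / 3):
    """Simple Euler integration of the Lorenz system (adequate for a sketch)."""
    x = np.empty((n, 3))
    x[0] = (1.0, 1.0, 20.0)
    for i in range(n - 1):
        xi, yi, zi = x[i]
        x[i + 1] = x[i] + dt * np.array([sigma * (yi - xi),
                                         xi * (rho - zi) - yi,
                                         xi * yi - beta * zi])
    return x

path = lorenz_path(4000)
# Reduce the vector series to a scalar series via an inner product with
# a fixed reference vector (one simple choice among many).
ref = np.array([1.0, 1.0, 0.0])
s = path @ ref

def sliding_t(series, w):
    """Sliding t-test: t statistic comparing means of adjacent windows."""
    t = np.zeros(len(series))
    for i in range(w, len(series) - w):
        a, b = series[i - w:i], series[i:i + w]
        sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        t[i] = (b.mean() - a.mean()) / (sp * np.sqrt(2 / w))
    return t

t_stat = sliding_t(s, w=50)
# Large |t| flags times when the path jumps between the two Lorenz wings.
jumps = np.where(np.abs(t_stat) > 6)[0]
```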
Classification of DNA nucleotides with transverse tunneling currents
NASA Astrophysics Data System (ADS)
Nyvold Pedersen, Jonas; Boynton, Paul; Di Ventra, Massimiliano; Jauho, Antti-Pekka; Flyvbjerg, Henrik
2017-01-01
It has been theoretically suggested and experimentally demonstrated that fast and low-cost sequencing of DNA, RNA, and peptide molecules might be achieved by passing such molecules between electrodes embedded in a nanochannel. The experimental realization of this scheme faces major challenges, however. In realistic liquid environments, typical currents in tunneling devices are of the order of picoamps. This corresponds to only six electrons per microsecond, and this number affects the integration time required to do current measurements in real experiments. This limits the speed of sequencing, though current fluctuations due to Brownian motion of the molecule average out during the required integration time. Moreover, data acquisition equipment introduces noise, and electronic filters create correlations in time-series data. We discuss how these effects must be included in the analysis of, e.g., the assignment of specific nucleobases to current signals. As the signals from different molecules overlap, unambiguous classification is impossible with a single measurement. We argue that the assignment of molecules to a signal is a standard pattern classification problem and calculation of the error rates is straightforward. The ideas presented here can be extended to other sequencing approaches of current interest.
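The effect of integration time on classification error can be sketched with a toy two-class example; the current levels and fluctuation amplitude below are made up — only the overlap between class distributions matters for the argument:

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative mean tunneling currents (pA) for two bases; invented values.
mu = {"A": 1.0, "C": 1.6}
noise = 1.2            # fluctuation standard deviation, same units

def error_rate(n_samples, trials=20000):
    """Misclassification rate (true class 'A' called 'C') when the decision
    is made on the average of n_samples readings, boundary at the midpoint.
    By symmetry the 'C'-called-'A' rate behaves the same way."""
    boundary = (mu["A"] + mu["C"]) / 2
    x = mu["A"] + noise / np.sqrt(n_samples) * rng.normal(size=trials)
    return np.mean(x > boundary)

e1, e100 = error_rate(1), error_rate(100)
# Averaging 100 readings (longer integration) sharply reduces the overlap,
# illustrating why a single measurement cannot classify unambiguously.
```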
Volatility of linear and nonlinear time series
NASA Astrophysics Data System (ADS)
Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo
2005-07-01
Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui , can be detected and quantified by studying the correlations in the magnitude series ∣ui∣ , the “volatility.” However, the origin for this empirical observation still remains unclear and the exact relation between the correlations in ui and the correlations in ∣ui∣ is still unknown. Here we develop analytical relations between the scaling exponent of linear series ui and its magnitude series ∣ui∣ . Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (that represents the magnitude series) with uncorrelated time series [that represents the sign series sgn(ui) ]. We apply our techniques on daily deep ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) broad multifractal spectrum.
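The proposed model is simple to sketch: generate a long-range correlated series by Fourier filtering, take its magnitude, and multiply by an uncorrelated sign series; the spectral exponent below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2 ** 14

def long_range_correlated(n, beta, rng):
    """Fourier filtering: shape white noise to a 1/f^beta power spectrum."""
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                 # avoid division by zero at DC
    spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spectrum *= freqs ** (-beta / 2)
    series = np.fft.irfft(spectrum, n)
    return (series - series.mean()) / series.std()

# Magnitude series: long-range correlated, made positive
magnitude = np.abs(long_range_correlated(n, beta=0.8, rng=rng))
# Sign series: uncorrelated
sign = rng.choice([-1.0, 1.0], size=n)
nonlinear_series = magnitude * sign
# The product is linearly uncorrelated, but its magnitude inherits the
# long-range correlations -- the "volatility" signature of nonlinearity.
```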
How Do Tides and Tsunamis Interact in a Highly Energetic Channel? The Case of Canal Chacao, Chile
NASA Astrophysics Data System (ADS)
Winckler, Patricio; Sepúlveda, Ignacio; Aron, Felipe; Contreras-López, Manuel
2017-12-01
This study aims at understanding the role of tidal level, speed, and direction in tsunami propagation in highly energetic tidal channels. The main goal is to comprehend whether tide-tsunami interactions enhance/reduce elevation, current speeds, and arrival times, when compared to pure tsunami models and to simulations in which tides and tsunamis are linearly superimposed. We designed various numerical experiments to compute the tsunami propagation along Canal Chacao, a highly energetic channel in the Chilean Patagonia lying on a subduction margin prone to megathrust earthquakes. Three modeling approaches were implemented under the same seismic scenario: a tsunami model with a constant tide level, a series of six composite models in which independent tide and tsunami simulations are linearly superimposed, and a series of six tide-tsunami nonlinear interaction models (full models). We found that hydrodynamic patterns differ significantly among approaches, with the composite and full models being sensitive to both the tidal phase at which the tsunami is triggered and the local depth of the channel. When compared to full models, composite models adequately predicted the maximum surface elevation, but largely overestimated currents. The amplitude and arrival time of the tsunami-leading wave computed with the full model was found to be strongly dependent on the direction of the tidal current and less responsive to the tide level and the tidal current speed. These outcomes emphasize the importance of addressing more carefully the interactions of tides and tsunamis in hazard assessment studies.
Wakie, Tewodros; Evangelista, Paul H.; Jarnevich, Catherine S.; Laituri, Melinda
2014-01-01
We used correlative models with species occurrence points, Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indices, and topo-climatic predictors to map the current distribution and potential habitat of invasive Prosopis juliflora in Afar, Ethiopia. Time series of MODIS Enhanced Vegetation Indices (EVI) and Normalized Difference Vegetation Indices (NDVI) with 250 m spatial resolution were selected as remote sensing predictors for mapping distributions, while WorldClim bioclimatic products and topographic variables generated from the Shuttle Radar Topography Mission (SRTM) product were used to predict potential infestations. We ran Maxent models using non-correlated variables and the 143 species-occurrence points. Maxent-generated probability surfaces were converted into binary maps using the 10-percentile logistic threshold values. Performance of the models was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results indicate that the extent of P. juliflora invasion is approximately 3,605 km2 in the Afar region (AUC = 0.94), while the potential habitat for future infestations is 5,024 km2 (AUC = 0.95). Our analyses demonstrate that time series of MODIS vegetation indices and species occurrence points can be used with Maxent modeling software to map the current distribution of P. juliflora, while topo-climatic variables are good predictors of potential habitat in Ethiopia. Our results can quantify current and future infestations, and inform management and policy decisions for containing P. juliflora. Our methods can also be replicated for managing invasive species in other East African countries.
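The 10-percentile logistic threshold step can be sketched as follows; the probability surface and occurrence-point values are random placeholders rather than actual Maxent output, and the pixel area assumes the 250 m MODIS grid:

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder inputs: a probability surface in [0, 1] and model
# probabilities at the 143 occurrence points (random here; Maxent
# output in the real workflow).
prob_surface = rng.beta(2, 5, size=(100, 100))
occ_probs = rng.beta(5, 2, size=143)

# 10-percentile logistic threshold: 90% of occurrence points fall at
# or above this value.
threshold = np.percentile(occ_probs, 10)
binary_map = prob_surface >= threshold

pixel_km2 = 0.25 * 0.25            # one 250 m MODIS pixel, in km^2
suitable_km2 = binary_map.sum() * pixel_km2
```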
Completing the gaps in Kilauea's Father's Day InSAR displacement signature with ScanSAR
NASA Astrophysics Data System (ADS)
Bertran Ortiz, A.; Pepe, A.; Lanari, R.; Lundgren, P.; Rosen, P. A.
2009-12-01
Currently there are gaps in the known displacement signature obtained with InSAR at Kilauea between 2002 and 2009. InSAR data can be richer than GPS because of its denser spatial coverage; however, InSAR is limited in modeling rapidly varying, non-steady geophysical events by its sparser temporal sampling of the area under study. The ScanSAR mode currently available on several satellites mitigates this effect because the satellite may illuminate a given area more than once within an orbit cycle. The Kilauea displacement graph below, from the Istituto per il Rilevamento Elettromagnetico dell'Ambiente (IREA), is a cut in space of the displacement signature obtained from a time series of several stripmap-to-stripmap interferograms. It shows that critical information is missing, especially between 2006 and 2007. The displacement is expected to be non-linear judging from the 2007-2008 displacement signature, so simple interpolation would not suffice. The gap can be filled by incorporating Envisat stripmap-to-ScanSAR interferograms available during that time period. We propose leveraging JPL's new ROI-PAC ScanSAR module to create stripmap-to-ScanSAR interferograms. The new interferograms will be added to the stripmap ones in order to extend the existing stripmap time series generated using the Small BAseline Subset (SBAS) technique. At AGU we will present denser graphs that better capture Kilauea's displacement between 2003 and 2009.
New Method for Solving Inductive Electric Fields in the Ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.
2005-12-01
We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large-scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing the curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.
GPS data exploration for seismologists and geodesists
NASA Astrophysics Data System (ADS)
Webb, F.; Bock, Y.; Kedar, S.; Dong, D.; Jamason, P.; Chang, R.; Prawirodirdjo, L.; MacLeod, I.; Wadsworth, G.
2007-12-01
Over the past decade, GPS and seismic networks spanning the western US plate boundaries have produced vast amounts of data that need to be made accessible to both the geodesy and seismology communities. Unlike seismic data, raw geodetic data requires significant processing before geophysical interpretations can be made. This entails the generation of data products (time series, velocities, and strain maps) and dissemination strategies to bridge these differences and assure efficient use of data across traditionally separate communities. "GPS DATA PRODUCTS FOR SOLID EARTH SCIENCE" (GDPSES) is a multi-year NASA-funded project, designed to produce and deliver high quality GPS time series, velocities, and strain fields, derived from multiple GPS networks along the western US plate boundary, and to make these products easily accessible to geophysicists. Our GPS product dissemination is through modern web-based IT methodology. Product browsing is facilitated through a web tool known as GPS Explorer, and continuous streams of GPS time series are provided using web services to the seismic archive, where they can be accessed by seismologists using traditional seismic data viewing and manipulation tools. GPS-Explorer enables users to efficiently browse several layers of data products from raw data through time series, velocities, and strain by providing the user with a web interface, which seamlessly interacts with a continuously updated database of these data products through the use of web services. The current archive contains GDPSES data products beginning in 1995, and includes observations from GPS stations in EarthScope's Plate Boundary Observatory (PBO), as well as from real-time CGPS stations. The generic, standards-based approach used in this project enables GDPSES to seamlessly expand indefinitely to include other space-time-dependent data products from additional GPS networks.
The prototype GPS-Explorer provides users with a personalized working environment in which the user may zoom in and access subsets of the data via web services. It provides users with a variety of interactive web tools interconnected in a portlet environment to explore and save datasets of interest to return to at a later date. At the same time the GPS time series are also made available through the seismic data archive, where the GPS networks are treated as regular seismic networks, whose data is made available in data formats used by seismic utilities such as SEED readers and SAC. A key challenge, stemming from the fundamental differences between seismic and geodetic time series, is the representation of reprocessed GPS data in the seismic archive. As GPS processing algorithms evolve and their accuracy increases, a periodic complete recreation of the GPS time series archive is necessary.
Xu, Jingping; Lightsom, Fran; Noble, Marlene A.; Denham, Charles
2002-01-01
During the past several years, the sediment transport group in the Coastal and Marine Geology Program (CMGP) of the U. S. Geological Survey has made major revisions to its methodology of processing, analyzing, and maintaining the variety of oceanographic time-series data. First, CMGP completed the transition of its oceanographic time-series database to a self-documenting NetCDF (Rew et al., 1997) data format. Second, CMGP’s oceanographic data variety and complexity have been greatly expanded from traditional 2-dimensional, single-point time-series measurements (e.g., electromagnetic current meters, transmissometers) to more advanced 3-dimensional and profiling time-series measurements due to many new acquisitions of modern instruments such as the Acoustic Doppler Current Profiler (RDI, 1996), Acoustic Doppler Velocimeter, Pulse-Coherent Acoustic Doppler Profiler (SonTek, 2001), and Acoustic Backscatter Sensor (Aquatec). In order to accommodate the NetCDF format of data from the new instruments, a software package for processing, analyzing, and visualizing time-series oceanographic data was developed. It is named CMGTooL. The CMGTooL package contains two basic components: a user-friendly GUI for NetCDF file analysis, processing, and manipulation; and a data analyzing program library. Most of the routines in the library are stand-alone programs suitable for batch processing. CMGTooL is written in the MATLAB computing language (The Mathworks, 1997), therefore users must have MATLAB installed on their computer in order to use this software package. In addition, MATLAB’s Signal Processing Toolbox is also required by some CMGTooL routines. Like most MATLAB programs, all CMGTooL codes are compatible with different computing platforms including PC, MAC, and UNIX machines (Note: CMGTooL has been tested on different platforms that run MATLAB 5.2 (Release 10) or lower versions.
Some of the commands related to MAC may not be compatible with later releases of MATLAB). The GUI and some of the library routines call low-level NetCDF file I/O, variable, and attribute functions. These NetCDF-exclusive functions are supported by a MATLAB toolbox named NetCDF, created by Dr. Charles Denham. This toolbox has to be installed in order to use the CMGTooL GUI. The CMGTooL GUI calls several routines that were initially developed by others. The authors would like to acknowledge the following scientists for their ideas and codes: Dr. Rich Signell (USGS), Dr. Chris Sherwood (USGS), and Dr. Bob Beardsley (WHOI). Many terms that carry special meanings in either MATLAB or the NetCDF Toolbox are used in this manual. Users are encouraged to read the MATLAB and NetCDF documentation for reference.
NASA Astrophysics Data System (ADS)
Nole, Gabriele; Scorza, Francesco; Lanorte, Antonio; Manzi, Teresa; Lasaponara, Rosa
2015-04-01
This paper aims to present the development of a tool to integrate time series from active and passive satellite sensors (such as MODIS, Vegetation, Landsat, ASTER, COSMO, Sentinel) into a virtual laboratory to support studies on landscapes and archaeological landscapes, investigation of environmental changes, and estimation and monitoring of natural and anthropogenic risks. The virtual laboratory is composed of both data and open source tools specifically developed for the above-mentioned applications. Results obtained for investigations carried out using the implemented tools for monitoring land degradation issues and subtle changes ongoing in forestry and natural areas are herein presented. In detail, MODIS, SPOT Vegetation, and Landsat time series were analyzed by comparing the results of different statistical analyses, and the results were integrated with ancillary data and evaluated with field surveys. The comparison of the outputs we obtained for the Basilicata Region from satellite data analyses and independent data sets clearly pointed out the reliability of the diverse change analyses we performed, at the pixel level, using MODIS, SPOT Vegetation, and Landsat TM data. Next steps will further advance the current Virtual Laboratory tools by extending current facilities, adding new computational algorithms, and applying them to other geographic regions. Acknowledgement: This research was performed within the framework of the project PO FESR Basilicata 2007/2013 - Progetto di cooperazione internazionale MITRA "Remote Sensing tecnologies for Natural and Cultural heritage Degradation Monitoring for Preservation and valorization" funded by the Basilicata Region.
Photospheric Current Spikes as Possible Predictors of Flares
NASA Technical Reports Server (NTRS)
Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.
2016-01-01
Flares involve generation of the largest current densities in the solar atmosphere. This suggests the hypothesis that prior to a large (M,X) flare there are related time dependent changes in the photospheric current distribution, and hence in the resistive heating rate in neutral line regions (NLRs). If this is true, these changes might be useful predictors of flares. Preliminary evidence supporting this hypothesis is presented. Results from a data driven, near photospheric, 3D magnetohydrodynamic type model suggest the model might be useful for predicting M and X flares several hours to several days in advance. The model takes as input the photospheric magnetic field observed by the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) satellite. The model computes quantities in every active region (AR) pixel for 14 ARs, with spurious Doppler periods due to SDO orbital motion filtered out of the time series of the magnetic field for each pixel. Spikes in the NLR resistive heating rate Q, appearing as increases by orders of magnitude above background values in the time series of Q, are found to occur, and appear to be correlated with the occurrence of M or X flares a few hours to a few days later. The subset of spikes analyzed at the pixel level are found to occur on HMI and granulation scales of 1 arcsec and 12 minutes. Spikes are found in NLRs with and without M or X flares, and outside as well as inside NLRs, but the largest spikes are localized in the NLRs of ARs with M or X flares, and associated with horizontal magnetic field strengths of several hG, and vertical magnetic field strengths several orders of magnitude smaller. The spikes may be signatures of horizontal current sheets associated with emerging magnetic flux.
NASA Astrophysics Data System (ADS)
Kis, A.; Lemperger, I.; Wesztergom, V.; Menvielle, M.; Szalai, S.; Novák, A.; Hada, T.; Matsukiyo, S.; Lethy, A. M.
2016-12-01
The magnetotelluric method is widely applied for investigation of subsurface structures by imaging the spatial distribution of electric conductivity. The method is based on the experimental determination of the surface electromagnetic impedance tensor (Z) from surface geomagnetic and telluric recordings in two perpendicular orientations. In practical exploration, accurate estimation of Z necessitates the application of robust statistical methods for two reasons: 1) the geomagnetic and telluric time series are contaminated by man-made noise components, and 2) the non-homogeneous behavior of ionospheric current systems in the period range of interest (ELF-ULF and longer periods) results in systematic deviation of the impedance of individual time windows. Robust statistics handle both effects for the purpose of subsurface investigation. However, accurate analysis of the long-term temporal variation of the first and second statistical moments of Z may provide valuable information about the characteristics of the ionospheric source current systems. Temporal variation of the extent, spatial variability, and orientation of the ionospheric source currents has specific effects on the surface impedance tensor. The twenty-year-long geomagnetic and telluric recordings of the Nagycenk Geophysical Observatory provide a unique opportunity to reconstruct the so-called magnetotelluric source effect and obtain information about the spatial and temporal behavior of ionospheric source currents at mid-latitudes. A detailed investigation of the time series of the surface electromagnetic impedance tensor has been carried out in different frequency classes of the ULF range. The presentation aims to provide a brief review of our results on long-term periodic modulations, up to the solar cycle scale, and on eventual deviations of the electromagnetic impedance and thus of the reconstructed equivalent ionospheric source effects.
Duality between Time Series and Networks
Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.
2011-01-01
Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093
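The map described in the abstract above can be sketched compactly. In the version below (an illustration of the idea, not the paper's exact construction; the bin count `q`, the function names, and the random-walk inverse are our own simplifications), nodes are quantile bins of the series values, edge weights are one-step transition frequencies, and the approximate inverse is a weighted random walk that emits bin indices:

```python
import random
from bisect import bisect_right

def series_to_network(x, q=4):
    """Map a time series to a weighted directed network: nodes are
    quantile bins of the values, and edge (i, j) carries the relative
    frequency of a one-step transition from bin i to bin j."""
    xs = sorted(x)
    edges = [xs[len(xs) * k // q] for k in range(1, q)]  # bin boundaries
    labels = [bisect_right(edges, v) for v in x]         # bin per sample
    counts = [[0] * q for _ in range(q)]
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    w = []
    for row in counts:
        total = sum(row)
        w.append([c / total for c in row] if total else [1.0 / q] * q)
    return w

def network_to_series(w, n, seed=0):
    """Approximate inverse: a weighted random walk on the network,
    emitting the visited bin index at each step."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(n):
        out.append(state)
        state = rng.choices(range(len(w)), weights=w[state])[0]
    return out
```

On a strictly periodic series the transition matrix is deterministic, so the random walk reproduces the original bin sequence exactly; for noisier series only the distributional structure survives the round trip, which is the paper's point about approximate invertibility.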
Porous silicon formation and etching process for use in silicon micromachining
Guilinger, Terry R.; Kelly, Michael J.; Martin, Jr., Samuel B.; Stevenson, Joel O.; Tsao, Sylvia S.
1991-01-01
A reproducible process for uniformly etching silicon from a series of micromechanical structures used in electrical devices and the like includes providing a micromechanical structure having a silicon layer with defined areas for removal thereon and an electrochemical cell containing an aqueous hydrofluoric acid electrolyte. The micromechanical structure is submerged in the electrochemical cell and the defined areas of the silicon layer thereon are anodically biased by passing a current through the electrochemical cell for a time period sufficient to cause the defined areas of the silicon layer to become porous. The depth of the porous silicon formation is regulated by controlling the amount of current passing through the electrochemical cell. The micromechanical structure is then removed from the electrochemical cell and submerged in a hydroxide solution to remove the porous silicon. The process is subsequently repeated for each of the series of micromechanical structures to achieve a reproducibility better than 0.3%.
Saturating time-delay transformer for overcurrent protection. [Patent application]
Praeg, W.F.
1975-12-18
Electrical loads connected to dc supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.
ERIC Educational Resources Information Center
Harrington, Charles
Focusing on the time period since the 1974 Supreme Court Lau v. Nichols decision, this paper reviews Federal policy regarding bilingual education, discusses the current sociopolitical context of bilingual education, discusses evaluations of bilingual programming done to date, and examines the implications of these factors for schools and…
High Resolution Time Series Observations of Bio-Optical and Physical Variability in the Arabian Sea
1998-09-30
1995-October 20, 1995). Multi-variable moored systems (MVMS) were deployed by our group at 35 and 80 m. The MVMS utilizes a VMCM to measure currents ... similar to that of the UCSB MVMSs. WORK COMPLETED: Our MVMS interdisciplinary systems, with sampling intervals of a few minutes, were placed on a mooring
Saturating time-delay transformer for overcurrent protection
Praeg, Walter F.
1977-01-01
Electrical loads connected to d-c supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.
Widening Access in Higher Education in Zimbabwe
ERIC Educational Resources Information Center
Kariwo, Michael Tonderai
2007-01-01
Higher education in Zimbabwe is undergoing changes mainly because of the rapid expansion that started in 1999. The current situation is that higher education is going through a series of crises due to the fact that government subventions are diminishing in real terms as a result of the decline in economic growth, yet at the same time, student…
"Key Concepts in ELT": Taking Stock
ERIC Educational Resources Information Center
Hall, Graham
2012-01-01
This article identifies patterns and trends within "Key Concepts in ELT", both since the inception of the feature in ELT Journal in 1993 and during the 17 years of the current editorship. After outlining the aims of the series, the article identifies key themes that have emerged over time, exploring the links between "Key Concepts" pieces and the…
Visual Analysis among Novices: Training and Trend Lines as Graphic Aids
ERIC Educational Resources Information Center
Nelson, Peter M.; Van Norman, Ethan R.; Christ, Theodore J.
2017-01-01
The current study evaluated the degree to which novice visual analysts could discern trends in simulated time-series data across differing levels of variability and extreme values. Forty-five novice visual analysts were trained in general principles of visual analysis. One group received brief training on how to identify and omit extreme values.…
Robert E. Keane; Lisa M. Holsinger; Russell A. Parsons; Kathy Gray
2008-01-01
Quantifying the historical range and variability of landscape composition and structure using simulation modeling is becoming an important means of assessing current landscape condition and prioritizing landscapes for ecosystem restoration. However, most simulated time series are generated using static climate conditions which fail to account for the predicted major...
Cointegration and Nonstationarity in the Context of Multiresolution Analysis
NASA Astrophysics Data System (ADS)
Worden, K.; Cross, E. J.; Kyprianou, A.
2011-07-01
Cointegration has established itself as a powerful means of projecting out long-term trends from time-series data in the context of econometrics. Recent work by the current authors has further established that cointegration can be applied profitably in the context of structural health monitoring (SHM), where it is desirable to project out the effects of environmental and operational variations from data in order that they do not generate false positives in diagnostic tests. The concept of cointegration is partly built on a clear understanding of the ideas of stationarity and nonstationarity for time-series. Nonstationarity in this context is 'traditionally' established through the use of statistical tests, e.g. the hypothesis test based on the augmented Dickey-Fuller statistic. However, it is important to understand the distinction in this case between 'trend' stationarity and stationarity of the AR models typically fitted as part of the analysis process. The current paper will discuss this distinction in the context of SHM data and will extend the discussion by the introduction of multi-resolution (discrete wavelet) analysis as a means of characterising the time-scales on which nonstationarity manifests itself. The discussion will be based on synthetic data and also on experimental data for the guided-wave SHM of a composite plate.
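The stationarity testing referred to in the abstract above can be made concrete with a bare-bones Dickey-Fuller regression. The sketch below omits the lag augmentation of the augmented test (a real SHM analysis would use a library implementation such as statsmodels' `adfuller`); it fits the regression dx[t] = a + rho*x[t-1] + e[t] and returns the t-statistic of the rho estimate:

```python
def df_stat(x):
    """t-statistic of rho in the (non-augmented) Dickey-Fuller
    regression  dx[t] = a + rho * x[t-1] + e[t].
    Strongly negative values argue against a unit root (i.e. for
    stationarity); values near zero are consistent with a random walk."""
    y = [x[t] - x[t - 1] for t in range(1, len(x))]   # first differences
    z = x[:-1]                                        # lagged levels
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    szz = sum((v - mz) ** 2 for v in z)
    szy = sum((u - mz) * (v - my) for u, v in zip(z, y))
    rho = szy / szz                                   # OLS slope
    a = my - rho * mz                                 # OLS intercept
    resid = [v - a - rho * u for u, v in zip(z, y)]
    s2 = sum(r * r for r in resid) / (n - 2)          # residual variance
    return rho / (s2 / szz) ** 0.5
```

Note that the statistic must be compared against the Dickey-Fuller distribution, not the usual t-tables, and that this is exactly the distinction the abstract draws between trend stationarity and stationarity of a fitted AR model.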
VCSELs for datacom applications
NASA Astrophysics Data System (ADS)
Wipiejewski, Torsten; Wolf, Hans-Dieter; Korte, Lutz; Huber, Wolfgang; Kristen, Guenter; Hoyler, Charlotte; Hedrich, Harald; Kleinbub, Oliver; Albrecht, Tony; Mueller, Juergen; Orth, Andreas; Spika, Zeljko; Lutgen, Stephan; Pflaeging, Hartwig; Harrasser, Joerg; Droegemueller, Karsten; Plickert, Volker; Kuhl, Detlef; Blank, Juergen; Pietsch, Doris; Stange, Herwig; Karstensen, Holger
1999-04-01
The use of oxide-confined VCSELs in datacom applications is demonstrated. The devices exhibit low threshold currents of approximately 3 mA and a low electrical series resistance of about 50 Ω. The emission wavelength is in the 850 nm range. Lifetimes of the devices are several million hours under normal operating conditions. VCSEL arrays are employed in a high-performance parallel optical link called PAROLI(TM). This optical link provides 12 parallel channels with a total bandwidth exceeding 12 Gbit/s. The VCSELs optimized for the parallel optical link show excellent threshold current uniformity between channels of < 50 µA. The array lifetime drops compared to a single device, but is still larger than 1 million hours.
Solar panel acceptance testing using a pulsed solar simulator
NASA Technical Reports Server (NTRS)
Hershey, T. L.
1977-01-01
Utilizing specific parameters such as the area of an individual cell, the number of cells in series and parallel, and established coefficients of current and voltage temperature dependence, a solar array irradiated with one solar constant at AM0 and at ambient temperature can be characterized by a current-voltage curve for different intensities, temperatures, and even different configurations. Calibration techniques include: uniformity in area, depth, and time; absolute and transfer irradiance standards; and dynamic and functional checkout procedures. Typical data are given for items ranging from an individual cell (2x2 cm) to a complete flat solar array (5x5 feet) with 2660 cells, and for cylindrical test items with up to 10,000 cells. The time and energy savings of such testing techniques are emphasized.
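The characterization described above rests on translating a measured I-V curve to other irradiances and temperatures via the established temperature coefficients. A minimal first-order sketch of that translation (the linear model and parameter names are illustrative, not taken from the report; standardized procedures such as IEC 60891 add series-resistance and curve-correction terms):

```python
def translate_iv(points, g_ratio, d_temp, alpha, beta):
    """Shift each (current, voltage) point of a measured I-V curve to a
    new irradiance ratio `g_ratio` (target/reference) and temperature
    offset `d_temp` (kelvin), using linear temperature coefficients
    alpha (A/K, current) and beta (V/K, voltage)."""
    return [(i * g_ratio + alpha * d_temp, v + beta * d_temp)
            for i, v in points]
```

For example, halving the intensity halves every current value, while a 10 K temperature rise shifts each voltage by 10*beta, which is how one measured curve characterizes the array across conditions.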
A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic
Qi, Jin-Peng; Qi, Jie; Zhang, Qing
2016-01-01
Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364
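As a baseline for what BSTKS accelerates, the KS-based change-point search can be written directly: scan every split of the series and keep the split that maximizes the two-sample KS distance. This brute-force version is O(n^2 log n), which is precisely the cost the tree/wavelet scheme in the paper is designed to avoid (a sketch of the statistic being optimized, not the paper's algorithm):

```python
from bisect import bisect_right

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples."""
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    for v in sa + sb:
        fa = bisect_right(sa, v) / len(sa)
        fb = bisect_right(sb, v) / len(sb)
        d = max(d, abs(fa - fb))
    return d

def change_point(x, min_seg=5):
    """Brute-force change-point search: return the split index that
    maximizes the KS distance between the two sides, plus that
    distance. Segments shorter than `min_seg` are skipped."""
    best_d, best_k = -1.0, None
    for k in range(min_seg, len(x) - min_seg):
        d = ks_stat(x[:k], x[k:])
        if d > best_d:
            best_d, best_k = d, k
    return best_k, best_d
```

On a series with a single clean level shift the maximizing split sits exactly at the shift and the KS distance reaches 1.0.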
Equalizer system and method for series connected energy storing devices
Rouillard, Jean; Comte, Christophe; Hagen, Ronald A.; Knudson, Orlin B.; Morin, Andre; Ross, Guy
1999-01-01
An apparatus and method for regulating the charge voltage of a number of electrochemical cells connected in series is disclosed. Equalization circuitry is provided to control the amount of charge current supplied to individual electrochemical cells included within the series string of electrochemical cells without interrupting the flow of charge current through the series string. The equalization circuitry balances the potential of each of the electrochemical cells to within a pre-determined voltage setpoint tolerance during charging, and, if necessary, prior to initiating charging. Equalization of cell potentials may be effected toward the end of a charge cycle or throughout the charge cycle. Overcharge protection is also provided for each of the electrochemical cells coupled to the series connection. During a discharge mode of operation in accordance with one embodiment, the equalization circuitry is substantially non-conductive with respect to the flow of discharge current from the series string of electrochemical cells. In accordance with another embodiment, equalization of the series string of cells is effected during a discharge cycle.
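The charge-balancing behaviour described above can be illustrated with a toy simulation: cells modelled as ideal capacitors in a series string, with an equalizer that bypasses the charge current around any cell more than a setpoint tolerance above the weakest cell. All component values here are illustrative, and a real equalizer dissipates or redistributes the shunted current rather than simply skipping the cell:

```python
def charge_with_equalizer(v0, cap=100.0, i_chg=1.0, tol=0.05,
                          v_max=4.1, dt=1.0):
    """Constant-current charging of a series string with shunt-based
    equalization.  Cells are ideal capacitors (dV = I*dt/C); a cell
    more than `tol` volts above the weakest cell has its charge
    current bypassed until the others catch up."""
    v = list(v0)
    for _ in range(1_000_000):
        if min(v) >= v_max:
            break                      # string fully charged
        lo = min(v)
        for k in range(len(v)):
            if v[k] - lo <= tol:       # within tolerance: charge
                v[k] += i_chg * dt / cap
            # else: equalizer shunts the current past this cell
    return v
```

Starting from unequal cell voltages, the strongest cells hold while the weakest catches up, so the string finishes charging with every cell inside the voltage-setpoint tolerance band.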
ERIC Educational Resources Information Center
Bureau of Naval Personnel, Washington, DC.
The module covers series circuits which contain both resistive and reactive components and methods of solving these circuits for current, voltage, impedance, and phase angle. The module is divided into six lessons: voltage and impedance in AC (alternating current) series circuits, vector computations, rectangular and polar notation, variational…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, E.R.C. da; Filho, B.J.C.
This paper presents a PWM current clamping circuit for improving a series resonant DC link converter. This circuit is capable of reducing current peaks to about 1.2-1.4 times the DC bias current. When desired, resonant transition creates notches in the DC link current, allowing the converter's switches to synchronize with an external PWM strategy. A regulated DC current source may be obtained, by using a conventional rectifier source, to feed a DC load or a current source inverter. A phase-plane approach eases the understanding of the operation, control, and design procedure of the circuit. Another topology is derived and its features compared to the first circuit. Simulation results for the simplified circuit and for a three-phase induction motor driven by such an inverter are presented. Moreover, the principle is corroborated by experimental results.
Optimal joule heating of the subsurface
Berryman, James G.; Daily, William D.
1994-01-01
A method for simultaneously heating the subsurface and imaging the effects of the heating. This method combines the use of tomographic imaging (electrical resistance tomography or ERT) to image the electrical resistivity distribution underground with joule heating by electrical currents injected in the ground. A potential distribution is established on a series of buried electrodes, resulting in energy deposition underground that is a function of the resistivity and injection current density. Measurement of the voltages and currents also permits a tomographic reconstruction of the resistivity distribution. Using this tomographic information, the current injection pattern on the driving electrodes can be adjusted to change the current density distribution and thus optimize the heating. As the heating changes conditions, the applied current pattern can be repeatedly adjusted (based on updated resistivity tomographs) to effect real-time control of the heating.
Analysis of a Series of Electromagnetic Launcher Firings,
1987-06-01
[Contents fragments] 5.2 Current-Time Analysis and Inductance - Charging Energy Loss; 5.3 Streak Photographs; 5.4 Muzzle Voltage Records; 5.5 Position-Time ... [Equation fragments] ... (16), where ω equals (L0·C)^(-1/2). If Rtot(t - u/20)/2L0 << 1, then Equation (16) becomes ωt = arcsin(I/Ip) (17). Thus a plot of arcsin(I/Ip) against ... each mm of the streak photograph corresponds to about 4.0 µs of actual time, while each mm in the horizontal direction corresponds to an actual length of about 17 mm. The region
Evidence for a physical linkage between galactic cosmic rays and regional climate time series
Perry, C.A.
2007-01-01
The effects of solar variability on regional climate time series were examined using a sequence of physical connections between total solar irradiance (TSI) modulated by galactic cosmic rays (GCRs), and ocean and atmospheric patterns that affect precipitation and streamflow. The solar energy reaching the Earth's surface and its oceans is thought to be controlled through an interaction between TSI and GCRs, which are theorized to ionize the atmosphere and increase cloud formation and its resultant albedo. High (low) GCR flux may promote cloudiness (clear skies) and higher (lower) albedo at the same time that TSI is lowest (highest) in the solar cycle which in turn creates cooler (warmer) ocean temperature anomalies. These anomalies have been shown to affect atmospheric flow patterns and ultimately affect precipitation over the Midwestern United States. This investigation identified a relation among TSI and geomagnetic index aa (GI-AA), and streamflow in the Mississippi River Basin for the period 1878-2004. The GI-AA was used as a proxy for GCRs. The lag time between the solar signal and streamflow in the Mississippi River at St. Louis, Missouri is approximately 34 years. The current drought (1999-2007) in the Mississippi River Basin appears to be caused by a period of lower solar activity that occurred between 1963 and 1977. There appears to be a solar "fingerprint" that can be detected in climatic time series in other regions of the world, with each series having a unique lag time between the solar signal and the hydroclimatic response. A progression of increasing lag times can be spatially linked to the ocean conveyor belt, which may transport the solar signal over a time span of several decades. The lag times for any one region vary slightly and may be linked to the fluctuations in the velocity of the ocean conveyor belt.
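The lag identification described above (a solar signal appearing in streamflow roughly 34 years later) boils down to lagged cross-correlation between a solar-activity series and a hydroclimatic series. A sketch of that statistic (illustrative only; the paper's analysis also involves proxies, smoothing, and physical reasoning about the ocean conveyor belt):

```python
def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    a, b = x[:len(x) - lag], y[lag:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    ca = [v - ma for v in a]
    cb = [v - mb for v in b]
    num = sum(p * q for p, q in zip(ca, cb))
    den = (sum(p * p for p in ca) * sum(q * q for q in cb)) ** 0.5
    return num / den

def best_lag(x, y, max_lag):
    """Lag in [0, max_lag] with the highest correlation: a crude way
    to estimate the delay between a forcing and a response series."""
    return max(range(max_lag + 1), key=lambda lag: lagged_corr(x, y, lag))
```

Feeding in a forcing series and a delayed copy of it recovers the delay, which is the same logic used to attribute a regional drought to a solar minimum decades earlier.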
Analysis of series resonant converter with series-parallel connection
NASA Astrophysics Data System (ADS)
Lin, Bor-Ren; Huang, Chien-Lan
2011-02-01
In this study, a parallel inductor-inductor-capacitor (LLC) resonant converter, series-connected on the primary side and parallel-connected on the secondary side, is presented for server power supply systems. Based on series resonant behaviour, the power metal-oxide-semiconductor field-effect transistors are turned on at zero-voltage switching and the rectifier diodes are turned off at zero-current switching. Thus, the switching losses on the power semiconductors are reduced. In the proposed converter, the primary windings of the two LLC converters are connected in series. Thus, the two converters carry the same primary current, ensuring that they supply a balanced load current. On the output side, the two LLC converters are connected in parallel to share the load current and to reduce the current stress on the secondary windings and the rectifier diodes. In this article, the principle of operation, steady-state analysis and design considerations of the proposed converter are provided and discussed. Experiments with a laboratory prototype with a 24 V/21 A output for a server power supply were performed to verify the effectiveness of the proposed converter.
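The zero-voltage and zero-current switching described above hinge on operating the converter relative to the resonant frequency of the series tank. A minimal calculation of that frequency (the component values in the test are arbitrary examples, not the prototype's):

```python
import math

def series_resonant_freq(l_r, c_r):
    """Series resonant frequency of an L-C tank:
    f_r = 1 / (2*pi*sqrt(Lr*Cr))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_r * c_r))
```

Operating the bridge above f_r keeps the tank input inductive, which is what lets the MOSFETs turn on at zero voltage; near f_r, the rectifier diode current rings to zero, giving the zero-current turn-off.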
Detection of a sudden change of the field time series based on the Lorenz system
Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan
2017-01-01
We conducted an exploratory study of the detection of a sudden change of a field time series based on the numerical solution of the Lorenz system. First, the times when the Lorenz path jumped between the regions to the left and right of the equilibrium point of the Lorenz system were quantitatively marked, and the sudden change times of the Lorenz system were obtained. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered a vector time series. We transformed the vector time series into a scalar time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden changes of the resulting time series were detected using the sliding t-test method. Comparing the test results with the quantitatively marked times indicated that the method could detect every sudden change of the Lorenz path; thus, the method is effective. Finally, we used the method to detect sudden changes of a pressure field time series and a temperature field time series, and obtained good results for both, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between a field time series and a vector time series; thus, we provide a new method for the detection of a sudden change of a field time series. PMID:28141832
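The first two steps of the abstract above are easy to reproduce: integrate the Lorenz system, mark the times when the path jumps between the two wings (a sign flip of the x coordinate), and reduce the vector series to a scalar series via an inner product. The integrator, step size, and the particular inner-product reduction below are our illustrative choices, not necessarily the paper's:

```python
def lorenz_path(n, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz system with a simple RK4 stepper,
    returning n states (x, y, z)."""
    def f(v):
        x, y, z = v
        return (s * (y - x), x * (r - z) - y, x * y - b * z)
    v = (1.0, 1.0, 1.0)
    path = []
    for _ in range(n):
        k1 = f(v)
        k2 = f(tuple(a + dt / 2 * k for a, k in zip(v, k1)))
        k3 = f(tuple(a + dt / 2 * k for a, k in zip(v, k2)))
        k4 = f(tuple(a + dt * k for a, k in zip(v, k3)))
        v = tuple(a + dt / 6 * (p + 2 * q + 2 * u + w)
                  for a, p, q, u, w in zip(v, k1, k2, k3, k4))
        path.append(v)
    return path

def wing_jumps(path):
    """Steps at which the path crosses between the two wings, i.e.
    the sign of x flips: the quantitative 'sudden change' marker."""
    return [t for t in range(1, len(path))
            if path[t][0] * path[t - 1][0] < 0]

def scalarize(path):
    """Vector series -> scalar series via the inner product of
    consecutive states (one of several possible reductions)."""
    return [sum(a * b for a, b in zip(u, v))
            for u, v in zip(path, path[1:])]
```

The scalar series produced this way can then be fed to any univariate detector (the paper uses a sliding t-test) and the detections compared against the marked wing-jump times.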
Droege, T.F.
1989-12-19
A high voltage DC power supply having a first series resistor at the output for limiting current in the event of a short-circuited output, a second series resistor for sensing the magnitude of output current, and a voltage divider circuit for providing a source of feedback voltage for use in voltage regulation is disclosed. The voltage divider circuit is coupled to the second series resistor so as to compensate the feedback voltage for a voltage drop across the first series resistor. The power supply also includes a pulse-width modulated control circuit, having dual clock signals, which is responsive to both the feedback voltage and a command voltage, and also includes voltage and current measuring circuits responsive to the feedback voltage and the voltage developed across the second series resistor respectively. 7 figs.
Revision of Primary Series Maps
,
2000-01-01
In 1992, the U.S. Geological Survey (USGS) completed a 50-year effort to provide primary series map coverage of the United States. Many of these maps now need to be updated to reflect the construction of new roads and highways and other changes that have taken place over time. The USGS has formulated a graphic revision plan to help keep the primary series maps current. Primary series maps include 1:20,000-scale quadrangles of Puerto Rico, 1:24,000- or 1:25,000-scale quadrangles of the conterminous United States, Hawaii, and U.S. Territories, and 1:63,360-scale quadrangles of Alaska. The revision of primary series maps from new collection sources is accomplished using a variety of processes. The raster revision process combines the scanned content of paper maps with raster updating technologies. The vector revision process involves the automated plotting of updated vector files. Traditional processes use analog stereoplotters and manual scribing instruments on specially coated map separates. The ability to select from or combine these processes increases the efficiency of the National Mapping Division map revision program.
NASA Astrophysics Data System (ADS)
Mathevet, T.; Kuentz, A.; Gailhard, J.; Andreassian, V.
2013-12-01
Improving the understanding of the hydrological variability of mountain watersheds is a major scientific issue, both for researchers and for water resources managers such as Electricite de France (Energy and Hydropower Company). The past and current context of climate variability enhances the interest in this topic, since multi-purpose water resources management is highly sensitive to this variability. The Durance River watershed (14000 km2), situated in the French Alps, is a good example of the complexity of this issue. It is characterized by a variety of hydrological processes (from snowy to Mediterranean regimes) and a wide range of anthropogenic influences (hydropower, irrigation, flood control, tourism and water supply), mixing potential causes of changes in its hydrological regimes. As water-related stakes are numerous in this watershed, improving knowledge of the hydrological variability of the Durance River appears to be essential. In this presentation, we focus on a methodology we developed to build long-term historical hydrometeorological time series, based on atmospheric reanalysis (20CR: 20th Century Reanalysis) and historical local observations. This methodology allowed us to generate precipitation, air temperature and streamflow time series at a daily time step for a sample of 22 watersheds, for the 1883-2010 period. These long-term streamflow reconstructions have been validated thanks to historical searches that brought to light ten long historical series of daily streamflows beginning in the early 20th century. The reconstructions appear to have rather good statistical properties, with good correlation (greater than 0.8) and limited mean and variance bias (less than 5%). These long-term hydrometeorological time series then allowed us to characterize past variability in terms of available water resources, droughts and hydrological regime.
These analyses help water resources managers to better know the range of hydrological variabilities, which are usually greatly underestimated with classical available time-series (less than 50 years).
Microforms in gravel bed rivers: Formation, disintegration, and effects on bedload transport
Strom, K.; Papanicolaou, A.N.; Evangelopoulos, N.; Odeh, M.
2004-01-01
This research aims to advance current knowledge of cluster formation and evolution by tackling some of the aspects associated with cluster microtopography and the effects of clusters on bedload transport. The specific objectives of the study are (1) to identify the bed shear stress range in which clusters form and disintegrate, (2) to quantitatively describe the spacing characteristics and orientation of clusters with respect to flow characteristics, (3) to quantify the effects clusters have on the mean bedload rate, and (4) to assess the effects of clusters on the pulsating nature of bedload. In order to meet the objectives of this study, two main experimental scenarios, namely, Test Series A and B (20 experiments overall), are considered in a laboratory flume under well-controlled conditions. Series A tests are performed to address objectives (1) and (2), while Series B is designed to meet objectives (3) and (4). Results show that cluster microforms develop in uniform sediment at 1.25 to 2 times the Shields parameter of an individual particle and start disintegrating at about 2.25 times the Shields parameter. It is found that during an unsteady flow event, the effects of clusters on bedload transport rate can be classified into three different phases: a sink phase, where clusters absorb incoming sediment; a neutral phase, where clusters do not affect bedload; and a source phase, where clusters release particles. Clusters also increase the magnitude of the fluctuations in bedload transport rate, showing that clusters amplify the unsteady nature of bedload transport. A fourth-order autoregressive integrated moving average (ARIMA) model is employed to describe the bedload time series and to provide a formula for predicting bedload at different periods.
Finally, a change-point analysis enhanced with a binary segmentation procedure is performed to identify the abrupt changes in the bedload statistical characteristics due to the effects of clusters and to detect the different phases in the bedload time series using probability theory. The analysis verifies the experimental finding that three phases are present in the structure of the bedload rate time series, namely, sink, neutral, and source. © ASCE, June 2004.
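The autoregressive modelling mentioned above can be sketched with a direct least-squares AR(p) fit: build the lagged design matrix, solve the normal equations with naive Gaussian elimination. This is only the AR core of the paper's model (a full ARIMA fit additionally involves differencing and moving-average terms, for which a statistics library would be used), and all names here are our own:

```python
def fit_ar(x, p):
    """Least-squares fit of an AR(p) model
    x[t] = c + a1*x[t-1] + ... + ap*x[t-p],
    returning [c, a1, ..., ap]."""
    rows = [[1.0] + [x[t - k] for k in range(1, p + 1)]
            for t in range(p, len(x))]
    ys = x[p:]
    m = p + 1
    # Normal equations: (A^T A) coef = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)]
           for i in range(m)]
    aty = [sum(r[i] * v for r, v in zip(rows, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c2 in range(col, m):
                ata[r][c2] -= f * ata[col][c2]
            aty[r] -= f * aty[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (aty[r] - sum(ata[r][c2] * coef[c2]
                                for c2 in range(r + 1, m))) / ata[r][r]
    return coef
```

Fitting such a model to a simulated AR series recovers the generating coefficients to within sampling error, which is the sanity check one would run before trusting it on a bedload record.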
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Tanaka, K.; Almeida, E. G.
1978-01-01
The author has identified the following significant results. Data obtained during the cruise of the Cabo Frio and from LANDSAT imagery are used to discuss the characteristics of a linear model which simulates wind-induced currents calculated from meteorological conditions at the time of the mission. There is a significant correspondence between the model of simulated horizontal water circulation, sea surface temperature, and surface currents observed on LANDSAT imagery. Close agreement was also observed between the simulation of vertical water movement (upwelling) and the oceanographic measurements taken along a series of points of the prevailing currents.
Multiple Indicator Stationary Time Series Models.
ERIC Educational Resources Information Center
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
Individualistic and Time-Varying Tree-Ring Growth to Climate Sensitivity
Carrer, Marco
2011-01-01
The development of dendrochronological time series in order to analyze climate-growth relationships usually involves first a rigorous selection of trees and then the computation of the mean tree-growth measurement series. This study suggests a change in the perspective, passing from an analysis of climate-growth relationships that typically focuses on the mean response of a species to investigating the whole range of individual responses among sample trees. Results highlight that this new approach, tested on a larch and stone pine tree-ring dataset, outperforms, in terms of information obtained, the classical one, with significant improvements regarding the strength, distribution and time-variability of the individual tree-ring growth response to climate. Moreover, a significant change over time of the tree sensitivity to climatic variability has been detected. Accordingly, the best-responder trees at any one time may not always have been the best-responders and may not continue to be so. With minor adjustments to current dendroecological protocol and adopting an individualistic approach, we can improve the quality and reliability of the ecological inferences derived from the climate-growth relationships. PMID:21829523
IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR
NASA Technical Reports Server (NTRS)
Mish, W. H.
1994-01-01
The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially-generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and Eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. 
The program requires the use of Versaplot and Template plotting routines and IMSL Math/Library routines. These software packages are not included in IDSP. The virtual memory requirement for the program is approximately 2.36 MB. The IDSP system was developed in 1982 and was last updated in 1986. Versaplot is a registered trademark of Versatec Inc. Template is a registered trademark of Template Graphics Software Inc. IMSL Math/Library is a registered trademark of IMSL Inc.
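The operator-history bookkeeping described above can be sketched as follows. This is a hypothetical Python illustration of the idea only; IDSP itself is VAX FORTRAN 77, and its actual file and history formats are not reproduced here. Each derived series simply carries the list of operators that produced it, so the history can be queried at any point in an analysis.

```python
import numpy as np

class Series:
    """A time series carrying its own operator history, loosely in the
    spirit of IDSP's per-file history records (illustrative sketch only)."""

    def __init__(self, data, history=()):
        self.data = np.asarray(data, dtype=float)
        self.history = tuple(history)

    def apply(self, name, func):
        """Apply an operator and record its name in the history."""
        return Series(func(self.data), self.history + (name,))

# Operators can be chained in any sensible order, as in IDSP.
s = Series([3.0, 1.0, 4.0, 1.0, 5.0])
s = s.apply("detrend", lambda d: d - d.mean())
s = s.apply("fft-magnitude", lambda d: np.abs(np.fft.rfft(d)))
print(s.history)  # the full operator history remains queryable
```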
NASA Technical Reports Server (NTRS)
Hamilton, H. B.; Strangas, E.
1980-01-01
The time-dependent solution of the magnetic field is introduced as a method for accounting for the variation in time of the machine parameters when predicting and analyzing the performance of electrical machines. A time-dependent finite element method was used in combination with a likewise time-dependent grid construction for the air gap region. The Maxwell stress tensor was used to calculate the air-gap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper-controlled DC series motor used for electric vehicle drive, and to a salient pole synchronous motor with damper bars. Simulation results were compared to experimentally obtained ones.
Alhaji, Nma B; Odetokun, Ismail A; Shittu, Aminu; Onyango, Joshua; Chafe, Umar M; Abubakar, Muhammed S; Muraina, Issa A; Fasina, Folorunso O; Lee, Hu Suk
2015-12-15
In developing countries, foetal wastage from slaughtered ruminants and the associated economic losses appear to be substantial. However, only a limited number of studies have comprehensively evaluated these trends. In the current study, secondary (retrospective) and primary data were collected and evaluated to estimate the prevalence of foetal wastage from cattle, sheep and goats slaughtered at an abattoir in Minna, Nigeria, over a 12-year period (January 2001-December 2012). Time-series modelling revealed substantial differences in the rate of foetal wastage amongst the slaughtered species, with more lambs having been wasted than calves or kids. Seasonal effects seem to influence rates of foetal wastage and certain months in the year appear to be associated with higher odds of foetal wastage. Improved management systems are suggested to reduce the risk of foetal losses.
Program for the analysis of time series. [by means of fast Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Brown, T. J.; Brown, C. G.; Hardin, J. C.
1974-01-01
A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
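The listed outputs can be illustrated with a short modern equivalent. The sketch below computes an averaged auto power spectrum and a coherence function via the FFT, using Welch-style segment averaging (without averaging over segments, coherence from a single record is identically one). This is a generic illustration, not the original CDC FORTRAN program; the Hann window and segment count are assumptions.

```python
import numpy as np

def spectra(x, y, fs, nseg=8):
    """Averaged auto power spectrum of x and coherence between x and y.

    x, y : equal-length 1-D arrays;  fs : sampling rate in Hz.
    The records are split into nseg segments, detrended (mean removal),
    Hann-windowed, and their FFT spectra averaged.
    """
    n = len(x) // nseg
    w = np.hanning(n)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    Pxx = Pyy = Pxy = 0
    for k in range(nseg):
        seg = slice(k * n, (k + 1) * n)
        X = np.fft.rfft(w * (x[seg] - x[seg].mean()))
        Y = np.fft.rfft(w * (y[seg] - y[seg].mean()))
        Pxx = Pxx + np.abs(X) ** 2          # auto power spectrum terms
        Pyy = Pyy + np.abs(Y) ** 2
        Pxy = Pxy + X * np.conj(Y)          # cross spectrum terms
    coh = np.abs(Pxy) ** 2 / (Pxx * Pyy)    # magnitude-squared coherence
    return f, Pxx, coh
```

For two records sharing a common sinusoid plus independent noise, the power spectrum peaks at the sinusoid frequency and the coherence there approaches one.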
Database Performance Monitoring for the Photovoltaic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Katherine A.
The Database Performance Monitoring (DPM) software (copyright in process) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time-indexed databases (currently CSV format), performs a series of quality control tests defined by the user, and creates reports that include summary statistics, tables, and graphics. DPM can be set up to run on an automated schedule defined by the user. For example, the software can be run once per day to analyze data collected on the previous day. HTML-formatted reports can be sent via email or hosted on a website. To compare the performance of several databases, summary statistics and graphics can be gathered in a dashboard view which links to detailed reporting information for each database. The software can be customized for specific applications.
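The kind of user-defined quality control tests described above can be sketched as a small function over a time-indexed series. The specific checks and names below (gap counting, range test, NaN count) are illustrative assumptions; DPM's actual test suite and report format are not public here.

```python
import numpy as np

def quality_report(t, v, lower, upper, step):
    """Minimal time-series quality-control summary (illustrative sketch).

    t     : timestamps in seconds (monotonic)
    v     : measured values
    lower, upper : valid data range
    step  : expected sampling interval in seconds
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    gaps = np.round(np.diff(t) / step).astype(int) - 1   # missing samples per gap
    finite = ~np.isnan(v)
    return {
        "n": int(len(v)),
        "missing_timestamps": int(gaps[gaps > 0].sum()),
        "out_of_range": int(np.sum((v[finite] < lower) | (v[finite] > upper))),
        "nan_values": int(np.sum(~finite)),
    }
```

A scheduler (e.g. a daily cron job) could run such checks on the previous day's file and render the dictionary into an HTML report, matching the workflow described above.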
Do Current Basal Series Use Clear Explanations and Correct Exemplars in Teaching Prefixes?
ERIC Educational Resources Information Center
Volpe, Myra Elaine
A study (replicating a similar 1977 study by S. Stotsky) examined whether current basal series teach prefixes clearly. Teacher's guides, student texts, and workbooks of nine popular basal reader series were examined to ascertain whether they offered a clear definition of the term "prefix" and whether that definition was reinforced by…
NASA Astrophysics Data System (ADS)
Thiébaut, Maxime; Sentchev, Alexei
2015-04-01
We use current velocity time series recorded by High Frequency Radars (HFR) to study circulation in a highly energetic tidal basin, the Iroise Sea. We focus on the analysis of the tidal current pattern around Ushant Island, a promising site for tidal energy. The analysis reveals surface current speeds reaching 4 m/s north of Ushant Island and in the Fromveur Strait. In these regions 1 m/s is exceeded 60% of the time, and up to 70% of the time in the center of Fromveur. This velocity value is particularly interesting because it represents the cut-in speed of most marine turbine devices. Tidal current asymmetry is not always considered in tidal energy site selection; however, this quantity plays an important role in the quantification of hydrokinetic resources. The current velocity time series recorded by HFR highlight the existence of a pronounced asymmetry in current magnitude between flood and ebb tide, ranging from -0.5 to more than 2.5. Because the power output of free-stream devices depends on the velocity cubed, a small current asymmetry can generate a significant power output asymmetry. The spatial distribution of the asymmetry coefficient shows persistent patterns and fine-scale structure which were quantified with a high degree of accuracy. The particular evolution of the asymmetry on both sides of the Fromveur Strait is related to the spatial distribution of the phase lag of the principal semi-diurnal tidal constituent M2 and its higher-order harmonics. In Fromveur, the asymmetry is reinforced by the high velocity magnitude of the sixth-diurnal tidal harmonics. HF radar provides surface velocity, but the quantification of hydrokinetic resources has to take into account the decrease of velocity with depth. To highlight this phenomenon, we plot several velocity profiles given by an ADCP installed in the HFR study area during the same period.
The mean velocity in the water column calculated from the ADCP data is about 80% of the surface current speed. We use this value in our calculation of power to make the power estimates for marine turbine devices more realistic. Finally, we demonstrate that in regions of opposing flood-dominated versus ebb-dominated asymmetry occurring over a limited spatial scale, it is possible to aggregate free-stream devices to provide balanced power generation over the tidal cycle. Keywords: tidal circulation, current asymmetry, tidal energy, HF radar, Iroise Sea.
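Two quantities from the abstract above lend themselves to a short worked sketch: the kinetic power density, which scales with the cube of a depth-scaled surface speed (using the ~80% ADCP factor reported), and a flood/ebb asymmetry coefficient. The seawater density and the particular asymmetry definition below are assumptions, since the paper's exact formulas are not given here.

```python
import numpy as np

RHO = 1025.0  # seawater density in kg/m^3 (assumed value)

def power_density(surface_speed, depth_factor=0.8):
    """Kinetic power density P = 0.5 * rho * v^3 in W/m^2, with the HFR
    surface speed scaled by the ~80% depth-average factor from the ADCP."""
    v = depth_factor * np.asarray(surface_speed, dtype=float)
    return 0.5 * RHO * v ** 3

def asymmetry(u):
    """Flood/ebb asymmetry of a signed along-channel velocity series,
    here taken as the ratio of mean cubed flood speed to mean cubed ebb
    speed (one common definition; the paper's coefficient may differ)."""
    u = np.asarray(u, dtype=float)
    flood, ebb = u[u > 0], -u[u < 0]
    return np.mean(flood ** 3) / np.mean(ebb ** 3)
```

For a 2.5 m/s surface speed the depth-scaled speed is 2.0 m/s and the power density is 0.5 × 1025 × 2.0³ = 4100 W/m², illustrating why even a modest velocity asymmetry produces a large power asymmetry.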
Imaging ac losses in superconducting films via scanning Hall probe microscopy
NASA Astrophysics Data System (ADS)
Dinner, Rafael B.; Moler, Kathryn A.; Feldmann, D. Matthew; Beasley, M. R.
2007-04-01
Various local probes have been applied to understanding current flow through superconducting films, which are often surprisingly inhomogeneous. Here, we show that magnetic imaging allows quantitative reconstruction of both current density J and electric field E resolved in time and space in a film carrying subcritical ac current. Current reconstruction entails inversion of the Biot-Savart law, while electric fields are reconstructed using Faraday's law. We describe the corresponding numerical procedures, largely adapting existing work to the case of a strip carrying ac current, but including other methods of obtaining the complete electric field from the inductive portion determined by Faraday's law. We also delineate the physical requirements behind the mathematical transformations. We then apply the procedures to images of a strip of YBa2Cu3O7-δ carrying an ac current at 400 Hz. Our scanning Hall probe microscope produces a time series of magnetic images of the strip with 1 μm spatial resolution and 25 μs time resolution. Combining the reconstructed J and E, we obtain a complete characterization including local critical current density, E-J curves, and power losses. This analysis has a range of applications from fundamental studies of vortex dynamics to practical coated conductor development.
Rusu, Cristian; Morisi, Rita; Boschetto, Davide; Dharmakumar, Rohan; Tsaftaris, Sotirios A.
2014-01-01
This paper aims to identify approaches that generate appropriate synthetic data (computer generated) for Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI. CP-BOLD MRI is a new contrast agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences, and on ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest, pixel-level analysis is necessary and thus higher precision in registration is required. Such precision is currently not available, affecting the design and performance of ischemia detection algorithms. In this work, to enable algorithmic developments of ischemia detection irrespective of registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by (a) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and (b) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate development of tools for ischemia detection while markedly reducing experimental costs so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease. PMID:24691119
NASA Astrophysics Data System (ADS)
Koslow, J. A.; Brodeur, R.; Duffy-Anderson, J. T.; Perry, I.; Jimenez Rosenberg, S.; Aceves, G.
2016-02-01
Ichthyoplankton time series available from the Bering Sea, Gulf of Alaska and California Current (Oregon to Baja California) provide a potential ocean observing network to assess climate impacts on fish communities along the west coast of North America. Larval fish abundance reflects spawning stock biomass, so these data sets provide indicators of the status of a broad range of exploited and unexploited fish populations. Analyses to date have focused on individual time series, which generally exhibit significant change in relation to climate. Off California, a suite of 24 midwater fish taxa have declined > 60%, correlated with declining midwater oxygen concentrations, and overall larval fish abundance has declined 72% since 1969, a trend based on the decline of predominantly cool-water affinity taxa in response to warming ocean temperatures. Off Oregon, there were dramatic differences in community structure and abundance of larval fishes between warm and cool ocean conditions. Midwater deoxygenation and warming sea surface temperature trends are predicted to continue as a result of global climate change. US, Canadian, and Mexican fishery scientists are now collaborating in a virtual ocean observing network to synthesize available ichthyoplankton time series and compare patterns of change in relation to climate. This will provide regional indicators of populations and groups of taxa sensitive to warming, deoxygenation and potentially other stressors, establish the relevant scales of coherence among sub-regions and across Large Marine Ecosystems, and provide the basis for predicting future climate change impacts on these ecosystems.
Rusu, Cristian; Morisi, Rita; Boschetto, Davide; Dharmakumar, Rohan; Tsaftaris, Sotirios A
2014-07-01
This paper aims to identify approaches that generate appropriate synthetic data (computer generated) for cardiac phase-resolved blood-oxygen-level-dependent (CP-BOLD) MRI. CP-BOLD MRI is a new contrast agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences, and on ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest, pixel-level analysis is necessary and thus higher precision in registration is required. Such precision is currently not available, affecting the design and performance of ischemia detection algorithms. In this work, to enable algorithmic developments of ischemia detection irrespective of registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by 1) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and 2) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate development of tools for ischemia detection while markedly reducing experimental costs so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease.
Bivariate analysis of floods in climate impact assessments.
Brunner, Manuela Irene; Sikorska, Anna E; Seibert, Jan
2018-03-01
Climate impact studies regarding floods usually focus on peak discharges, and a bivariate assessment of peak discharges and hydrograph volumes is not commonly included. A joint consideration of peak discharges and hydrograph volumes, however, is crucial when assessing flood risks for current and future climate conditions. Here, we present a methodology to develop synthetic design hydrographs for future climate conditions that jointly consider peak discharges and hydrograph volumes. First, change factors are derived based on a regional climate model and are applied to observed precipitation and temperature time series. Second, the modified time series are fed into a calibrated hydrological model to simulate runoff time series for future conditions. Third, these time series are used to construct synthetic design hydrographs. The bivariate flood frequency analysis used in the construction of synthetic design hydrographs takes into account the dependence between peak discharges and hydrograph volumes, and represents the shape of the hydrograph. The latter is modeled using a probability density function, while the dependence between the design variables peak discharge and hydrograph volume is modeled using a copula. We applied this approach to a set of eight mountainous catchments in Switzerland to construct catchment-specific and season-specific design hydrographs for a control and three scenario climates. Our work demonstrates that projected climate changes have an impact not only on peak discharges but also on hydrograph volumes and hydrograph shapes, both at an annual and at a seasonal scale. These changes are not necessarily proportional, which implies that climate impact assessments of future floods should consider more flood characteristics than just flood peaks. Copyright © 2017. Published by Elsevier B.V.
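The copula step above can be illustrated with a short sketch. The abstract does not specify the copula family or parameters, so the example uses a Clayton copula (sampled by conditional inversion) purely for illustration; dependent uniform pairs drawn this way would then be mapped through the fitted marginal quantile functions of peak discharge and hydrograph volume.

```python
import numpy as np

def sample_clayton(n, theta, rng):
    """Draw n pairs (u1, u2) from a Clayton copula via conditional inversion.
    Kendall's tau for the Clayton family is theta / (theta + 2)."""
    u1 = rng.uniform(size=n)
    t = rng.uniform(size=n)   # conditional probability level
    u2 = ((t ** (-theta / (1.0 + theta)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
    return u1, u2

def kendall_tau(x, y):
    """Sample Kendall's tau, O(n^2) pairwise comparison (fine for small n)."""
    x, y = np.asarray(x), np.asarray(y)
    s = 0
    for i in range(len(x)):
        s += np.sum(np.sign(x[i] - x[i + 1:]) * np.sign(y[i] - y[i + 1:]))
    return 2.0 * s / (len(x) * (len(x) - 1))
```

With theta = 2 the theoretical Kendall's tau is 0.5, and the sampler reproduces that dependence empirically; in a design-hydrograph application the copula parameter would instead be fitted to observed (peak, volume) pairs.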
NASA Technical Reports Server (NTRS)
Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.
2017-01-01
A data-driven, near-photospheric, 3-D, non-force-free magnetohydrodynamic model predicts time series of the complete current density and the resistive heating rate Q at the photosphere in neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of the magnetic field B observed by the Helioseismic and Magnetic Imager on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series for B in every AR pixel. Errors in B due to these periods can be significant. The number of occurrences N(q) of values Q ≥ q for each AR time series is found to follow a scale-invariant power-law distribution, N(q) ∝ q^(-s), above an AR-dependent threshold value of Q, where 0.3952 ≤ s ≤ 0.5298 with mean and standard deviation of 0.4678 and 0.0454, indicating little variation between ARs. Observations show that the number of occurrences N(E) of coronal flares with total energy released ≥ E obeys the same type of distribution, N(E) ∝ E^(-S), above an AR-dependent threshold value of E, with 0.38 ≲ S ≲ 0.60, also with little variation among ARs. Within error margins the ranges of s and S are nearly identical. This strong similarity between N(q) and N(E) suggests a fundamental connection between the process that drives coronal flares and the process that drives photospheric NLR heating rates in ARs. In addition, the results suggest it is plausible that spikes in Q, several orders of magnitude above background values, are correlated with the subsequent occurrence of M or X flares.
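The power-law exponent of a tail distribution like the one described above can be estimated directly from a time series of heating rates. The abstract does not state the authors' fitting procedure, so the sketch below uses the standard maximum-likelihood (Hill) estimator for a power-law tail above a threshold as one common choice.

```python
import numpy as np

def fit_powerlaw_tail(q, q_min):
    """Maximum-likelihood (Hill) estimate of s in N(Q >= q) ~ q^(-s)
    for the values q >= q_min. (One standard estimator; the paper's
    actual fitting procedure is not specified in the abstract.)"""
    q = np.asarray(q, dtype=float)
    q = q[q >= q_min]
    return len(q) / np.sum(np.log(q / q_min))
```

Applied to synthetic Pareto-distributed samples with a known exponent in the reported range (e.g. s = 0.47), the estimator recovers the exponent closely for a few thousand samples.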
NASA Technical Reports Server (NTRS)
Savani, N. P.; Vourlidas, A.; Pulkkinen, A.; Nieves-Chinchilla, T.; Lavraud, B.; Owens, M. J.
2013-01-01
We investigate a coronal mass ejection (CME) propagating toward Earth on 29 March 2011. This event is specifically chosen for its predominately northward directed magnetic field, so that the influence of the momentum flux onto Earth can be isolated. We focus our study on understanding how a small Earth-directed segment propagates. Mass images are created from the white-light cameras onboard STEREO, which are also converted into mass height-time maps (mass J-maps). The mass tracks on these J-maps correspond to the sheath region between the CME and its associated shock front as detected by in situ measurements at L1. A time series of mass measurements from the STEREO COR-2A instrument is made along the Earth propagation direction. Qualitatively, this mass time series shows a remarkable resemblance to the L1 in situ density series. The in situ measurements are used as inputs into a three-dimensional (3-D) magnetospheric space weather simulation from the Community Coordinated Modeling Center. These simulations display a sudden compression of the magnetosphere from the large momentum flux at the leading edge of the CME, and predictions are made for the time derivative of the magnetic field (dB/dt) on the ground. The predicted dB/dt values were then compared with the observations from specific equatorially located ground stations and showed notable similarity. This study of the momentum of a CME from the Sun down to its influence on magnetic ground stations on Earth is presented as a preliminary proof of concept, such that future attempts may try to use remote sensing to create density and velocity time series as inputs to magnetospheric simulations.
Keane, Robert E.; Rollins, Matthew; Zhu, Zhi-Liang
2007-01-01
Canopy and surface fuels in many fire-prone forests of the United States have increased over the last 70 years as a result of modern fire exclusion policies, grazing, and other land management activities. The Healthy Forest Restoration Act and National Fire Plan establish a national commitment to reduce fire hazard and restore fire-adapted ecosystems across the USA. The primary index used to prioritize treatment areas across the nation is Fire Regime Condition Class (FRCC) computed as departures of current conditions from the historical fire and landscape conditions. This paper describes a process that uses an extensive set of ecological models to map FRCC from a departure statistic computed from simulated time series of historical landscape composition. This mapping process uses a data-driven, biophysical approach where georeferenced field data, biogeochemical simulation models, and spatial data libraries are integrated using spatial statistical modeling to map environmental gradients that are then used to predict vegetation and fuels characteristics over space. These characteristics are then fed into a landscape fire and succession simulation model to simulate a time series of historical landscape compositions that are then compared to the composition of current landscapes to compute departure, and the FRCC values. Intermediate products from this process are then used to create ancillary vegetation, fuels, and fire regime layers that are useful in the eventual planning and implementation of fuel and restoration treatments at local scales. The complex integration of varied ecological models at different scales is described and problems encountered during the implementation of this process in the LANDFIRE prototype project are addressed.
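The departure statistic at the heart of FRCC compares the current landscape composition against a simulated historical composition. The sketch below uses one common FRCC-style calculation, similarity as the summed minimum shared proportion per class, with the standard 33/66 condition-class breakpoints; the LANDFIRE prototype's exact formula may differ, so treat this as illustrative.

```python
def frcc_departure(current, historical):
    """Departure (0-100) of current landscape composition from a simulated
    historical composition: 100 * (1 - similarity), where similarity sums
    the minimum shared proportion over vegetation/succession classes.

    current, historical : dicts mapping class name -> proportion (each sums to 1).
    """
    classes = set(current) | set(historical)
    sim = sum(min(current.get(c, 0.0), historical.get(c, 0.0)) for c in classes)
    return 100.0 * (1.0 - sim)

def condition_class(departure):
    """Bin a departure value into FRCC 1/2/3 using the 33 and 66 breakpoints."""
    return 1 if departure <= 33 else (2 if departure <= 66 else 3)
```

For example, a landscape split 50/50 between two succession classes where the simulated historical composition was 20/80 has similarity 0.2 + 0.5 = 0.7, departure 30, and falls in condition class 1.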
State-space based analysis and forecasting of macroscopic road safety trends in Greece.
Antoniou, Constantinos; Yannis, George
2013-11-01
In this paper, macroscopic road safety trends in Greece are analyzed using state-space models and data for 52 years (1960-2011). Seemingly unrelated time series equations (SUTSE) models are developed first, followed by richer latent risk time-series (LRT) models. As reliable estimates of vehicle-kilometers are not available for Greece, the number of vehicles in circulation is used as a proxy for exposure. The alternative models considered are presented and discussed, including diagnostics for assessing model quality and recommendations for further enrichment of these models. Important interventions were incorporated in the models developed (1986 financial crisis, 1991 old-car exchange scheme, 1996 new road fatality definition) and found statistically significant. Furthermore, the forecasting results using data up to 2008 were compared with final actual data (2009-2011), indicating that the models perform properly even in unusual situations, like the current strong financial crisis in Greece. Forecasting results up to 2020 are also presented and compared with the forecasts of a model that explicitly considers the currently ongoing recession. Modeling the recession, and assuming that it will end by 2013, results in more reasonable estimates of risk and vehicle-kilometers for the 2020 horizon. This research demonstrates the benefits of using advanced state-space modeling techniques for modeling macroscopic road safety trends, such as allowing the explicit modeling of interventions. The challenges associated with the application of such state-of-the-art models to macroscopic phenomena, such as traffic fatalities in a region or country, are also highlighted. Furthermore, it is demonstrated that it is possible to apply such complex models using the relatively short time series that are available in macroscopic road safety analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
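The segmentation scheme described above can be sketched concretely. The example below uses set intersection as the measure function and the total symmetric difference between a segment's item set and its time points' item sets as the segment difference, then finds the optimal k-segmentation by dynamic programming. This is a naive illustration with assumed cost definitions; the paper's contribution includes more efficient incremental computation of segment differences and additional measure functions.

```python
def segment_cost(sets, i, j):
    """Segment difference for sets[i:j]: the segment's item set is the
    intersection of its time points' sets (one possible measure function),
    and the cost is the summed symmetric difference against each point."""
    seg = set.intersection(*sets[i:j])
    return sum(len(seg ^ s) for s in sets[i:j])

def optimal_segmentation(sets, k):
    """Dynamic program: minimum total segment difference over all ways of
    splitting the item-set time series into k contiguous segments."""
    n = len(sets)
    INF = float("inf")
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for m in range(1, k + 1):            # number of segments used so far
        for j in range(m, n + 1):        # prefix length covered
            for i in range(m - 1, j):    # start of the m-th segment
                c = best[m - 1][i] + segment_cost(sets, i, j)
                if c < best[m][j]:
                    best[m][j], cut[m][j] = c, i
    bounds, j = [], n                    # recover segment boundaries
    for m in range(k, 0, -1):
        bounds.append((cut[m][j], j))
        j = cut[m][j]
    return best[k][n], bounds[::-1]
```

On a series whose item sets change abruptly at one time point, the optimal 2-segmentation places its single boundary exactly at that change.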
NASA Technical Reports Server (NTRS)
Beckley, Brian D.; Ray, Richard D.; Lemoine, Frank G.; Zelensky, N. P.; Holmes, S. A.; Desai, Shailen D.; Brown, Shannon; Mitchum, G. T.; Jacob, Samuel; Luthcke, Scott B.
2010-01-01
The science value of satellite altimeter observations has grown dramatically over time as enabling models and technologies have increased the value of data acquired on both past and present missions. With the prospect of an observational time series extending into several decades from TOPEX/Poseidon through Jason-1 and the Ocean Surface Topography Mission (OSTM), and further in time with a future set of operational altimeters, researchers are pushing the bounds of current technology and modeling capability in order to monitor global sea level rate at an accuracy of a few tenths of a mm/yr. The measurement of mean sea-level change from satellite altimetry requires an extreme stability of the altimeter measurement system since the signal being measured is at the level of a few mm/yr. This means that the orbit and reference frame within which the altimeter measurements are situated, and the associated altimeter corrections, must be stable and accurate enough to permit a robust MSL estimate. Foremost, orbit quality and consistency are critical to satellite altimeter measurement accuracy. The orbit defines the altimeter reference frame, and orbit error directly affects the altimeter measurement. Orbit error remains a major component in the error budget of all past and present altimeter missions. For example, inconsistencies in the International Terrestrial Reference Frame (ITRF) used to produce the precision orbits at different times cause systematic inconsistencies to appear in the multimission time-frame between TOPEX and Jason-1, and can affect the intermission calibration of these data. In an effort to adhere to cross mission consistency, we have generated the full time series of orbits for TOPEX/Poseidon (TP), Jason-1, and OSTM based on recent improvements in the satellite force models, reference systems, and modeling strategies. 
The recent release of the entire revised Jason-1 Geophysical Data Records and the recalibration of the microwave radiometer correction also require further re-examination of inter-mission consistency issues. Here we present an assessment of these recent improvements to the accuracy of the 17-year sea surface height time series, and evaluate the subsequent impact on global and regional mean sea level estimates.
New Insights into Signed Path Coefficient Granger Causality Analysis.
Zhang, Jian; Li, Chong; Jiang, Tianzi
2016-01-01
Granger causality analysis, as a time series analysis technique derived from econometrics, has been applied in an ever-increasing number of publications in the field of neuroscience, including fMRI, EEG/MEG, and fNIRS. The present study focuses on the validity of "signed path coefficient Granger causality," a Granger-causality-derived analysis method that has been adopted by many fMRI studies in the last few years. This method generally estimates the causal effect among time series by an order-1 autoregression, and interprets a positive or negative coefficient as an "excitatory" or "inhibitory" influence. In the current work we conducted a series of computations on resting-state fMRI data and simulation experiments to show that the signed path coefficient method is flawed and untenable: the autoregressive coefficients are not always consistent with the real causal relationships, which inevitably leads to erroneous conclusions. Overall, our findings suggest that the applicability of this kind of causality analysis is rather limited, and researchers should be more cautious in applying signed path coefficient Granger causality to fMRI data to avoid misinterpretation.
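The order-1 autoregressive estimate at the core of the criticized method can be sketched in a few lines. This is a minimal illustration, not the authors' code: the model form, variable names, and toy data are assumptions of the sketch.

```python
import numpy as np

def order1_cross_coefficient(x, y):
    """Estimate b in y[t] = a*y[t-1] + b*x[t-1] + e[t] by ordinary least
    squares; 'signed path coefficient' analyses read b > 0 as an
    excitatory and b < 0 as an inhibitory influence of x on y."""
    X = np.column_stack([y[:-1], x[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return float(coef[1])  # the cross (path) coefficient

# Toy data with a known positive lagged influence of x on y. In this
# benign case the sign is recovered; the study's point is that under
# realistic fMRI conditions the estimated sign need not reflect the
# true excitatory/inhibitory relationship.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

b = order1_cross_coefficient(x, y)
```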
A novel weight determination method for time series data aggregation
NASA Astrophysics Data System (ADS)
Xu, Paiheng; Zhang, Rong; Deng, Yong
2017-09-01
Aggregation in time series is of great importance in time series smoothing, prediction, and other time series analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator generates weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
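The degree-based part of the scheme can be sketched concretely. The sketch below builds the natural visibility graph and normalizes its degrees into weights; since the abstract does not specify the induced ordering, the IOWA contribution is stood in for by simple exponential time-decay weights, which is an assumption of this illustration, as are the mixing parameter `lam` and decay rate.

```python
import numpy as np

def visibility_degrees(x):
    """Degree of each point in the natural visibility graph of series x:
    points (a, x[a]) and (b, x[b]) are connected if every intermediate
    point lies strictly below the straight line joining them."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                deg[a] += 1
                deg[b] += 1
    return deg

def combined_weights(x, lam=0.5, decay=0.9):
    """Linear combination of degree-based (VGA-style) weights with
    exponential time-decay weights (a stand-in for the IOWA part)."""
    d = visibility_degrees(x).astype(float)
    w_vga = d / d.sum()
    w_time = decay ** np.arange(len(x) - 1, -1, -1)  # recent points weigh more
    w_time /= w_time.sum()
    return lam * w_vga + (1 - lam) * w_time

x = np.array([2.0, 4.0, 1.0, 3.0, 5.0, 2.5])
w = combined_weights(x)          # weights sum to 1
aggregate = float(w @ x)         # weighted aggregation of the series
```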
Clozapine Use During Pregnancy and Lactation: A Case-Series Report.
Imaz, M Luisa; Oriolo, Giovanni; Torra, Mercè; Soy, Dolors; García-Esteve, Lluïsa; Martin-Santos, Rocio
2018-01-01
The current prescription of clozapine to psychotic women of reproductive age makes it crucial to understand its pharmacokinetics during pregnancy and lactation, as well as its risk profile for neonatal outcomes. The aim of this case series was to provide new evidence on the pharmacokinetic features of clozapine that determine its passage through the placenta and into amniotic fluid, as well as the neonatal clozapine elimination half-life (t1/2). This case series demonstrates for the first time that clozapine might show partial placental passage similar to other atypical antipsychotics. Clozapine levels decreased during the first few days in nursing infants. The half-life of clozapine in neonates was slightly longer than previously estimated. Clozapine use in pregnancy may be associated with diabetes mellitus, especially if there is a family history of this disease. Although no acute toxicological effects were observed in the intrauterine-exposed newborns, close follow-up of pregnancy is recommended. However, these results must be interpreted with caution, as this is a case series with a small sample size.
A complete dc characterization of a constant-frequency, clamped-mode, series-resonant converter
NASA Technical Reports Server (NTRS)
Tsai, Fu-Sheng; Lee, Fred C.
1988-01-01
The dc behavior of a clamped-mode series-resonant converter is characterized systematically. Given a circuit operating condition, the converter's mode of operation is determined and various circuit parameters are calculated, such as average inductor current (load current), rms inductor current, peak capacitor voltage, rms switch currents, average diode currents, switch turn-on currents, and switch turn-off currents. Regions of operation are defined, and various circuit characteristics are derived to facilitate the converter design.
Rigorous Combination of GNSS and VLBI: How it Improves Earth Orientation and Reference Frames
NASA Astrophysics Data System (ADS)
Lambert, S. B.; Richard, J. Y.; Bizouard, C.; Becker, O.
2017-12-01
Current reference series (C04) of the International Earth Rotation and Reference Systems Service (IERS) are produced by a weighted combination of Earth orientation parameter (EOP) time series built up by the combination centers of each technique (VLBI, GNSS, laser ranging, DORIS). In the future, we plan to derive EOP from a rigorous combination of the normal equation systems of the four techniques. We present here the results of a rigorous combination of VLBI and GNSS pre-reduced, constraint-free normal equations with the DYNAMO geodetic analysis software package developed and maintained by the French GRGS (Groupe de Recherche en Géodésie Spatiale). The normal equations used are those produced separately by the IVS and IGS combination centers, to which we apply our own minimal constraints. We address the usefulness of such a method with respect to the classical, a posteriori, combination method, and we show whether EOP determinations are improved. In particular, we implement external validations of the EOP series based on comparison with geophysical excitation and examination of the covariance matrices. Finally, we address the potential of the technique for the next-generation celestial reference frames, which are currently determined by VLBI only.
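The essential difference between rigorous combination and a posteriori combination can be shown with a toy least-squares example: stack the normal equations before solving, rather than averaging separately solved series. Everything below (dimensions, noise levels, the "VLBI"/"GNSS" labels) is an illustrative assumption, not DYNAMO's actual processing.

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.array([1.0, -2.0, 0.5])  # shared parameters, e.g. a set of EOP

def normal_system(n_obs, noise):
    """Build one technique's pre-reduced normal system N = AᵀA, b = Aᵀy."""
    A = rng.standard_normal((n_obs, 3))
    y = A @ x_true + noise * rng.standard_normal(n_obs)
    return A.T @ A, A.T @ y

N1, b1 = normal_system(40, 0.01)   # stands in for the VLBI contribution
N2, b2 = normal_system(60, 0.02)   # stands in for the GNSS contribution

# Rigorous combination: one solution using all observations at once,
# so inter-technique correlations are handled consistently (unlike a
# weighted average of two independently solved EOP series).
x_comb = np.linalg.solve(N1 + N2, b1 + b2)
```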
Real-time software failure characterization
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Finelli, George B.
1990-01-01
A series of studies aimed at characterizing the fundamentals of the software failure process has been undertaken as part of a NASA project on the modeling of real-time aerospace vehicle software reliability. An overview of these studies is provided, and the current study, an investigation of the reliability of aerospace vehicle guidance and control software, is examined. The study approach provides for the collection of life-cycle process data, and for the retention and evaluation of interim software life-cycle products.
Association mining of dependency between time series
NASA Astrophysics Data System (ADS)
Hafez, Alaaeldin
2001-03-01
Time series analysis is considered a crucial component of strategic control over a broad variety of disciplines in business, science and engineering. Time series data is a sequence of observations collected over intervals of time. Each time series describes a phenomenon as a function of time. Analysis of time series data includes discovering trends (or patterns) in a time series sequence. In the last few years, data mining has emerged and been recognized as a new technology for data analysis. Data mining is the process of discovering potentially valuable patterns, associations, trends, sequences and dependencies in data. Data mining techniques can discover information that many traditional business analysis and statistical techniques fail to deliver. In this paper, we adapt and innovate data mining techniques to analyze time series data. By using data mining techniques, maximal frequent patterns are discovered and used in predicting future sequences or trends, where trends describe the behavior of a sequence. In order to include different types of time series (e.g. irregular and non-systematic), we consider past frequent patterns of the same time sequences (local patterns) and of other dependent time sequences (global patterns). We use the word 'dependent' instead of the word 'similar' to emphasize real-life time series where two time series sequences could be completely different (in values, shapes, etc.), but still react to the same conditions in a dependent way. In this paper, we propose the Dependence Mining Technique, which can be used in predicting time series sequences. The proposed technique consists of three phases: (a) for all time series sequences, generate their trend sequences; (b) discover maximal frequent trend patterns and generate pattern vectors (to keep information on frequent trend patterns); and (c) use trend pattern vectors to predict future time series sequences.
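Phase (a) of the technique, converting a numeric series into a symbolic trend sequence on which frequent patterns can then be mined, can be sketched as follows. The three-symbol alphabet and the tolerance parameter `eps` are assumptions of this sketch; the paper does not specify its exact encoding.

```python
def trend_sequence(x, eps=0.0):
    """Map a numeric series to a trend sequence: 'U' (up), 'D' (down),
    'F' (flat, within +/- eps). Frequent substrings of such sequences
    are the 'trend patterns' mined in the later phases."""
    out = []
    for prev, cur in zip(x, x[1:]):
        if cur - prev > eps:
            out.append('U')
        elif prev - cur > eps:
            out.append('D')
        else:
            out.append('F')
    return ''.join(out)

s = trend_sequence([1, 3, 2, 2, 5, 4])
# s == 'UDFUD'
```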
31 CFR 352.7 - Issues on exchange.
Code of Federal Regulations, 2010 CFR
2010-07-01
... States Savings Notes (Freedom Shares) at their current redemption values for Series HH bonds. Series E.... The total current redemption value of the eligible securities submitted for exchange in any one transaction was required to be $500 or more. If the current redemption value was an even multiple of $500...
31 CFR 352.7 - Issues on exchange.
Code of Federal Regulations, 2013 CFR
2013-07-01
... States Savings Notes (Freedom Shares) at their current redemption values for Series HH bonds. Series E.... The total current redemption value of the eligible securities submitted for exchange in any one transaction was required to be $500 or more. If the current redemption value was an even multiple of $500...
31 CFR 352.7 - Issues on exchange.
Code of Federal Regulations, 2011 CFR
2011-07-01
... States Savings Notes (Freedom Shares) at their current redemption values for Series HH bonds. Series E.... The total current redemption value of the eligible securities submitted for exchange in any one transaction was required to be $500 or more. If the current redemption value was an even multiple of $500...
31 CFR 352.7 - Issues on exchange.
Code of Federal Regulations, 2012 CFR
2012-07-01
... States Savings Notes (Freedom Shares) at their current redemption values for Series HH bonds. Series E.... The total current redemption value of the eligible securities submitted for exchange in any one transaction was required to be $500 or more. If the current redemption value was an even multiple of $500...
NASA Astrophysics Data System (ADS)
Hermosilla, Txomin; Wulder, Michael A.; White, Joanne C.; Coops, Nicholas C.; Hobart, Geordie W.
2017-12-01
The use of time series satellite data allows for the temporally dense, systematic, transparent, and synoptic capture of land dynamics over time. Subsequent to the opening of the Landsat archive, several time series approaches for characterizing landscape change have been developed, often representing a particular analytical time window. The information richness and widespread utility of these time series data have created a need to maintain the currency of time series information via the addition of new data, as it becomes available. When an existing time series is temporally extended, it is critical that previously generated change information remains consistent, thereby not altering reported change statistics or science outcomes based on that change information. In this research, we investigate the impacts and implications of adding additional years to an existing 29-year annual Landsat time series for forest change. To do so, we undertook a spatially explicit comparison of the 29 overlapping years of a time series representing 1984-2012, with a time series representing 1984-2016. Surface reflectance values, and presence, year, and type of change were compared. We found that the addition of years to extend the time series had minimal effect on the annual surface reflectance composites, with slight band-specific differences (r ≥ 0.1) in the final years of the original time series being updated. The area of stand replacing disturbances and determination of change year are virtually unchanged for the overlapping period between the two time-series products. Over the overlapping temporal period (1984-2012), the total area of change differs by 0.53%, equating to an annual difference in change area of 0.019%. Overall, the spatial and temporal agreement of the changes detected by both time series was 96%. Further, our findings suggest that the entire pre-existing historic time series does not need to be re-processed during the update process. 
Critically, given the time series change detection and update approach followed here, science outcomes or reports representing one temporal epoch can be considered stable and will not be altered when a time series is updated with newly available data.
Nonequilibrium Green's function theory for nonadiabatic effects in quantum electron transport
NASA Astrophysics Data System (ADS)
Kershaw, Vincent F.; Kosov, Daniel S.
2017-12-01
We develop nonequilibrium Green's function-based transport theory, which includes effects of nonadiabatic nuclear motion in the calculation of the electric current in molecular junctions. Our approach is based on the separation of slow and fast time scales in the equations of motion for Green's functions by means of the Wigner representation. Time derivatives with respect to central time serve as a small parameter in the perturbative expansion enabling the computation of nonadiabatic corrections to molecular Green's functions. Consequently, we produce a series of analytic expressions for non-adiabatic electronic Green's functions (up to the second order in the central time derivatives), which depend not solely on the instantaneous molecular geometry but likewise on nuclear velocities and accelerations. An extended formula for electric current is derived which accounts for the non-adiabatic corrections. This theory is concisely illustrated by the calculations on a model molecular junction.
Nonequilibrium Green's function theory for nonadiabatic effects in quantum electron transport.
Kershaw, Vincent F; Kosov, Daniel S
2017-12-14
We develop nonequilibrium Green's function-based transport theory, which includes effects of nonadiabatic nuclear motion in the calculation of the electric current in molecular junctions. Our approach is based on the separation of slow and fast time scales in the equations of motion for Green's functions by means of the Wigner representation. Time derivatives with respect to central time serve as a small parameter in the perturbative expansion enabling the computation of nonadiabatic corrections to molecular Green's functions. Consequently, we produce a series of analytic expressions for non-adiabatic electronic Green's functions (up to the second order in the central time derivatives), which depend not solely on the instantaneous molecular geometry but likewise on nuclear velocities and accelerations. An extended formula for electric current is derived which accounts for the non-adiabatic corrections. This theory is concisely illustrated by the calculations on a model molecular junction.
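The "small parameter" structure described above can be made concrete with a sketch of the gradient expansion in the Wigner representation. The following is one common form of the first-order (Moyal-type) expansion, shown for orientation only; sign conventions vary across the literature and this is not the paper's exact expression.

```latex
% Wigner transform of a two-time function:
%   A(T,\omega) = \int d\tau\, e^{i\omega\tau}\, A\!\left(T+\tfrac{\tau}{2},\, T-\tfrac{\tau}{2}\right),
% with central time T = (t_1+t_2)/2 and relative time \tau = t_1 - t_2.
% A convolution C(t_1,t_2) = \int dt_3\, A(t_1,t_3)\,B(t_3,t_2) becomes,
% to first order in central-time derivatives (the slow variable):
C(T,\omega) \simeq A(T,\omega)\,B(T,\omega)
  + \frac{i}{2}\Big(\partial_T A\,\partial_\omega B
                    - \partial_\omega A\,\partial_T B\Big)
  + \mathcal{O}\!\left(\partial_T^2\right)
```

Truncating at successive orders in \(\partial_T\) is what yields Green's functions that depend not only on the instantaneous geometry (zeroth order) but also on nuclear velocities (first order) and accelerations (second order).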
Eddy current simulation in thick cylinders of finite length induced by coils of arbitrary geometry.
Sanchez Lopez, Hector; Poole, Michael; Crozier, Stuart
2010-12-01
Eddy currents are inevitably induced when time-varying magnetic field gradients interact with the metallic structures of a magnetic resonance imaging (MRI) scanner. The secondary magnetic field produced by this induced current degrades the spatial and temporal performance of the primary field generated by the gradient coils. Although this undesired effect can be minimized by using actively and/or passively shielded gradient coils and current pre-emphasis techniques, a residual eddy current still remains in the MRI scanner structure. Accurate simulation of these eddy currents is important in the successful design of gradient coils and magnet cryostat vessels. Efficient methods for simulating eddy currents are currently restricted to cylindrical symmetry. The approach presented in this paper divides thick conducting cylinders into thin layers (thinner than the skin depth) and expresses the current density on each as a Fourier series. The coupling of each mode of the Fourier series with every other mode is modeled with an inductive network method. In this way, the eddy currents induced in realistic cryostat surfaces by coils of arbitrary geometry can be simulated. The new method was validated by simulating a canonical problem and comparing the results against a commercially available software package. An accurate skin depth of 2.76 mm was calculated in 6 min with the new method. The currents induced by an actively shielded x-gradient coil were simulated assuming a finite-length cylindrical cryostat consisting of three different conducting materials. Details of the temporal-spatial induced-current diffusion process were simulated through all cryostat layers, which could not be efficiently simulated with any other method. With these data, all quantities that depend on the current density, such as the secondary magnetic field, are simply evaluated. Copyright © 2010 Elsevier Inc. All rights reserved.
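The layer-thickness constraint (layers thinner than the skin depth) rests on the classical skin depth formula, which is easy to evaluate. The material and frequency below (copper at 1 kHz) are illustrative assumptions and do not correspond to the paper's 2.76 mm benchmark, which depends on its own material and excitation.

```python
import math

def skin_depth(resistivity, frequency, mu_r=1.0):
    """Classical skin depth delta = sqrt(2*rho / (omega * mu)) for a
    good conductor, with rho in ohm*m and frequency in Hz; thin-layer
    eddy-current methods require layer thickness < delta."""
    mu0 = 4e-7 * math.pi               # vacuum permeability, H/m
    omega = 2 * math.pi * frequency    # angular frequency, rad/s
    return math.sqrt(2 * resistivity / (omega * mu_r * mu0))

# Copper (rho ~ 1.68e-8 ohm*m) at 1 kHz: about 2 mm.
delta = skin_depth(1.68e-8, 1e3)
```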
Highly comparative time-series analysis: the empirical structure of time series and their methods.
Fulcher, Ben D; Little, Max A; Jones, Nick S
2013-06-06
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
Highly comparative time-series analysis: the empirical structure of time series and their methods
Fulcher, Ben D.; Little, Max A.; Jones, Nick S.
2013-01-01
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines. PMID:23554344
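The core idea above, representing each time series by the outputs of many analysis operations so that diverse data and methods can be compared in a common space, can be miniaturized to a few lines. The three features below are illustrative stand-ins; the actual library uses thousands of operations.

```python
import numpy as np

def feature_vector(x):
    """Tiny feature-based representation of a series: mean, standard
    deviation, and lag-1 autocorrelation of the z-scored series."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    lag1 = float(np.corrcoef(z[:-1], z[1:])[0, 1])
    return np.array([x.mean(), x.std(), lag1])

t = np.arange(200)
smooth = np.sin(0.1 * t)                                # strongly autocorrelated
noisy = np.random.default_rng(2).standard_normal(200)   # ~uncorrelated

f_smooth = feature_vector(smooth)
f_noisy = feature_vector(noisy)
# The lag-1 feature alone already separates these two dynamical classes,
# which is the kind of organization the highly comparative approach
# performs at scale across thousands of features and series.
```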
Allagui, Anis; Freeborn, Todd J.; Elwakil, Ahmed S.; Maundy, Brent J.
2016-01-01
The electric characteristics of electric-double-layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and in the frequency domain using nonlinear least-squares fitting of the spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time-domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient. In particular, we show with the derived (Rs, Q, α)-based expressions that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with experimental measurements of different EDLCs. PMID:27934904
NASA Astrophysics Data System (ADS)
Allagui, Anis; Freeborn, Todd J.; Elwakil, Ahmed S.; Maundy, Brent J.
2016-12-01
The electric characteristics of electric-double-layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and in the frequency domain using nonlinear least-squares fitting of the spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time-domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient. In particular, we show with the derived (Rs, Q, α)-based expressions that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with experimental measurements of different EDLCs.
Allagui, Anis; Freeborn, Todd J; Elwakil, Ahmed S; Maundy, Brent J
2016-12-09
The electric characteristics of electric-double-layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and in the frequency domain using nonlinear least-squares fitting of the spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics [corrected]. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time-domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient. In particular, we show with the derived (Rs, Q, α)-based expressions that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with experimental measurements of different EDLCs.
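The constant-current step response of the Rs-CPE series model described above has a standard closed form, v(t) = Rs·I + (I/Q)·t^α / Γ(1+α), which reduces to the ideal RsC ramp when α = 1 and C = Q. The sketch below evaluates it; the specific parameter values are illustrative, not fitted to any device in the paper.

```python
from math import gamma

def cpe_step_voltage(t, current, Rs, Q, alpha):
    """Voltage of a resistance Rs in series with a CPE(Q, alpha) under a
    constant-current step I: v(t) = Rs*I + (I/Q) * t**alpha / Gamma(1+alpha).
    For alpha = 1 the CPE is an ideal capacitor C = Q and the familiar
    linear ramp v = Rs*I + I*t/C is recovered."""
    return Rs * current + (current / Q) * t**alpha / gamma(1.0 + alpha)

# Ideal case (alpha = 1, Rs = 0.5 ohm, Q = C = 1 F, I = 10 mA, t = 2 s):
v_ideal = cpe_step_voltage(2.0, 0.01, 0.5, 1.0, 1.0)
# Rs*I + I*t/C = 0.005 + 0.02 = 0.025 V

# Fractional case (alpha = 0.9): sub-linear, nonlinear-in-time rise,
# the alpha-dependent behavior the classical RsC model cannot capture.
v_frac = cpe_step_voltage(2.0, 0.01, 0.5, 1.0, 0.9)
```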
Hao, Zhi-hong; Yao, Jian-zhen; Tang, Rui-ling; Zhang, Xue-mei; Li, Wen-ge; Zhang, Qin
2015-02-01
A method for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples by direct-current arc full-spectrum direct-reading atomic emission spectroscopy (DC-Arc-AES) was established. The direct-current arc full-spectrum direct-reading atomic emission spectrometer, equipped with large-area solid-state detectors, provides full-spectrum direct reading and real-time background correction. New electrodes and a new buffer recipe are proposed in this paper, for which a national patent has been applied. Suitable analytical line pairs and background-correction points were selected for each element, and the internal standard method was adopted with Ge as the internal standard. A multistage current program was selected, with a separate holding time set for each current stage to ensure that each element has a good signal-to-noise ratio. A continuously rising current mode was chosen, which effectively eliminates splashing of the sample. Argon as a shielding gas eliminates CN band generation, reduces the spectral background, and also helps stabilize the arc; an argon flow of 3.5 L·min(-1) was selected. Evaporation curves of each element showed that their evaporation behavior is consistent, and, combined with the effects of different spectrographic times on intensity and background, a spectrographic time of 35 s was selected. National standard substances were selected as the standard series, covering different matrices and different contents suitable for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples. Under the optimum experimental conditions, the detection limits for B, Mo, Ag, Sn and Pb are 1.1, 0.09, 0.01, 0.41, and 0.56 μg·g(-1), respectively, and the precisions (RSD, n=12) for B, Mo, Ag, Sn and Pb are 4.57%-7.63%, 5.14%-7.75%, 5.48%-12.30%, 3.97%-10.46%, and 4.26%-9.21%, respectively.
The analytical accuracy was validated with national standard reference materials, and the results agree with the certified values. The method is simple and rapid, provides an advanced analytical approach for the determination of trace boron, molybdenum, silver, tin and lead in geochemical samples, and has practical utility.
Size of clinical trials and Introductory prices of prophylactic vaccine series
Weinberg, Steven H.; Butchart, Amy T.; Davis, Matthew M.
2012-01-01
Costs of completing the recommended immunization schedule have increased over the last decade. Access to prophylactic vaccines may become limited due to financing obstacles within current delivery systems. Vaccine prices reflect research and development expenses incurred by vaccine manufacturers, including costs associated with evaluating candidate vaccines in human subjects. If the number of subjects in clinical trials is increasing over time and associated with vaccine price, this may help explain increases in prices of vaccine series. We examined whether: (A) the initial public- and private-sector prices for recommended prophylactic vaccine series licensed and recommended in the US increased from 2000-2011, (B) the number of human subjects per licensed vaccine increased during the time period, and (C) the number of human subjects was associated with the initial public- and private-sector prices of the vaccine series. In regression analyses of 13 vaccines, approval year was not significantly associated with the number of human subjects, initial public-sector prices, or initial private-sector prices. While the number of phase II subjects was not significantly associated with prices, the numbers of phase III and combined late phase (phases II + III) subjects were significantly associated with initial public- and private-sector series prices (p < 0.05). The association between number of subjects and initial prices demonstrated diminishing marginal increases in price with increasing numbers of subjects. These findings may help guide the number of subjects required by the FDA in clinical trials, in order to reduce expenses for manufacturers and thereby help mitigate increases in initial vaccine series prices. PMID:22854668
Size of clinical trials and Introductory prices of prophylactic vaccine series.
Weinberg, Steven H; Butchart, Amy T; Davis, Matthew M
2012-08-01
Costs of completing the recommended immunization schedule have increased over the last decade. Access to prophylactic vaccines may become limited due to financing obstacles within current delivery systems. Vaccine prices reflect research and development expenses incurred by vaccine manufacturers, including costs associated with evaluating candidate vaccines in human subjects. If the number of subjects in clinical trials is increasing over time and associated with vaccine price, this may help explain increases in prices of vaccine series. We examined whether: (A) the initial public- and private-sector prices for recommended prophylactic vaccine series licensed and recommended in the US increased from 2000-2011, (B) the number of human subjects per licensed vaccine increased during the time period, and (C) the number of human subjects was associated with the initial public- and private-sector prices of the vaccine series. In regression analyses of 13 vaccines, approval year was not significantly associated with the number of human subjects, initial public-sector prices, or initial private-sector prices. While the number of phase II subjects was not significantly associated with prices, the numbers of phase III and combined late phase (phases II + III) subjects were significantly associated with initial public- and private-sector series prices (p < 0.05). The association between number of subjects and initial prices demonstrated diminishing marginal increases in price with increasing numbers of subjects. These findings may help guide the number of subjects required by the FDA in clinical trials, in order to reduce expenses for manufacturers and thereby help mitigate increases in initial vaccine series prices.
ERIC Educational Resources Information Center
Harkins, Judith E., Ed.; Virvan, Barbara M., Ed.
The conference proceedings contains 23 papers on telephone relay service, real-time captioning, and automatic speech recognition, and a glossary. The keynote address, by Representative Major R. Owens, examines current issues in federal legislation. Other papers have the following titles and authors: "Telephone Relay Service: Rationale and…
ERIC Educational Resources Information Center
Dotson, Felecia
2017-01-01
The current study examines the effects that a coaching intervention has on the occurrences of scaffolding language techniques in a teacher's instructional delivery, in addition to the possible influences that a coaching intervention has on the teacher's ability to reflect on scaffolding language. More importantly, the study considers whether…
The use of multiple imputation in the Southern Annual Forest Inventory System
Gregory A. Reams; Joseph M. McCollum
2000-01-01
The Southern Research Station is currently implementing an annual forest survey in 7 of the 13 States that it is responsible for surveying. The Southern Annual Forest Inventory System (SAFIS) sampling design is a systematic sample of five interpenetrating grids, whereby an equal number of plots are measured each year. The area-representative and time-series...
ERIC Educational Resources Information Center
Bokossa, Maxime C.; Huang, Gary G.
This report describes the imputation procedures used to deal with missing data in the National Education Longitudinal Study of 1988 (NELS:88), the only current National Center for Education Statistics (NCES) dataset that contains scores from cognitive tests given to the same set of students at multiple time points. As is inevitable, cognitive test…
Now Is the Time: A Toxic Era for Child and Youth Care
ERIC Educational Resources Information Center
Child & Youth Services, 2007
2007-01-01
The purpose of this chapter is to provide a very personal series of reflections on the current debate on risk and accountability in child protection and child and youth care as the former strives to come out of a period of volunteerism and professionalize itself (McElwee, 1998; Share & McElwee, 2005). Irish culture has grown more individualist…
International petroleum statistics report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-11-01
This document is a monthly publication which provides current data on international oil production, demand, imports, and stocks. This report has four sections which contain time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). Also included are oil supply/demand balance information for the world and data on oil imports and trade by OECD countries.
76 FR 57630 - Airworthiness Directives; Airbus Model A318, A319, A320, and A321 Series Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-16
... surface. Request To Change Compliance Time Delta Air Lines (Delta) recommended that paragraph (g) of the... Mandatory Service Bulletin A320-27A1186, Revision 05, dated March 10, 2010. Delta stated that the current... the service bulletin wording. Delta also stated that the FAA AD wording does not give acceptance to...
Today's Program Is Brought to You by the Letters--Debit and Credit and by the Number Income
ERIC Educational Resources Information Center
Bush, H. Francis; Walsh, Vonda K.
2011-01-01
As each generation comes of age they receive a label. Currently, we are teaching the new millennials. Their learning style and time management preferences may respond better to a constant task and feedback cycle similar to the popular Public Television Series, Sesame Street. This paper examines the effects of requiring students to take an…
Anne Black; Dave Thomas; Jennifer Ziegler; Jim Saveland
2012-01-01
For some time now, the wildland fire community has been interested in 'organizational learning' as a way to improve safety and overall performance. For instance, in the US, federal agencies have established and continue to support the Wildland Fire Lessons Learned Center, sponsored several national conferences and are currently considering how incident...
ERIC Educational Resources Information Center
Krucken, Georg
2011-01-01
Higher education systems in Europe are currently undergoing profound transformations. At the macro-level, there is an increase in the number of students enrolled, subjects of study offered, and university missions that have gained legitimacy over time. At a second level, changes are evident in university governance. New Public…
ERIC Educational Resources Information Center
Buchmiller, Archie A.
Regardless of one's philosophical views of the role of the national government, the new federal willingness to reexamine its past effectiveness and role provides an opportunity for grassroots involvement by both state educational agencies and local school districts. An evaluation of current federal enactments needs to consider several important…
ERIC Educational Resources Information Center
Ngan, Chun-Kit
2013-01-01
Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…
Evaluation of the Trajectory Operations Applications Software Task (TOAST)
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Martin, Andrea; Bavinger, Bill
1990-01-01
The Trajectory Operations Applications Software Task (TOAST) is a software development project under the auspices of the Mission Operations Directorate. Its purpose is to provide trajectory operations pre-mission and real-time support for the Space Shuttle program. As an Application Manager, TOAST provides an isolation layer between the underlying Unix operating system and a series of user programs. It provides two main services: a common interface to operating system functions with semantics appropriate for C or FORTRAN, and a structured input and output package that can be utilized by user application programs. To evaluate TOAST as an Application Manager, the task was to assess its current and planned capabilities, compare those capabilities to functions available in commercial off-the-shelf (COTS) packages, and survey Flight Analysis and Design System (FADS) users regarding TOAST implementation. As a result of the investigation, it was found that the current version of TOAST is well implemented and meets the needs of the real-time users. The plans for migrating TOAST to the X Window System are essentially sound; the Executive will port with minor changes, while the Menu Handler will require a total rewrite. A series of recommendations for future TOAST directions is included.
Open stack thermal battery tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Kevin N.; Roberts, Christine C.; Grillet, Anne M.
We present selected results from a series of Open Stack thermal battery tests performed in FY14 and FY15 and discuss our findings. These tests were meant to provide validation data for the comprehensive thermal battery simulation tools currently under development in Sierra/Aria under known conditions compared with as-manufactured batteries. We are able to satisfy this original objective in the present study for some test conditions. Measurements from each test include: nominal stack pressure (axial stress) vs. time in the cold state and during battery ignition, battery voltage vs. time against a prescribed current draw with periodic pulses, and images transverse to the battery axis from which cell displacements are computed. Six battery configurations were evaluated: 3, 5, and 10 cell stacks sandwiched between 4 layers of the materials used for axial thermal insulation, either Fiberfrax Board or MinK. In addition to the results from 3, 5, and 10 cell stacks with either in-line Fiberfrax Board or MinK insulation, a series of cell-free "control" tests were performed that show the inherent settling and stress relaxation based on the interaction between the insulation and heat pellets alone.
A 280-Year Long Series of Phenological Observations of Cherry Tree Blossoming Dates for Switzerland
NASA Astrophysics Data System (ADS)
Rutishauser, T.; Luterbacher, J.; Wanner, H.
2003-04-01
Phenology is generally described as the timing of life cycle phases or activities of plants and animals in their temporal occurrence throughout the year (Lieth 1974). Recent studies have shown that meteorological and climatological impacts leave their 'fingerprints' across natural systems in general and strongly influence the seasonal activities of single animal and plant species. During the 20th century, phenological observation networks were established around the world to document and analyze the influence of the globally changing climate on plants and wildlife. This work presents a first attempt at a unique 280-year long series of phenological observations of cherry tree blossoming dates for the Swiss plateau region. In Switzerland, a nation-wide phenological observation network was established in 1951 and currently documents 69 phenophases of 26 different plant species. A guidebook seeks to increase objectiveness in the network observations. The observations of the blooming of the cherry tree (Prunus avium) were chosen to calculate a mean series for the Swiss plateau region, with observations from altitudes ranging between 370 and 860 m a.s.l. A total number of 737 observations from 21 stations were used. A linear regression was established between the mean blooming date and altitude in order to correct the data to a reference altitude level. Other ecological parameters were unaccounted for. The selected network data series from 1951 to 2000 was combined and prolonged with observations from various sources back to 1721. These include several historical observation series by farmers, clergymen and teachers, data from various stations collected at the newly established Swiss meteorological network from 1864 to 1873, and the single long series of observations from Liestal starting in 1894.
The homogenized time series of observations will be compared with reconstructions of late winter temperatures as well as statistical estimations of blooming time based on long instrumental data from Europe. In addition, the series is one of the few historical phenological records to assess past climate and ecological changes. Lieth, H. (1974). Phenology and Seasonality Modeling. Berlin, Heidelberg, New York, Springer.
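The altitude correction described in this abstract amounts to a simple linear regression of blooming date on station altitude. A sketch on invented station data (the altitudes, the delay of roughly 3 days per 100 m, and the 550 m reference level are all assumptions for illustration):

```python
import numpy as np

# Made-up station data: 21 stations between 370 and 860 m a.s.l.
rng = np.random.default_rng(1)
altitude = rng.uniform(370, 860, size=21)                # m a.s.l.
# assume blooming is delayed ~3 days per 100 m, plus observation noise
bloom_doy = 110 + 0.03 * altitude + rng.normal(0, 2, 21)

# regress mean blooming date (day of year) on altitude
slope, intercept = np.polyfit(altitude, bloom_doy, 1)

# shift each observation to a common reference altitude
REF_ALT = 550.0                                          # hypothetical reference, m
bloom_corrected = bloom_doy - slope * (altitude - REF_ALT)
```

After the correction, the residual dependence of blooming date on altitude is removed, so stations at different elevations can be averaged into one regional mean series.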
A Review of Subsequence Time Series Clustering
Zolhavarieh, Seyedjamal; Aghabozorgi, Saeed; Teh, Ying Wah
2014-01-01
Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies. PMID:25140332
Studies of infrasound propagation using the USArray seismic network (Invited)
NASA Astrophysics Data System (ADS)
Hedlin, M. A.; Degroot-Hedlin, C. D.; Walker, K. T.
2010-12-01
Although there are currently ~100 infrasound arrays worldwide, more than ever before, the station density is still insufficient to provide validation for detailed propagation modeling. Much structure in the atmosphere is short-lived and occurs at spatial scales much smaller than the average distance between infrasound stations. Relatively large infrasound signals can be observed on seismic channels due to coupling at the Earth's surface. Recent research, using data from the 70-km-spaced, 400-station USArray and other seismic network deployments, has shown the value of dense seismic network data for filling in the gaps between infrasound arrays. The dense sampling of the infrasound wavefield has allowed us to observe complete travel-time branches of infrasound signals and shed more light on the nature of infrasound propagation. We present early results from our studies of impulsive atmospheric sources, such as a series of UTTR rocket motor detonations in Utah. The Utah blasts have been well recorded by USArray seismic stations and infrasound arrays in Nevada and Washington State. Recordings of seismic signals from a series of six events in 2007 are used to pinpoint the shot times to < 1 second. Variations in the acoustic branches and signal arrival times at the arrays are used to probe variations in atmospheric structure. Although we currently use coupled signals, we anticipate studying dense acoustic network recordings as the USArray is being upgraded with infrasound microphones. These new sensors will allow us to make semi-continental-scale network recordings of infrasound signals, free of concerns about how signals observed on seismic channels were modified during coupling into the ground.
Multiscale structure of time series revealed by the monotony spectrum.
Vamoş, Călin
2017-03-01
Observation of complex systems produces time series with specific dynamics at different time scales. The majority of the existing numerical methods for multiscale analysis first decompose the time series into several simpler components and the multiscale structure is given by the properties of their components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate the existence of deterministic variations at large time scales from the random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
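A rough sketch of the idea (our reading of the abstract, not the authors' code): split the series into monotonic segments, then track the mean segment amplitude against the mean segment duration under successive moving-average smoothings.

```python
import numpy as np

def monotonic_segments(x):
    """Mean amplitude and mean duration of the monotonic segments of x."""
    # turning points: where the sign of the first difference changes
    turning = np.where(np.diff(np.sign(np.diff(x))) != 0)[0] + 1
    idx = np.concatenate(([0], turning, [len(x) - 1]))
    amps = np.abs(np.diff(x[idx]))       # amplitude of each segment
    durs = np.diff(idx).astype(float)    # duration (local time scale)
    return amps.mean(), durs.mean()

def monotony_spectrum(x, max_window=20):
    """Trace (mean amplitude, mean time scale) over successive averagings."""
    pts = []
    for w in range(1, max_window + 1):
        sm = np.convolve(x, np.ones(w) / w, mode="valid")
        pts.append(monotonic_segments(sm))
    return np.array(pts)   # column 0: mean amplitude, column 1: mean time scale

rng = np.random.default_rng(2)
noise = rng.normal(size=2000)
spec = monotony_spectrum(noise)
```

Smoothing removes short-scale fluctuations, so the mean local time scale grows with the averaging window; peaks in the resulting amplitude-versus-scale curve mark the dominant time scales.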
The Lesson Plan of the Month. Series 3. 10 Lesson Series.
ERIC Educational Resources Information Center
Phi Alpha Delta Fraternity International, Granada Hills, CA. Public Service Center.
Focusing on current topics germane to law-related education (LRE), this guide features ten LRE lessons. As part of a series of lesson plans compiled by Phi Alpha Delta, this collection presents a lesson plan on current issues for each month of the school year. Intended for high school and middle school with adaptations for elementary school, the…
Storlazzi, C.D.; McManus, M.A.; Figurski, J.D.
2003-01-01
Thermistor chains and acoustic Doppler current profilers were deployed at the northern and southern ends of Monterey Bay to examine the thermal and hydrodynamic structure of the inner (h ≤ 20 m) shelf of central California. These instruments sampled temperature and current velocity at 2-min intervals over a 13-month period from June 2000 to July 2001. Time series of these data, in conjunction with SST imagery and CODAR sea surface current maps, helped to establish the basic hydrography for Monterey Bay. Analysis of time series data revealed that depth-integrated flow at both sites was shore-parallel (northwest-southeast) with net flows out of the Bay (northwest). The current and temperature records were dominated by semi-diurnal and diurnal tidal signals that lagged the surface tides by 3 h on average. Over the course of an internal tidal cycle these flows were asymmetric, with the flow during the flooding internal tide to the southeast typically lasting only one-third as long as the flow to the northwest during the ebbing internal tide. The transitions from ebb to flood were rapid and bore-like in nature; they were also marked by rapid increases in temperature and high shear. During the spring and summer, when thermal stratification was high, we observed almost 2000 high-frequency (Tp ≈ 4-20 min) internal waves in packets of 8-10 following the heads of these bore-like features. Previous studies along the West Coast of the US have concluded that warm water bores and high-frequency internal waves may play a significant role in the onshore transport of larvae.
Optimal joule heating of the subsurface
Berryman, J.G.; Daily, W.D.
1994-07-05
A method for simultaneously heating the subsurface and imaging the effects of the heating is disclosed. This method combines the use of tomographic imaging (electrical resistance tomography, or ERT) to image the electrical resistivity distribution underground with joule heating by electrical currents injected in the ground. A potential distribution is established on a series of buried electrodes, resulting in energy deposition underground which is a function of the resistivity and injection current density. Measurement of the voltages and currents also permits a tomographic reconstruction of the resistivity distribution. Using this tomographic information, the current injection pattern on the driving electrodes can be adjusted to change the current density distribution and thus optimize the heating. As the heating changes conditions, the applied current pattern can be repeatedly adjusted (based on updated resistivity tomographs) to effect real-time control of the heating.
He, Wenjing; Zhu, Yuanzhong; Wang, Wenzhou; Zou, Kai; Zhang, Kai; He, Chao
2017-04-01
Pulsed magnetic field gradients generated by gradient coils are widely used for signal localization in magnetic resonance imaging (MRI). However, gradient coils can also induce eddy currents in nearby conducting structures that distort the final magnetic field, leading to distortion and artifacts in images and misleading clinical diagnosis. We measured in our laboratory the magnetic field of gradient-induced eddy currents in a 1.5 T superconducting magnetic resonance imaging device, and extracted key parameters, including the amplitudes and time constants of the exponential terms, according to an inductance-resistance series circuit model. These parameters of both the self-induced component and the cross component are useful for designing digital filters that implement pulse pre-emphasis to reshape the waveform. A measurement device, a base equipped with phantoms and receiving coils, was designed and placed at the isocenter of the magnetic field. By applying a test sequence, contrast experiments were carried out in a superconducting magnet before and after eddy current compensation. Sets of one-dimensional signals were obtained as raw data to calculate gradient-induced eddy currents. Curve fitting by the least squares method was also done to match the inductance-resistance series model. The results also illustrated that the pre-emphasis measurement with a digital filter was correct and effective in reducing the eddy current effect. The pre-emphasis waveform was developed based on the system function. The usefulness of the pre-emphasis measurement in reducing eddy currents was confirmed and the improvement was also presented. All these are valuable for reducing artifacts in magnetic resonance imaging devices.
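Extracting the amplitude and time constant of an exponentially decaying eddy-current field, as an inductance-resistance series circuit predicts, can be sketched as a least-squares fit. The data below are synthetic, not measurements from the 1.5 T system:

```python
import numpy as np
from scipy.optimize import curve_fit

# L-R series circuit response: a single exponential decay
def eddy_model(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

# synthetic "measured" eddy-current field with invented parameters
t = np.linspace(0.0, 0.1, 200)                  # s
true_amp, true_tau = 5.0e-3, 0.02               # field amplitude, time constant (s)
rng = np.random.default_rng(3)
b_meas = eddy_model(t, true_amp, true_tau) + rng.normal(0, 5e-5, t.size)

# least-squares fit recovers the amplitude and time constant
(amp_fit, tau_fit), _ = curve_fit(eddy_model, t, b_meas, p0=(1e-3, 0.01))
```

A real compensation chain fits a sum of such exponentials (one per self-induced or cross term) and feeds the recovered amplitudes and time constants into the pre-emphasis filter.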
Infrared helioseismology - Detection of the chromospheric mode
NASA Technical Reports Server (NTRS)
Deming, D.; Kaeufl, H. U.; Espenak, F.; Glenar, D. A.; Hill, A. A.
1986-01-01
Time-series observations of an infrared solar OH absorption line profile have been obtained on two consecutive days using a laser heterodyne spectrometer to view a 2 arcsec portion of the quiet sun at disk center. A power spectrum of the line center velocity shows the well-known photospheric p-mode oscillations very prominently, but also shows a second feature near 4.3 mHz. A power spectrum of the line intensity shows only the 4.3 mHz feature, which is identified as the fundamental p-mode resonance of the solar chromosphere. The frequency of the mode is observed to be in substantial agreement with the eigenfrequency of current chromospheric models. A time series of two beam difference measurements shows that the mode is present only for horizontal wavelengths greater than 19 Mm. The period of a chromospheric p-mode resonance is directly related to the sound travel time across the chromosphere, which depends on the chromospheric temperature and geometric height. Thus, detection of this resonance will provide an important new constraint on chromospheric models.
Shorvon, Simon
2007-01-01
This paper records the history of Epilepsia, the journal of the International League Against Epilepsy, from its inception in 1908/1909 until the beginning of its fourth series in 1961. During this time, publication was interrupted on three occasions and so the journal appeared in four series, with a complex numbering system. Over the years, the content and format of the journal has varied. Its role has changed over the years, at times primarily as a scientific organ and at other times as a source of ILAE news and reports. Concerns throughout its history have included its role as an historical record, its international representation, financial vicissitude, quality of papers, the balance between basic and clinical science, the value of clinical papers, and issues of overspecialization. Epilepsia is today the leading clinical epilepsy journal; but these are still significant concerns, and a knowledge of the history of Epilepsia is important for understanding the current position of the journal.
An Extended IEEE 118-Bus Test System With High Renewable Penetration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pena, Ivonne; Martinez-Anido, Carlo Brancucci; Hodge, Bri-Mathias
This article describes a new publicly available version of the IEEE 118-bus test system, named NREL-118. The database is based on the transmission representation (buses and lines) of the IEEE 118-bus test system, with a reconfigured generation representation using three regions of the US Western Interconnection from the latest Western Electricity Coordinating Council (WECC) 2024 Common Case [1]. Time-synchronous hourly load, wind, and solar time series are provided for over one year (8784 hours). The public database presented and described in this manuscript will allow researchers to model a test power system using detailed transmission, generation, load, wind, and solar data. This database includes key additional features that add to the current IEEE 118-bus test model, such as: the inclusion of 10 generation technologies with different heat rate functions, minimum stable levels and ramping rates, GHG emissions rates, regulation and contingency reserves, and hourly time series data for one full year for load, wind, and solar generation.
A method for generating high resolution satellite image time series
NASA Astrophysics Data System (ADS)
Guo, Tao
2014-10-01
There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications. But it is still a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, much R&D effort has been ongoing for years and has led to some successes, roughly in two aspects: one includes super-resolution, pan-sharpening, and similar methods, which can effectively enhance spatial resolution and generate good visual effects, but hardly preserve spectral signatures and thus offer limited analytical value; on the other hand, time interpolation is a straightforward method to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution data. Our method starts with a pair of high and low resolution data sets, and then a spatial registration is done by introducing an LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low resolution time series data, and then projected onto the high resolution data plane and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally the simulated high resolution data is generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy.
The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for agriculture, forest, land use investigation, environmental, and other applications.
Detecting dynamic causal inference in nonlinear two-phase fracture flow
NASA Astrophysics Data System (ADS)
Faybishenko, Boris
2017-08-01
Identifying dynamic causal inference involved in flow and transport processes in complex fractured-porous media is generally a challenging task, because nonlinear and chaotic variables may be positively coupled or correlated for some periods of time, but can then become spontaneously decoupled or non-correlated. In his 2002 paper (Faybishenko, 2002), the author performed a nonlinear dynamical and chaotic analysis of time-series data obtained from the fracture flow experiment conducted by Persoff and Pruess (1995), and, based on the visual examination of time series data, hypothesized that the observed pressure oscillations at both inlet and outlet edges of the fracture result from a superposition of both forward and return waves of pressure propagation through the fracture. In the current paper, the author explores an application of a combination of methods for detecting nonlinear chaotic dynamics behavior along with the multivariate Granger Causality (G-causality) time series test. Based on the G-causality test, the author infers that his hypothesis is correct, and presents a causation loop diagram of the spatial-temporal distribution of gas, liquid, and capillary pressures measured at the inlet and outlet of the fracture. The causal modeling approach can be used for the analysis of other hydrological processes, for example, infiltration and pumping tests in heterogeneous subsurface media, and climatic processes, for example, to find correlations between various meteorological parameters, such as temperature, solar radiation, barometric pressure, etc.
Carrer, Marco; von Arx, Georg; Castagneri, Daniele; Petit, Giai
2015-01-01
Trees are among the best natural archives of past environmental information. Xylem anatomy preserves information related to tree allometry and ecophysiological performance, which is not available from the more customary ring-width or wood-density proxy parameters. Recent technological advances make tree-ring anatomy very attractive because time frames of many centuries can now be covered. This calls for the proper treatment of time series of xylem anatomical attributes. In this article, we synthesize current knowledge on the biophysical and physiological mechanisms influencing the short- to long-term variation in the most widely used wood-anatomical feature, namely conduit size. We also clarify the strong mechanistic link between conduit-lumen size, tree hydraulic architecture and height growth. Among the key consequences of these biophysical constraints is the pervasive, increasing trend of conduit size during ontogeny. Such knowledge is required to process time series of anatomical parameters correctly in order to obtain the information of interest. An appropriate standardization procedure is fundamental when analysing long tree-ring-related chronologies. When dealing with wood-anatomical parameters, this is even more critical. Only an interdisciplinary approach involving ecophysiology, wood anatomy and dendrochronology will help to distill the valuable information about tree height growth and past environmental variability correctly.
Quantification of brain macrostates using dynamical nonstationarity of physiological time series.
Latchoumane, Charles-Francois Vincent; Jeong, Jaeseung
2011-04-01
The brain shows complex, nonstationary temporal dynamics, with abrupt micro- and macrostate transitions during its information processing. Detecting and characterizing these transitions in dynamical states of the brain is a critical issue in the fields of neuroscience and psychiatry. In the current study, a novel method is proposed to quantify brain macrostates (e.g., sleep stages or cognitive states) from shifts of dynamical microstates, or dynamical nonstationarity. A "dynamical microstate" is a temporal unit of information processing in the brain with fixed dynamical parameters and a specific spatial distribution. In this proposed approach, a phase-space-based dynamical dissimilarity map (DDM) is used to detect transitions between dynamically stationary microstates in the time series, and Tsallis time-dependent entropy is applied to quantify dynamical patterns of transitions in the DDM. We demonstrate that the DDM successfully detects transitions between microstates of different temporal dynamics in simulated physiological time series against high levels of noise. Based on the assumption of nonlinear, deterministic brain dynamics, we also demonstrate that dynamical nonstationarity analysis is useful to quantify brain macrostates (sleep stages I, II, III, IV, and rapid eye movement (REM) sleep) from sleep EEGs with an overall accuracy of 77%. We suggest that dynamical nonstationarity is a useful tool to quantify macroscopic mental states (statistical integration) of the brain using dynamical transitions at the microscopic scale in physiological data.
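The Tsallis entropy used here to score transition patterns has a simple closed form, S_q = (1 - Σ p_i^q)/(q - 1), which recovers the Shannon entropy as q → 1. A minimal sketch on a toy distribution (not a DDM output):

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy S_q of a discrete probability distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1.0:                         # limit q -> 1: Shannon entropy
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# toy distributions: uniform (many equally likely transitions)
# vs peaked (one dominant transition pattern)
uniform = np.full(8, 1.0 / 8.0)
peaked = np.array([0.93] + [0.01] * 7)
```

As with Shannon entropy, a spread-out transition pattern scores higher than a concentrated one, which is what lets the entropy profile separate macrostates.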
Initial Validation of NDVI Time Series from AVHRR, VEGETATION, and MODIS
NASA Technical Reports Server (NTRS)
Morisette, Jeffrey T.; Pinzon, Jorge E.; Brown, Molly E.; Tucker, Jim; Justice, Christopher O.
2004-01-01
The paper will address Theme 7: Multi-sensor opportunities for VEGETATION. We present analysis of a long-term vegetation record derived from three moderate-resolution sensors: AVHRR, VEGETATION, and MODIS. While empirically based manipulation can ensure agreement between the three data sets, there is a need to validate the series. This paper uses atmospherically corrected ETM+ data available over the EOS Land Validation Core Sites as an independent data set with which to compare the time series. We use ETM+ data from 15 globally distributed sites, 7 of which have repeat coverage in time. These high-resolution data are compared to the values of each sensor by spatially aggregating the ETM+ to each sensor's spatial coverage. The aggregated ETM+ value provides a point estimate for a specific site on a specific date, and its standard deviation is used to construct a confidence interval for that point estimate. The values from each moderate-resolution sensor are then evaluated with respect to that confidence interval. Results show that AVHRR, VEGETATION, and MODIS data can be combined to assess temporal uncertainties and address data continuity issues, and that the atmospherically corrected ETM+ data provide an independent source with which to compare that record. The final product is a consistent time series climate record that links historical observations to current and future measurements.
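The aggregate-and-compare step reduces to a simple interval test. A minimal sketch, where the use of the standard error of the mean and a z = 1.96 interval is an illustrative reading of the procedure, not necessarily the authors' exact formulation:

```python
import numpy as np

def within_aggregate_ci(fine_pixels, coarse_value, z=1.96):
    """Aggregate high-resolution (e.g., ETM+) NDVI pixels over a coarse
    footprint to a point estimate, build a confidence interval from their
    standard deviation, and test whether the moderate-resolution sensor's
    value falls inside it. Returns (inside?, (lo, hi))."""
    fine = np.asarray(fine_pixels, dtype=float)
    mean = fine.mean()
    half = z * fine.std(ddof=1) / np.sqrt(len(fine))
    return (mean - half <= coarse_value <= mean + half), (mean - half, mean + half)
```

In practice the fine pixels would be all ETM+ NDVI values inside one AVHRR, VEGETATION, or MODIS footprint for a given date.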
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2016-09-01
The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series. It is a fact that higher complexity, and hence higher entropy, is associated with the RR-interval time series of healthy subjects. But SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series, as well as to certain pathological time series, which is a misleading observation. This wrong estimation of the complexity of a time series may be due to the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of a time series. However, time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations, so the SD of the first-order difference (short-term SD) of the time series should be considered while updating the threshold value, to account for period-to-period variations inherent in a time series. In the present work, a new methodology, improved sample entropy (I-SampEn), is proposed, in which the threshold value is updated by considering the period-to-period variations of a time series. The I-SampEn technique results in assigning higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) and diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
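The difference between the two threshold choices can be made concrete with a standard SampEn implementation. This is a minimal sketch of classic SampEn with an optional switch that scales the tolerance by the SD of the first differences (the I-SampEn idea described above); the 0.2 scaling factor and parameter defaults are conventional choices, not values taken from this paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=None, use_short_term_sd=False):
    """Sample entropy of a 1-D series.

    If use_short_term_sd is True, the tolerance r is scaled by the SD of
    the first-order differences (short-term SD), as in the I-SampEn idea;
    otherwise by the long-term SD, as in standard SampEn."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        sd = np.std(np.diff(x)) if use_short_term_sd else np.std(x)
        r = 0.2 * sd
    def count_matches(mm):
        # All template vectors of length mm; count pairs within tolerance r
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-matches)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

For a pathological series with large beat-to-beat swings, the short-term SD exceeds the long-term SD, widening the tolerance and avoiding the spuriously high entropy that the standard threshold produces.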
In situ measurements of wind and current speed and relationship between output power and turbulence
NASA Astrophysics Data System (ADS)
Duran Medina, Olmo; Schmitt, François G.; Sentchev, Alexei; Calif, Rudy
2015-04-01
In a context of energy transition, wind and tidal energy are sources of clean energy with the potential to partially satisfy the growing demand. The main problem of this type of energy, as of other renewables, remains the discontinuity of the electric power produced across different scales, inducing large fluctuations, also called intermittency. This intermittency of wind and tidal energy is inherent to the turbulent nature of wind and marine currents; we consider the intermittent power production to be strongly related to the turbulent intermittency of the resource. The theoretical framework is that of multifractal energy cascade models, a classic in the physics of turbulence. From earlier studies in atmospheric sciences, we know that wind speed and the aggregate power output are intermittent and multifractal over a wide range of scales [Calif and Schmitt 2014]. We extend this study to a marine current turbine and compare the scaling properties of these renewable energy sources. We consider here the coupling between simultaneous velocity time series and output power from a wind turbine and a marine current turbine. Wind turbine data were obtained from Denmark, and marine current data from the Western Scheldt, Belgium, where prototypes of vertical- and horizontal-axis marine current turbines are tested. After estimating their Fourier power spectral densities, we study their scaling properties within Kolmogorov's theory and the framework of fully developed turbulence. We then employ a Hilbert-based methodology, namely arbitrary-order Hilbert spectral analysis [Calif et al. 2013a, 2013b], to characterize the intermittent nature of the wind and marine current velocities. This method yields the spectrum and the corresponding power law for nonlinear and nonstationary time series. The goal is to study the nonlinear transfer characteristics in a multi-scale and multi-intensity framework.
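The first step of such a scaling analysis, estimating the power-law exponent of the velocity spectrum, can be sketched directly. This is a minimal illustration (not the Hilbert-based method of the study): a log-log fit of a periodogram over an intermediate frequency band, checked against a synthetic signal built with a known Kolmogorov-like slope. The band fractions are illustrative choices.

```python
import numpy as np

def spectral_slope(x, fs=1.0, kmin_frac=0.01, kmax_frac=0.1):
    """Estimate the power-law exponent of a series' power spectrum by a
    log-log least-squares fit over an intermediate frequency band."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x))**2 / n
    lo, hi = kmin_frac * fs / 2, kmax_frac * fs / 2
    band = (freqs >= lo) & (freqs <= hi)
    slope, _ = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)
    return slope

def synth_power_law(n, beta, rng):
    """White noise shaped in Fourier space so that PSD ~ f^(-beta),
    e.g. beta = 5/3 for a Kolmogorov-like inertial range."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2)
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    return np.fft.irfft(amp * np.exp(1j * phases), n)
```

A measured wind or marine-current record whose fitted slope is close to -5/3 over some band is consistent with a fully developed turbulence inertial range; intermittency then shows up as corrections to the higher-order scaling exponents, which is where the Hilbert-based analysis comes in.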
NASA Astrophysics Data System (ADS)
Grilli, Stéphan T.; Guérin, Charles-Antoine; Shelby, Michael; Grilli, Annette R.; Moran, Patrick; Grosdidier, Samuel; Insua, Tania L.
2017-08-01
In past work, tsunami detection algorithms (TDAs) have been proposed, and successfully applied to offline tsunami detection, based on analyzing tsunami currents inverted from high-frequency (HF) radar Doppler spectra. With this method, however, the detection of small and short-lived tsunami currents in the most distant radar ranges is challenging due to conflicting requirements on the Doppler spectra integration time and resolution. To circumvent this issue, in Part I of this work, we proposed an alternative TDA, referred to as the time correlation (TC) TDA, that does not require inverting currents, but instead detects changes in patterns of correlations of radar signal time series measured in pairs of cells located along the main directions of tsunami propagation (predicted by geometric optics theory); such correlations can be maximized when one signal is time-shifted by the pre-computed long-wave propagation time. We initially validated the TC-TDA based on numerical simulations of idealized tsunamis in a simplified geometry. Here, we further develop, extend, and apply the TC algorithm to more realistic tsunami case studies. These are performed in the area west of Vancouver Island, BC, where Ocean Networks Canada recently deployed an HF radar (in Tofino, BC) to detect tsunamis from far- and near-field sources, up to a 110 km range. Two case studies are considered, both simulated using long-wave models: (1) a far-field seismic tsunami and (2) a near-field landslide tsunami. Pending the availability of radar data, a radar signal simulator is parameterized for the Tofino HF radar characteristics, in particular its signal-to-noise ratio with range, and combined with the simulated tsunami currents to produce realistic time series of backscattered radar signal from a dense grid of cells.
Numerical experiments show that the arrival of a tsunami causes a clear change in radar signal correlation patterns, even at the most distant ranges beyond the continental shelf, thus making an early tsunami detection possible with the TC-TDA. Based on these results, we discuss how the new algorithm could be combined with standard methods proposed earlier, based on a Doppler analysis, to develop a new tsunami detection system based on HF radar data, that could increase warning time. This will be the object of future work, which will be based on actual, rather than simulated, radar data.
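The core of the time-correlation idea, correlating the signal in one radar cell with the time-shifted signal in a cell further along the predicted propagation path, can be sketched simply. This is an illustrative toy, not the TC-TDA itself; the shallow-water travel-time formula t = d / sqrt(g h) supplies the expected lag.

```python
import numpy as np

def propagation_time(distance_m, depth_m, g=9.81):
    """Long-wave (shallow-water) travel time between two radar cells."""
    return distance_m / np.sqrt(g * depth_m)

def best_lag(a, b, max_lag):
    """Lag (in samples) at which series b best matches series a shifted
    forward in time, by maximizing the normalized cross-correlation.
    In a TC-style detector, agreement between this lag and the
    pre-computed long-wave propagation time flags a tsunami arrival."""
    scores = []
    lags = np.arange(0, max_lag + 1)
    for k in lags:
        a_seg = a[:len(a) - k] if k else a
        b_seg = b[k:]
        a0 = a_seg - a_seg.mean()
        b0 = b_seg - b_seg.mean()
        scores.append(np.dot(a0, b0) / (np.linalg.norm(a0) * np.linalg.norm(b0)))
    return lags[int(np.argmax(scores))]
```

For the Tofino geometry, cells ~110 km apart over ~2000 m depth would give a lag of roughly 13 minutes, which sets the correlation window a detector would monitor.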
NOAA satellite observing systems: status and plans
NASA Astrophysics Data System (ADS)
John Hussey, W.; Schneider, Stanley R.; Gird, Ronald S.; Needham, Bruce H.
1991-07-01
NOAA's National Environmental Satellite, Data, and Information Service (NESDIS) operates separate series of environmental monitoring satellites in polar and geostationary orbits. Two geostationary spacecraft are normally in operation: one stationed at 75° W longitude (GOES-EAST), and one stationed at 135° W longitude (GOES-WEST). Owing to a combination of premature in-orbit failures and a launch failure, there is only one GOES satellite currently operational, GOES-7, which is migrated between 95° and 105° W longitude depending upon season. GOES-7 was launched in February 1987. Its primary observing instrument is a combined imager/sounder, the VISSR Atmospheric Sounder (VAS). The first in the next series of GOES satellites (GOES I-M) is scheduled for launch in 1992. The major upgrade over the current GOES satellites will be the introduction of simultaneous imaging and sounding capability and improvements in imaging IR and sounding resolution. Because of the long lead times necessary in designing and building new systems, NOAA, in cooperation with NASA, has already begun the planning and study process for the GOES-N series of satellites, which will fly early in the next century. NOAA operates a two-satellite polar system with equatorial nodal crossing times of 0730 (descending) and 1345 (ascending). The current operational satellites are NOAA-10 (AM) and NOAA-11 (PM). The next in the series (NOAA-D, which will become NOAA-12 once operational) is scheduled for launch in early summer 1991. The instruments onboard are used to make global measurements of numerous parameters such as atmospheric temperature, water vapor, ozone, sea surface temperature, sea ice, and vegetation. The NOAA K-N series of satellites, scheduled for deployment in the mid-1990s, will provide upgraded imaging and sounding capability. The imager will be enhanced to include a sixth channel for cloud/ice discrimination.
A 15-channel advanced microwave sounder will be manifested for atmospheric temperature retrievals, and a separate 5-channel advanced microwave sounder will be used for atmospheric water vapor retrievals. The polar program will undergo major changes beginning in the late 1990s. The morning polar metsat service will become the responsibility of the Europeans, with NOAA providing an operational sensor payload. The afternoon metsat service will be continued by NOAA with a new block of satellites and instruments beginning at NOAA-O. NOAA will also be closely cooperating with NASA in this time frame. A number of the instruments on NASA's Earth Observing System (EOS) platforms, scheduled for launch beginning in the late 1990s, have been designated "prototype operational" and may become candidates for eventual flight on NOAA operational spacecraft.
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computational tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original dynamic time warping (DTW) algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
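For readers unfamiliar with the base algorithm that DTW-S extends, classic dynamic time warping fits in a few lines. This sketch shows only the cumulative-cost recursion; the interpolation, end-point relaxation, and significance/simulation layers that define DTW-S are not reproduced here.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D series.

    D[i, j] = |a[i-1] - b[j-1]| + min(D[i-1, j], D[i, j-1], D[i-1, j-1]),
    i.e. the cheapest monotone alignment of the two series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the alignment may stretch or compress time locally, two expression profiles that differ only by a timing shift get a near-zero distance; DTW-S builds on exactly this property to read off per-time-point shifts and then asks, via simulation, whether a shift is larger than noise alone would produce.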
NASA Technical Reports Server (NTRS)
Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.
2017-01-01
There are many flare forecasting models; for an excellent review and comparison of some of them, see Barnes et al. (2016). All these models are successful to some degree, but there is a need for better models. We claim the most successful models explicitly or implicitly base their forecasts on various estimates of components of the photospheric current density J, based on observations of the photospheric magnetic field B. However, none of the models we are aware of compute the complete J. We seek to develop a better model based on computing the complete photospheric J. Initial results from this model are presented in this talk. We present a data-driven, near-photospheric, 3-D, non-force-free magnetohydrodynamic (MHD) model that computes time series of the total J, and the associated resistive heating rate, in each pixel at the photosphere in the neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of B measured by the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series of B in every AR pixel; errors in B due to these periods can be significant.
Data mining on long-term barometric data within the ARISE2 project
NASA Astrophysics Data System (ADS)
Hupe, Patrick; Ceranna, Lars; Pilger, Christoph
2016-04-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) led to the implementation of an international infrasound array network. The International Monitoring System (IMS) network includes 48 certified stations, each providing data for up to 15 years. As part of work package 3 of the ARISE2 project (Atmospheric dynamics Research InfraStructure in Europe, phase 2), these data sets will be statistically evaluated with regard to atmospheric dynamics. The current study focuses on fluctuations of absolute air pressure. Time series have been analysed for 17 monitoring stations located all over the world between Greenland and Antarctica, spanning the latitudes so as to represent different climate zones and characteristic atmospheric conditions, which enables quantitative comparisons between those regions. Analyses include wavelet power spectra, multi-annual time series of average variances with regard to long-wave scales, and spectral densities to derive characteristics and special events. Evaluations reveal periodicities in average variances on the 2-to-20-day scale, with a maximum in the winter months and a minimum in summer of the respective hemisphere. This basically applies to time series of IMS stations beyond the tropics, where the dominance of cyclones and anticyclones changes with the seasons. Furthermore, spectral density analyses illustrate striking signals for several dynamic activities within one day, e.g., the semidiurnal tide.
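The band-limited variance at the heart of such an analysis can be sketched with a hard FFT band-pass. This is a simplification (the study uses wavelet power spectra; a brick-wall filter rings on real data), intended only to show how the 2-to-20-day variance of a pressure record would be isolated.

```python
import numpy as np

def band_variance(x, dt_days, period_lo=2.0, period_hi=20.0):
    """Variance of a (pressure) series restricted to the 2-20-day band,
    using a hard FFT band-pass as a stand-in for a wavelet analysis."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=dt_days)           # cycles per day
    X = np.fft.rfft(x - np.mean(x))
    keep = (freqs >= 1.0 / period_hi) & (freqs <= 1.0 / period_lo)
    xf = np.fft.irfft(np.where(keep, X, 0.0), n)    # band-passed series
    return np.var(xf)
```

Evaluating this on successive windows (e.g., month by month) over a multi-year record would reproduce the seasonal cycle of synoptic-scale variance described above: high in the local winter, low in summer.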
Le Strat, Yann
2017-01-01
The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489
Wind speed time series reconstruction using a hybrid neural genetic approach
NASA Astrophysics Data System (ADS)
Rodriguez, H.; Flores, J. J.; Puig, V.; Morales, L.; Guerra, A.; Calderon, F.
2017-11-01
Currently, electric energy is used in practically all modern human activities. Most of the energy produced comes from fossil fuels, causing irreversible damage to the environment. Lately, there has been an effort by nations to produce energy using clean methods, such as solar and wind energy, among others. Wind energy is one of the cleanest alternatives. However, the wind speed is not constant, making the planning and operation of electric power systems a difficult activity. Knowing in advance the amount of raw material (wind speed) available for energy production allows us to estimate the energy to be generated by the power plant, helping with maintenance planning, operational management, and optimal operational cost. For these reasons, the forecast of wind speed becomes a necessary task. The forecast process involves the use of past observations of the variable to forecast (wind speed). To measure wind speed, weather stations use devices called anemometers, but due to poor maintenance, connection errors, or natural wear, they may produce false or missing data. In this work, a hybrid methodology is proposed that uses a compact genetic algorithm with an artificial neural network to reconstruct wind speed time series. The proposed methodology reconstructs the time series using an ANN defined by a compact genetic algorithm.
Summer monsoon response of the Northern Somali Current, 1995
NASA Astrophysics Data System (ADS)
Schott, Friedrich; Fischer, Jürgen; Garternicht, Ulf; Quadfasel, Detlef
Preliminary results on the development of the northern Somali Current regime and Great Whirl (GW) during the summer monsoon of 1995 are reported. They are based on the water mass and current profiling observations from three shipboard surveys of R/V Meteor and on the time series from a moored current-meter and ADCP array. The monsoon response of the GW was deep-reaching, to more than 1000 m, involving large deep transports. The northern Somali Current was found to be disconnected from the interior Arabian Sea in the latitude range 4°N-12°N in both water mass properties and current fields. Instead, communication dominantly occurs through the passages between Socotra and the African continent. From moored stations in the main passage, a northward throughflow from the Somali Current to the Gulf of Aden of about 5 Sv was determined for the summer monsoon of 1995.
Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.
Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar
2010-09-01
A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDFs of the component states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of the vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met3PbCl) on the ion current probability distribution. Ion currents were measured by the patch-clamp technique. It was shown that Met3PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
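The higher moments in question are straightforward to compute from a current record. A minimal sketch of the moment-based view (the superstatistics machinery of the paper, e.g. decomposition into state PDFs, is not reproduced here):

```python
import numpy as np

def central_moments(x):
    """Second, third, and fourth central moments of an ion-current sample,
    plus skewness and excess kurtosis derived from them."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    m2 = np.mean((x - mu)**2)
    m3 = np.mean((x - mu)**3)
    m4 = np.mean((x - mu)**4)
    return {"m2": m2, "m3": m3, "m4": m4,
            "skewness": m3 / m2**1.5,
            "excess_kurtosis": m4 / m2**2 - 3.0}
```

For a two-state channel, the mixture of open- and closed-state distributions leaves a characteristic signature in these moments (e.g., a symmetric two-level current gives strongly negative excess kurtosis), which is what makes them informative beyond the mean and variance.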
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann's mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size-luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
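The basic von Neumann scheme is compact enough to sketch: for each trial lag, shift one light curve, merge the two into a single time-ordered series, and compute the mean-square successive difference; the true delay minimizes it (the merged series is smoothest when the curves line up). The z-scoring of each curve is an illustrative normalization choice, and none of the paper's optimizations are reproduced.

```python
import numpy as np

def von_neumann(t1, f1, t2, f2, lags):
    """Von Neumann mean-square successive-difference statistic of the
    merged, time-ordered pair of light curves, for each trial lag.
    The best lag estimate is the one minimizing the returned array."""
    z1 = (f1 - f1.mean()) / f1.std()
    z2 = (f2 - f2.mean()) / f2.std()
    out = []
    for tau in lags:
        t = np.concatenate([t1, t2 - tau])   # shift curve 2 back by tau
        f = np.concatenate([z1, z2])
        fo = f[np.argsort(t)]                # merge into one ordered series
        out.append(np.mean(np.diff(fo)**2))
    return np.asarray(out)
```

Note that nothing here interpolates or models the light curves, which is precisely why the approach tolerates irregular sampling.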
Direct determination of geocenter motion by combining SLR, VLBI, GNSS, and DORIS time series
NASA Astrophysics Data System (ADS)
Wu, X.; Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Gross, R. S.; Heflin, M. B.; Jiang, Y.; Parker, J. W.
2013-12-01
The longest-wavelength surface mass transport includes three degree-one spherical harmonic components involving hemispherical mass exchanges. The mass load causes geocenter motion between the center-of-mass of the total Earth system (CM) and the center-of-figure of the solid Earth surface (CF), and deforms the solid Earth. Estimation of the degree-1 surface mass changes through CM-CF and degree-1 deformation signatures from space geodetic techniques can thus complement GRACE's time-variable gravity data to form a complete change spectrum up to a high resolution. Currently, SLR is considered the most accurate technique for direct geocenter motion determination. By tracking satellite motion from ground stations, SLR determines the motion between CM and the geometric center of its ground network (CN). This motion is then used to approximate CM-CF and subsequently for deriving degree-1 mass changes. However, the SLR network is very sparse and uneven in global distribution. The average number of operational tracking stations is about 20 in recent years. The poor network geometry can have a large CN-CF motion and is not ideal for the determination of CM-CF motion and degree-1 mass changes. We recently realized an experimental Terrestrial Reference Frame (TRF) through station time series using the Kalman filter and the RTS smoother. The TRF has its origin defined at nearly instantaneous CM using weekly SLR measurement time series. VLBI, GNSS and DORIS time series are combined weekly with those of SLR and tied to the geocentric (CM) reference frame through local tie measurements and co-motion constraints on co-located geodetic stations. The unified geocentric time series of the four geodetic techniques provide a much better network geometry for direct geodetic determination of geocenter motion. Results from this direct approach using a 90-station network compare favorably with those obtained from joint inversions of GPS/GRACE data and ocean bottom pressure models.
We will also show that a previously identified discrepancy in the X-component between direct SLR orbit-tracking and inversely determined geocenter motions is largely reconciled with the new unified network.
NASA Technical Reports Server (NTRS)
Lieneweg, Udo (Inventor)
1988-01-01
A system is provided for use with wafers containing multiple integrated circuits that include two conductive layers in contact at multiple interfaces. Contact chains are formed beside the integrated circuits, each contact chain formed of the same two layers as the circuits, in the form of conductive segments alternating between the upper and lower layers and with the ends of the segments connected in series through interfaces. A current source passes a current through the series-connected segments, by way of a pair of current tabs connected to opposite ends of the series of segments. While the current flows, voltage measurements are taken between each of a plurality of pairs of voltage tabs, the two tabs of each pair connected to opposite ends of an interface that lies along the series-connected segments. A plot of interface conductances on a normal probability chart enables prediction of the yield of good integrated circuits from the wafer.
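The arithmetic behind the measurement is simple: the same forced current flows through every series-connected segment, so each interface's conductance is the current divided by the voltage across that interface. The yield helper below replaces the described normal-probability chart reading with an explicit normal fit and pass/fail threshold; the threshold g_min and the normal model are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

def interface_conductances(current_a, voltages_v):
    """Per-interface conductance G_i = I / V_i: the forced current I is
    common to all series-connected interfaces, so one voltage reading
    per interface suffices."""
    return current_a / np.asarray(voltages_v, dtype=float)

def predicted_yield(conductances, g_min):
    """Fraction of interfaces expected to meet a minimum-conductance
    spec g_min, under a normal model fitted to the measured values
    (a stand-in for reading the normal-probability plot)."""
    g = np.asarray(conductances, dtype=float)
    mu, sigma = g.mean(), g.std(ddof=1)
    # P(G > g_min) for a normal distribution, via the error function
    return 0.5 * (1.0 - erf((g_min - mu) / (sigma * sqrt(2.0))))
```

A chain whose conductances fall on a straight line in the normal-probability plot has a single well-behaved population; outliers off the line are the defective interfaces that depress circuit yield.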
LORETA EEG phase reset of the default mode network.
Thatcher, Robert W; North, Duane M; Biver, Carl J
2014-01-01
The purpose of this study was to explore phase reset of 3-dimensional current sources in Brodmann areas located in the human default mode network (DMN) using Low Resolution Electromagnetic Tomography (LORETA) of the human electroencephalogram (EEG). The EEG was recorded from 19 scalp locations in 70 healthy normal subjects ranging in age from 13 to 20 years. LORETA current sources were computed time point by time point for 14 Brodmann areas comprising the DMN in the delta frequency band. The Hilbert transform of the LORETA time series was used to compute the instantaneous phase differences between all pairs of Brodmann areas. Phase shift and lock durations were calculated based on the 1st and 2nd derivatives of the time series of phase differences. Phase shift durations exhibited three discrete modes at approximately: (1) 25 ms, (2) 50 ms, and (3) 65 ms. Phase lock durations were present primarily at: (1) 300-350 ms and (2) 350-450 ms. Phase shift and lock durations were inversely related and exhibited an exponential change with distance between Brodmann areas. The results are explained by local neural packing density of network hubs and an exponential decrease in connections with distance from a hub. The results are consistent with a discrete temporal model of brain function where anatomical hubs behave like a "shutter" that opens and closes at specific durations as nodes of a network, giving rise to temporarily phase-locked clusters of neurons for specific durations.
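The Hilbert-transform step, extracting an instantaneous phase for each source and differencing phases pairwise, can be sketched with an FFT-based analytic signal. This is a minimal illustration of that one step only (the derivative-based shift/lock segmentation is not reproduced), and the FFT construction is the standard equivalent of a library Hilbert transform.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies and
    double the positive ones (the standard Hilbert-transform recipe)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def instantaneous_phase_difference(x, y):
    """Time series of instantaneous phase differences between two
    band-limited sources, wrapped to (-pi, pi]; the quantity from which
    phase shift and lock durations are derived."""
    px = np.angle(analytic_signal(x))
    py = np.angle(analytic_signal(y))
    return np.angle(np.exp(1j * (px - py)))
```

Intervals where this difference stays flat correspond to phase lock; rapid excursions (large first derivative) mark phase shifts. Note the instantaneous phase is only meaningful for narrow-band signals, which is why the analysis above is done within the delta band.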
Cross-recurrence quantification analysis of categorical and continuous time series: an R package
Coco, Moreno I.; Dale, Rick
2014-01-01
This paper describes the R package crqa to perform cross-recurrence quantification analysis of two time series of either a categorical or continuous nature. Streams of behavioral information, from eye movements to linguistic elements, unfold over time. When two people interact, such as in conversation, they often adapt to each other, leading these behavioral levels to exhibit recurrent states. In dialog, for example, interlocutors adapt to each other by exchanging interactive cues: smiles, nods, gestures, choice of words, and so on. In order for us to capture closely the goings-on of dynamic interaction, and uncover the extent of coupling between two individuals, we need to quantify how much recurrence is taking place at these levels. Methods available in crqa allow researchers in cognitive science to pose such questions as how much two people are recurrent at some level of analysis, what the characteristic lag time is for one person to maximally match another, or whether one person is leading another. First, we set the theoretical ground to understand the difference between "correlation" and "co-visitation" when comparing two time series, using an aggregative or cross-recurrence approach. Then, we describe more formally the principles of cross-recurrence, and show with the current package how to carry out analyses applying them. We end the paper by comparing the computational efficiency and consistency of results of the crqa R package with the benchmark MATLAB toolbox crptoolbox (Marwan, 2013). We show perfect comparability between the two libraries on both levels. PMID:25018736
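The object at the core of the method, the cross-recurrence matrix, is easy to construct for continuous series. A minimal sketch of what crqa computes, without the embedding parameters or the line-based measures (determinism, maximal diagonal line) that the package also provides:

```python
import numpy as np

def cross_recurrence(x, y, radius):
    """Cross-recurrence matrix of two continuous series: entry (i, j) is
    True when x[i] and y[j] 'co-visit' the same region of state space
    (distance <= radius). Also returns the recurrence rate, the fraction
    of recurrent points."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[None, :]
    rp = np.abs(x - y) <= radius
    return rp, rp.mean()
```

Diagonals of the matrix correspond to lagged co-visitation: summing recurrence along each diagonal gives the lag profile used to ask which interlocutor leads and by how much.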
Enhancing Discovery, Search, and Access of NASA Hydrological Data by Leveraging GEOSS
NASA Technical Reports Server (NTRS)
Teng, William L.
2015-01-01
An ongoing NASA-funded project has removed a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series) for selected variables of the North American and Global Land Data Assimilation Systems (NLDAS and GLDAS, respectively) and other EOSDIS (Earth Observing System Data Information System) data sets (e.g., precipitation, soil moisture). These time series (data rods) are pre-generated. Data rods Web services are accessible through the CUAHSI Hydrologic Information System (HIS) and the Goddard Earth Sciences Data and Information Services Center (GES DISC) but are not easily discoverable by users of other non-NASA data systems. The Global Earth Observation System of Systems (GEOSS) is a logical mechanism for providing access to the data rods. An ongoing GEOSS Water Services project aims to develop a distributed, global registry of water data, map, and modeling services cataloged using the standards and procedures of the Open Geospatial Consortium and the World Meteorological Organization. The ongoing data rods project has demonstrated the feasibility of leveraging the GEOSS infrastructure to help provide access to time series of model grid information or grids of information over a geographical domain for a particular time interval. A recently begun, related NASA-funded ACCESS-GEOSS project expands on these prior efforts. Current work focuses on improving both the performance of on-the-fly (OTF) data rod generation and the Web interfaces from which users can easily discover, search, and access NASA data.
Quantifying new water fractions and water age distributions using ensemble hydrograph separation
NASA Astrophysics Data System (ADS)
Kirchner, James
2017-04-01
Catchment transit times are important controls on contaminant transport, weathering rates, and runoff chemistry. Recent theoretical studies have shown that catchment transit time distributions are nonstationary, reflecting the temporal variability in precipitation forcing, the structural heterogeneity of catchments themselves, and the nonlinearity of the mechanisms controlling storage and transport in the subsurface. The challenge of empirically estimating these nonstationary transit time distributions in real-world catchments, however, has only begun to be explored. Long, high-frequency tracer time series are now becoming available, creating new opportunities to study how rainfall becomes streamflow on timescales of minutes to days following the onset of precipitation. Here I show that the conventional formula used for hydrograph separation can be converted into an equivalent linear regression equation that quantifies the fraction of current rainfall in streamflow across ensembles of precipitation events. These ensembles can be selected to represent different discharge ranges, different precipitation intensities, or different levels of antecedent moisture, thus quantifying how the fraction of "new water" in streamflow varies with forcings such as these. I further show how this approach can be generalized to empirically determine the contributions of precipitation inputs to streamflow across a range of time lags. In this way the short-term tail of the transit time distribution can be directly quantified for an ensemble of precipitation events. Benchmark testing with a simple, nonlinear, nonstationary catchment model demonstrates that this approach quantitatively measures the short tail of the transit time distribution for a wide range of catchment response characteristics. In combination with reactive tracer time series, this approach can potentially be extended to measure short-term chemical reaction rates at the catchment scale. 
High-frequency tracer time series from several experimental catchments will be used to demonstrate the utility of the new approach outlined here.
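The ensemble regression idea described above can be illustrated with a toy tracer mixing model. This is a hedged sketch under simplified assumptions (a two-component mixing model, a known "old water" concentration per event, and a through-origin least-squares fit), not Kirchner's exact estimator; all numerical values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

true_f = 0.2      # assumed true "new water" fraction for the synthetic events
n_events = 100

# Tracer concentrations (e.g., delta-18O) for event precipitation and
# pre-event baseflow, drawn independently for each event
c_precip = rng.normal(-10.0, 3.0, n_events)
c_old = rng.normal(-7.0, 0.5, n_events)

# Two-component mixing: streamflow tracer = F * new + (1 - F) * old, plus noise
c_stream = true_f * c_precip + (1 - true_f) * c_old
c_stream += rng.normal(0.0, 0.1, n_events)

# Ensemble hydrograph separation as a regression across events:
# the slope of (C_stream - C_old) on (C_precip - C_old) estimates F
dq = c_stream - c_old
dp = c_precip - c_old
f_hat = np.sum(dp * dq) / np.sum(dp * dp)   # least squares through the origin
```

Selecting the event ensemble by discharge, precipitation intensity, or antecedent moisture before fitting the slope is what turns this single number into the forcing-dependent new-water fractions the abstract describes.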
VPV--The velocity profile viewer user manual
Donovan, John M.
2004-01-01
The Velocity Profile Viewer (VPV) is a tool for visualizing time series of velocity profiles developed by the U.S. Geological Survey (USGS). The USGS uses VPV to preview and present measured velocity data from acoustic Doppler current profilers and simulated velocity data from three-dimensional estuarine, river, and lake hydrodynamic models. The data can be viewed as an animated three-dimensional profile or as a stack of time-series graphs that each represents a location in the water column. The graphically displayed data are shown at each time step like frames of animation. The animation can play at several different speeds or can be suspended on one frame. The viewing angle and time can be manipulated using mouse interaction. A number of options control the appearance of the profile and the graphs. VPV cannot edit or save data, but it can create a PostScript file showing the velocity profile in three dimensions. This user manual describes how to use each of these features. VPV is available and can be downloaded for free from the World Wide Web at http://ca.water.usgs.gov/program/sfbay/vpv.
Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue
2017-01-01
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series changes only slightly. This problem becomes especially severe when the time series is short. To solve it, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function that measures the similarity of two subsequences on a full range of values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise, and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of complexity measurement, especially for short time series, compared with MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis. PMID:28383496
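The hard 0/1 template matching that FMSE is designed to soften is visible in a compact (unoptimized) implementation of classic multiscale entropy. This is a sketch of standard MSE for context, not the FMSE similarity function itself; parameter values are illustrative:

```python
import numpy as np

def sample_entropy(x, m=2, tol=0.2):
    """Classic sample entropy with the hard 0/1 match the abstract criticizes:
    two length-m templates 'match' iff their Chebyshev distance is <= tol."""
    x = np.asarray(x, dtype=float)

    def match_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - m)])
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        return (d <= tol).sum() - len(templ)   # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b)

def coarse_grain(x, scale):
    """Step 1 of multiscale entropy: non-overlapping window averages."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(2)
white = rng.standard_normal(1000)
tol = 0.2 * white.std()   # tolerance fixed from the original series, as in MSE

# For white noise the MSE curve falls with scale: averaging removes variance
mse = [sample_entropy(coarse_grain(white, s), tol=tol) for s in (1, 2, 4)]
```

The FMSE proposal replaces the `d <= tol` step with a graded similarity in [0, 1], so a template pair just outside the tolerance no longer flips abruptly from counted to uncounted.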
Empirical method to measure stochasticity and multifractality in nonlinear time series
NASA Astrophysics Data System (ADS)
Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping
2013-12-01
An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of the time series from a Wiener process so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, the emergent markets are far from efficient. Differences about the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
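One simple way to quantify deviation from a Wiener process, in the spirit of the abstract, is to fit the scaling exponent of increment magnitudes against lag: a Wiener process scales as the square root of the lag. This is an illustrative stand-in, not the authors' specific algorithm; the lag set and test signals are arbitrary choices:

```python
import numpy as np

def scaling_exponent(x, lags=(1, 2, 4, 8, 16, 32)):
    """Fit H in std(x[t+lag] - x[t]) ~ lag**H by log-log regression.
    A Wiener process gives H = 0.5, so |H - 0.5| is a crude measure of
    how far a series departs from ideal Brownian scaling."""
    x = np.asarray(x, dtype=float)
    sds = [np.std(x[lag:] - x[:-lag]) for lag in lags]
    h, _ = np.polyfit(np.log(lags), np.log(sds), 1)
    return h

rng = np.random.default_rng(3)
walk = np.cumsum(rng.standard_normal(10000))       # discrete Wiener process
smooth = np.sin(np.linspace(0, 4 * np.pi, 10000))  # smooth deterministic signal

h_walk = scaling_exponent(walk)      # near 0.5: diffusive
h_smooth = scaling_exponent(smooth)  # near 1.0: locally linear, not diffusive
```

In a multifractal analysis, a single exponent like this is replaced by a whole spectrum of local scaling behaviors, which is where the local volatility construction in the abstract comes in.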
Identification and Inference for Econometric Models
NASA Astrophysics Data System (ADS)
Andrews, Donald W. K.; Stock, James H.
2005-07-01
This volume contains the papers presented in honor of the lifelong achievements of Thomas J. Rothenberg on the occasion of his retirement. The authors of the chapters include many of the leading econometricians of our day, and the chapters address topics of current research significance in econometric theory. The chapters cover four themes: identification and efficient estimation in econometrics, asymptotic approximations to the distributions of econometric estimators and tests, inference involving potentially nonstationary time series, such as processes that might have a unit autoregressive root, and nonparametric and semiparametric inference. Several of the chapters provide overviews and treatments of basic conceptual issues, while others advance our understanding of the properties of existing econometric procedures and/or propose new ones. Specific topics include identification in nonlinear models, inference with weak instruments, tests for nonstationarity in time series and panel data, generalized empirical likelihood estimation, and the bootstrap.
Constraining the physics of carbon crystallization through pulsations of a massive DAV BPM37093
NASA Astrophysics Data System (ADS)
Nitta, Atsuko; Kepler, S. O.; Chené, André-Nicolas; Koester, D.; Provencal, J. L.; Kleinmani, S. J.; Sullivan, D. J.; Chote, Paul; Sefako, Ramotholo; Kanaan, Antonio; Romero, Alejandra; Corti, Mariela; Kilic, Mukremin; Montgomery, M. H.; Winget, D. E.
We are trying to reduce the largest uncertainties in using white dwarf stars as Galactic chronometers by understanding the details of carbon crystallization that currently result in a 1-2 Gyr uncertainty in the ages of the oldest white dwarf stars. We expect the coolest white dwarf stars to have crystallized interiors, but theory also predicts that hotter white dwarf stars, if they are massive enough, will have some core crystallization. BPM 37093 is the first discovered of only a handful of known massive white dwarf stars that are also pulsating DAV, or ZZ Ceti, variables. Our approach is to use the pulsations to constrain the core composition and amount of crystallization. Here we report our analysis of 4 hours of continuous time series spectroscopy of BPM 37093 with Gemini South combined with simultaneous time-series photometry from Mt. John (New Zealand), SAAO, PROMPT, and Complejo Astronomico El Leoncito (CASLEO, Argentina).
Demonstration of a Dual-Band Mid-Wavelength HgCdTe Detector Operating at Room Temperature
NASA Astrophysics Data System (ADS)
Martyniuk, P.; Madejczyk, P.; Gawron, W.; Rutkowski, J.
2018-03-01
In this paper, the performance of a sequential dual-band mid-wavelength N+-n-p-p-P+-p-p-n-n+ back-to-back HgCdTe photodiode grown by metal-organic chemical vapor deposition (MOCVD) and operating at room temperature is presented. The details of the MOCVD growth procedure are given. The influence of the p-type separating-barrier layer on dark current, photocurrent, and response time was analyzed. A detectivity without immersion D* higher than 1 × 10^8 cm·Hz^(1/2)/W was estimated at λ_peak = 3.2 μm and 4.2 μm, respectively. A response time of τ_s ≈ 1 ns could be reached in both the MW1 and MW2 ranges for an optimal P+ barrier Cd composition in the range 0.38-0.42 and an extra processing-related series resistance R_Series of 500 Ω.
Industry contributions to aggregate workplace injury and illness rate trends: 1992-2008.
Ruser, John W
2014-10-01
Aggregate workplace injury and illness rates have generally declined over the past quarter century. Assessing which industries contributed to these declines is hampered by industry coding changes that broke time series data. Ratios were estimated to convert older incidence rate data to current industry codes and to create long industry time series from data of the BLS Survey of Occupational Injuries and Illnesses. These data were used to assess contributions to aggregate trends from within-industry incidence rate trends and across-industry hours shifts. Hours shifts toward safer industries do not explain aggregate incidence rate declines. Rather, the declines resulted from within-industry declines. The top 20 contributors out of 307 industries account for 40 percent of the decline and include both goods-producing and service-providing industries. These data help focus future research on industries responsible for rate declines and factors hypothesized as contributing to declines. © Published 2014 by Wiley Periodicals, Inc.
78 FR 79333 - Airworthiness Directives; Airbus Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
...We propose to supersede airworthiness directive (AD) 2000-12-12, for certain Airbus Model A300, A300-600, and A310 series airplanes. AD 2000-12-12 currently requires inspecting to detect cracks in the lower spar axis of the nacelle pylon between ribs 9 and 10, and repair if necessary. AD 2000-12-12 also provides for optional modification of the pylon, which terminates the inspections for Model A300 series airplanes. Since we issued AD 2000-12-12, we have received reports of cracking of the lower pylon spar after accomplishing the existing modification and have determined that shorter initial and repetitive inspection compliance times are necessary to address the identified unsafe condition. This proposed AD would reduce the initial and repetitive inspection compliance times. We are proposing this AD to detect and correct fatigue cracking, which could result in reduced structural integrity of the lower spar of the nacelle pylon.
Method of estimating pulse response using an impedance spectrum
Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G
2014-10-21
Electrochemical Impedance Spectrum data are used to predict pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse wave with a period greater than or equal to the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series are selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time domain estimate of the voltage using the Fourier series analysis.
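The multiply-and-resum procedure described above is straightforward to sketch numerically. This is an illustrative implementation of the general technique, not the patented method's exact algorithm; the impedance model `z_cell` and all component values are hypothetical:

```python
import numpy as np

def pulse_response(t, i_t, period, z_of_f, n_harmonics=200):
    """Estimate the time-domain voltage response of a device with impedance
    spectrum z_of_f to a periodic current waveform i_t: take the Fourier
    coefficients of the current, multiply each by Z at its frequency
    (Ohm's law, harmonic by harmonic), and resum the series."""
    k = np.arange(-n_harmonics, n_harmonics + 1)
    freqs = k / period
    # complex-exponential Fourier coefficients of the current pulse train
    coeffs = np.array([np.mean(i_t * np.exp(-2j * np.pi * f * t)) for f in freqs])
    v_coeffs = coeffs * z_of_f(freqs)
    basis = np.exp(2j * np.pi * np.outer(freqs, t))
    return np.real(v_coeffs @ basis)

period = 10.0                                   # s; >= 1 / lowest measured freq
t = np.linspace(0.0, period, 2000, endpoint=False)
i_t = (t < period / 2).astype(float)            # 1 A square pulse, 50% duty

# Hypothetical spectrum: series resistance plus a charge-transfer RC branch
def z_cell(f):
    r_s, r_ct, c = 0.05, 0.10, 500.0            # illustrative ohm/farad values
    return r_s + r_ct / (1.0 + 2j * np.pi * f * r_ct * c)

v_t = pulse_response(t, i_t, period, z_cell)
```

The harmonic count plays the role described in the abstract: it must resolve the pulse edges, but the highest harmonic frequency (here 200 / 10 s = 20 Hz) should stay within the band actually covered by the impedance measurement.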
Event Detection for Hydrothermal Plumes: A case study at Grotto Vent
NASA Astrophysics Data System (ADS)
Bemis, K. G.; Ozer, S.; Xu, G.; Rona, P. A.; Silver, D.
2012-12-01
Evidence is mounting that geologic events such as volcanic eruptions (and intrusions) and earthquakes (near and far) influence the flow rates and temperatures of hydrothermal systems. Connecting such suppositions to observations of hydrothermal output is challenging, but new ongoing time series have the potential to capture such events. This study explores using activity detection, a technique modified from computer vision, to identify pre-defined events within an extended time series recorded by COVIS (Cabled Observatory Vent Imaging Sonar) and applies it to a time series, with gaps, from Sept 2010 to the present; available measurements include plume orientation, plume rise rate, and diffuse flow area at the NEPTUNE Canada Observatory at Grotto Vent, Main Endeavour Field, Juan de Fuca Ridge. Activity detection is the process of finding a pattern (activity) in a data set containing many different types of patterns. Among many approaches proposed to model and detect activities, we have chosen a graph-based technique, Petri Nets, as they do not require training data to model the activity. They use the domain expert's knowledge to build the activity as a combination of feature states and their transitions (actions). Starting from a conceptual model of how hydrothermal plumes respond to daily tides, we have developed a Petri Net based detection algorithm that identifies deviations from the specified response. Initially we assumed that the orientation of the plume would change smoothly and symmetrically in a consistent daily pattern. However, results indicate that the rate of directional changes varies. The present Petri Net detects unusually large and rapid changes in direction or amount of bending; however inspection of Figure 1 suggests that many of the events detected may be artifacts resulting from gaps in the data or from the large temporal spacing. 
Still, considerable complexity overlies the "normal" tidal response pattern (the data have a dominant frequency of ~12.9 hours). We are in the process of defining several events of particular scientific interest: 1) transient behavioral changes associated with atmospheric storms, earthquakes, or volcanic intrusions or eruptions; 2) mutual interaction of neighboring plumes on each other's behavior; and 3) rapid shifts in plume direction that indicate the presence of unusual currents or changes in currents. We will query the existing data to see if these relationships are ever observed, as well as test our understanding of the "normal" pattern of response to tidal currents.
Figure 1. Arrows indicate plume orientation at a given time (time axis in days after 9/29/10) and stars indicate times when orientation changes rapidly.
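The simplest transition in such a detector — flagging unusually large and rapid directional changes against the tidal background — can be sketched as follows. This is a minimal single-rule stand-in, not the COVIS Petri Net model; the threshold, sampling, and synthetic tidal signal are all illustrative assumptions:

```python
import numpy as np

def detect_rapid_shifts(orientation_deg, times_h, rate_threshold=10.0):
    """Flag samples where plume orientation changes faster than
    rate_threshold degrees per hour -- a stand-in for one detector
    transition ('rapid directional change')."""
    d_angle = np.diff(orientation_deg)
    d_angle = (d_angle + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)
    rate = d_angle / np.diff(times_h)
    return np.where(np.abs(rate) > rate_threshold)[0] + 1

# Synthetic tidally driven orientation with one injected abrupt excursion
t = np.arange(0.0, 48.0, 0.5)                     # hours
orient = 10.0 * np.sin(2 * np.pi * t / 12.9)      # ~12.9 h dominant period
orient[60] += 40.0                                # the "event"

events = detect_rapid_shifts(orient, t)
```

A Petri Net generalizes this by chaining several such feature states and transitions, so that an "event" only fires when a prescribed sequence of conditions is met — which also helps distinguish real excursions from the data-gap artifacts noted above.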
Measuring Multiple Resistances Using Single-Point Excitation
NASA Technical Reports Server (NTRS)
Hall, Dan; Davies, Frank
2009-01-01
In a proposed method of determining the resistances of individual DC electrical devices connected in a series or parallel string, no attempt would be made to perform direct measurements on individual devices. Instead, (1) the devices would be instrumented by connecting reactive circuit components in parallel and/or in series with the devices, as appropriate; (2) a pulse or AC voltage excitation would be applied at a single point on the string; and (3) the transient or AC steady-state current response of the string would be measured at that point only. Each reactive component(s) associated with each device would be distinct in order to associate a unique time-dependent response with that device.
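One way such a scheme could work in the frequency domain can be sketched for a series string in which each device is shunted by a distinct capacitor, so each device gets its own RC corner frequency. This is a hedged illustration of the single-point idea, not the proposed method itself; all component values and probe frequencies are invented for the example:

```python
import numpy as np

# Hypothetical: three resistive devices in series, each shunted by a
# deliberately distinct capacitor, giving well-separated time constants
r_true = np.array([10.0, 22.0, 47.0])            # ohms
c_shunt = np.array([1e-6, 1e-4, 1e-2])           # farads
# corner frequencies 1/(2*pi*R*C): ~15.9 kHz, ~72 Hz, ~0.34 Hz

def z_string(f):
    """Input impedance seen at the single excitation point: the series sum
    of each device's parallel RC impedance."""
    jw = 2j * np.pi * np.asarray(f, dtype=float)
    return sum(r / (1.0 + jw * r * c) for r, c in zip(r_true, c_shunt))

# Probe between the corner frequencies: as frequency drops past each corner,
# Re(Z) steps up by exactly that device's resistance
f_probe = np.array([1e7, 1e3, 5.0, 1e-3])        # Hz, chosen between corners
plateaus = z_string(f_probe).real
r_est = np.diff(plateaus)                        # one step per device
```

Each resistance is thus recovered from measurements made at the string's terminals alone, because the distinct reactive tag gives each device a unique frequency (or, equivalently, time-constant) signature.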
Multiscale Poincaré plots for visualizing the structure of heartbeat time series.
Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L
2016-02-09
Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
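The coarse-graining and plot construction described above are simple to sketch, with the conventional SD1/SD2 ellipse area as a summary of each plot. This is an illustrative reimplementation on white noise, not the authors' code; the series length and scales are arbitrary:

```python
import numpy as np

def coarse_grain(rr, scale):
    """Average consecutive non-overlapping windows of the RR series."""
    n = len(rr) // scale
    return np.asarray(rr[:n * scale]).reshape(n, scale).mean(axis=1)

def poincare_points(rr):
    """(RR_n, RR_{n+1}) pairs for a standard delay-1 Poincare plot."""
    rr = np.asarray(rr, dtype=float)
    return np.column_stack([rr[:-1], rr[1:]])

def ellipse_area(points):
    """pi * SD1 * SD2: spread across and along the line of identity."""
    d = np.diff(points, axis=1).ravel() / np.sqrt(2)   # perpendicular spread
    s = points.sum(axis=1) / np.sqrt(2)                # spread along identity
    return np.pi * d.std() * s.std()

rng = np.random.default_rng(4)
white = rng.standard_normal(4096)

# For white noise the point cloud shrinks rapidly under coarse-graining
areas = [ellipse_area(poincare_points(coarse_grain(white, s))) for s in (1, 2, 4)]
```

The contrast the abstract reports follows the same pattern: for 1/f noise the ellipse area is roughly conserved across scales, whereas for white noise (as here) averaging at larger scales removes variance and the area falls off.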
The influence of decadal scale climatic events on the transport of larvae.
NASA Astrophysics Data System (ADS)
Rasmuson, L. K.; Edwards, C. A.; Shanks, A.
2016-02-01
Understanding the processes that influence larval transport remains an important, yet difficult, task. This is especially true as more studies demonstrate that biological and physical oceanographic processes vary at long (e.g., decadal+) time scales. We used individual-based biophysical models to study transport of Dungeness crab larvae (the most economically valuable fishery on the West Coast of the continental United States) over a 10-year period, during both positive and negative phases of the Pacific decadal oscillation (PDO). A physical oceanographic model of the California current was developed using the Regional Ocean Modeling System with 1/30-degree resolution. Measured and modeled PDO indices were positively correlated. The biological model was implemented using the Lagrangian Transport Model, modified to incorporate temperature-dependent development and stage-specific behaviors. Super-individuals were used to scale production and incorporate mortality. Models were validated using time series statistics to compare measured and modeled daily recruitment. More larvae recruited, in both our measured and modeled time series, during negative PDOs. Our work suggests larvae exhibit a daily vertical migration to or almost to the bottom. During positive PDO years larvae were competent to settle earlier than in negative PDO years; however, pelagic larval durations did not differ. The southern end of the population appears to be a sink population, which likely explains the decline in commercial catch. Ultimately, the population is much more demographically closed than previously thought. We hypothesize that the stronger flow in the California current during negative PDOs enhances membership of larvae in the current. Further, migrating almost to the bottom causes larvae to enter the benthic boundary layer on the continental shelf and the California undercurrent on the continental slope, both of which decrease net alongshore advection.
These factors result in a higher number of larvae completing their larval phase within the California current. We hypothesize that Dungeness crabs have evolved to complete their larval phase within the oceanographic context of the California current, and that differences with the oceanography of the Alaska current may explain the difficulties in managing fisheries there.