Power Allocation Strategies for Distributed Space-Time Codes in Amplify-and-Forward Mode
NASA Astrophysics Data System (ADS)
Maham, Behrouz; Hjørungnes, Are
2009-12-01
We consider a wireless relay network with Rayleigh fading channels and apply distributed space-time coding (DSTC) in amplify-and-forward (AF) mode. It is assumed that the relays have statistical channel state information (CSI) of the local source-relay channels, while the destination has full instantaneous CSI of all channels. It turns out that, combined with minimum-SNR-based power allocation at the relays, AF DSTC results in a new opportunistic relaying scheme in which the best relay is selected to retransmit the source's signal. Furthermore, we derive the optimum power allocation between the two cooperative transmission phases by maximizing the average received SNR at the destination. Next, assuming M-PSK and M-QAM modulations, we analyze the performance of cooperative diversity wireless networks using AF opportunistic relaying. We also derive an approximate formula for the symbol error rate (SER) of AF DSTC. Assuming the use of full-diversity space-time codes, we derive two power allocation strategies that minimize the approximate SER expressions under a transmit power constraint. Our analytical results are confirmed by simulations using full-rate, full-diversity distributed space-time codes.
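The best-relay selection rule described above can be illustrated with a short sketch. This is not the paper's derived allocation; it is the generic bottleneck criterion often used for opportunistic AF relaying, with hypothetical per-hop SNR inputs:

```python
def select_best_relay(sr_snrs, rd_snrs):
    """Pick the relay whose weaker hop (source-relay or relay-destination)
    is strongest -- a common min-SNR bottleneck criterion for opportunistic
    amplify-and-forward relaying.  Illustrative sketch only; the exact
    metric derived in the paper may differ."""
    metrics = [min(sr, rd) for sr, rd in zip(sr_snrs, rd_snrs)]
    best = max(range(len(metrics)), key=metrics.__getitem__)
    return best, metrics[best]
```

For example, with source-relay SNRs [10, 3, 8] and relay-destination SNRs [2, 9, 7], the per-relay bottlenecks are [2, 3, 7], so the third relay is selected.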
Weighted adaptively grouped multilevel space time trellis codes
NASA Astrophysics Data System (ADS)
Jain, Dharmvir; Sharma, Sanjay
2015-05-01
In existing grouped multilevel space-time trellis codes (GMLSTTCs), the groups of transmit antennas are predefined, and the transmit power is equally distributed across all transmit antennas. When the channel parameters are perfectly known at the transmitter, an adaptive antenna grouping and beamforming scheme can achieve better performance by optimally grouping the transmit antennas and properly weighting the transmitted signals based on the available channel information. In this paper, we present a new code designed by combining GMLSTTCs, adaptive antenna grouping, and beamforming using the channel state information at the transmitter (CSIT), henceforth referred to as weighted adaptively grouped multilevel space time trellis codes (WAGMLSTTCs). The CSIT is used to adaptively group the transmit antennas and provide a beamforming scheme by allocating different power levels to the transmit antennas. Simulation results show that WAGMLSTTCs provide an improvement in error performance of 2.6 dB over GMLSTTCs.
Space-Time Code Designs for Broadband Wireless Communications
2005-03-01
Decoding Algorithms (i). Fast iterative decoding algorithms for lattice-based space-time coded MIMO systems and single-antenna vector OFDM systems: We... Information Theory, vol. 49, p. 313, Jan. 2003. 5. G. Fan and X.-G. Xia, "Wavelet-Based Texture Analysis and Synthesis Using Hidden Markov Models," IEEE... PSK, and CPM signals, lattice-based space-time codes, and unitary differential space-time codes for a large number of transmit antennas. We want to
Differential Cooperative Communications with Space-Time Network Coding
2010-01-01
The received signal at U_n in the m-th time slot of Phase I is y_mn^k = sqrt(P_t) g_mn^k v_m^k + w_mn^k, (1) where P_t is the power constraint of the user nodes, w... rate (SER) at U_n for the symbols from U_m is p_mn, and the β_mn are independent Bernoulli random variables with distribution P(β_mn) = 1 - p_mn for β_mn = 1, and p_mn for β_mn = 0. (17) The SER for M-QAM modulation can be expressed as [12] p_mn = F_2(1 + b_q γ_mn / sin^2 θ), (18) where b_q = b_QAM/2 = 3/(2(M+1)) and γ_mn
On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry
2014-06-01
Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), Generalized time-reversed space-time block codes (GTR-STBC) 16... Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per-subcarrier basis in frequency selective fading, it is not a viable
Differential Space-Time Coding Scheme Using Star Quadrature Amplitude Modulation Method
NASA Astrophysics Data System (ADS)
Yu, Xiangbin; Xu, DaZhuan; Bi, Guangguo
2006-12-01
Differential space-time coding (DSTC) has received much interest because it obviates the need for channel state information at the receiver while maintaining the desired properties of space-time coding techniques. In this paper, by introducing the star quadrature amplitude modulation (star QAM) method, two kinds of multiple-amplitude DSTC schemes are proposed. One is based on the differential unitary space-time coding (DUSTC) scheme, and the other is based on the differential orthogonal space-time coding (DOSTC) scheme. The corresponding bit-error-rate (BER) performance and coding-gain analyses are given, respectively. By employing multiple-amplitude modulation, the proposed schemes avoid the performance loss that conventional PSK-based DSTC schemes suffer at high spectral efficiency. Compared with conventional PSK-based DSTC schemes, the developed schemes achieve higher spectral efficiency, by carrying information not only in the phase but also in the amplitude, and higher coding gain. Moreover, the first scheme supports low-complexity differential modulation and different code rates and can be applied to any number of transmit antennas, while the second scheme has a simple decoder and a high code rate in the case of 3 and 4 antennas. The simulation results show that our schemes achieve lower BER than conventional DUSTC and DOSTC schemes.
Novel space-time trellis codes for free-space optical communications using transmit laser selection.
García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2015-09-21
In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is subject to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and to pointing error effects, following a misalignment fading model in which the effects of beam width, detector size, and jitter variance are considered. The obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results further confirm the analytical results.
NASA Astrophysics Data System (ADS)
Yu, Xiangbin; Dong, Tao; Xu, Dazhuan; Bi, Guangguo
2010-09-01
By introducing an orthogonal space-time coding scheme, multiuser code division multiple access (CDMA) systems with different space time codes are given, and corresponding system performance is investigated over a Nakagami-m fading channel. A low-complexity multiuser receiver scheme is developed for space-time block coded CDMA (STBC-CDMA) systems. The scheme can make full use of the complex orthogonality of space-time block coding to simplify the high decoding complexity of the existing scheme. Compared to the existing scheme with exponential decoding complexity, it has linear decoding complexity. Based on the performance analysis and mathematical calculation, the average bit error rate (BER) of the system is derived in detail for integer m and non-integer m, respectively. As a result, a tight closed-form BER expression is obtained for STBC-CDMA with an orthogonal spreading code, and an approximate closed-form BER expression is attained for STBC-CDMA with a quasi-orthogonal spreading code. Simulation results show that the proposed scheme can achieve almost the same performance as the existing scheme with low complexity. Moreover, the simulation results for average BER are consistent with the theoretical analysis.
Adaptive Switching Technique for Space-Time/Frequency Coded OFDM Systems
NASA Astrophysics Data System (ADS)
Change, Namseok; Gil, Gye-Tae; Kang, Joonhyuk; Yu, Gangyoul
In this letter, a switched transmission technique is presented that provides orthogonal frequency division multiplexing (OFDM) systems with additional diversity gain. Space-time block coding (STBC) and space-frequency block coding (SFBC) are considered for the transmission of the OFDM signals. A proper coding scheme is selected based on the estimated normalized delay spread and normalized Doppler spread; the selection criterion is derived empirically. It is shown through computer simulations that the proposed switching technique can improve the bit error rate (BER) performance of an OFDM system when the corresponding wireless channel exhibits time selectivity as well as frequency selectivity.
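As a rough illustration of such a switching rule (the letter's empirical criterion is not reproduced here; the threshold and inputs below are hypothetical): STBC assumes the channel stays constant across consecutive OFDM symbols, so it suffers under high Doppler spread, while SFBC assumes adjacent subcarriers see the same channel, so it suffers under high delay spread.

```python
def select_coding(norm_doppler, norm_delay_spread, ratio_threshold=1.0):
    """Hypothetical switching rule: pick the block code whose channel
    assumption is less violated.  High time selectivity (Doppler)
    favors SFBC; high frequency selectivity (delay spread) favors STBC.
    The threshold is an illustrative placeholder, not the letter's
    empirically derived criterion."""
    if norm_doppler > ratio_threshold * norm_delay_spread:
        return "SFBC"
    return "STBC"
```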
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing the total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission over multiple transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
Alamouti-Type Space-Time Coding for Free-Space Optical Communication with Direct Detection
NASA Astrophysics Data System (ADS)
Simon, M. K.; Vilnrotter, V.
2003-11-01
In optical communication systems employing direct detection at the receiver, intensity modulations such as on-off keying (OOK) or pulse-position modulation (PPM) are commonly used to convey the information. Consider the possibility of applying space-time coding in such a scenario, using, for example, an Alamouti-type coding scheme [1]. Implicit in the Alamouti code is the fact that the modulation that defines the signal set is such that it is meaningful to transmit and detect both the signal and its negative. While modulations such as phase-shift keying (PSK) and quadrature amplitude modulation (QAM) naturally fall into this class, OOK and PPM do not, since the signal polarity (phase) would not be detected at the receiver. We investigate a modification of the Alamouti code to be used with such modulations that has the same desirable properties as the conventional Alamouti code but does not require transmitting the negative of a signal.
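For reference, the conventional Alamouti scheme the paper modifies can be sketched as follows; the second time slot transmits negated/conjugated symbols, and that explicit negation is precisely what direct-detection OOK or PPM cannot convey. The channel gains and symbols below are arbitrary illustrative values, and noise is omitted:

```python
def alamouti_encode(s1, s2):
    """Conventional 2-antenna Alamouti encoding.
    Slot 1: antennas send (s1, s2); slot 2: (-conj(s2), conj(s1)).
    Note the explicit negation -- the property unavailable to
    direct-detection intensity modulations."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining for a 2x1 channel (h1, h2), noiseless case."""
    norm = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / norm
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / norm
    return s1_hat, s2_hat
```

In the noiseless case the combiner recovers both symbols exactly, which is the orthogonality property the modified code must preserve without using negative signals.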
An algorithm for space-time block code classification using higher-order statistics (HOS).
Yan, Wenjun; Zhang, Limin; Ling, Qing
2016-01-01
This paper proposes a novel algorithm for space-time block code classification when a single antenna is employed at the receiver. The algorithm exploits the discriminating features provided by the higher-order cumulants of the received signal. It requires neither channel estimation nor knowledge of the noise. Computer simulations are conducted to evaluate the performance of the proposed algorithm, and the results show that it performs well.
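The abstract does not spell out which cumulant features are used; a standard estimator of the fourth-order cumulant C40 for a real-valued, zero-mean sequence, of the kind such classifiers typically build on, might look like:

```python
def c20(x):
    """Second-order moment estimate E[x^2] (zero mean assumed)."""
    return sum(v * v for v in x) / len(x)

def c40(x):
    """Fourth-order cumulant C40 = E[x^4] - 3*E[x^2]^2 for a
    real-valued, zero-mean sequence.  Different constellations give
    different values (e.g. BPSK gives -2), which is what makes
    cumulants useful as classification features."""
    m4 = sum(v ** 4 for v in x) / len(x)
    return m4 - 3 * c20(x) ** 2
```

For complex baseband signals the cumulant definitions involve conjugation patterns (C40, C41, C42) not shown here; this real-valued form is only a sketch of the idea.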
2×Nr MIMO ARQ Scheme Using Multi-Strata Space-Time Codes
NASA Astrophysics Data System (ADS)
Ko, Dongju; Lee, Jeong Woo
We propose a 2×Nr MIMO ARQ scheme that uses multi-strata space-time codes composed of two layers. The phase and transmit power of each layer are assigned adaptively at each transmission round to mitigate the inter-layer interference and improve the block error rate by retransmission. Simulation results show that the proposed scheme achieves better performance than the conventional schemes in terms of the throughput and the block error rate.
A novel repetition space-time coding scheme for mobile FSO systems
NASA Astrophysics Data System (ADS)
Li, Ming; Cao, Yang; Li, Shu-ming; Yang, Shao-wen
2015-03-01
Considering the influence of stronger atmospheric turbulence, more severe pointing errors, and highly dynamic links on the transmission performance of mobile multiple-input multiple-output (MIMO) free-space optical (FSO) communication systems, this paper establishes a channel model for the mobile platform. Based on the combination of the Alamouti space-time code and time hopping ultra-wideband (TH-UWB) communications, a novel repetition space-time coding (RSTC) method for mobile 2×2 free-space optical communications with pulse position modulation (PPM) is developed. In particular, two decoding methods, equal gain combining (EGC) maximum likelihood detection (MLD) and correlation matrix detection (CMD), are derived. When a quasi-static fading and weak turbulence channel model is considered, simulation results show that whether the channel state information (CSI) is known or not, the coded system achieves significantly better symbol error rate (SER) performance than the uncoded one. In other words, transmit diversity can be achieved while conveying the information only through the time delays of the modulated signals transmitted from different antennas. CMD achieves almost the same combining effect as maximal ratio combining (MRC). However, when the channel correlation increases, the SER performance of the coded 2×2 system degrades significantly.
Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea
2010-04-01
The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.
Adaptive sphere decoding for space-time codes of wireless MIMO communications
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Walker, Ernest
2010-04-01
In this paper, we develop an adaptive sphere decoding technique for space-time coding in wireless MIMO communications. The technique uses statistics of previous decoding results to reduce the complexity of subsequent decoding. Specifically, we propose a method for determining the initial sphere radius for the decoding of a future time-frame based on a queue of minimum sphere radii obtained from the decoding of previous time-frames. Concrete methods are derived for choosing appropriate queue sizes. Numerical experiments demonstrate the efficiency of the adaptive technique.
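A minimal sketch of this radius-initialization idea, assuming a bounded history of past minimum radii and an illustrative safety margin (the paper's derived queue sizes and radius rule are not reproduced here):

```python
from collections import deque

class AdaptiveRadius:
    """Keep a bounded queue of the minimum sphere radii found in
    previous frames and derive the initial radius for the next frame
    from them.  Here: the largest recorded minimum times a safety
    margin.  Queue length and margin are hypothetical choices."""

    def __init__(self, maxlen=32, margin=1.1):
        self.history = deque(maxlen=maxlen)  # old entries evicted
        self.margin = margin

    def initial_radius(self, fallback=float('inf')):
        # With no history yet, fall back to an unconstrained search.
        if not self.history:
            return fallback
        return self.margin * max(self.history)

    def record(self, min_radius):
        """Store the minimum radius found while decoding a frame."""
        self.history.append(min_radius)
```

The design intent is that a slightly inflated estimate of recent minimum radii prunes the search tree early while keeping the probability of an empty sphere low.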
Two Novel Space-Time Coding Techniques Designed for UWB MISO Systems Based on Wavelet Transform.
Zaki, Amira Ibrahim; Badran, Ehab F; El-Khamy, Said E
2016-01-01
In this paper, two novel space-time coding multi-input single-output (STC MISO) schemes designed especially for ultra-wideband (UWB) systems are introduced. The proposed schemes are referred to as wavelet space-time coding (WSTC) schemes. The WSTC schemes are based on two types of multiplexing: spatial and wavelet-domain multiplexing. In WSTC schemes, four symbols are transmitted on the same UWB transmission pulse with the same bandwidth, symbol duration, and number of transmit antennas as the conventional STC MISO scheme. The mother wavelet (MW) is selected to be highly correlated with the transmitted pulse shape and such that the multiplexed signal has almost the same spectral characteristics as the original UWB pulse. The two WSTC techniques increase the data rate to four times that of the conventional STC. The first WSTC scheme increases the data rate with a simple combination process. The second scheme achieves the increased data rate with a less complex receiver and better performance than the first scheme, owing to the spatial diversity introduced by the structure of its transmitter and receiver. Both schemes use Rake receivers to collect the energy in the dense multipath channel components. The simulation results show that the proposed WSTC schemes outperform the conventional scheme while increasing the data rate to four times that of the conventional STC scheme.
NASA Astrophysics Data System (ADS)
Zanotti, Olindo; Dumbser, Michael
2015-03-01
We present a high order one-step ADER-WENO finite volume scheme with space-time adaptive mesh refinement (AMR) for the solution of the special relativistic hydrodynamic and magnetohydrodynamic equations. By adopting a local discontinuous Galerkin predictor method, a high order one-step time discretization is obtained, with no need for Runge-Kutta sub-steps. This turns out to be particularly advantageous in combination with space-time adaptive mesh refinement, which has been implemented following a "cell-by-cell" approach. As in existing second-order AMR methods, the present higher-order AMR algorithm features time-accurate local time stepping (LTS), where grids on different spatial refinement levels are allowed to use different time steps. We also compare two different Riemann solvers for the computation of the numerical fluxes at the cell interfaces. The new scheme has been validated over a sample of numerical test problems in one, two, and three spatial dimensions, exploring its ability to resolve the propagation of relativistic hydrodynamical and magnetohydrodynamical waves in different physical regimes. The astrophysical relevance of the new code for the study of the Richtmyer-Meshkov instability is briefly discussed in view of future applications.
A multi-layer VLC imaging system based on space-time trace-orthogonal coding
NASA Astrophysics Data System (ADS)
Li, Peng-Xu; Yang, Yu-Hong; Zhu, Yi-Jun; Zhang, Yan-Yu
2017-02-01
In visible light communication (VLC) imaging systems, different types of data are usually transmitted with different priorities in terms of reliability and/or validity. With this in mind, a novel transmission scheme called space-time trace-orthogonal coding (STTOC) for VLC is proposed in this paper, taking full advantage of the characteristics of time-domain transmission and space-domain orthogonality. Several constellation designs for different priority strategies, subject to a total power constraint, are then presented. One significant advantage of this scheme is that the inter-layer interference (ILI) can be eliminated completely and the computational complexity of maximum likelihood (ML) detection is linear. Computer simulations verify the correctness of our theoretical analysis and demonstrate that the proposed scheme greatly outperforms the conventional multi-layer transmission system in both transmission rate and error performance.
General relativistic radiative transfer code in rotating black hole space-time: ARTIST
NASA Astrophysics Data System (ADS)
Takahashi, Rohta; Umemura, Masayuki
2017-02-01
We present a general relativistic radiative transfer code, ARTIST (Authentic Radiative Transfer In Space-Time), a perfectly causal scheme for pursuing the propagation of radiation with absorption and scattering around a Kerr black hole. The code explicitly solves the invariant radiation intensity along null geodesics in the Kerr-Schild coordinates, and therefore properly includes light bending, Doppler boosting, frame dragging, and gravitational redshifts. The notable aspect of ARTIST is that it conserves the radiative energy with high accuracy and is not subject to numerical diffusion, since the transfer is solved on long characteristics along null geodesics. We first solve the wavefront propagation around a Kerr black hole that was originally explored by Hanni. This demonstrates repeated wavefront collisions, light bending, and causal propagation of radiation at the speed of light. We show that the decay rate of the total energy of wavefronts near a black hole is determined solely by the black hole spin at late phases, in agreement with analytic expectations. As a result, ARTIST correctly solves the general relativistic radiation fields up to late phases, t ≈ 90 M. We also explore the effects of absorption and scattering, and apply the code to a photon wall problem and an orbiting hotspot problem. All the simulations in this study are performed in the equatorial plane around a Kerr black hole. ARTIST is a first step toward general relativistic radiation hydrodynamics.
Distributed Space-Time Coding for Cooperative Networks
2006-12-05
log2 M. (10) We assume that the channels are Rayleigh fading, so that |h|^2 is an exponential random variable with expected value σ_h^2 = 1/r^α, where r is... analysis to Nakagami-m fading channels and we showed that the advantage decreases as the index m increases, i.e., as the channel tends to be less and less... symbols over two successive time periods, so that T_SR2D = 2 T_s and T_S2R = 2 log2(M) T_s / log2(Q). The sequence transmitted by the source-relay pair is
Wireless Network Cocast: Cooperative Communications with Space-Time Network Coding
2011-04-21
[List-of-figures fragment: SER of the transform-based STNC for different numbers of user nodes (N = 2 and N = 3) with QPSK and 16-QAM modulation, for M = 1 and M = 2; performance comparison between the proposed STNC scheme and a scheme employing the distributed Alamouti code for N = 2 and M = 2, with QPSK and 16-QAM modulations; a multi-source wireless network.]
A two-level space-time color-coding method for 3D measurements using structured light
NASA Astrophysics Data System (ADS)
Xue, Qi; Wang, Zhao; Huang, Junhui; Gao, Jianmin; Qi, Zhaoshuai
2015-11-01
Color-coding methods have significantly improved the measurement efficiency of structured light systems. However, problems such as color crosstalk and chromatic aberration decrease the measurement accuracy of such systems. A two-level space-time color-coding method is therefore proposed in this paper. The method, which includes a space-code level and a time-code level, is shown to be reliable and efficient. The influence of chromatic aberration is completely mitigated by this method, and a self-adaptive windowed Fourier transform is used to eliminate all color crosstalk components. Theoretical analyses and experiments show that the proposed coding method solves the problems of color crosstalk and chromatic aberration effectively, while guaranteeing measurement accuracy very close to that obtained with monochromatic coded patterns.
Low Complexity Receiver Based Space-Time Codes for Broadband Wireless Communications
2011-01-31
STBC family is a combination/overlay between orthogonal STBC and Toeplitz codes, which could be viewed as a generalization of overlapped Alamouti... codes (OAC) and Toeplitz codes recently proposed in the literature. It is shown that the newly proposed STBC may outperform the existing codes when... mixed asynchronous signals in the first time-slot by a Toeplitz matrix, and then broadcasts them back to the terminals in the second time-slot. A
Energy Distributions in Szekeres Type I and II Space-Times
NASA Astrophysics Data System (ADS)
Aygün, S.; Aygün, M.; Tarhan, I.
2006-10-01
In this study, in the context of general relativity, we consider the Einstein, Bergmann-Thomson, Møller, and Landau-Lifshitz energy-momentum definitions and compute the total energy distribution (due to matter and fields, including gravitation) of the universe based on Szekeres class I and class II space-times. We show that the Einstein and Bergmann-Thomson definitions of the energy-momentum complexes give the same results, while the Møller and Landau-Lifshitz definitions do not give the same results for Szekeres class II space-time. The Einstein, Bergmann-Thomson, and Møller definitions of the energy-momentum complexes give similar results in Szekeres class I space-time.
Computer code for space-time diagnostics of nuclear safety parameters
Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.
2012-07-01
The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of reactor cores and databases for RBMK-1000, on the basis of analytical methods for the interrelation of nuclear safety parameters. The code algorithms are based on the analysis of deviations between physically measured values and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between measured and calculated signals indicate a mismatch between the behavior of the physical device and its simulator. The diagnostics system can solve the following problems: identifying the occurrence and timing of inconsistent results, localizing failures, and identifying and quantifying the causes of inconsistencies. These problems can be solved effectively only when the computer code works in real time, which imposes stricter requirements on code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1, is used for the neutron-physical calculation in the computer code ECRAN 3D.
Riemer, Martin; Diersch, Nadine; Bublatzky, Florian; Wolbers, Thomas
2016-04-01
The mental representations of space, time, and number magnitude are inherently linked. The right posterior parietal cortex (PPC) has been suggested to contain a general magnitude system that underlies the overlap between various perceptual dimensions. However, comparative studies including spatial, temporal, and numerical dimensions are missing. In a unified paradigm, we compared the impact of right PPC inhibition on associations with spatial response codes (i.e., Simon, SNARC, and STARC effects) and on congruency effects between space, time, and numbers. Prolonged cortical inhibition was induced by continuous theta-burst stimulation (cTBS), a protocol for transcranial magnetic stimulation (TMS), at the right intraparietal sulcus (IPS). Our results show that congruency effects, but not response code associations, are affected by right PPC inhibition, indicating different neuronal mechanisms underlying these effects. Furthermore, the results demonstrate that interactions between space and time perception are reflected in congruency effects, but not in an association between time and spatial response codes. Taken together, these results imply that the congruency between purely perceptual dimensions is processed in PPC areas along the IPS, while the congruency between percepts and behavioral responses is independent of this region.
Gao, Feng-hua; Zhang, Shi-qing; He, Jia-chang; Wang, Tian-ping; Zhang, Gong-hua; Li, Ting-ting
2013-08-01
To explore the application of space-time scan statistics in the early warning of the distribution of schistosome-infected Oncomelania hupensis snails, data on the distribution of infected snails in Anhui Province from 2006 to 2012 were collected, and a spatial database was established with ArcGIS 9.3. A prospective spatial-temporal cluster analysis was performed with SaTScan 9.1.1 at the village level. Four space-time clusters, distributed over the Yangtze River and its branch rivers, were detected. The space-time scan statistics could detect the distribution of infected snails early, and the clusters found correspond to the key and most challenging areas for schistosomiasis control in Anhui Province.
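The scan statistic underlying SaTScan's prospective analysis scores each candidate space-time cylinder by a Poisson log-likelihood ratio, then reports the highest-scoring cylinders as the most likely clusters. A minimal sketch of that score, with c observed and e expected cases inside the cylinder and C total cases (cluster enumeration and the Monte Carlo significance test are omitted):

```python
import math

def poisson_llr(c, e, C):
    """Kulldorff-style Poisson log-likelihood ratio for one candidate
    space-time cluster: c observed and e expected cases inside the
    cylinder, C total cases overall.  Returns 0 unless the cylinder
    shows excess risk (c > e)."""
    if c <= e or c == 0:
        return 0.0
    inside = c * math.log(c / e)
    outside = (C - c) * math.log((C - c) / (C - e)) if C > c else 0.0
    return inside + outside
```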
Space-time signal processing for distributed pattern detection in sensor networks
NASA Astrophysics Data System (ADS)
Paffenroth, Randy C.; Du Toit, Philip C.; Scharf, Louis L.; Jayasumana, Anura P.; Banadara, Vidarshana; Nong, Ryan
2012-05-01
We present a theory and algorithm for detecting and classifying weak, distributed patterns in network data that provide actionable information with quantifiable measures of uncertainty. Our work demonstrates the effectiveness of space-time inference on graphs, robust matrix completion, and second order analysis for the detection of distributed patterns that are not discernible at the level of individual nodes. Motivated by the importance of the problem, we are specifically interested in detecting weak patterns in computer networks related to Cyber Situational Awareness. Our focus is on scenarios where the nodes (terminals, routers, servers, etc.) are sensors that provide measurements (of packet rates, user activity, central processing unit usage, etc.) that, when viewed independently, cannot provide a definitive determination of the underlying pattern, but when fused with data from across the network both spatially and temporally, the relevant patterns emerge. The approach is applicable to many types of sensor networks including computer networks, wireless networks, mobile sensor networks, and social networks, as well as in contexts such as databases and disease outbreaks.
UNIX code management and distribution
Hung, T.; Kunz, P.F.
1992-09-01
We describe a code management and distribution system based on tools freely available for the UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site such that remote developers are true peers in the code development process.
NASA Astrophysics Data System (ADS)
Maurya, S. K.; Gupta, Y. K.
2012-08-01
A family of anisotropic fluid distributions is constructed using a space-time describing a family of charged perfect fluid distributions. The anisotropy parameter is taken to be twice the square of the electric intensity used in the charged fluid distributions. As the anisotropy parameter (or the electric intensity) is zero at the centre and monotonically increasing towards the pressure-free interface, we have utilized the anisotropic fluid distributions to create boson-type neutron star models which join smoothly to the exterior Schwarzschild metric. All the physical entities, such as energy density, radial pressure, tangential pressure, and velocity of sound, are monotonically decreasing towards the surface. Different members of the above family are characterized by a positive integer n. It is observed that the maximum mass (5.8051 solar masses for n = 4) decreases for n > 4, but approaches a non-zero terminal value (2.8010 solar masses) as n tends to infinity.
Hundessa, Samuel H; Williams, Gail; Li, Shanshan; Guo, Jinpeng; Chen, Linping; Zhang, Wenyi; Guo, Yuming
2016-12-19
Despite the declining burden of malaria in China, the disease remains a significant public health problem with periodic outbreaks and spatial variation across the country. A better understanding of the spatial and temporal characteristics of malaria is essential for consolidating the disease control and elimination programme. This study aims to understand the spatial and spatiotemporal distribution of Plasmodium vivax and Plasmodium falciparum malaria in China during 2005-2014. The global Moran's I statistic was used to detect spatial autocorrelation of local P. falciparum and P. vivax malaria at the county level. Spatial and space-time scan statistics were applied to detect spatial and spatiotemporal clusters, respectively. Both P. vivax and P. falciparum malaria showed spatial autocorrelation. The most likely spatial cluster of P. vivax was detected in northern Anhui province between 2005 and 2009, and western Yunnan province between 2010 and 2014. For P. falciparum, the clusters included several counties of western Yunnan province from 2005 to 2011, Guangxi from 2012 to 2013, and Anhui in 2014. The most likely space-time clusters of P. vivax malaria and P. falciparum malaria were detected in northern Anhui province and western Yunnan province, respectively, during 2005-2009. The spatial and space-time cluster analysis identified high-risk areas and periods for both P. vivax and P. falciparum malaria. Both malaria types showed significant spatial and spatiotemporal variations. In contrast to P. vivax, the high-risk areas for P. falciparum malaria shifted from the west to the east of China. Further studies are required to examine the spatial changes in risk of malaria transmission and identify the underlying causes of elevated risk in the high-risk areas.
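The global Moran's I used here has a compact closed form, I = (n/ΣW)(zᵀWz)/(zᵀz), where z are mean-centred values and W is a spatial weight matrix. A minimal sketch with hypothetical county data, not the study's data:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x with spatial weight matrix w (zero diagonal)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n = len(x)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Hypothetical incidence on a 1-D chain of 6 counties with rook-style adjacency:
# high values cluster at one end, low values at the other.
x = [8.0, 7.5, 6.0, 1.2, 1.0, 0.8]
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1.0

I = morans_i(x, w)   # well above the null expectation of -1/(n-1), i.e. clustering
```

Significance in practice is judged against a permutation distribution (as libraries such as PySAL do); the statistic itself is just this ratio.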
Ranta, J; Pitkäniemi, J; Karvonen, M; Virtala, E; Rusanen, J; Colpaert, A; Naukkarinen, A; Tuomilehto, J
1996-12-15
We developed a test statistic, based on an approach of Whittemore et al. (1987), to detect space-time clustering of non-infectious diseases. We extended the spatial test of Whittemore et al. by deriving conditional probabilities for Poisson-distributed random variables. To combine spatial and temporal distances, we defined a distance matrix D, where d_ij is the distance between the ith and jth cell in a three-dimensional space-time grid. The spatial and temporal components are controlled by a weight; by altering the weight, both marginal tests and intermediate tests can be obtained. Because the weight allows a continuum from a purely spatial to a purely temporal test, the best result is obtained by trying different weights, since the occurrence of a disease may show both temporal and spatial tendencies to cluster. We examined the behaviour of the test statistic by simulating different distributions for cases and the population. The test was applied to incidence data on insulin-dependent diabetes mellitus in Finland. This test can be used in the analysis of data that are localized by map co-ordinates, addresses, or postcodes. This information is important when using Geographical Information System (GIS) technology to compute the pairwise distances needed for the proposed test.
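The weighted space-time distance matrix described above can be sketched directly; the Euclidean spatial metric and the example weights are our assumptions, not the paper's exact construction:

```python
import numpy as np

def space_time_distances(coords, times, weight):
    """Distance matrix D with d_ij = weight * spatial + (1 - weight) * temporal.
    weight = 1 gives a purely spatial test, weight = 0 a purely temporal one."""
    coords = np.asarray(coords, dtype=float)
    times = np.asarray(times, dtype=float)
    spatial = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    temporal = np.abs(times[:, None] - times[None, :])
    return weight * spatial + (1.0 - weight) * temporal

coords = [(0, 0), (0, 3), (4, 0)]   # hypothetical map coordinates (km)
times = [1990, 1990, 1995]          # hypothetical onset years
D_spatial = space_time_distances(coords, times, weight=1.0)
D_mixed = space_time_distances(coords, times, weight=0.5)
```

Sweeping `weight` over [0, 1] traces the continuum from the temporal to the spatial marginal test.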
Seismicity along the Main Marmara Fault, Turkey: from space-time distribution to repeating events
NASA Astrophysics Data System (ADS)
Schmittbuhl, Jean; Karabulut, Hayrullah; Lengliné, Olivier; Bouchon, Michel
2016-04-01
The North Anatolian Fault (NAF) poses a significant hazard for the large cities surrounding the Marmara Sea region, particularly the megalopolis of Istanbul. Indeed, the NAF presently hosts a long unruptured segment below the Sea of Marmara. This seismic gap is approximately 150 km long and corresponds to the Main Marmara Fault (MMF). The seismicity along the MMF is analyzed here during the 2007-2012 period to provide insights into the recent evolution of this important regional seismic gap. High-precision locations show that seismicity varies strongly along strike and depth, providing fine details of the fault behavior that are inaccessible from geodetic inversions. The activity clusters strongly at the regions of transition between basins. The Central basin shows significant seismicity located below the shallow locking depth inferred from GPS measurements. Its b-value is low and the average seismic slip is high. Interestingly, we also found several long-term repeating earthquakes in this domain. Using a template-matching technique, we identified two new families of repeaters: a first family that typically belongs to aftershock sequences, and a second family of long-lasting repeaters with a multi-month recurrence period. All observations are consistent with deep creep of this segment. On the contrary, the Kumburgaz basin at the center of the fault shows sparse seismicity with the hallmarks of a locked segment. In the eastern Marmara Sea, the seismicity distribution along the Princes Islands segment in the Cinarcik basin is consistent with the geodetic locking depth of 10 km and a low contribution to the regional seismic energy release. The assessment of the locked segment areas provides an estimate of about 7.3 for the magnitude of the main forthcoming event, assuming that the rupture will not enter significantly into creeping domains.
Typical BWR/4 MSIV closure ATWS analysis using RAMONA-3B code with space-time neutron kinetics
Neymotin, L.; Saha, P.
1984-01-01
A best-estimate analysis of a typical BWR/4 MSIV closure ATWS has been performed using the RAMONA-3B code with three-dimensional neutron kinetics. All safety features, namely, the safety and relief valves, recirculation pump trip, high pressure safety injections and the standby liquid control system (boron injection), were assumed to work as designed. No other operator action was assumed. The results show a strong spatial dependence of reactor power during the transient. After the initial peak of pressure and reactor power, the reactor vessel pressure oscillated between the relief valve set points, and the reactor power oscillated between 20% and 50% of the steady-state power until the hot shutdown condition was reached at approximately 1400 seconds. The suppression pool bulk water temperature at this time was predicted to be approximately 96°C (205°F). In view of code performance and reasonable computer running time, the RAMONA-3B code is recommended for further best-estimate analyses of ATWS-type events in BWRs.
The triple distribution of codes and ordered codes
Trinker, Horst
2011-01-01
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859–2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound. PMID:22505770
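The triple distribution tallies ordered triples of codewords by their three pairwise Hamming distances. A toy sketch for an unordered binary code (helper names are ours; ordered codes would use the ordered-Hamming poset metric instead):

```python
from itertools import product
from collections import Counter

def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def triple_distribution(code):
    """Count ordered codeword triples (u, v, w) by (d(u,v), d(u,w), d(v,w))."""
    dist = Counter()
    for u, v, w in product(code, repeat=3):
        dist[(hamming(u, v), hamming(u, w), hamming(v, w))] += 1
    return dist

# The [3,2] binary even-weight code: any two distinct codewords are at distance 2.
code = ["000", "011", "101", "110"]
trip = triple_distribution(code)
```

Schrijver's bound feeds these counts into a semidefinite program; the sketch shows only the combinatorial object being constrained.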
Link-Adaptive Distributed Coding for Multisource Cooperation
NASA Astrophysics Data System (ADS)
Cano, Alfonso; Wang, Tairan; Ribeiro, Alejandro; Giannakis, Georgios B.
2007-12-01
Combining multisource cooperation and link-adaptive regenerative techniques, a novel protocol is developed capable of achieving diversity order up to the number of cooperating users and large coding gains. The approach relies on a two-phase protocol. In Phase 1, cooperating sources exchange information-bearing blocks, while in Phase 2, they transmit reencoded versions of the original blocks. Different from existing approaches, participation in the second phase does not require correct decoding of Phase 1 packets. This allows relaying of soft information to the destination, thus increasing coding gains while retaining diversity properties. For any reencoding function the diversity order is expressed as a function of the rank properties of the distributed coding strategy employed. This result is analogous to the diversity properties of colocated multi-antenna systems. Particular cases include repetition coding, distributed complex field coding (DCFC), distributed space-time coding, and distributed error-control coding. Rate, diversity, complexity and synchronization issues are elaborated. DCFC emerges as an attractive choice because it offers high-rate, full spatial diversity, and relaxed synchronization requirements. Simulations confirm analytically established assessments.
Space-time variability of raindrop size distributions along a 2.2 km microwave link path
NASA Astrophysics Data System (ADS)
van Leth, Tommy; Uijlenhoet, Remko; Overeem, Aart; Leijnse, Hidde; Berne, Alexis
2017-04-01
The Wageningen Urban Rainfall Experiment (WURex14-15) was dedicated to addressing several errors and uncertainties associated with quantitative precipitation estimates from microwave links. The core of the experiment consisted of three co-located microwave links installed between two major buildings on the Wageningen University campus, approximately 2.2 km apart: a 38 GHz commercial microwave link, provided by T-Mobile NL, and 26 GHz and 38 GHz (dual-polarization) research microwave links from RAL. Transmitting and receiving antennas were attached to masts installed on the roofs of the two buildings, about 30 m above the ground. This setup was complemented with a Scintec infrared large-aperture scintillometer installed over the same path, an automatic rain gauge, and five Parsivel optical disdrometers positioned at several locations along the path. Temporal sampling of the received signals was performed at a rate of 20 Hz. The setup was monitored by time-lapse cameras to assess the state of the antennas as well as the atmosphere. Finally, data were available from the KNMI weather radars and an automated weather station situated just outside Wageningen. The experiment ran from August 2014 to December 2015. We present preliminary results regarding the space-time variability of raindrop size distributions from the Parsivel disdrometers along the 2.2 km microwave link path.
Fuentes-Vallejo, Mauricio
2017-07-24
Dengue is a widely spread vector-borne disease. Dengue cases in the Americas have increased over the last few decades, affecting various urban spaces throughout these continents, including the tourism-oriented city of Girardot, Colombia. Interactions among mosquitoes, pathogens and humans have recently been examined using different temporal and spatial scales in attempts to determine the roles that social and ecological systems play in dengue transmission. The current work characterizes the spatial and temporal behaviours of dengue in Girardot and discusses the potential territorial dynamics related to the distribution of this disease. Based on officially reported dengue cases (2012-2015) corresponding to epidemic (2013) and inter-epidemic years (2012, 2014, 2015), space (Getis-Ord index) and space-time (Kulldorff's scan statistics) analyses were performed. Geocoded dengue cases (n = 2027) were slightly overrepresented by men (52.1%). As expected, the cases were concentrated in the 0- to 15-year-old age group according to the actual trends of Colombia. The incidence rates of dengue during the rainy and dry seasons as well as those for individual years (2012, 2013 and 2014) were significant using the global Getis-Ord index. Local clusters shifted across seasons and years; nevertheless, the incidence rates clustered towards the southwest region of the city under different residential conditions. Space-time clusters shifted from the northeast to the southwest of the city (2012-2014). These clusters represented only 4.25% of the total cases over the same period (n = 1623). A general trend was observed, in which dengue cases increased during the dry seasons, especially between December and February. Despite study limitations related to official dengue records and available fine-scale demographic information, the spatial analysis results were promising from a geography of health perspective. Dengue did not show linear association with poverty or with vulnerable
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-08-01
Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer decide on the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing the best implementation under these conditions. The effect of the number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi
2017-07-01
In this paper, we explore the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices of the frequency-domain equalization (FDE) are updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm achieves a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM), and 64-QAM, respectively, at their respective bit-error rates (BER) with minimum-mean-square-error (MMSE) equalization.
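The STBC ingredient can be illustrated with the classical 2x2 Alamouti block, whose orthogonal structure is what makes the combining at the receiver trivial. The sketch below is noise-free, uses our own function names, and is not the paper's FDE/RLS receiver:

```python
import numpy as np

rng = np.random.default_rng(1)

def alamouti_encode(s1, s2):
    """2x2 Alamouti block: rows are time slots, columns are transmit antennas/modes."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r, h):
    """Linear combining of the two received samples r = (r1, r2) given channel h."""
    r1, r2 = r
    h1, h2 = h
    d1 = np.conj(h1) * r1 + h2 * np.conj(r2)   # -> (|h1|^2 + |h2|^2) * s1
    d2 = np.conj(h2) * r1 - h1 * np.conj(r2)   # -> (|h1|^2 + |h2|^2) * s2
    return d1, d2

# Two QPSK symbols over a random flat-fading channel (noise-free for clarity).
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h = rng.standard_normal(2) + 1j * rng.standard_normal(2)
r = alamouti_encode(s1, s2) @ h        # received samples at the two time slots
d1, d2 = alamouti_decode(r, h)
gain = np.abs(h[0])**2 + np.abs(h[1])**2   # full transmit diversity gain
```

The cross terms cancel exactly, so each symbol is recovered with the full channel gain; this decoupling is what the paper exploits per frequency bin.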
Colour cyclic code for Brillouin distributed sensors
NASA Astrophysics Data System (ADS)
Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne
2015-09-01
For the first time, colour cyclic coding (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code provides an additional gain of √2 while keeping the same number of sequences as a colour code. A comparison with a standard BOTDA sensor is carried out and validates the theoretical coding gain.
NASA Astrophysics Data System (ADS)
Dorman, L. I.
For great solar flare events we calculate the expected gamma-ray fluxes during periods of flare energetic particle (FEP) generation and propagation. We calculate the expected space-time-energy distribution of these particles in the Heliosphere during FEP events. On the basis of investigations of non-linear cosmic ray interaction with the solar wind, we also determine the expected space-time distribution of solar wind matter. We then calculate the expected generation of gamma rays by the decay of neutral pions produced in nuclear interactions of FEPs with solar wind matter, determine the expected space-time distribution of gamma-ray emissivity, and calculate the expected time variation of the angular distribution and spectra of gamma-ray fluxes. For some simple diffusion models of solar FEP propagation and a simple model of non-linear interaction of cosmic rays with the solar wind, we found the expected time evolution of the gamma-ray flux angular distribution as well as of the gamma-ray spectrum. We show that observations of gamma rays generated by solar FEP interactions with solar wind matter can yield important information on the FEP generation time and source spectrum, on the mode of FEP propagation in interplanetary space, on the character of the non-linear interaction of cosmic rays with the solar wind, and on the matter distribution in the Heliosphere. This research is made in the frame of the COST 724 program and is partly supported by INTAS grant 0810.
A distributed particle simulation code in C++
Forslund, D.W.; Wingate, C.A.; Ford, P.S.; Junkins, J.S.; Pope, S.C.
1992-03-01
Although C++ has been successfully used in a variety of computer science applications, it has just recently begun to be used in scientific applications. We have found that the object-oriented properties of C++ lend themselves well to scientific computations by making maintenance of the code easier, by making the code easier to understand, and by providing a better paradigm for distributed memory parallel codes. We describe here aspects of developing a particle plasma simulation code using object-oriented techniques for use in a distributed computing environment. We initially designed and implemented the code for serial computation and then used the distributed programming toolkit ISIS to run it in parallel. In this connection we describe some of the difficulties presented by using C++ for doing parallel and scientific computation.
Distribution Coding in the Visual Pathway
Sanderson, A. C.; Kozak, W. M.; Calvert, T. W.
1973-01-01
Although a variety of types of spike interval histograms have been reported, little attention has been given to the spike interval distribution as a neural code and to how different distributions are transmitted through neural networks. In this paper we present experimental results showing spike interval histograms recorded from retinal ganglion cells of the cat. These results exhibit a clear correlation between spike interval distribution and stimulus condition at the retinal ganglion cell level. The averaged mean rates of the cells studied were nearly the same in light as in darkness whereas the spike interval histograms were much more regular in light than in darkness. We present theoretical models which illustrate how such a distribution coding at the retinal level could be “interpreted” or recorded at some higher level of the nervous system such as the lateral geniculate nucleus. Interpretation is an essential requirement of a neural code which has often been overlooked in modeling studies. Analytical expressions are derived describing the role of distribution coding in determining the transfer characteristics of a simple interaction model and of a lateral inhibition network. Our work suggests that distribution coding might be interpreted by simply interconnected neural networks such as relay cell networks, in general, and the primary thalamic sensory nuclei in particular. PMID:4697235
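The paper's central observation, equal mean firing rates but sharply different interval statistics, can be reproduced with a short simulation (rate and gamma order are hypothetical, chosen only to mimic irregular "dark" versus regular "light" spiking):

```python
import numpy as np

rng = np.random.default_rng(2)
rate = 20.0          # spikes per second, identical in both conditions
n = 5000             # number of inter-spike intervals drawn per condition

# "Dark" condition: Poisson-like spiking -> exponential interval distribution.
isi_dark = rng.exponential(1.0 / rate, n)
# "Light" condition: more regular spiking -> gamma intervals with the same mean.
k = 8
isi_light = rng.gamma(k, 1.0 / (rate * k), n)

# Mean rates match, but the coefficient of variation separates the conditions:
cv_dark = isi_dark.std() / isi_dark.mean()      # ~1 for exponential intervals
cv_light = isi_light.std() / isi_light.mean()   # ~1/sqrt(k), much more regular
```

A rate code alone cannot distinguish the two conditions; the interval histogram (here summarized by the CV) can, which is the distribution-coding point of the paper.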
Robust entanglement distribution via quantum network coding
NASA Astrophysics Data System (ADS)
Epping, Michael; Kampermann, Hermann; Bruß, Dagmar
2016-10-01
Many protocols of quantum information processing, like quantum key distribution or measurement-based quantum computation, ‘consume’ entangled quantum states during their execution. When participants are located at distant sites, these resource states need to be distributed. Due to transmission losses, quantum repeaters become necessary for large distances (e.g., ≳300 km). Here we generalize the concept of the graph state repeater to D-dimensional graph states and to repeaters that can perform basic measurement-based quantum computations, which we call quantum routers. This processing of data at intermediate network nodes is called quantum network coding. We describe how a scheme to distribute general two-colourable graph states via quantum routers with network coding can be constructed from classical linear network codes. The robustness of the distribution of graph states against outages of network nodes is analysed by establishing a link to stabilizer error correction codes. Furthermore, we show that for any stabilizer error correction code there exists a corresponding quantum network code with similar error-correcting capabilities.
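The classical ingredient underlying the construction, a linear network code, is easiest to see on the textbook butterfly network over GF(2). This is a generic illustration, not the paper's graph-state protocol:

```python
def butterfly(b1, b2):
    """Classical linear network code on the butterfly network over GF(2):
    the bottleneck edge carries b1 XOR b2, letting both sinks recover both bits."""
    coded = b1 ^ b2                 # sent on the shared bottleneck edge
    sink1 = (b1, coded ^ b1)        # sink 1 hears b1 directly plus the coded bit
    sink2 = (coded ^ b2, b2)        # sink 2 hears b2 directly plus the coded bit
    return sink1, sink2

# Both sinks recover (b1, b2) for every input pair, which plain routing
# cannot achieve through the single bottleneck edge.
ok = all(butterfly(b1, b2) == ((b1, b2), (b1, b2))
         for b1 in (0, 1) for b2 in (0, 1))
```

The paper lifts exactly this kind of linear combination at intermediate nodes to measurement-based operations on graph states.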
2008-02-01
(Record fragment; author: Swindlehurst, Fellow, IEEE.) Abstract: A space-time optimal power schedule for multiple distributed multiple-input multiple-output (MIMO) links without knowledge of the instantaneous channel state information (CSI) … and the optimality of different power scheduling approaches. Index terms: multiple-input multiple-output (MIMO) …
2008-09-01
(Record fragment.) Glossary: … Layered Space-Time Architecture; CSI, Channel State Information; CSIR, Channel State Information at Receiver; C-DIV, Cooperative Diversity; C-STC, … The relay terminal RMTji decides, based on channel state information at reception (CSIR), whether or not it is helpful for …
Distributed transform coding via source-splitting
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2012-12-01
Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal-transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach however requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.
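The single-terminal transform-coding step on which the scheme builds can be sketched directly: decorrelate with a KLT, then scalar-quantize each coefficient. This illustrates only classical TC, not the source-splitting or Wyner-Ziv machinery; the covariance and step size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Correlated jointly Gaussian source vectors (hypothetical covariance).
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
x = rng.multivariate_normal([0, 0], cov, size=10000)

# KLT: eigenvectors of the sample covariance decorrelate the components.
evals, evecs = np.linalg.eigh(np.cov(x.T))   # evals give coefficient variances
y = x @ evecs                                # transform coefficients

step = 0.5
y_hat = step * np.round(y / step)            # uniform scalar quantization
x_hat = y_hat @ evecs.T                      # inverse transform (orthogonal)

mse = np.mean((x - x_hat) ** 2)              # ~ step^2 / 12 per component
off_diag = np.corrcoef(y.T)[0, 1]            # ~0: coefficients decorrelated
```

In the distributed setting the two terminals cannot apply a joint KLT, which is precisely the gap that source-splitting plus Wyner-Ziv quantization of transform coefficients is designed to close.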
Alexandru, Andrei; Draper, Terrence; Horvath, Ivan; Streuer, Thomas
2011-08-15
Highlights: We propose a method to compute the polarization for a multi-dimensional random distribution. We apply the method to the eigenmodes of the Dirac operator in pure glue QCD. We compute the chiral polarization for these modes and study its scale dependence. We find that in a finite volume there is a scale where the polarization tendency changes. We study the continuum limit of this chiral polarization scale. Abstract: We propose a framework for quantitative evaluation of dynamical tendency for polarization in an arbitrary random variable that can be decomposed into a pair of orthogonal subspaces. The method uses measures based on comparisons of given dynamics to its counterpart with statistically independent components. The formalism of previously considered X-distributions is used to express the aforementioned comparisons, in effect putting the former approach on solid footing. Our analysis leads to the definition of a suitable correlation coefficient with clear statistical meaning. We apply the method to the dynamics induced by pure-glue lattice QCD in local left-right components of overlap Dirac eigenmodes. It is found that, in finite physical volume, there exists a non-zero physical scale in the spectrum of eigenvalues such that eigenmodes at smaller (fixed) eigenvalues exhibit convex X-distribution (positive correlation), while at larger eigenvalues the distribution is concave (negative correlation). This chiral polarization scale thus separates a regime where dynamics enhances chirality relative to statistical independence from a regime where it suppresses it, and gives an objective definition to the notion of 'low' and 'high' Dirac eigenmode. We propose to investigate whether the polarization scale remains non-zero in the infinite volume limit, in which case it would represent a new kind of low energy scale in QCD.
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1992-01-01
The concept of the space-time neural network affords distributed temporal memory, enabling such a network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace the synaptic-connection weights of a conventional back-error-propagation neural network.
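A minimal sketch of the idea, an FIR filter per synapse instead of a single scalar weight, might look like the following; the shapes, the tanh nonlinearity, and the function name are our assumptions, not the NASA implementation:

```python
import numpy as np

def fir_neuron(inputs, filters):
    """Space-time neuron: each synapse filters the input history with an FIR
    filter, so the unit carries distributed temporal memory.
    inputs: (n_inputs, T) signal histories; filters: (n_inputs, n_taps) taps."""
    n_inputs, T = inputs.shape
    out = np.zeros(T)
    for i in range(n_inputs):
        # Causal FIR filtering of each input line, truncated to length T.
        out += np.convolve(inputs[i], filters[i], mode="full")[:T]
    return np.tanh(out)

rng = np.random.default_rng(4)
x = rng.standard_normal((3, 50))        # three input lines, 50 time steps
h = 0.1 * rng.standard_normal((3, 4))   # a 4-tap filter per synapse
y = fir_neuron(x, h)                    # temporally aware neuron output
```

With one tap per filter this collapses to an ordinary weighted-sum neuron, which makes the generalization easy to see.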
NASA Astrophysics Data System (ADS)
Diakogianni, Georgia; Papadopoulos, Gerassimos; Fokaefs, Anna; Papageorgiou, Antonia; Triantafyllou, Ioanna
2015-04-01
We have compiled a new tsunami catalogue covering the entire European and Mediterranean (EM) region from pre-historical times up to the present. The catalogue is of increased completeness and homogeneity with respect to previous ones, containing more than 370 events with a reliability assignment for each event listed. New historical events were inserted, and revised parameters of historical tsunamigenic earthquakes were extensively adopted, particularly for the most active region of the eastern Mediterranean. In association with the catalogue, an inventory of tsunami impact was created, with the main attributes being the numbers of people killed and injured and the damage to buildings, vessels, cultivated land, and other property. The inventory also includes a record of the tsunami environmental impact, such as soil erosion, geomorphological changes, boulder displacement, and tsunami sediment deposits. Data on the tsunami impact were used to assign tsunami intensity on the 12-point Papadopoulos-Imamura (2001) scale for the majority of the events listed. The tsunami impact was studied with respect to its space and time distribution. In space, the tsunami impact was mapped in terms of tsunami intensity and impact zones were determined. The time distribution of the tsunami impact was examined for each of the impact zones. Leaving aside large pre-historical tsunamis, such as the one produced by the LBA (Minoan) eruption of the Thera (Santorini) volcano, due to the lack of certain impact data, it has been found that the main impact comes from extreme tsunamigenic earthquakes, such as the ones of AD 365 in Crete, 551 in Lebanon, 1303 in Crete, and 1755 in Lisbon. However, high impact may also occur from events of lower magnitude, such as the 1908 tsunami in the Messina straits and the 1956 tsunami in the South Aegean, which underlines the strong dependence of the impact on community exposure. Another important finding is that the cumulative impact of relatively moderate or even small
Frequency-coded quantum key distribution.
Bloch, Matthieu; McLaughlin, Steven W; Merolla, Jean-Marc; Patois, Frédéric
2007-02-01
We report an intrinsically stable quantum key distribution scheme based on genuine frequency-coded quantum states. The qubits are efficiently processed without fiber interferometers by fully exploiting the nonlinear interaction occurring in electro-optic phase modulators. The system requires only integrated off-the-shelf devices and could be used with a true single-photon source. Preliminary experiments have been performed with weak laser pulses and have demonstrated the feasibility of this new setup.
NASA Astrophysics Data System (ADS)
Ashtekar, Abhay
In general relativity, space-time ends at singularities. The big bang is considered the Beginning and the big crunch the End. However, these conclusions are arrived at by using general relativity in regimes which lie well beyond its physical domain of validity. Examples where detailed analysis is possible show that these singularities are naturally resolved by quantum geometry effects. Quantum space-times can be vastly larger than what Einstein had us believe. These non-trivial space-time extensions enable us to answer some long-standing questions and resolve some puzzles in fundamental physics. Thus, a century after Minkowski's revolutionary ideas on the nature of space and time, yet another paradigm shift appears to await us in the wings.
Wang, Qiang; Gao, Jin-Bin; Xu, Jing; Huang, Ya-Min; He, Yong; Gao, Yang; Yang, Kun; Qian, Ying-Jun; Fu, Qing; Li, Shi-Zhu; Zhou, Xiao-Nong
2014-04-01
To investigate the distribution features of Oncomelania hupensis infested areas in Gaoyou County so as to formulate surveillance and intervention strategies. A database was established by collecting data on the snail-infested areas in the county during 1970-2009. The data were input into SaTScan 9.2 software for spatial-temporal cluster analysis to determine the spatial and temporal clusters of the snail habitats, and the results were displayed with ArcGIS 10.1 software. There were historically 720 snail habitats in the county in 1970-2009, including 521 in the plain region with water networks and 199 in the lake and marshland region. The former covered an area of 456.62 ha, distributed mainly in the northern towns/townships of the county, while the latter, with an area of 4 495.75 ha, were distributed in the Xinmin Beach between Gaoyou Lake and Shaobo Lake and in the Qiaojian Beach close to Tianchang County of Anhui Province. The spatial-temporal cluster analysis revealed that, among all the historical snail habitats, there were two prominent spatial-temporal clusters with a relative risk of >3: one appeared in the Xinmin Beach in 1983-2002, and the other was located in the north of Gaoyou in 1970-1973. Separate analyses performed for the water network and the lake and marshland regions indicated 2 clusters in each region. During 1970-2009, 244 snail habitats were newly found in the county, with 130 in the water network region and 114 in the lake and marshland region. Again, the spatial-temporal cluster analysis displayed 2 prominent clusters, and separate analyses found 2 clusters in each region. The space-time scan statistic can be applied to detect clusters of snail-infested areas in two dimensions, which provides information for guiding specific measures of surveillance and control.
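The space-time scan statistic used here (as implemented in SaTScan) evaluates cylindrical windows, a disc in space extended over a time interval, with a Poisson likelihood ratio. The following sketch illustrates the idea on entirely synthetic habitat records; all coordinates, counts, and the planted cluster are hypothetical, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy habitat records (x, y, year) on a 10x10 area over 1970-2009;
# a dense cluster is planted near (2, 2) during 1980-1984.
background = np.column_stack([rng.uniform(0, 10, 200),
                              rng.uniform(0, 10, 200),
                              rng.integers(1970, 2010, 200)])
cluster = np.column_stack([rng.normal(2, 0.3, 60),
                           rng.normal(2, 0.3, 60),
                           rng.integers(1980, 1985, 60)])
events = np.vstack([background, cluster])

def poisson_llr(c, e, C):
    # Kulldorff log-likelihood ratio: c observed, e expected, C total cases
    if c <= e:
        return 0.0
    llr = c * np.log(c / e)
    if C - c > 0:
        llr += (C - c) * np.log((C - c) / (C - e))
    return llr

def scan(events, radii=(0.5, 1.0, 2.0), t_min=1970, t_max=2010, max_span=10):
    C = len(events)
    best_llr, best_cyl = 0.0, None
    for cx, cy in events[::10, :2]:          # subsampled candidate centres
        for r in radii:
            in_disc = (events[:, 0] - cx) ** 2 + (events[:, 1] - cy) ** 2 <= r ** 2
            for t0 in range(t_min, t_max):
                for span in range(1, max_span + 1):
                    sel = in_disc & (events[:, 2] >= t0) & (events[:, 2] < t0 + span)
                    c = int(sel.sum())
                    # expected count if events were uniform over area and years
                    e = C * (np.pi * r ** 2 / 100.0) * (span / 40.0)
                    llr = poisson_llr(c, e, C)
                    if llr > best_llr:
                        best_llr, best_cyl = llr, (cx, cy, r, t0, t0 + span)
    return best_llr, best_cyl

best_llr, (cx, cy, r, t0, t1) = scan(events)
```

The most likely cylinder found by the scan should sit over the planted cluster; SaTScan additionally assesses its significance by Monte Carlo replication, which is omitted here.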
Zhao, Jinzhuo; Li, Li; Qian, Chunyan; Jiang, Rongfang; Song, Weimin
2012-01-01
To observe ambient fine particle pollution and the trend of its space-time distribution in residential areas in Shanghai, and to explore the effects of vehicle exhaust emission on ambient fine particle pollution. Two residential areas, A and B, were selected for monitoring fine particle pollution. Area A is a normal residential area, while area B is close to a main road with heavy traffic. Four monitoring sites were set at distances of 0 m, 50 m, 100 m and 200 m from the roadside, at 1.5 - 1.8 m above the ground. The concentration of fine particles in the air was measured in April, July and October 2010 and January 2011, for 10 days in each month, in both areas using SIDEPAK AM510 (TSI, USA) fine particle monitors. Fine particle pollution varied across seasons (spring > winter > autumn > summer) and across times of day (with two peaks, at 8:00 and 19:00, corresponding to the rush hours). Fine particle pollution was higher in residential area B than in area A. The concentration of fine particles decreased with increasing distance from the roadside. The level of fine particles in residential areas is comparatively high in Shanghai, and vehicle exhaust emissions have significant effects on the concentration of fine particles in the atmosphere of residential areas.
Distributed single source coding with side information
NASA Astrophysics Data System (ADS)
Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.
2004-01-01
In the paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the position of source coding with side information, and, contrary to existing scenarios in which side information is given explicitly, here the side information is created based on a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols, where each symbol represents a particular edge shape. The codebook is image independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over the solutions where side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very low bit rate regime.
NASA Astrophysics Data System (ADS)
Chapline, George
It has been shown that a nonlinear Schrödinger equation in 2+1 dimensions equipped with an SU(N) Chern-Simons gauge field can provide an exact description of certain self-dual Einstein spaces in the limit N → ∞. Ricci-flat Einstein spaces can then be viewed as arising from a quantum pairing of the classical self-dual and anti-self-dual solutions. In this chapter, we outline how this theory of empty space-time might be generalized to include matter and vacuum energy by transplanting the nonlinear Schrödinger equation used to construct Einstein spaces to the 25+1-dimensional Lorentzian Leech lattice. If the distinguished 2 spatial dimensions underlying the construction of Einstein spaces are identified with a hexagonal lattice section of the Leech lattice, the wave function becomes an 11 × 11 matrix that can represent fermion and boson degrees of freedom (DOF) associated with 2-form and Yang-Mills gauge symmetries. The resulting theory of gravity and matter in 3+1 dimensions is not supersymmetric, which provides an entry for a vacuum energy. Indeed, in the case of a Lemaître cosmological model, the emergent space-time will naturally have a vacuum energy on the order of the observed cosmological constant.
Time coded distribution via broadcasting stations
NASA Technical Reports Server (NTRS)
Leschiutta, S.; Pettiti, V.; Detoma, E.
1979-01-01
The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and inexpensive receivers, but the signals are radiated a limited number of times per day, are not usually available during the night, and no full, automatic synchronization of a remote clock is possible. In an attempt to overcome some of these problems, a time-coded signal carrying complete date information is broadcast by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.
NASA Astrophysics Data System (ADS)
da Silva Rocha Paz, Igor; Ichiba, Abdellah; Skouri-Plakali, Ilektra; Lee, Jisun; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel
2017-04-01
Climate change and global warming are expected to make precipitation events more frequent, more severe and more local. This may have serious consequences for human health, the environment, cultural heritage, economic activities, utilities and public service providers. Precipitation risk and water management are therefore a key challenge for densely populated urban areas. Applications derived from high-resolution (in time and space) observation of precipitation can make our cities more weather-ready. Finer-resolution data available from X-band dual radar measurements enhance engineering tools used for urban planning policies as well as protection (mitigation/adaptation) strategies to tackle climate-change-related weather events. For decades, engineering tools have been developed to work conveniently either with very local rain gauge networks, or with mainly C-band weather radars that have gradually been set up for space-time remote sensing of precipitation. Most of the time, the C-band weather radars continue to be calibrated by the existing rain gauge networks. Inhomogeneous distributions of rain gauging networks lead to only partial information on the rainfall fields. In fact, the statistics of measured rainfall are strongly biased by the fractality of the measuring networks. This fractality needs to be properly taken into account to retrieve the original properties of the rainfall fields, in spite of the radar data calibration. In this presentation, with the help of multifractal analysis, we first demonstrate that semi-distributed hydrological models statistically reduce the rainfall fields into rainfall measured by a much scarcer network of virtual rain gauges. For this purpose, we use C-band and X-band radar data. The former has a resolution of 1 km in space and 5 min in time and is a product provided by RHEA SAS after treating the Météo-France C-band radar data. The latter is measured by the radar operated at Ecole des Ponts and has a resolution of
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Wang, Junyi; Pan, Zhongqi
2017-01-01
Spatial-division multiplexing (SDM) techniques have been proposed to increase the capacity of optical fiber transmission links by utilizing multicore fibers or few-mode fibers (FMF). The most challenging impairments of SDM-based long-haul optical links are modal dispersion and mode-dependent loss (MDL); MDL arises from inline component imperfections and breaks modal orthogonality, thus degrading the capacity of multiple-input multiple-output (MIMO) receivers. To reduce MDL, optical approaches include mode scramblers and specialty fiber designs, but these methods incur high cost and cannot completely remove the accumulated MDL in the link. Space-time trellis codes (STTC) have also been proposed to lessen MDL, but suffer from high complexity. In this work, we investigated the performance of a space-time block-coding (STBC) scheme to mitigate MDL in SDM-based optical communication by exploiting space and delay diversity, where the weight matrices of frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive-least-squares (RLS) algorithm for convergence and channel estimation. The STBC was evaluated in a six-mode multiplexed system over 30-km FMF via 6×6 MIMO FDE, with modal gain offset 3 dB, core refractive index 1.49, and numerical aperture 0.5. Results show that the optical-signal-to-noise-ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16- and 64-QAM at the respective bit-error rates (BER) with minimum-mean-square-error (MMSE) equalization. We also evaluate the complexity optimization of the STBC decoding scheme with a zero-forcing decision-feedback (ZFDF) equalizer by shortening the coding slot length, which is robust to frequency-selective fading channels and can be scaled up for SDM systems with more dynamic channels.
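The orthogonal STBC family referenced here is exemplified by the two-branch Alamouti code. As a minimal illustration, assuming a flat two-mode channel with hypothetical gains (noise and the paper's FDE/RLS machinery omitted so that the orthogonal-combining identity is exact), the linear combining step recovers both symbols:

```python
import numpy as np

rng = np.random.default_rng(1)

# QPSK constellation; two symbols to protect with the Alamouti block code
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = const[rng.integers(0, 4, 2)]

# Per-mode flat channel gains (hypothetical, Rayleigh-like)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

# Slot 1: the two modes transmit (s1, s2); slot 2: (-conj(s2), conj(s1)).
r1 = h[0] * s1 + h[1] * s2
r2 = -h[0] * np.conj(s2) + h[1] * np.conj(s1)

# Linear combining decouples the symbols with two-branch diversity gain g
g = abs(h[0]) ** 2 + abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / g
s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / g
```

Because the code matrix is orthogonal, each symbol estimate sees the summed channel power of both modes, which is the diversity mechanism exploited against MDL.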
The weight distribution and randomness of linear codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1989-01-01
Finding the weight distributions of block codes is a problem of theoretical and practical interest. Yet the weight distributions of most block codes are still unknown except for a few classes of block codes. Here, by using the inclusion and exclusion principle, an explicit formula is derived which enumerates the complete weight distribution of an (n,k,d) linear code using a partially known weight distribution. This expression is analogous to the Pless power-moment identities - a system of equations relating the weight distribution of a linear code to the weight distribution of its dual code. Also, an approximate formula for the weight distribution of most linear (n,k,d) codes is derived. It is shown that for a given linear (n,k,d) code over GF(q), the ratio of the number of codewords of weight u to the number of words of weight u approaches the constant Q = q^-(n-k) as u becomes large. A relationship between the randomness of a linear block code and the minimum distance of its dual code is given, and it is shown that most linear block codes with rigid algebraic and combinatorial structure also display certain random properties which make them similar to random codes with no structure at all.
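The limiting ratio Q = q^-(n-k) is easy to observe numerically. The sketch below enumerates the weight distribution of a randomly drawn systematic binary [20,10] code (a hypothetical example code, not one from the article) and compares A_u / C(n,u) with Q = 2^-10:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(7)
n, k = 20, 10

# Random systematic generator matrix G = [I_k | P] over GF(2)
P = rng.integers(0, 2, (k, n - k))
G = np.hstack([np.eye(k, dtype=int), P])

# Enumerate all 2^k codewords and tally their Hamming weights
msgs = (np.arange(2 ** k)[:, None] >> np.arange(k)) & 1
codewords = msgs @ G % 2
weights = codewords.sum(axis=1)
A = np.bincount(weights, minlength=n + 1)      # A[u] = number of weight-u codewords

Q = 2.0 ** -(n - k)
ratios = {u: A[u] / comb(n, u) for u in range(n + 1)}
```

For mid-range weights u, the ratio A_u / C(n,u) clusters around Q ≈ 0.000977, in line with the asymptotic claim.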
Distributed source coding using chaos-based cryptosystem
NASA Astrophysics Data System (ADS)
Zhou, Junwei; Wong, Kwok-Wo; Chen, Jianyong
2012-12-01
A distributed source coding scheme is proposed by incorporating a chaos-based cryptosystem in the Slepian-Wolf coding. The punctured codeword generated by the chaos-based cryptosystem results in ambiguity at the decoder side. This ambiguity can be removed by the maximum a posteriori decoding with the help of side information. In this way, encryption and source coding are performed simultaneously. This leads to a simple encoder structure with low implementation complexity. Simulation results show that the encoder complexity is lower than that of existing distributed source coding schemes. Moreover, at small block size, the proposed scheme has a performance comparable to existing distributed source coding schemes.
RHOCUBE: 3D density distributions modeling code
NASA Astrophysics Data System (ADS)
Nikutta, Robert; Agliozzo, Claudia
2016-11-01
RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
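A minimal reimplementation of the core idea, a truncated Gaussian shell sampled on a discrete Cartesian grid and integrated along the line of sight, might look as follows (parameters are illustrative; this is a sketch of the concept, not the RHOCUBE API):

```python
import numpy as np

# Discrete Cartesian grid
N = 64
ax = np.linspace(-1.5, 1.5, N)
dz = ax[1] - ax[0]
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
R = np.sqrt(X ** 2 + Y ** 2 + Z ** 2)

# Truncated Gaussian shell: radius r0, Gaussian thickness sigma
r0, sigma = 1.0, 0.1
rho = np.exp(-0.5 * ((R - r0) / sigma) ** 2)
rho[R > r0 + 3 * sigma] = 0.0

# Integrated 2D map: the sum times dz approximates the integral of rho along z
image = rho.sum(axis=2) * dz
```

The projected map shows the limb brightening expected of a shell: sight lines grazing the shell radius accumulate more density than the central sight line, which crosses the shell only twice.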
Joint distributed source-channel coding for 3D videos
NASA Astrophysics Data System (ADS)
Palma, Veronica; Cancellaro, Michela; Neri, Alessandro
2011-03-01
This paper presents a distributed joint source-channel 3D video coding system. Our aim is the design of an efficient coding scheme for stereoscopic video communication over noisy channels that preserves the perceived visual quality while guaranteeing low computational complexity. The drawback of using stereo sequences is the increased amount of data to be transmitted. Several methods are used in the literature for encoding stereoscopic video. A significantly different approach with respect to traditional video coding is Distributed Video Coding (DVC), which introduces a flexible architecture with the design of low-complexity video encoders. In this paper we propose a novel method for joint source-channel coding in a distributed approach. We choose turbo codes for our application and study the new setting of distributed joint source-channel coding of a video. Turbo codes allow sending the minimum amount of data while guaranteeing near-channel-capacity error-correcting performance. In this contribution, the mathematical framework is fully detailed, and the tradeoff between redundancy, perceived quality, and quality of experience is analyzed with the aid of numerical experiments.
DUCS—A fully automated code and documentation distribution system
NASA Astrophysics Data System (ADS)
Johnson, A. S.; Saitta, B.; Gervasi, O.; Bower, G. R.; Rothenberg, A.; Waite, A. P.
1990-08-01
The Distributed Update Control System (DUCS) is a code distribution system developed for the SLD collaboration to distribute code, documentation and news items between remote collaborators and SLAC. The system runs on both VM and VMS systems and is currently running at a total of 18 sites on two continents, using both BITNET and DECNET connections. Software updates and news items can be submitted from any site where DUCS is installed and are distributed to all other sites. When an update arrives at a remote site it is installed appropriately without any manual intervention. The details of the installation depend on the type of file; for source code, installation includes compilation and the insertion of the resulting object module into the appropriate library. Whenever an error occurs, the error log is returned to the originator of the update. DUCS maintains both development and production code, subdivided into an arbitrary number of sections. A mechanism is provided to move code from the development area to the production area. DUCS also contains many utilities which enable the status of each node to be ascertained and any manual intervention necessary to correct unanticipated conditions to be performed. The system has been running for nearly three years and has distributed over 20,000 code updates. It is proving a valuable tool for remote collaborators, who are now able to participate in code development as easily as if they were at SLAC.
Distributed joint source-channel coding in wireless sensor networks.
Zhu, Xuqi; Liu, Yu; Zhang, Lin
2009-01-01
Considering the fact that sensors are energy-limited and the wireless channel conditions in wireless sensor networks, there is an urgent need for a low-complexity coding method with high compression ratio and noise-resisted features. This paper reviews the progress made in distributed joint source-channel coding which can address this issue. The main existing deployments, from the theory to practice, of distributed joint source-channel coding over the independent channels, the multiple access channels and the broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over the independent channels. The simulation results demonstrate the desired efficiency.
Codon Distribution in Error-Detecting Circular Codes
Fimmel, Elena; Strüngmann, Lutz
2016-01-01
In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C3 and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C3 codes to maximal self-complementary circular codes. PMID:26999215
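The comma-free property underlying Crick's original proposal is straightforward to test by machine: a trinucleotide code X is comma-free if no codon of X appears in a shifted reading frame of any concatenation of two codons of X. A small sketch (the example codes below are illustrative, not taken from the article):

```python
from itertools import product

def is_comma_free(code):
    """Return True if the trinucleotide code is comma-free: no codon of the
    code occurs at frame-shift 1 or 2 within any concatenation c1 + c2."""
    X = set(code)
    for c1, c2 in product(X, repeat=2):
        w = c1 + c2
        if w[1:4] in X or w[2:5] in X:
            return False
    return True
```

Circular codes weaken this condition: they only require that every word over the code has a unique decomposition when read on a circle, which is why comma-free codes form a strict subclass of circular codes.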
Space-time compressive imaging.
Treeaporn, Vicha; Ashok, Amit; Neifeld, Mark A
2012-02-01
Compressive imaging systems typically exploit the spatial correlation of the scene to facilitate a lower dimensional measurement relative to a conventional imaging system. In natural time-varying scenes there is a high degree of temporal correlation that may also be exploited to further reduce the number of measurements. In this work we analyze space-time compressive imaging using Karhunen-Loève (KL) projections for the read-noise-limited measurement case. Based on a comprehensive simulation study, we show that a KL-based space-time compressive imager offers higher compression relative to space-only compressive imaging. For a relative noise strength of 10% and reconstruction error of 10%, we find that space-time compressive imaging with 8×8×16 spatiotemporal blocks yields about 292× compression compared to a conventional imager, while space-only compressive imaging provides only 32× compression. Additionally, under high read-noise conditions, a space-time compressive imaging system yields lower reconstruction error than a conventional imaging system due to the multiplexing advantage. We also discuss three electro-optic space-time compressive imaging architecture classes, including charge-domain processing by a smart focal plane array (FPA). Space-time compressive imaging using a smart FPA provides an alternative method to capture the nonredundant portions of time-varying scenes.
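The KL projection step can be sketched as follows: spatiotemporal blocks are extracted from a synthetic drifting-blob sequence, a KL basis is estimated from their empirical covariance (via SVD), and each block is reduced to m coefficients. The block size, m, and the scene are illustrative choices, not the paper's 8×8×16 configuration or its noise model:

```python
import numpy as np

# Synthetic time-varying scene: a drifting 2D Gaussian blob over 16 frames
T, H, W = 16, 32, 32
y, x = np.mgrid[0:H, 0:W]
video = np.stack([np.exp(-(((x - 8 - 0.8 * k) ** 2 + (y - 16) ** 2) / 40.0))
                  for k in range(T)])

# Collect non-overlapping 4x4 (space) x 8 (time) blocks as 128-dim vectors
bs, bt = 4, 8
blocks = []
for k in range(0, T - bt + 1, bt):
    for i in range(0, H - bs + 1, bs):
        for j in range(0, W - bs + 1, bs):
            blocks.append(video[k:k + bt, i:i + bs, j:j + bs].ravel())
blocks = np.array(blocks)

# KL basis = principal directions of the centered block ensemble
mean = blocks.mean(axis=0)
U, S, Vt = np.linalg.svd(blocks - mean, full_matrices=False)

m = 16                                  # measurements per block (8x compression)
proj = (blocks - mean) @ Vt[:m].T       # compressive measurements
recon = proj @ Vt[:m] + mean
rel_err = np.linalg.norm(recon - blocks) / np.linalg.norm(blocks)
```

Because the temporal drift is smooth, most of the block ensemble's energy lives in a few KL components, which is the correlation that space-time compressive imaging exploits.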
Error resiliency of distributed video coding in wireless video communication
NASA Astrophysics Data System (ADS)
Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj
2008-08-01
Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Born, U.
1970-01-01
A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combination continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
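Syndrome coding, one of the two modes the protocol picks from, can be illustrated with a [7,4] Hamming code: the encoder transmits only the 3-bit syndrome of a 7-bit subsequence, and the decoder corrects the reference (side information) toward it, assuming at most one mismatch. This is a toy sketch of the principle, not the paper's protocol:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j holds the binary
# representation of j+1, so a single-bit error's syndrome names its position.
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in range(3)])

def encode(x):
    """Slepian-Wolf style compression: send only the 3-bit syndrome of x."""
    return H @ x % 2

def decode(s, y):
    """Recover x from its syndrome s and a reference y with at most one
    position differing from x."""
    diff = (s + H @ y) % 2                       # = H (x xor y)
    x_hat = y.copy()
    if diff.any():
        pos = int(diff @ (1 << np.arange(3))) - 1  # syndrome -> error position
        x_hat[pos] ^= 1
    return x_hat
```

The 7-bit subsequence is thus conveyed in 3 bits, with the reference genome standing in for the side information; larger codes trade compression ratio against the tolerated source-reference variation.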
Space-Time Network Codes Utilizing Transform-Based Coding
2010-12-01
1 − p_rn if β_rn = 1, and p_rn if β_rn = 0 (Eq. 17), where p_rn is the symbol error rate (SER) for detecting x_n at U_r. For M-QAM modulation, it can be shown… time-division multiple access (TDMA) would be the most commonly used technique in many applications. However, TDMA is extremely inefficient in… r ≠ n, where x_n is from an M-QAM constellation X. At the end of this phase, each client node U_r for r = 1, 2, ..., N possesses a set of N symbols
NASA Astrophysics Data System (ADS)
Giménez-Forcada, Elena
2014-09-01
A new method has been developed to recognize and understand the temporal and spatial evolution of seawater intrusion in a coastal alluvial aquifer. The study takes into account that seawater intrusion is a dynamic process, and that seasonal and inter-annual variations in the balance of the aquifer cause changes in groundwater chemistry. Analysis of the main processes, by means of the Hydrochemical Facies Evolution Diagram (HFE-Diagram), provides essential knowledge about the main hydrochemical processes. Subsequently, analysis of the spatial distribution of hydrochemical facies using heatmaps helps to identify the general state of the aquifer with respect to seawater intrusion during different sampling periods. This methodology has been applied to the pilot area of the Vinaroz Plain, on the Mediterranean coast of Spain. The results appear to be very successful for differentiating variations through time in the salinization processes caused by seawater intrusion into the aquifer, distinguishing the phase of seawater intrusion from the phase of recovery, and their respective evolutions. The method shows that hydrochemical variations can be read in terms of the pattern of seawater intrusion, groundwater quality status, aquifer behaviour and hydrodynamic conditions. This leads to a better general understanding of the aquifers and a potential for improvement in the way they are managed.
NASA Astrophysics Data System (ADS)
Wong, Wing-Chun Godwin
This dissertation focused on Kant's conception of physical matter in the Opus postumum. In this work, Kant postulates the existence of an ether which fills the whole of space and time with its moving forces. Kant's arguments for the existence of an ether in the so-called Übergang have been acutely criticized by commentators. Guyer, for instance, thinks that Kant pushes the technique of transcendental deduction too far in trying to deduce the empirical ether. In defense of Kant, I held that it is not the actual existence of the empirical ether, but the concept of the ether as a space-time filler, that is subject to a transcendental deduction. I suggested that Kant is doing three things in the Übergang: First, he deduces the pure concept of a space-time filler as a conceptual hybrid of the transcendental object and permanent substance to replace the category of substance in the Critique. Then he tries to prove the existence of such a space-time filler as a reworking of the First Analogy. Finally, he takes into consideration the empirical determinations of the ether by adding the concept of moving forces to the space-time filler. In reconstructing Kant's proofs, I pointed out that Kant is absolutely committed to the impossibility of action-at-a-distance. If we add this new principle of no-action-at-a-distance to the Third Analogy, the existence of a space-time filler follows. I argued with textual evidence that Kant's conception of ether satisfies the basic structure of a field: (1) the ether is a material continuum; (2) a physical quantity is definable on each point in the continuum; and (3) the ether provides a medium to support the continuous transmission of action. The thrust of Kant's conception of ether is to provide a holistic ontology for the transition to physics, which can best be understood from a field-theoretical point of view. This is the main thesis I attempted to establish in this dissertation.
A MCTF video coding scheme based on distributed source coding principles
NASA Astrophysics Data System (ADS)
Tagliasacchi, Marco; Tubaro, Stefano
2005-07-01
Motion Compensated Temporal Filtering (MCTF) has proved to be an efficient coding tool in the design of open-loop scalable video codecs. In this paper we propose a lifting-based MCTF video coding scheme where the prediction step is implemented using PRISM (Power-efficient, Robust, hIgh-compression Syndrome-based Multimedia coding), a video coding framework built on distributed source coding principles. We study the effect of integrating the update step at the encoder or at the decoder side. We show that the latter approach improves the quality of the side information exploited during decoding. We present analytical results obtained by modeling the video signal along the motion trajectories as a first-order auto-regressive process. We show that performing the update step at the decoder halves the contribution of the quantization noise. We also include experimental results with real video data that demonstrate the potential of this approach when the video sequences are coded at low bitrates.
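The lifting structure, a predict step followed by an update step, can be sketched with a Haar temporal kernel and motion compensation omitted; the inverse transform reconstructs the frames exactly. This is a toy sketch of plain lifting, not the PRISM-based prediction of the paper:

```python
import numpy as np

def mctf_lift(frames):
    """One temporal decomposition level of lifting-based MCTF
    (Haar kernel, motion compensation omitted for clarity)."""
    even, odd = frames[0::2], frames[1::2]
    h = odd - even            # predict step: high-pass temporal residual
    l = even + 0.5 * h        # update step: low-pass temporal average
    return l, h

def mctf_inverse(l, h):
    """Invert the lifting steps in reverse order; reconstruction is exact."""
    even = l - 0.5 * h
    odd = h + even
    frames = np.empty((len(l) + len(h),) + l.shape[1:])
    frames[0::2], frames[1::2] = even, odd
    return frames
```

Because each lifting step is invertible regardless of what predictor is plugged in, the predict step can be replaced by a motion-compensated or syndrome-based predictor, which is the degree of freedom the paper exploits.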
Use of bar codes in inpatient drug distribution.
Meyer, G E; Brandell, R; Smith, J E; Milewski, F J; Brucker, P; Coniglio, M
1991-05-01
The development and operation of a prototype inpatient drug distribution system that uses bar codes is described, and the impact of bar coding on the cassette-filling and verification process is summarized. A prototype pharmacy dispensing site was created to function in parallel with an existing satellite dispensing site that served 78 general medical-care beds. Supplemental labels encoded with an 11-digit unique product identification number, a 5-digit expiration date, and a 6-character lot number were generated and affixed to all unit dose packages dispensed from the prototype pharmacy site. The unit doses were labeled with Code 49 symbology; each label measured 0.8 x 1.25 inches. Each patient cassette was labeled using Code 39 symbology. A cost-benefit model was developed, and the two dispensing systems were compared with respect to (1) time to fill patient cassettes, (2) time to verify patient cassettes, (3) time to process patient charges and credits, (4) time to correct dispensing errors, (5) accuracy of the cassette-filling process, and (6) accuracy of the cassette verification process. Bar-code dispensing and verification saved 1.52 seconds per dose. Additionally, the cassette verification function was shifted from pharmacists to technicians. Estimated per-dose cost of the bar-code system was 2.73 cents. A measurable improvement in the accuracy of filling patient cassettes was documented. The feasibility of using bar codes in unit dose dispensing was demonstrated, and the prototype system was shown to produce cost efficiencies and patient-care benefits.
NASA Astrophysics Data System (ADS)
Field, F.; Goodbun, J.; Watson, V.
Architects have a role to play in interplanetary space that has barely yet been explored. The architectural community is largely unaware of this new territory, for which there is still no agreed method of practice. There is moreover a general confusion, in scientific and related fields, over what architects might actually do there today. Current extra-planetary designs generally fail to explore the dynamic and relational nature of space-time, and often reduce human habitation to a purely functional problem. This is compounded by a crisis over the representation (drawing) of space-time. The present work returns to first principles of architecture in order to realign them with current socio-economic and technological trends surrounding the space industry. What emerges is simultaneously the basis for an ecological space architecture, and the representational strategies necessary to draw it. We explore this approach through a work of design-based research that describes the construction of Ocean: a huge body of water formed by the collision of two asteroids at the Translunar Lagrange Point (L2), that would serve as a site for colonisation, and as a resource to fuel future missions. Ocean is an experimental model for extra-planetary space design and its representation, within the autonomous discipline of architecture.
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel
2011-01-01
Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest and obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use them to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
Achieving H.264-like compression efficiency with distributed video coding
NASA Astrophysics Data System (ADS)
Milani, Simone; Wang, Jiajun; Ramchandran, Kannan
2007-01-01
Recently, a new class of distributed source coding (DSC) based video coders has been proposed to enable low-complexity encoding. However, to date, these low-complexity DSC-based video encoders have been unable to compress as efficiently as motion-compensated predictive coding based video codecs, such as H.264/AVC, due to insufficiently accurate modeling of video data. In this work, we examine achieving H.264-like high compression efficiency with a DSC-based approach without the encoding complexity constraint. The success of H.264/AVC highlights the importance of accurately modeling the highly non-stationary video data through fine-granularity motion estimation. This motivates us to deviate from the popular approach of approaching the Wyner-Ziv bound with sophisticated capacity-achieving channel codes that require long block lengths and high decoding complexity, and instead focus on accurately modeling video data. Such a DSC-based, compression-centric encoder is an important first step towards building a robust DSC-based video coding framework.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time, T_par, of the application depends on these sequential segments: if they make up a significant fraction of the overall code, the application will have a poor speedup measure.
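The speedup ceiling described here is Amdahl's law. A minimal sketch (our own illustration, not code from the project) of how a sequential fraction caps parallel speedup:

```python
def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    """Amdahl's law: speedup attainable when serial_fraction of the work
    must run sequentially, no matter how many processors are added."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Even modest sequential segments cap the achievable speedup:
for f in (0.05, 0.25, 0.50):
    print(f"serial fraction {f:.2f}: speedup on 32 CPUs = "
          f"{amdahl_speedup(f, 32):.2f}x")
```

With half of the code sequential, 32 processors yield less than a 2x speedup, consistent with the low speedups observed for applications dominated by sequential segments.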
Distributed magnetic field positioning system using code division multiple access
NASA Technical Reports Server (NTRS)
Prigge, Eric A. (Inventor)
2003-01-01
An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large, "building-sized" coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
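The decomposition step can be shown with a toy sketch (an illustration of the CDMA principle, not the patented apparatus; the field strengths are invented): each beacon scales its field by the chips of an orthogonal spreading code, the sensor sees only the chip-by-chip sum, and correlating against each known code recovers the per-beacon component.

```python
import numpy as np

def walsh_codes(order: int) -> np.ndarray:
    """Rows of a Walsh-Hadamard matrix: mutually orthogonal +/-1 codes."""
    H = np.array([[1.0]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh_codes(2)                              # 4 orthogonal codes of length 4
field_at_sensor = np.array([0.8, 0.1, 0.5, 0.3])    # per-beacon strength (made up)

# The sensor measures only the chip-by-chip sum of all coded fields.
summed = field_at_sensor @ codes

# Correlating with each beacon's known code decomposes the sum field.
recovered = summed @ codes.T / codes.shape[1]
print(recovered)    # recovers the individual beacon contributions
```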
Ansari, A.F.; Gay, R.R.; Gitnick, B.J.
1981-07-01
A steady-state core flow distribution code (FIBWR) is described. The ability of the recommended models to predict various pressure drop components and void distribution is shown by comparison to the experimental data. Application of the FIBWR code to the Vermont Yankee Nuclear Power Station is shown by comparison to the plant measured data.
STAR -Space Time Asymmetry Research
NASA Astrophysics Data System (ADS)
van Zoest, Tim; Braxmaier, Claus; Schuldt, Thilo; Allab, Mohammed; Theil, Stephan; Pelivan, Ivanka; Herrmann, Sven; Lümmerzahl, Claus; Peters, Achim; Mühle, Katharina; Wicht, Andreas; Nagel, Moritz; Kovalchuk, Evgeny; Düringshoff, Klaus; Dittus, Hansjürg
STAR is a proposed satellite mission that aims for significantly improved tests of fundamental space-time symmetry and the foundations of special and general relativity. In total STAR comprises a series of five subsequent missions. The STAR1 mission will measure the constancy of the speed of light to one part in 10^19 and derive the Kennedy-Thorndike (KT) coefficient of the Mansouri-Sexl test theory to 7×10^-10. The KT experiment will be performed by comparison of an iodine standard with a highly stable cavity made from ultra-low-expansion (ULE) ceramics. With an orbital velocity of 7 km/s, the sensitivity to a boost-dependent violation of Lorentz invariance, as modeled by the KT term in the Mansouri-Sexl test theory or a Lorentz-violating extension of the standard model (SME), will be significantly enhanced as compared to Earth-based experiments. The low-noise space environment will additionally enhance the measurement precision, such that an overall improvement by a factor of 400 over current Earth-based experiments is expected.
Practical distributed video coding in packet lossy channels
NASA Astrophysics Data System (ADS)
Qing, Linbo; Masala, Enrico; He, Xiaohai
2013-07-01
Improving error resilience of video communications over packet lossy channels is an important and tough task. We present a framework to optimize the quality of video communications based on distributed video coding (DVC) in practical packet lossy network scenarios. The peculiar characteristics of DVC indeed require a number of adaptations to take full advantage of its intrinsic robustness when dealing with data losses of typical real packet networks. This work proposes a new packetization scheme, an investigation of the best error-correcting codes to use in a noisy environment, a practical rate-allocation mechanism, which minimizes decoder feedback, and an improved side-information generation and reconstruction function. Performance comparisons are presented with respect to a conventional packet video communication using H.264/advanced video coding (AVC). Although currently the H.264/AVC rate-distortion performance in case of no loss is better than state-of-the-art DVC schemes, under practical packet lossy conditions, the proposed techniques provide better performance with respect to an H.264/AVC-based system, especially at high packet loss rates. Thus the error resilience of the proposed DVC scheme is superior to the one provided by H.264/AVC, especially in the case of transmission over packet lossy networks.
Distributed coding/decoding complexity in video sensor networks.
Cordeiro, Paulo J; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.
Weight distributions for turbo codes using random and nonrandom permutations
NASA Technical Reports Server (NTRS)
Dolinar, S.; Divsalar, D.
1995-01-01
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
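The "semirandom" permutations described can be sketched as a greedy spread-interleaver search (our own illustrative construction with invented parameters, not the article's exact design): each chosen index must lie more than s positions away from the indices placed in the previous s output slots, which breaks up the troublesome weight-2 input sequences.

```python
import random

def s_random_permutation(n: int, s: int, max_tries: int = 100, seed: int = 0):
    """Greedy search for a spread ("semirandom") permutation of size n:
    each new index differs by more than s from every index placed in the
    previous s positions."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        pool = list(range(n))
        rng.shuffle(pool)
        perm = []
        while pool:
            for i, cand in enumerate(pool):
                if all(abs(cand - p) > s for p in perm[-s:]):
                    perm.append(pool.pop(i))
                    break
            else:
                break                  # dead end: reshuffle and retry
        if len(perm) == n:
            return perm
    raise RuntimeError("no spread permutation found; try a smaller s")

perm = s_random_permutation(64, 3)
```

Spread values up to roughly the square root of N/2 are typically reachable; larger s makes the greedy search fail, mirroring the design tension the article describes.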
Sparsey™: event recognition via deep hierarchical sparse distributed codes.
Rinkus, Gerard J
2014-01-01
The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, "mac"), at each level. In localism, each represented feature/concept/event (hereinafter "item") is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge ("Big Data") problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
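The key property, overlap size encoding similarity, fits in a few lines (a toy illustration of the SDC idea, not Sparsey's actual algorithm; the unit indices are invented):

```python
# Two items, each coded by 8 units out of a hypothetical 100-unit "mac".
item_a = {3, 11, 27, 42, 56, 63, 78, 91}
item_b = {3, 11, 27, 42, 56, 70, 81, 95}     # shares 5 of 8 units with item_a
local_a, local_b = {7}, {9}                  # localist codes: one unit per item

def similarity(code_a: set, code_b: set) -> float:
    """Normalized overlap of two codes; SDC yields graded similarity."""
    return len(code_a & code_b) / len(code_a)

print(similarity(item_a, item_b))    # graded: 5/8 of the units are shared
print(similarity(local_a, local_b))  # localist overlap is all-or-nothing: 0.0
```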
NASA Astrophysics Data System (ADS)
Oikonomou, Th.; Provata, A.
2006-03-01
We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19, which is the most dense in coding), using non-extensive statistics. We show that the exponents governing the spatial decay of the coding size distributions vary between 5.2 ≤ r ≤ 5.7 for the short scales and 1.45 ≤ q ≤ 1.50 for the large scales. On the contrary, the exponents governing the spatial decay of the non-coding size distributions in these four chromosomes take the values 2.4 ≤ r ≤ 3.2 for the short scales and 1.50 ≤ q ≤ 1.72 for the large scales. These results, in particular the values of the tail exponent q, indicate the existence of correlations in the coding and non-coding size distributions, with a tendency for higher correlations in the non-coding DNA.
Space-Time Urban Air Pollution Forecasts
NASA Astrophysics Data System (ADS)
Russo, A.; Trigo, R. M.; Soares, A.
2012-04-01
Air pollution, like other natural phenomena, may be considered a space-time process. However, the simultaneous integration of time and space is not an easy task to perform, due to the existence of different uncertainty levels and data characteristics. In this work we propose a hybrid method that combines geostatistical and neural models to analyze PM10 time series recorded in the urban area of Lisbon (Portugal) for the 2002-2006 period and to produce forecasts. Geostatistical models have been widely used to characterize air pollution in urban areas, where the pollutant sources are considered diffuse, and also in industrial areas with localized emission sources. It should be stressed, however, that most geostatistical models correspond basically to an interpolation methodology (estimation, simulation) of a set of variables in a spatial or space-time domain. The temporal prediction of a pollutant usually requires knowledge of the main trends and complex patterns of the physical dispersion phenomenon. To deal with low-resolution problems and to enhance the reliability of predictions, an approach is presented here in which neural network short-term predictions at the monitoring stations act as a local conditioner to a fine-grid stochastic simulation model. After the pollutant concentration is predicted for a given time period at the monitoring stations, we can use the local conditional distributions of observed values, given the predicted value for that period, to perform the spatial simulations for the entire area and consequently evaluate the spatial uncertainty of pollutant concentration. To attain this objective, we propose the use of direct sequential simulations with local distributions. With this approach one succeeds in predicting the space-time distribution of pollutant concentration that accounts for the time-prediction uncertainty (reflecting the neural networks' efficiency at each local monitoring station) and the spatial uncertainty as revealed by the spatial
Affine conformal vectors in space-time
NASA Astrophysics Data System (ADS)
Coley, A. A.; Tupper, B. O. J.
1992-05-01
All space-times admitting a proper affine conformal vector (ACV) are found. By using a theorem of Hall and da Costa, it is shown that such space-times either (i) admit a covariantly constant vector (timelike, spacelike, or null) and the ACV is the sum of a proper affine vector and a conformal Killing vector or (ii) the space-time is 2+2 decomposable, in which case it is shown that no ACV can exist (unless the space-time decomposes further). Furthermore, it is proved that all space-times admitting an ACV and a null covariantly constant vector (which are necessarily generalized pp-wave space-times) must have Ricci tensor of Segré type {2,(1,1)}. It follows that, among space-times admitting proper ACV, the Einstein static universe is the only perfect fluid space-time, there are no non-null Einstein-Maxwell space-times, and only the pp-wave space-times are representative of null Einstein-Maxwell solutions. Otherwise, the space-times can represent anisotropic fluids and viscous heat-conducting fluids, but only with restricted equations of state in each case.
Raine, D.J.; Heller, M.
1981-01-01
Analyzing the development of the structure of space-time from the theory of Aristotle to the present day, the present work attempts to sketch a science of relativistic mechanics. The concept of relativity is discussed in relation to the way in which space-time splits up into space and time, and in relation to Mach's principle concerning the relativity of inertia. Particular attention is given to the following topics: Aristotelian dynamics; Copernican kinematics; Newtonian dynamics; the space-time of classical dynamics; classical space-time in the presence of gravity; the space-time of special relativity; the space-time of general relativity; solutions and problems in general relativity; Mach's principle and the dynamics of space-time; theories of inertial mass; the integral formulation of general relativity; and the frontiers of relativity (e.g., unified field theories and quantum gravity).
Distributed reservation-based code division multiple access
NASA Astrophysics Data System (ADS)
Wieselthier, J. E.; Ephremides, A.
1984-11-01
The use of spread spectrum signaling, motivated primarily by its antijamming capabilities in military applications, leads naturally to the use of Code Division Multiple Access (CDMA) techniques that permit successful simultaneous transmission by a number of users over a wideband channel. In this paper we address some of the major issues that are associated with the design of multiple access protocols for spread spectrum networks. We then propose, analyze, and evaluate a distributed reservation-based multiple access protocol that does in fact exploit CDMA properties. Especially significant is the fact that no acknowledgment or feedback information from the destination is required (thus facilitating communication in a radio-silent mode), nor is any form of coordination among the users necessary.
Visualization of scattering angular distributions with the SAP code
NASA Astrophysics Data System (ADS)
Fernandez, J. E.; Scot, V.; Basile, S.
2010-07-01
SAP (Scattering Angular distribution Plot) is a graphical tool developed at the University of Bologna to compute and plot Rayleigh and Compton differential cross-sections (atomic and electronic), form factors (FFs), and incoherent scattering functions (SFs) for single elements, compounds, and mixtures of compounds, for monochromatic excitation in the range of 1-1000 keV. The computation of FFs and SFs may be performed in two ways: (a) by interpolating Hubbell's data from the EPDL97 library and (b) by using semi-empirical formulas as described in the text. Two kinds of normalization make it possible to compare the plots of different magnitudes by imposing a common scale. The characteristics of the SAP code are illustrated with an example.
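For context, the electronic Compton cross-section that such a tool plots reduces, for a free electron, to the Klein-Nishina formula. A sketch (our own, not SAP's source; SAP additionally weights by the incoherent scattering function to account for electron binding):

```python
import math

R_E = 2.8179403262e-13    # classical electron radius, cm
MEC2 = 510.998950         # electron rest energy, keV

def klein_nishina(energy_kev: float, theta: float) -> float:
    """Klein-Nishina dsigma/dOmega (cm^2/sr) for Compton scattering of a
    photon off a free electron at scattering angle theta (radians)."""
    eps = energy_kev / MEC2
    ratio = 1.0 / (1.0 + eps * (1.0 - math.cos(theta)))        # k'/k
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)
```

At theta = 0 the expression equals r_e^2 regardless of energy, and at high energies the distribution becomes increasingly forward-peaked, which is the angular behavior SAP visualizes.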
A study of oligonucleotide occurrence distributions in DNA coding segments.
Castrignanò, T; Colosimo, A; Morante, S; Parisi, V; Rossi, G C
1997-02-21
In this paper we present a general strategy designed to study the occurrence frequency distributions of oligonucleotides in DNA coding segments and to deal with the problem of detecting possible patterns of genomic compositional inhomogeneities and disuniformities. Identifying specific tendencies or peculiar deviations in the distributions of the effective occurrence frequencies of oligonucleotides, with respect to what can be a priori expected, is of the greatest importance in biology. Differences between expected and actual distributions may in fact suggest or confirm the existence of specific biological mechanisms related to them. Similarly, a marked deviation in the occurrence frequency of an oligonucleotide may suggest that it belongs to the class of so-called "DNA signal (target) sequences". The approach we have elaborated is innovative in various aspects. Firstly, the analysis of the genomic data is carried out in the light of the observation that the distribution of the four nucleotides along the coding regions of the genome is biased by the existence of a well-defined "reading frame". Secondly, the "experimental" numbers found by counting the occurrences of the various oligonucleotide sequences are appropriately corrected for the many kinds of mistakes and redundancies present in the available genetic databases. A methodologically significant further improvement of our approach over the existing searching strategies is represented by the fact that, in order to decide whether or not the (corrected) "experimental" value of the occurrence frequency of a given oligonucleotide is within statistical expectations, a measure of the strength of the selective pressure, having acted on it in the course of the evolution, is assigned to the sequence, in a way that takes into account both the value of the "experimental" occurrence frequency of the sequence and the magnitude of the probability that this number might be the result of statistical fluctuations. If the
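The "reading frame" observation can be made concrete: oligonucleotide counts in coding DNA should be tallied per frame offset, since codon structure biases nucleotide usage by position. A minimal sketch (our own illustration, not the article's software):

```python
from collections import Counter

def framed_kmer_counts(coding_seq: str, k: int = 3) -> list:
    """Count k-mer occurrences separately for each of the three
    reading-frame offsets of a coding sequence."""
    counts = [Counter(), Counter(), Counter()]
    for i in range(len(coding_seq) - k + 1):
        counts[i % 3][coding_seq[i:i + k]] += 1
    return counts

by_frame = framed_kmer_counts("ATGGCCATTGTA")
print(by_frame[0])   # in-frame 3-mers, i.e. the codons: ATG, GCC, ATT, GTA
```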
Probability Distribution Estimation for Autoregressive Pixel-Predictive Image Coding.
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2016-03-01
Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research is focused on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model. The analysis of image stationarity properties further allows deriving a novel formula for weight computation in weighted least squares, proving and generalizing ad hoc equations from the literature. For sparse intensity distributions in non-natural images, a modified image model is presented. Evaluations were done in the newly developed C++ framework Vanilc (volumetric, artificial, and natural image lossless coder), which can compress a wide range of images, including 16-bit medical 3D volumes and multichannel data. A comparison with several of the best available lossless image codecs proves that the method can achieve very competitive compression ratios. In terms of reproducible research, the source code of Vanilc has been made public.
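The backward-adaptive scheme can be sketched as follows (an illustrative plain least-squares variant with invented neighbor set and window size, not the Vanilc implementation): fit prediction coefficients on a causal training window of already-decoded pixels, then reuse the training residuals to estimate the spread of the predictive distribution.

```python
import numpy as np

CAUSAL = [(-1, 0), (0, -1), (-1, -1), (-1, 1)]   # N, W, NW, NE neighbors

def ls_predict(img: np.ndarray, y: int, x: int, win: int = 8):
    """Backward-adaptive LS prediction of pixel (y, x): returns the mean
    prediction and a residual-variance estimate for the distribution."""
    rows, targets = [], []
    for dy in range(1, win):                     # causal training region only
        for dx in range(-win + 1, win):
            ty, tx = y - dy, x + dx
            if ty >= 1 and 1 <= tx < img.shape[1] - 1:
                rows.append([img[ty + a, tx + b] for a, b in CAUSAL])
                targets.append(img[ty, tx])
    A, t = np.asarray(rows, float), np.asarray(targets, float)
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    var = float(np.mean((A @ coef - t) ** 2))    # spread of the distribution
    ctx = np.array([img[y + a, x + b] for a, b in CAUSAL], float)
    return float(ctx @ coef), var

img = np.add.outer(np.arange(20.0), np.arange(20.0))   # smooth ramp image
mean, var = ls_predict(img, 10, 10)
print(mean, var)   # a locally linear image is predicted almost exactly
```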
Non-coding RNAs and complex distributed genetic networks
NASA Astrophysics Data System (ADS)
Zhdanov, Vladimir P.
2011-08-01
In eukaryotic cells, the mRNA-protein interplay can be dramatically influenced by non-coding RNAs (ncRNAs). Although this new paradigm is now widely accepted, an understanding of the effect of ncRNAs on complex genetic networks is lacking. To clarify what may happen in this case, we propose a mean-field kinetic model describing the influence of ncRNA on a complex genetic network with a distributed architecture including mutual protein-mediated regulation of many genes transcribed into mRNAs. ncRNA is considered to associate with mRNAs and inhibit their translation and/or facilitate degradation. Our results are indicative of the richness of the kinetics under consideration. The main complex features are found to be bistability and oscillations. One could expect to find kinetic chaos as well. The latter feature has however not been observed in our calculations. In addition, we illustrate the difference in the regulation of distributed networks by mRNA and ncRNA.
A Model of Classical Space-Times.
ERIC Educational Resources Information Center
Maudlin, Tim
1989-01-01
Discusses some historically important reference systems including those by Newton, Leibniz, and Galileo. Provides models illustrating space-time relationship of the reference systems. Describes building models. (YP)
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-15
Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder without, at least in theory, degrading compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. The ability to obtain a good statistical correlation estimate is therefore becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, carried out jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better trade-off between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves decoding performance comparable to the sampling method at significantly lower complexity.
Suppressing feedback in a distributed video coding system by employing real field codes
NASA Astrophysics Data System (ADS)
Louw, Daniel J.; Kaneko, Haruhiko
2013-12-01
Single-view distributed video coding (DVC) is a video compression method that allows for the computational complexity of the system to be shifted from the encoder to the decoder. The reduced encoding complexity makes DVC attractive for use in systems where processing power or energy use at the encoder is constrained, for example, in wireless devices and surveillance systems. One of the biggest challenges in implementing DVC systems is that the required rate must be known at the encoder. The conventional approach is to use a feedback channel from the decoder to control the rate. Feedback channels introduce their own difficulties such as increased latency and buffering requirements, which makes the resultant system unsuitable for some applications. Alternative approaches, which do not employ feedback, suffer from either increased encoder complexity due to performing motion estimation at the encoder, or an inaccurate rate estimate. Inaccurate rate estimates can result in a reduced average rate-distortion performance, as well as unpleasant visual artifacts. In this paper, the authors propose a single-view DVC system that does not require a feedback channel. The consequences of inaccuracies in the rate estimate are addressed by using codes defined over the real field and a decoder employing successive refinement. The result is a codec with performance that is comparable to that of a feedback-based system at low rates without the use of motion estimation at the encoder or a feedback path. The disadvantage of the approach is a reduction in average rate-distortion performance in the high-rate regime for sequences with significant motion.
Emission coordinates in Minkowski space-time
Coll, Bartolome; Ferrando, Joan J.; Morales, Juan A.
2009-05-01
The theory of relativistic positioning systems and their naturally associated emission coordinates are essential ingredients in the analysis of navigation systems and astrometry. Here we study emission coordinates in Minkowski space-time. For any choice of the four emitters (arbitrary space-time trajectories), the relation between the corresponding emission coordinates and the inertial ones is explicitly given.
FPGA based digital phase-coding quantum key distribution system
NASA Astrophysics Data System (ADS)
Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu
2015-12-01
Quantum key distribution (QKD) is a technology with the potential to achieve information-theoretic security. Phase coding is an important approach to developing practical QKD systems over fiber channels. In order to improve the phase-coding modulation rate, we propose a new digital modulation method in this paper and construct a compact and robust prototype QKD system, using components currently available in our lab, to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and operated continuously for 87 h without manual interaction. The quantum bit error rate (QBER) of the system was stable, with an average value of 3.22%, and the secure key generation rate was 8.91 kbps. Although the modulation rate of the photons in the demo system was only 200 MHz, limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field-programmable gate array (FPGA) based electronics scheme have great potential for high-speed QKD systems with gigabit-per-second modulation rates.
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek
2016-04-01
of land, the 4-digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths, for each individual in a cohort. Preliminary results show considerable differences in a person's exposure across these various approaches to space-time path aggregation, presumably because air pollution shows large variation over short distances.
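The time-weighted averaging of concentrations along a space-time path described above can be sketched as follows (the data layout and function name are hypothetical simplifications of the cohort-scale computation):

```python
def personal_exposure(path, conc):
    """Average pollutant concentration over a space-time path.
    path: (cell, time_fraction) pairs, e.g. home, commute, work;
    conc: cell -> modeled air-pollution concentration in that cell."""
    total = sum(w for _, w in path)
    return sum(conc[cell] * w for cell, w in path) / total

# A person spending 80% of their time at home and 20% on the road:
exposure = personal_exposure([("home", 0.8), ("road", 0.2)],
                             {"home": 10.0, "road": 30.0})
```

Different aggregation choices (postal-code area, buffer around the home, full commuting path) amount to different `path` definitions for the same person, which is why the resulting exposures can differ considerably.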
Pseudo-Z symmetric space-times
NASA Astrophysics Data System (ADS)
Mantica, Carlo Alberto; Suh, Young Jin
2014-04-01
In this paper, we investigate pseudo-Z symmetric space-time manifolds. First, we deal with elementary properties, showing that the associated form Ak is closed; in this case the Ricci tensor turns out to be Weyl compatible. This notion was recently introduced by one of the present authors. The consequences of Weyl compatibility for the magnetic part of the Weyl tensor are pointed out; this determines the Petrov types of such space-times. Finally, we investigate some interesting properties of (PZS)4 space-times; in particular, we consider perfect fluid and scalar field space-times, and interesting properties are pointed out, including the Petrov classification. In the case of scalar field space-times, it is shown that the scalar field satisfies a generalized eikonal equation. Further, it is shown that the integral curves of the gradient field are geodesics. A classical method to find a general integral is presented.
Recursive Generation of Space-Times
NASA Astrophysics Data System (ADS)
Marks, Dennis
2015-04-01
Space-times can be generated recursively from a time-like unit basis vector T and a space-like one S. T is unique up to sign, corresponding to particles and antiparticles. S has the form of qubits. Qubits can make quantum transitions, suggesting spontaneous generation of space-time. Recursive generation leads from 2 dimensions to 4, with grades of the resulting algebra corresponding to space-time, spin-area, momentum-energy, and action. Dimensions can be open (like space-time) or closed. A closed time-like dimension has the symmetry of electromagnetism; 3 closed space-like dimensions have the symmetry of the weak force. The 4 open dimensions and the 4 closed dimensions produce an 8-dimensional space with a symmetry that is the product of the Yang regularization of the Heisenberg-Poincaré group and the GUT regularization of the Standard Model. After 8 dimensions, the pattern of real geometric algebras repeats itself, producing a recursive lattice of spontaneously expanding space-time with the physics of the Standard Model at each point of the lattice, implying conservation laws by Noether's theorem. The laws of nature are not preexistent; rather, they are consequences of the uniformity of space-time. The uniformity of space-time is a consequence of its recursive generation.
Plasmonics at the Space-Time Limit
NASA Astrophysics Data System (ADS)
Aeschlimann, Martin
The optical response of metallic nanostructures exhibits fascinating properties: local field interference effects that lead to strong variations of the near-field distribution on a subwavelength scale, local field enhancement, and long-lasting electronic coherences. Coherent control in general exploits the phase properties of light fields to manipulate coherent processes. Originally developed for molecular systems, these concepts have recently been adapted to nano-optical phenomena as well. Consequently, the combination of ultrafast laser spectroscopy, i.e. illumination with broadband coherent light sources, and near-field optics opens a new realm for nonlinear optics on the nanoscale. To circumvent the experimental limitation of optical diffraction, we use a photoemission electron microscope (PEEM), which has proved to be a versatile tool for the investigation of the near-field properties of nanostructures with a spatial resolution of only a few nanometers and which allows for new spectroscopy techniques with ultrafast time resolution. We introduce a new spectroscopic method that determines nonlinear quantum-mechanical response functions beyond the optical diffraction limit. While in established coherent two-dimensional (2D) spectroscopy a four-wave-mixing response is measured using three ingoing and one outgoing wave, in 2D nanoscopy we employ four ingoing and no outgoing waves. This allows studying a broad range of phenomena not accessible otherwise, such as space-time resolved coupling, transport, and Anderson-localized photon modes.
MEST- avoid next extinction by a space-time effect
NASA Astrophysics Data System (ADS)
Cao, Dayong
2013-03-01
The Sun's companion, a dark hole, seasonally brought its belt of dark comets and much dark matter to impact near our Earth, and some of them probably hit the Earth. This model thus maintained and triggered periodic mass extinctions on Earth every 25 to 27 million years. After each impact, many dark comets with very special tilted orbits were captured and lurked in the solar system. When the dark hole, Tyche, approaches the solar system again, they will impact nearby planets. The Tyche, the dark comets, and the Oort Cloud have their own space-time center, because space-time is the frequency and amplitude squared of a wave. Since a wave (space-time) can make a field, and a gas has more wave and fluctuation, they resemble a dense gas ball and a dark dense field that can absorb space-time and waves. They are therefore "dark," like the dark matter that can break the genetic codes of our lives through a dark space-time effect, so the upcoming impact would cause the current "biodiversity loss." Dark matter can change dead plants and animals into coal, oil, and natural gas, which are used as energy but harm our living environment. Based on our experiments, in which consciousness used thought waves remotely to change the systemic model between electron clouds and electron holes of a P-N junction and to change the output voltages of solar cells through a life-information technology and a space-time effect, we hope to find a new method for the orbit of the Tyche to avoid the next extinction. (see Dayong Cao, BAPS.2011.APR.K1.17 and BAPS.2012.MAR.P33.14) Supported by AEEA
Space-time analysis for reactivity determination in source-driven subcritical systems
NASA Astrophysics Data System (ADS)
Kulik, Viktoriya V.
Increasing worldwide interest in accelerator-driven systems is related to their potential role in the transmutation of spent reactor fuel. The margin of safety, expressed in terms of reactivity measuring proximity to criticality, has to be properly addressed for such systems. Monitoring reactivity enables us to predict the performance of a nuclear system and prevent unforeseen accidents. However, the presence of a localized spallation source in an accelerator-driven subcritical system leads to a neutron flux shape significantly different from the source-free fundamental mode in critical systems. As a result, the simple point-kinetics approach commonly used for reactivity determination in critical systems does not properly account for space-time effects in accelerator-driven subcritical systems, yielding inaccurate reactivity estimates. To overcome this problem and properly account for spatial and spectral effects in reactivity determination, a method directly combining measurements with numerical simulations of the experimental data is developed within a quasi-static formulation. This method provides space-time corrections to a variety of traditional point-kinetics techniques and determines the reactivity essentially independently of the detector position, as long as sufficiently accurate information on the reactor configuration is provided. In the dissertation, the space-time corrections are derived for two well-known point-kinetics methods: the area-ratio technique and the alpha-method. Numerical simulations performed with the FX2-TH diffusion theory code, along with a space-time analysis of MUSE-4 pulsed-source experimental data, illustrate the applicability of the proposed methods for the determination of significant subcriticality levels in fast and thermal reactor systems. To perform space-time reactivity corrections at reduced computational cost, a modal-local method is developed for source-driven systems and tested with the ERANOS code. This dissertation
Causal fermions in discrete space-time
NASA Astrophysics Data System (ADS)
Farrelly, Terence C.; Short, Anthony J.
2014-01-01
In this paper, we consider fermionic systems in discrete space-time evolving with a strict notion of causality, meaning they evolve unitarily and with a bounded propagation speed. First, we show that the evolution of these systems has a natural decomposition into a product of local unitaries, which also holds if we include bosons. Next, we show that causal evolution of fermions in discrete space-time can also be viewed as the causal evolution of a lattice of qubits, meaning these systems can be viewed as quantum cellular automata. Following this, we discuss some examples of causal fermionic models in discrete space-time that become interesting physical systems in the continuum limit: Dirac fermions in one and three spatial dimensions, Dirac fields, and briefly the Thirring model. Finally, we show that the dynamics of causal fermions in discrete space-time can be efficiently simulated on a quantum computer.
Space-time crystals of trapped ions.
Li, Tongcang; Gong, Zhe-Xuan; Yin, Zhang-Qi; Quan, H T; Yin, Xiaobo; Zhang, Peng; Duan, L-M; Zhang, Xiang
2012-10-19
Spontaneous symmetry breaking can lead to the formation of time crystals, as well as spatial crystals. Here we propose a space-time crystal of trapped ions and a method to realize it experimentally by confining ions in a ring-shaped trapping potential with a static magnetic field. The ions spontaneously form a spatial ring crystal due to Coulomb repulsion. This ion crystal can rotate persistently at the lowest quantum energy state in magnetic fields with fractional fluxes. The persistent rotation of trapped ions produces the temporal order, leading to the formation of a space-time crystal. We show that these space-time crystals are robust for direct experimental observation. We also study the effects of finite temperatures on the persistent rotation. The proposed space-time crystals of trapped ions provide a new dimension for exploring many-body physics and emerging properties of matter.
Space-time topology and quantum gravity.
NASA Astrophysics Data System (ADS)
Friedman, J. L.
Characteristic features are discussed of a theory of quantum gravity that allows space-time with a non-Euclidean topology. The review begins with a summary of the manifolds that can occur as classical vacuum space-times and as space-times with positive energy. Local structures with non-Euclidean topology - topological geons - collapse, and one may conjecture that in asymptotically flat space-times non-Euclidean topology is hidden from view. In the quantum theory, large diffeos can act nontrivially on the space of states, leading to state vectors that transform as representations of the corresponding symmetry group π0(Diff). In particular, in a quantum theory that, at energies E < E_Planck, is a theory of the metric alone, there appear to be ground states with half-integral spin and, in higher-dimensional gravity, with the kinematical quantum numbers of fundamental fermions.
NASA Astrophysics Data System (ADS)
García, José A.; Alvarez, Samantha; Flores, Alejandro; Govezensky, Tzipe; Bobadilla, Juan R.; José, Marco V.
2004-10-01
The genetic code is considered to be universal. In order to test whether some statistical properties of the coding bacterial genome are due to inherent properties of the genetic code, we compared the autocorrelation function, the scaling properties, and the maximum entropy of the distribution of distances of amino acids in sequences obtained by translating protein-coding regions from the genome of Borrelia burgdorferi under different genetic codes. Overall, our results indicate that these properties are very stable under perturbations made by altering the genetic code. We also discuss the likely evolutionary implications of the present results.
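The comparison described above rests on translating coding regions under a (possibly altered) codon table and examining amino-acid distance distributions. A minimal sketch (the codon table is truncated to the few codons used in the toy example, not the full standard code):

```python
# Subset of the standard genetic code, sufficient for the example below
CODON = {
    "ATG": "M", "TTT": "F", "TTC": "F", "GGC": "G", "AAA": "K",
    "GCT": "A", "TAA": "*", "TGA": "*", "TAG": "*",
}

def translate(seq, table=CODON):
    """Translate an in-frame coding sequence codon by codon."""
    return "".join(table.get(seq[i:i + 3], "X") for i in range(0, len(seq) - 2, 3))

def distances(protein, aa):
    """Distances (in residues) between successive occurrences of `aa`."""
    pos = [i for i, a in enumerate(protein) if a == aa]
    return [b - a for a, b in zip(pos, pos[1:])]

prot = translate("ATGTTTAAAATGGGCATGTAA")
gaps = distances(prot, "M")
```

Swapping `table` for a perturbed codon assignment and recomputing the distance distribution is the crude analogue of the perturbation test applied in the paper.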
Space--Time from Topos Quantum Theory
NASA Astrophysics Data System (ADS)
Flori, Cecilia
One of the main challenges in theoretical physics in the past 50 years has been to define a theory of quantum gravity, i.e. a theory which consistently combines general relativity and quantum theory in order to define a theory of space-time itself seen as a fluctuating field. As such, a definition of space-time is of paramount importance, but it is precisely the attainment of such a definition which is one of the main stumbling blocks in quantum gravity. One of the striking features of quantum gravity is that although both general relativity and quantum theory treat space-time as a four-dimensional (4D) manifold equipped with a metric, quantum gravity would suggest that, at the microscopic scale, space-time is somewhat discrete. Therefore the continuum structure of space-time suggested by the two main ingredients of quantum gravity seems to be thrown into discussion by quantum gravity itself. This seems quite an odd predicament, but it might suggest that perhaps a different mathematical structure other than a smooth manifold should model space-time. These considerations seem to shed doubts on the use of the continuum in general in a possible theory of quantum gravity. An alternative would be to develop a mathematical formalism for quantum gravity in which no fundamental role is played by the continuum and where a new concept of space-time, not modeled on a differentiable manifold, will emerge. This is precisely one of the aims of the topos theory approach to quantum theory and quantum gravity put forward by Isham, Butterfield, and Doering and subsequently developed by other authors. The aim of this article is to precisely elucidate how such an approach gives rise to a new definition of space-time which might be more appropriate for quantum gravity.
Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks
ERIC Educational Resources Information Center
Yu, Chao
2013-01-01
In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…
SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters
Leal, L.C.
1995-01-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular form.
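The energy-level spacing analysis SAMDIST performs can be illustrated in miniature: normalize nearest-neighbor spacings to unit mean and compare their histogram against the GOE Wigner surmise, a standard expectation for R-matrix resonance levels (this sketch is illustrative, not SAMDIST code):

```python
import math

def spacing_distribution(levels):
    """Nearest-neighbor level spacings normalized to unit mean,
    as used in level-spacing analyses of resonance parameters."""
    levels = sorted(levels)
    s = [b - a for a, b in zip(levels, levels[1:])]
    mean = sum(s) / len(s)
    return [x / mean for x in s]

def wigner_surmise(s):
    """GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4),
    the expected density of normalized spacings for chaotic spectra."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

norm = spacing_distribution([0.0, 1.0, 3.0, 6.0])  # toy resonance energies
```

Histogramming `norm` for a real resonance ladder and overlaying `wigner_surmise` is the graphical comparison such codes typically present.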
A New Solution of Distributed Disaster Recovery Based on Raptor Code
NASA Astrophysics Data System (ADS)
Deng, Kai; Wang, Kaiyun; Ma, Danyang
Traditional disaster recovery based on simple replication suffers from high cost, low data availability under multi-node storage, and poor intrusion tolerance. This paper therefore puts forward a distributed disaster recovery scheme based on raptor codes. The article introduces the principle of raptor codes, analyzes their coding advantages, and gives a comparative analysis between this solution and traditional solutions in terms of redundancy, data availability, and intrusion tolerance. The results show that the distributed disaster recovery solution based on raptor codes achieves higher data availability as well as better intrusion tolerance at lower redundancy.
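The core idea behind such schemes — encoded packets that are XOR combinations of source blocks, any sufficiently large subset of which suffices for recovery — can be sketched with an LT-style fountain code, the inner component of a raptor code (this toy uses a uniform degree distribution rather than the optimized distribution and precode a real raptor code would use):

```python
import random

def encode(blocks, n_packets, rng):
    """Each packet is the XOR of a random subset of source blocks."""
    k, packets = len(blocks), []
    for _ in range(n_packets):
        idxs = set(rng.sample(range(k), rng.randint(1, k)))
        val = 0
        for i in idxs:
            val ^= blocks[i]
        packets.append((idxs, val))
    return packets

def decode(packets, k):
    """Peeling decoder: resolve degree-1 packets, substitute, repeat."""
    packets = [(set(s), v) for s, v in packets]
    out, progress = [None] * k, True
    while progress:
        progress = False
        for j, (s, v) in enumerate(packets):
            # strip already-recovered blocks out of this packet
            for i in [i for i in s if out[i] is not None]:
                s.discard(i)
                v ^= out[i]
            packets[j] = (s, v)
            if len(s) == 1:            # degree-1 packet reveals a block
                out[s.pop()], progress = v, True
    return out
```

Because any decodable subset of packets recovers the data, blocks lost with a failed storage node (or an intruded one that is excluded) need not be retransmitted, which is the redundancy/availability trade-off the abstract analyzes.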
From Elastic Continua To Space-time
NASA Astrophysics Data System (ADS)
Tartaglia, Angelo; Radicella, Ninfa
2010-06-01
Since the early days of the theories of electromagnetism and of gravity, the idea of space, then space-time, as a sort of physical continuum has hovered over the scientific community. Indeed, general relativity shows the strong similarity that exists between the geometrical properties of space-time and those of a strained elastic continuum. The bridge between geometry and the elastic potential, in three as well as in three-plus-one dimensions, is the strain tensor, read as the non-trivial part of the metric tensor. On the basis of this remark, and exploiting appropriate multidimensional embeddings, it is possible to build a full theory of space-time that accounts for the accelerated expansion of the universe. How this can be obtained is the content of the paper. The theory fits the cosmic accelerated-expansion data from type Ia supernovae better than the ΛCDM model.
Dipole antenna in space - Time periodic media.
NASA Technical Reports Server (NTRS)
Elachi, C.
1972-01-01
Study and solution of the problem of dipole radiation in sinusoidally space-time periodic media. The space-time periodicity can be considered as due to a strong pump wave and is expressed as a traveling-wave type change in the dielectric constant or the plasma density. The solution also covers the limit case of a sinusoidally stratified medium. The solution is formulated in a matrix form such that the basic results and diagrams apply, with minor changes, to the different cases studied: electric and magnetic dipole in a dielectric, plasma, and uniaxial plasma. The wave-vector diagram is used extensively in studying and presenting the different properties of the solution: caustics, effect of the disturbance (pump wave) motion, harmonics, radiation outside the allowed cone in a uniaxial plasma. Many dipole radiation patterns are given, and their features are explained physically. Finally, the solution and results obtained are extended to certain generally space-time periodic media.
Space-time framework of internal measurement
NASA Astrophysics Data System (ADS)
Matsuno, Koichiro
1998-07-01
Measurement internal to material bodies is ubiquitous. The internal observer has its own local space-time framework that enables it to distinguish, even to the slightest degree, those material bodies falling into that framework. Internal measurements proceeding among the internal observers come to negotiate the construction of a more encompassing local framework of space and time. The construction takes place through friction among the internal observers. Emergent phenomena are related to the enlargement of the local space-time framework through frictional negotiation among the material participants serving as internal observers. Unless such a negotiation is obtained, the internal observers would have to move around in local space-time frameworks of their own that are mutually incommensurable. The enhancement of material organization demonstrated in biological evolutionary processes manifests an inexhaustible negotiation for enlarging the local space-time framework available to the internal observers. In contrast, the Newtonian space-time framework, which remains absolute and all-encompassing, is an asymptote at which no further emergent phenomena could be expected. It is thus ironical to expect something to emerge within the framework of Newtonian absolute space and time. Instead of being a complex and organized configuration of interaction appearing within the global space-time framework, emergent phenomena are a consequence of negotiation among the local space-time frameworks available to internal measurement. Most indicative of the negotiation of local space-time frameworks is the emergence of a conscious self grounded upon the reflexive nature of perceptions, that is, a self-consciousness in short, which certainly goes beyond the Kantian transcendental subject. Accordingly, a synthetic discourse on securing consciousness upon the ground of self-consciousness can be developed, though linguistic exposition of consciousness upon self
Pair creation in noncommutative space-time
NASA Astrophysics Data System (ADS)
Hamil, B.; Chetouani, L.
2016-09-01
By taking two interactions, the Volkov plane wave and a constant electromagnetic field, the probability related to the process of pair creation from the vacuum is exactly and analytically determined via the Schwinger method in noncommutative space-time. For the plane wave, it is shown that the probability is simply null and for the electromagnetic wave it is found that the expression of the probability has a similar form to that obtained by Schwinger in a commutative space-time. For a certain critical value of H, the probability is simply equal to 1.
A Cooperative Downloading Method for VANET Using Distributed Fountain Code
Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi
2016-01-01
Cooperative downloading is one of the effective methods to increase the amount of downloaded data in vehicular ad hoc networking (VANET). However, poor channel quality and short encounter times bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the high quality of service (QoS) requirements of some applications. Digital fountain codes (DFC) can be utilized in wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding, as well as the single feedback mechanism of DFC, cannot adapt to the VANET environment. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and the cooperative vehicles encode the raw data using a hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method effectively reduces the transmission delay. Simulation results indicate the benefits of the proposed scheme in terms of increased amount of downloaded data and data receiving rate. PMID:27754339
SADDE (Scaled Absorbed Dose Distribution Evaluator): A code to generate input for VARSKIN
Reece, W.D.; Miller, S.D.; Durham, J.S.
1989-01-01
The VARSKIN computer code has been limited to the isotopes for which the scaled absorbed dose distributions were provided by the Medical Internal Radiation Dose (MIRD) Committee or to data that could be interpolated from isotopes that had similar spectra. This document describes the methodology to calculate the scaled absorbed dose distribution data for any isotope (including emissions by the daughter isotopes) and its implementation by a computer code called SADDE (Scaled Absorbed Dose Distribution Evaluator). The SADDE source code is provided along with input examples and verification calculations. 10 refs., 4 figs.
Space-time modeling of timber prices
Mo Zhou; Joseph Buongiorno
2006-01-01
A space-time econometric model was developed for pine sawtimber timber prices of 21 geographically contiguous regions in the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...
Relativistic positioning in Schwarzschild space-time
NASA Astrophysics Data System (ADS)
Puchades, Neus; Sáez, Diego
2015-04-01
In the Schwarzschild space-time created by an idealized static spherically symmetric Earth, two approaches based on relativistic positioning may be used to estimate the user position from the proper times broadcast by four satellites. In the first approach, satellites move in the Schwarzschild space-time and the photons emitted by the satellites follow null geodesics of the Minkowski space-time asymptotic to the Schwarzschild geometry. This assumption leads to positioning errors, since the photon world lines are not geodesics of any Minkowski geometry. In the second approach (the most coherent one), satellites and photons move in the Schwarzschild space-time. This approach is first order in the dimensionless parameter GM/R (with the speed of light c=1). The two approaches give different inertial coordinates for a given user. The differences are estimated and appropriately represented for users located inside a large region surrounding Earth. The resulting values (errors) are small enough to justify the use of the first approach, which is the simplest and most manageable one. The satellite evolution mimics that of the GALILEO global navigation satellite system.
Space, Time, Matter: 1918-2012
NASA Astrophysics Data System (ADS)
Veneziano, Gabriele
2013-12-01
Almost a century has elapsed since Hermann Weyl wrote his famous "Space, Time, Matter" book. After recalling some amazingly premonitory writings by him and by Wolfgang Pauli in the fifties, I will try to assess the present status of the problematics they were so much concerned with.
Utilities for master source code distribution: MAX and Friends
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program developing system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
Source coding with escort distributions and Rényi entropy bounds
NASA Astrophysics Data System (ADS)
Bercher, J.-F.
2009-08-01
We discuss the interest of escort distributions and Rényi entropy in the context of source coding. We first recall a source coding theorem by Campbell relating a generalized measure of length to the Rényi-Tsallis entropy. We show that the associated optimal codes can be obtained using considerations on escort distributions. We propose a new family of length measures involving escort distributions and show that these generalized lengths are also bounded below by the Rényi entropy. Furthermore, we find that the standard Shannon code lengths are optimal for the new generalized length measures, whatever the entropic index. Finally, we show that there exists in this setting an interplay between standard and escort distributions.
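Campbell's theorem, which the abstract recalls, can be checked numerically: with exponent t, the escort distribution of order α = 1/(1+t) yields ideal code lengths whose exponentially weighted (Campbell) average length equals the Rényi entropy of order α. A short Python sketch (illustrative, not the paper's code):

```python
import math

def escort(p, q):
    """Escort distribution of order q: P_i = p_i**q / sum_j p_j**q."""
    z = sum(pi ** q for pi in p)
    return [pi ** q / z for pi in p]

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha, in bits."""
    if alpha == 1.0:
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

def campbell_length(p, lengths, t):
    """Campbell's exponentially weighted average codeword length."""
    return math.log2(sum(pi * 2 ** (t * li) for pi, li in zip(p, lengths))) / t

p = [0.5, 0.25, 0.125, 0.125]
t = 1.0
alpha = 1 / (1 + t)
# Ideal (real-valued) code lengths built from the escort distribution
lengths = [-math.log2(Pi) for Pi in escort(p, alpha)]
# campbell_length(p, lengths, t) equals renyi_entropy(p, alpha)
```

Rounding the ideal lengths up to integers (as in Shannon coding) keeps the Campbell length within one bit of the Rényi bound.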
Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding
Kronberg, D. A.; Molotkov, S. N.
2010-07-15
A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.
Space-time Structure of Temperature Variability
NASA Astrophysics Data System (ADS)
Laepple, Thomas; Kunz, Torben
2017-04-01
The spatial and temporal scales of temperature variability are closely linked. Whereas fast variations such as weather are regional, temperature anomalies on glacial-interglacial cycles appear to be globally coherent. It has been argued that the increase in spatial scales continues across all time scales, but up to now, the space-time structure of variations beyond the decadal scale has mostly remained unexplored. Here, we show first attempts to estimate and interpret the spatial extent of temperature changes at up to millennial time-scales, using instrumental observations, paleoclimate archives and climate model simulations. We further discuss the potential to separate externally forced climate signals from internal variability, by their respective and potentially different space-time structure.
Comparative Similarity in Branching Space-Times
NASA Astrophysics Data System (ADS)
Placek, Tomasz
2010-12-01
My aim in this paper is to investigate the notions of comparative similarity definable in the framework of branching space-times. A notion of this kind is required to give a rigorous Lewis-style semantics of space-time counterfactuals. In turn, the semantical analysis is needed to decide whether the recently proposed proofs of the non-locality of quantum mechanics are correct. From among the three notions of comparative similarity I select two which appear equally good as far as their intuitiveness and algebraic properties are concerned. However, the relations are not transitive, and thus cannot be used in the semantics proposed by Lewis (J. Philos. Log. 2:418-446, 1973), which requires transitivity. Yet they are adequate for the account of Lewis (J. Philos. Log. 10:217-234, 1981).
Energy distribution property and energy coding of a structural neural network
Wang, Ziyin; Wang, Rubin
2014-01-01
Studying neural coding through neural energy is a novel approach. In this paper, based on a previously proposed single-neuron model, we investigate the correlation between energy consumption and the parameters of cortical networks (number of neurons, coupling strength, and transmission delay) under oscillatory conditions. We find that the energy distribution varies in an orderly way as these parameters change and that it is closely related to the synchronous oscillation of the neural network. We also compare this method with the traditional relative-coefficient method and show that the energy method performs as well as or better than the traditional one. Assessing energy distribution and consumption thus offers a novel way to study synchronous activity and neural network parameters. The conclusions of this paper refine the framework of neural coding theory and contribute to our understanding of the coding mechanism of the cerebral cortex, providing a strong theoretical foundation for a novel neural coding theory: energy coding. PMID:24600382
Space Time Processing, Environmental-Acoustic Effects
1987-08-15
5) In the cases of a harmonic field which is steady or of a random field which is spatially homogeneous and temporally stationary, one can infer ... relationships define the acoustic space-time field for the class of harmonic and random functions which are spatially homogeneous and temporally stationary ... When the field is homogeneous and stationary, then (in large average limits) spatial and temporal average values approach the statistically
Hypermotion due to space-time deformation
NASA Astrophysics Data System (ADS)
Fil'Chenkov, Michael; Laptev, Yuri
2016-03-01
A superluminal motion (hypermotion) via M. Alcubierre's warp drive is considered. Parameters of the warp drive have been estimated. The equations of starship geodesics have been solved. The starship velocity has been shown to exceed the speed of light, with the local velocity relative to the deformed space-time remaining below it. Hawking radiation proves not to affect the ship interior considerably. Difficulties related to a practical realization of the hypermotion are indicated.
Equivalence of the Regge and Einstein equations for almost flat simplicial space-times
NASA Astrophysics Data System (ADS)
Brewin, Leo
1989-06-01
The theory of distributions is applied to almost flat simplicial space-times. Explicit expressions are given for the first-order defects. It is shown explicitly that the Riemann tensor for an almost flat simplicial space-time contains delta-functions on the bones and derivatives of delta-functions on the 3-dimensional faces of the boundary of the space-time. The latter terms have not previously been seen in the Regge calculus. It is shown that the Regge and Hilbert actions have equal values on almost flat simplicial space-times and that the Einstein equations lead directly to the Regge field equations.
Asymptotically flat space-times: an enigma
NASA Astrophysics Data System (ADS)
Newman, Ezra T.
2016-07-01
We begin by emphasizing that we are dealing with standard Einstein or Einstein-Maxwell theory—absolutely no new physics has been inserted. The fresh item is that the well-known asymptotically flat solutions of the Einstein-Maxwell theory are transformed to a new coordinate system with surprising and (seemingly) inexplicable results. We begin with the standard description of (Null) asymptotically flat space-times described in conventional Bondi-coordinates. After transforming the variables (mainly the asymptotic Weyl tensor components) to a very special set of Newman-Unti (NU) coordinates, we find a series of relations totally mimicking standard Newtonian classical mechanics and Maxwell theory. The surprising and troubling aspect of these relations is that the associated motion and radiation do not take place in physical space-time. Instead, these relations take place in an unusual inherited complex four-dimensional manifold referred to as H-space that has no immediate relationship with space-time. In fact, these relations appear in two such spaces, H-space and its dual space \bar{H}.
Optical Properties of Quantum Vacuum. Space-Time Engineering
Gevorkyan, A. S.; Gevorkyan, A. A.
2011-03-28
The propagation of electromagnetic waves in the vacuum is considered taking into account quantum fluctuations, in the framework of Maxwell-Langevin (ML) type stochastic differential equations. For a 'white-noise' model of the fluctuations, a second-order partial differential equation is obtained from the ML equations which describes the quantum distribution of virtual particles in vacuum. It is proved that, in order to satisfy observed facts (the Lamb shift, etc.), the virtual particles should be quantized in the unperturbed vacuum. It is shown that the quantized virtual particles are almost entirely (approximately 86 percent) condensed on the 'ground state' energy level. It is proved that the extension of Maxwell electrodynamics with the inclusion of quantum vacuum fluctuations may be constructed on a 6D space-time continuum, where 4D is the Minkowski space-time and 2D is a compactified subspace. The vacuum's refractive indices under the influence of external electromagnetic fields are studied in detail.
Pseudo-Newtonian Equations for Evolution of Particles and Fluids in Stationary Space-times
NASA Astrophysics Data System (ADS)
Witzany, Vojtěch; Lämmerzahl, Claus
2017-06-01
Pseudo-Newtonian potentials are a tool often used in theoretical astrophysics to capture some key features of a black hole space-time in a Newtonian framework. As a result, one can use Newtonian numerical codes, and Newtonian formalism, in general, in an effective description of important astrophysical processes such as accretion onto black holes. In this paper, we develop a general pseudo-Newtonian formalism, which pertains to the motion of particles, light, and fluids in stationary space-times. In return, we are able to assess the applicability of the pseudo-Newtonian scheme. The simplest and most elegant formulas are obtained in space-times without gravitomagnetic effects, such as the Schwarzschild rather than the Kerr space-time; the quantitative errors are smallest for motion with low binding energy. Included is a ready-to-use set of fluid equations in Schwarzschild space-time in Cartesian and radial coordinates.
High-capacity quantum Fibonacci coding for key distribution
NASA Astrophysics Data System (ADS)
Simon, David S.; Lawrence, Nate; Trevino, Jacob; Dal Negro, Luca; Sergienko, Alexander V.
2013-03-01
Quantum cryptography and quantum key distribution (QKD) have been the most successful applications of quantum information processing, highlighting the unique capability of quantum mechanics, through the no-cloning theorem, to securely share encryption keys between two parties. Here, we present an approach to high-capacity, high-efficiency QKD by exploiting cross-disciplinary ideas from quantum information theory and the theory of light scattering of aperiodic photonic media. We propose a unique type of entangled-photon source, as well as a physical mechanism for efficiently sharing keys. The key-sharing protocol combines entanglement with the mathematical properties of a recursive sequence to allow a realization of the physical conditions necessary for implementation of the no-cloning principle for QKD, while the source produces entangled photons whose orbital angular momenta (OAM) are in a superposition of Fibonacci numbers. The source is used to implement a particular physical realization of the protocol by randomly encoding the Fibonacci sequence onto entangled OAM states, allowing secure generation of long keys from few photons. Unlike in polarization-based protocols, reference frame alignment is unnecessary, while the required experimental setup is simpler than other OAM-based protocols capable of achieving the same capacity and its complexity grows less rapidly with increasing range of OAM used.
NASA Astrophysics Data System (ADS)
Bertolami, Orfeu
Since the nineteenth century, it has been known, through the work of Lobatchevski, Riemann, and Gauss, that spaces need not have vanishing curvature. This was certainly a revolution in its own right; however, from the point of view of these mathematicians, the space of our day-to-day experience, the physical space, was still essentially an a priori concept that preceded all experience and was independent of any physical phenomena. Actually, that was also the view of Newton and Kant with respect to time, even though, for these two space-time explorers, the world was Euclidean.
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1991-01-01
Introduced here is a novel technique which adds the dimension of time to the well known back propagation neural network algorithm. Cited here are several reasons why the inclusion of automated spatial and temporal associations is crucial to effective systems modeling. An overview of other works which also model spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.
Complex phylogenetic distribution of a non-canonical genetic code in green algae
2010-01-01
Background A non-canonical nuclear genetic code, in which TAG and TAA have been reassigned from stop codons to glutamine, has evolved independently in several eukaryotic lineages, including the ulvophycean green algal orders Dasycladales and Cladophorales. To study the phylogenetic distribution of the standard and non-canonical genetic codes, we generated sequence data of a representative set of ulvophycean green algae and used a robust green algal phylogeny to evaluate different evolutionary scenarios that may account for the origin of the non-canonical code. Results This study demonstrates that the Dasycladales and Cladophorales share this alternative genetic code with the related order Trentepohliales and the genus Blastophysa, but not with the Bryopsidales, which is sister to the Dasycladales. This complex phylogenetic distribution whereby all but one representative of a single natural lineage possesses an identical deviant genetic code is unique. Conclusions We compare different evolutionary scenarios for the complex phylogenetic distribution of this non-canonical genetic code. A single transition to the non-canonical code followed by a reversal to the canonical code in the Bryopsidales is highly improbable due to the profound genetic changes that coincide with codon reassignment. Multiple independent gains of the non-canonical code, as hypothesized for ciliates, are also unlikely because the same deviant code has evolved in all lineages. Instead we favor a stepwise acquisition model, congruent with the ambiguous intermediate model, whereby the non-canonical code observed in these green algal orders has a single origin. We suggest that the final steps from an ambiguous intermediate situation to a non-canonical code have been completed in the Trentepohliales, Dasycladales, Cladophorales and Blastophysa but not in the Bryopsidales. We hypothesize that in the latter lineage an initial stage characterized by translational ambiguity was not followed by final
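The codon reassignment itself is easy to state concretely: under the non-canonical code, TAG and TAA translate to glutamine (Q) and only TGA terminates. A hypothetical Python sketch with a deliberately truncated codon table (only the codons used in the example are listed):

```python
# Deliberately truncated codon table for illustration (hypothetical subset)
STANDARD = {"ATG": "M", "GGC": "G", "TTT": "F",
            "TAA": "*", "TAG": "*", "TGA": "*"}

def translate(seq, ulvophycean=False):
    """Translate a coding sequence; with ulvophycean=True, TAG and TAA are
    reassigned from stop codons to glutamine (Q) and only TGA terminates."""
    table = dict(STANDARD)
    if ulvophycean:
        table["TAA"] = "Q"
        table["TAG"] = "Q"
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = table[seq[i:i + 3]]
        if aa == "*":  # stop codon under the active code
            break
        protein.append(aa)
    return "".join(protein)

seq = "ATGGGCTAGTTTTGA"
translate(seq)                    # standard code stops at TAG
translate(seq, ulvophycean=True)  # reads through TAG as Q, stops at TGA
```

The same sequence thus yields different proteins under the two codes, which is why intermediate, ambiguous decoding stages are evolutionarily costly.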
Syndrome Surveillance Using Parametric Space-Time Clustering
KOCH, MARK W.; MCKENNA, SEAN A.; BILISOLY, ROGER L.
2002-11-01
As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect patient privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a ''health index'' of the people living in that area. As a surrogate for bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic up to 21 to 28 days earlier than a conventional periodic regression technique. We have also tested the method using simulated anthrax attack data superimposed on a respiratory illness diagnostic category. Results show that an attack can be detected as early as the second or third day after infected people become severely symptomatic.
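The SPRT idea above can be illustrated on daily syndrome counts: under a Poisson model, the log-likelihood ratio of an elevated rate λ1 against a baseline λ0 is accumulated day by day and compared with Wald's thresholds. The restart-at-zero behaviour and all parameter values below are illustrative assumptions, not the paper's calibration:

```python
import math

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald SPRT on daily Poisson counts: H0 rate lam0 vs H1 rate lam1.
    On crossing the lower boundary the statistic restarts at zero so that
    monitoring continues (a common surveillance variant, assumed here)."""
    upper = math.log((1 - beta) / alpha)   # decide 'outbreak'
    lower = math.log(beta / (1 - alpha))   # decide 'baseline', restart
    llr = 0.0
    for day, k in enumerate(counts):
        # One day's Poisson log-likelihood ratio contribution
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "alarm", day, llr
        if llr <= lower:
            llr = 0.0
    return "no alarm", len(counts) - 1, llr

# Baseline ~5 respiratory cases/day; an elevated rate begins on day 3
sprt_poisson([4, 6, 5, 14, 16, 18], lam0=5.0, lam1=15.0)
```

With these (hypothetical) rates the statistic crosses the alarm threshold on the first clearly elevated day, illustrating the early-detection behaviour the abstract reports.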
Shechtman, Eli; Caspi, Yaron; Irani, Michal
2005-04-01
We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?
Casimir energy in Kerr space-time
NASA Astrophysics Data System (ADS)
Sorge, F.
2014-10-01
We investigate the vacuum energy of a scalar massless field confined in a Casimir cavity moving in a circular equatorial orbit in the exact Kerr space-time geometry. We find that both the orbital motion of the cavity and the underlying space-time geometry conspire in lowering the absolute value of the (renormalized) Casimir energy ⟨ɛvac⟩ren , as measured by a comoving observer, with respect to whom the cavity is at rest. This, in turn, causes a weakening in the attractive force between the Casimir plates. In particular, we show that the vacuum energy density ⟨ɛvac⟩ren→0 when the orbital path of the Casimir cavity comes close to the corotating or counter-rotating circular null orbits (possibly geodesic) allowed by the Kerr geometry. Such an effect could be of some astrophysical interest on relevant orbits, such as the Kerr innermost stable circular orbits, being potentially related to particle confinement (as in some interquark models). The present work generalizes previous results obtained by several authors in the weak field approximation.
Space-time formulation for finite element modeling of superconductors
Ashworth, Stephen P; Grilli, Francesco; Sirois, Frederic; Laforest, Marc
2008-01-01
In this paper we present a new model for computing the current density and field distributions in superconductors by means of a periodic space-time formulation for finite elements (FE). By considering a space dimension as time, we can use a static model to solve a time dependent problem. This allows overcoming one of the major problems of FE modeling of superconductors: the length of simulations, even for relatively simple cases. We present our first results and compare them to those obtained with a 'standard' time-dependent method and with analytical solutions.
A hypocentral version of the space-time ETAS model
NASA Astrophysics Data System (ADS)
Guo, Yicun; Zhuang, Jiancang; Zhou, Shiyong
2015-10-01
The space-time Epidemic-Type Aftershock Sequence (ETAS) model is extended by incorporating the depth component of earthquake hypocentres. The depths of the direct offspring produced by an earthquake are assumed to be independent of the epicentre locations and to follow a beta distribution, whose shape parameter is determined by the depth of the parent event. This new model is verified by applying it to the Southern California earthquake catalogue. The results show that the new model fits data better than the original epicentre ETAS model and that it provides the potential for modelling and forecasting seismicity with higher resolutions.
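One simple way to realize the paper's idea, that offspring depths follow a beta distribution whose shape depends on the parent event, is to place the beta mode at the parent's normalised depth. The parameterization and depth range below are illustrative assumptions, not the model actually fitted to the Southern California catalogue:

```python
import random

DEPTH_MAX = 30.0  # assumed seismogenic thickness in km (illustrative)

def offspring_depth(parent_depth, b=2.0, rng=random):
    """Sample an offspring hypocentre depth on [0, DEPTH_MAX] from a beta
    distribution whose mode sits at the parent's normalised depth, so the
    shape parameter is determined by the depth of the parent event."""
    x = min(max(parent_depth / DEPTH_MAX, 0.01), 0.99)  # normalise, clip
    a = (1.0 + x * (b - 2.0)) / (1.0 - x)  # makes the mode of Beta(a, b) equal x
    return DEPTH_MAX * rng.betavariate(a, b)

rng = random.Random(42)
depths = [offspring_depth(10.0, rng=rng) for _ in range(1000)]
```

Shallow parents thus produce predominantly shallow offspring while deep parents spread their offspring deeper, independently of epicentre location, as assumed in the extended model.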
Double conformal space-time algebra
NASA Astrophysics Data System (ADS)
Easter, Robert Benjamin; Hitzer, Eckhard
2017-01-01
The Double Conformal Space-Time Algebra (DCSTA) is a high-dimensional 12D Geometric Algebra G 4,8 that extends the concepts introduced with the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA) G 8,2 with entities for Darboux cyclides (incl. parabolic and Dupin cyclides, general quadrics, and ring torus) in spacetime with a new boost operator. The base algebra in which spacetime geometry is modeled is the Space-Time Algebra (STA) G 1,3. Two Conformal Space-Time subalgebras (CSTA) G 2,4 provide spacetime entities for points, flats (incl. worldlines), and hyperbolics, and a complete set of versors for their spacetime transformations that includes rotation, translation, isotropic dilation, hyperbolic rotation (boost), planar reflection, and (pseudo)spherical inversion in rounds or hyperbolics. The DCSTA G 4,8 is a doubling product of two G 2,4 CSTA subalgebras that inherits doubled CSTA entities and versors from CSTA and adds new bivector entities for (pseudo)quadrics and Darboux (pseudo)cyclides in spacetime that are also transformed by the doubled versors. The "pseudo" surface entities are spacetime hyperbolics or other surface entities using the time axis as a pseudospatial dimension. The (pseudo)cyclides are the inversions of (pseudo)quadrics in rounds or hyperbolics. An operation for the directed non-uniform scaling (anisotropic dilation) of the bivector general quadric entities is defined using the boost operator and a spatial projection. DCSTA allows general quadric surfaces to be transformed in spacetime by the same complete set of doubled CSTA versor (i.e., DCSTA versor) operations that are also valid on the doubled CSTA point entity (i.e., DCSTA point) and the other doubled CSTA entities. The new DCSTA bivector entities are formed by extracting values from the DCSTA point entity using specifically defined inner product extraction operators. Quadric surface entities can be boosted into moving surfaces with constant velocities that display the length
Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution
NASA Astrophysics Data System (ADS)
Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi
2015-05-01
In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector is selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of neutron/gamma particle transport and their interactions with the materials in the environment and the detector volume. In the second step, the number of scintillation photons produced by charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and lightguide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed code is applicable to both neutron and gamma sources. Hence, the discrimination of neutrons and gammas in mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated through comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and the results obtained from similar computer codes like SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a 137Cs
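The final resolution step can be sketched as a Gaussian broadening of each event's light output; the two-term FWHM model and its coefficients below are assumptions for illustration, not MCNPX-ESUT's fitted resolution function:

```python
import math
import random

def smear(light_outputs, a=0.10, b=0.05, rng=random):
    """Broaden each event's light output L with a Gaussian whose width
    follows an assumed two-term resolution model:
        FWHM(L) = sqrt((a*L)**2 + b**2 * L)."""
    smeared = []
    for L in light_outputs:
        fwhm = math.sqrt((a * L) ** 2 + b * b * L)
        sigma = fwhm / 2.355  # convert FWHM to standard deviation
        smeared.append(rng.gauss(L, sigma))
    return smeared

rng = random.Random(1)
spectrum = smear([0.5, 1.0, 1.5], rng=rng)
```

Histogramming the smeared light outputs then gives a pulse height distribution directly comparable with the measured one, with no post-processing step.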
Fractal Signals & Space-Time Cartoons
NASA Astrophysics Data System (ADS)
Oetama, -Hc, Jakob, , Dr; Maksoed, Wh-
2016-03-01
In ``Theory of Scale Relativity'' (1991), L. Nottale states that ``scale relativity is a geometrical & fractal space-time theory''. It is compared to ``a unified, wavelet based framework for efficiently synthesizing, analyzing & processing several broad classes of fractal signals'' (Gregory W. Wornell, ``Signal Processing with Fractals'', 1995). Further, Fig. 1.1 shows a simple waveform from a statistically scale-invariant random process [ibid., p. 3]. The accompanying RLE Technical Report 566, ``Synthesis, Analysis & Processing of Fractal Signals'' (Wornell, Oct. 1991), is herewith intended to deduce =a Δt + (1 - β Δt) ... in Petersen et al., ``Scale invariant properties of public debt growth'' (2010), h. 38006p2, to [1/{1 - (2α(λ)/3π) ln(λ/r)}] depicted in Laurent Nottale (1991), h. 24. Acknowledgment is devoted to the late HE Mr. Brigadier General (TNI, rtd.) Prof. Ir. HANDOJO.
Special geometry and space-time signature
NASA Astrophysics Data System (ADS)
Sabra, W. A.
2017-10-01
We construct N = 2 four- and five-dimensional supergravity theories coupled to vector multiplets in various space-time signatures (t, s), where t and s refer, respectively, to the number of time and spatial dimensions. The five-dimensional supergravity theories, t + s = 5, are constructed by investigating the integrability conditions arising from the Killing spinor equations. The five-dimensional supergravity theories can also be obtained by reducing Hull's eleven-dimensional supergravities on a Calabi-Yau threefold. The dimensional reductions of the five-dimensional supergravities on space-like and time-like circles produce N = 2 four-dimensional supergravity theories with signatures (t - 1, s) and (t, s - 1) exhibiting projective special (para-)Kähler geometry.
Circular motion in NUT space-time
NASA Astrophysics Data System (ADS)
Jefremov, Paul I.; Perlick, Volker
2016-12-01
We consider circular motion in the NUT (Newman-Unti-Tamburino) space-time. Among other things, we determine the location of circular time-like geodesic orbits, in particular of the innermost stable circular orbit (ISCO) and of the marginally bound circular orbit. Moreover, we discuss the von Zeipel cylinders with respect to the stationary observers and with respect to the zero angular momentum observers (ZAMOs). We also investigate the relation of von Zeipel cylinders to inertial forces, in particular in the ultra-relativistic limit. Finally, we generalise the construction of thick accretion tori (‘Polish doughnuts’) which are well known on the Schwarzschild or Kerr background to the case of the NUT metric. We argue that, in principle, an NUT source could be distinguished from a Schwarzschild or Kerr source by observing the features of circular matter flows in its neighbourhood.
Comeron, J. M.; Aguade, M.
1996-01-01
The Xdh (rosy) region of Drosophila subobscura has been sequenced and compared to the homologous region of D. pseudoobscura and D. melanogaster. Estimates of the numbers of synonymous substitutions per site (Ks) confirm that Xdh has a high synonymous substitution rate. The distributions of both nonsynonymous and synonymous substitutions along the coding region were found to be heterogeneous. Also, no relationship has been detected between Ks estimates and codon usage bias along the gene, in contrast with the generally observed relationship among genes. This heterogeneous distribution of synonymous substitutions along the Xdh gene, which is expression-level independent, could be explained by a differential selection pressure on synonymous sites along the coding region acting on mRNA secondary structure. The synonymous rate in the Xdh coding region is lower in the D. subobscura than in the D. pseudoobscura lineage, whereas the reverse is true for the Adh gene. PMID:8913749
Hybrid decode-amplify-forward (HDAF) scheme in distributed Alamouti-coded cooperative network
NASA Astrophysics Data System (ADS)
Gurrala, Kiran Kumar; Das, Susmita
2015-05-01
In this article, a signal-to-noise ratio (SNR)-based hybrid decode-amplify-forward scheme in a distributed Alamouti-coded cooperative network is proposed. Considering a flat Rayleigh fading channel environment, MATLAB simulation and analysis are carried out. In the cooperative scheme, two relays are employed, each transmitting one row of the Alamouti code. The selection of the SNR threshold depends on the target rate information. Closed-form expressions for the symbol error rate (SER), the outage probability, and the average channel capacity, with tight upper bounds, are derived and compared with simulations in the MATLAB environment. Furthermore, the impact of relay location on SER performance is analysed. It is observed that the proposed hybrid relaying technique outperforms the individual amplify-and-forward and decode-and-forward schemes in the distributed Alamouti-coded cooperative network.
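The SNR-threshold rule behind such hybrid relaying can be sketched in a few lines. The rate-to-threshold mapping below assumes a half-duplex, two-phase protocol and is an illustrative convention, not the paper's exact expression.

```python
def threshold_from_rate(rate_bps_hz):
    """SNR threshold for reliable decoding at target rate R under a
    half-duplex two-phase protocol: (1/2)*log2(1 + SNR) >= R.
    (Illustrative convention; the paper's exact mapping may differ.)"""
    return 2.0 ** (2.0 * rate_bps_hz) - 1.0

def relay_mode(snr_source_relay, snr_threshold):
    """Hybrid decode-amplify-forward: the relay decodes and re-encodes
    (DF) when its source-relay SNR clears the threshold, and simply
    amplifies-and-forwards (AF) otherwise."""
    return "DF" if snr_source_relay >= snr_threshold else "AF"
```

For a target rate of 1 bit/s/Hz the threshold is 3 (about 4.8 dB), so a relay with source-relay SNR above that switches to DF.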
Spatial and Space-Time Correlations in Systems of Subpopulations with Genetic Drift and Migration
Epperson, B. K.
1993-01-01
The geographic distribution of genetic variation is an important theoretical and experimental component of population genetics. Previous characterizations of genetic structure of populations have used measures of spatial variance and spatial correlations. Yet a full understanding of the causes and consequences of spatial structure requires complete characterization of the underlying space-time system. This paper examines important interactions between processes and spatial structure in systems of subpopulations with migration and drift, by analyzing correlations of gene frequencies over space and time. We develop methods for studying important features of the complete set of space-time correlations of gene frequencies for the first time in population genetics. These methods also provide a new alternative for studying the purely spatial correlations and the variance, for models with general spatial dimensionalities and migration patterns. These results are obtained by employing theorems, previously unused in population genetics, for space-time autoregressive (STAR) stochastic spatial time series. We include results on systems with subpopulation interactions that have time delay lags (temporal orders) greater than one. We use the space-time correlation structure to develop novel estimators for migration rates that are based on space-time data (samples collected over space and time) rather than on purely spatial data, for real systems. We examine the space-time and spatial correlations for some specific stepping stone migration models. One focus is on the effects of anisotropic migration rates. Partial space-time correlation coefficients can be used for identifying migration patterns. Using STAR models, the spatial, space-time, and partial space-time correlations together provide a framework with an unprecedented level of detail for characterizing, predicting and contrasting space-time theoretical distributions of gene frequencies, and for identifying features such as
Offset Manchester coding for Rayleigh noise suppression in carrier-distributed WDM-PONs
NASA Astrophysics Data System (ADS)
Xu, Jing; Yu, Xiangyu; Lu, Weichao; Qu, Fengzhong; Deng, Ning
2015-07-01
We propose a novel offset Manchester coding in upstream to simultaneously realize Rayleigh noise suppression and differential detection in a carrier-distributed wavelength division multiplexed passive optical network. Error-free transmission of 2.5-Gb/s upstream signals over 50-km standard single mode fiber is experimentally demonstrated, with a 7-dB enhanced tolerance to Rayleigh noise.
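Classic Manchester coding, the basis of the proposed scheme, maps each data bit to a two-chip symbol with a guaranteed mid-bit transition. The sketch below shows only that baseline mapping; the paper's *offset* variant, which shifts the code to retain the distributed carrier, is not reproduced here.

```python
def manchester_encode(bits):
    """Classic Manchester encoding: each data bit becomes a two-chip
    symbol with a mid-bit transition (1 -> [1, 0] and 0 -> [0, 1],
    the G. E. Thomas convention). The paper's offset variant adds a
    bias to this chip stream; that step is omitted."""
    chips = []
    for b in bits:
        chips.extend([1, 0] if b else [0, 1])
    return chips
```

The line rate doubles (two chips per bit), which is why the 2.5-Gb/s upstream signal occupies a 5-Gchip/s stream after encoding.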
NASA Astrophysics Data System (ADS)
Ezquerro, L.; Moretti, M.; Liesa, C. L.; Luzón, A.; Pueyo, E. L.; Simón, J. L.
2016-10-01
This work describes soft-sediment deformation structures (SSDSs; clastic dykes, load structures, diapirs, slumps, nodulizations, and mudcracks) identified in three sections (Concud, Ramblillas, and Masada Cociero) in the Iberian Range, Spain. These sections were logged from boreholes and outcrops in Upper Pliocene-Lower Pleistocene deposits of the Teruel-Concud Residual Basin, close to the Concud normal fault. The timing of the succession, and hence of the seismic and non-seismic SSDSs, covering a time span between 3.6 and 1.9 Ma, has been constrained from previous biostratigraphic and magnetostratigraphic information, then substantially refined with a new magnetostratigraphic study at the Masada Cociero profile. Non-seismic SSDSs are relatively well correlated between sections, while seismic ones are poorly correlated except for several clusters of structures. Between 29 and 35 seismically deformed levels have been computed for the overall stratigraphic succession. Factors controlling the lateral and vertical distribution of SSDSs are their seismic or non-seismic origin, the distance to the seismogenic source (Concud Fault), the sedimentary facies involved in deformation, and the observation conditions (borehole core vs. natural outcrop). In the overall stratigraphic section, seismites show an apparent recurrence period of 56 to 108 ka. Clustering of seismic SSDS levels within a 91-ka-long interval records a period of high paleoseismic activity with an apparent recurrence time of 4.8 to 6.1 ka, associated with increasing sedimentation rate and fault activity. This activity pattern of the Concud Fault for the Late Pliocene-Early Pleistocene, with alternating periods of faster and slower slip, is similar to that for the most recent Quaternary (last ca. 74 ka BP). Concerning the research methods, the time occurrence patterns recognized for peaks of paleoseismic activity from SSDSs in boreholes are similar to those inferred from primary evidence in trenches. Consequently, apparent recurrence periods
Beyond Archimedean Space-Time Structure
NASA Astrophysics Data System (ADS)
Rosinger, Elemér E.; Khrennikov, Andrei
2011-03-01
It took two millennia after Euclid until, in the early 1880s, we went beyond the ancient axiom of parallels and inaugurated geometries of curved spaces. In less than one more century, General Relativity followed. At present, physical thinking is still held by the yet deeper and equally ancient Archimedean assumption. In view of that, it is argued, with some rather easily accessible mathematical support, that Theoretical Physics may at last venture into the non-Archimedean realms. In this introductory paper we stress two fundamental consequences of the non-Archimedean approach to Theoretical Physics: one for quantum theory and another for relativity theory. From the non-Archimedean viewpoint, the assumption of the existence of minimal quanta of light (of fixed frequency) is an artifact of the present Archimedean mathematical basis of quantum mechanics. In the same way, the assumption of the existence of a maximal velocity, the velocity of light, is a feature of a real space-time structure that is fundamentally Archimedean. Neither assumption is justified in the corresponding non-Archimedean models.
Deformed space-time transformations in Mercury
NASA Astrophysics Data System (ADS)
Cardone, F.; Albertini, G.; Bassani, D.; Cherubini, G.; Guerriero, E.; Mignani, R.; Monti, M.; Petrucci, A.; Ridolfi, F.; Rosada, A.; Rosetto, F.; Sala, V.; Santoro, E.; Spera, G.
2017-09-01
A mole of mercury was suitably treated by ultrasound in order to generate in it the same conditions of local Lorentz invariance violation that were generated in a sonicated cylindrical bar of AISI 304 steel, conditions which are the cause of neutron emission during sonication. After 3 min, part of the mercury turned into a solid material that was found to contain isotopes with masses different (both higher and lower) from those of the isotopes present in the initial material (mercury). Such transformations in atomic weight, without gamma production above the background, are brought about during Deformed Space-Time reactions. We present the results of the analyses performed on samples taken from the transformation product. The analyses were done in two groups, the first using five different analytical techniques: ICP-OES, XRF, ESEM-EDS, ICP-MS, and INAA. In the second group of analyses, we used only two techniques: INAA and ICP-MS. The second group of analyses confirmed the occurrence of the transformations in mercury.
Space-Time, Relativity, and Cosmology
NASA Astrophysics Data System (ADS)
Wudka, Jose
2006-07-01
Space-Time, Relativity and Cosmology provides a historical introduction to modern relativistic cosmology and traces its historical roots and evolution from antiquity to Einstein. The topics are presented in a non-mathematical manner, with the emphasis on the ideas that underlie each theory rather than their detailed quantitative consequences. A significant part of the book focuses on the Special and General theories of relativity. The tests and experimental evidence supporting the theories are explained together with their predictions and their confirmation. Other topics include a discussion of modern relativistic cosmology, the consequences of Hubble's observations leading to the Big Bang hypothesis, and an overview of the most exciting research topics in relativistic cosmology. This textbook is intended for introductory undergraduate courses on the foundations of modern physics. It is also accessible to advanced high school students, as well as non-science majors who are concerned with science issues. • Uses a historical perspective to describe the evolution of modern ideas about space and time • The main arguments are described using a completely non-mathematical approach • Ideal for physics undergraduates and high-school students, non-science majors and general readers
Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Rizk, Yehia M.
1999-01-01
The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is more involved than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones to a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which run in parallel. The total volume of grid points in each
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that codes the source information into multiple descriptions and then transmits them over different channels in a packet network or an error-prone wireless environment, achieving graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error-resilient capability. First, when MDC is applied to wavelet subband-based image coding, it is possible to introduce correlation between the descriptions in each subband. We use such correlation, together with a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Second, in each description, a single-bitstream wavelet zero-tree code is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple-wavelet-tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately by the SPIHT algorithm to form multiple bitstreams. Such decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits excellent error-resilient performance but also demonstrates graceful degradation over the packet
Space-Time Transfinite Interpolation of Volumetric Material Properties.
Sanchez, Mathieu; Fryazinov, Oleg; Adzhiev, Valery; Comninos, Peter; Pasko, Alexander
2015-02-01
The paper presents a novel technique based on extension of a general mathematical method of transfinite interpolation to solve an actual problem in the context of a heterogeneous volume modelling area. It deals with time-dependent changes to the volumetric material properties (material density, colour, and others) as a transformation of the volumetric material distributions in space-time accompanying geometric shape transformations such as metamorphosis. The main idea is to represent the geometry of both objects by scalar fields with distance properties, to establish in a higher-dimensional space a time gap during which the geometric transformation takes place, and to use these scalar fields to apply the new space-time transfinite interpolation to volumetric material attributes within this time gap. The proposed solution is analytical in its nature, does not require heavy numerical computations and can be used in real-time applications. Applications of this technique also include texturing and displacement mapping of time-variant surfaces, and parametric design of volumetric microstructures.
Space-time prospective surveillance based on Knox local statistics.
Piroutek, Aline; Assunção, Renato; Paiva, Thaís
2014-07-20
We studied a surveillance system to prospectively monitor the emergence of space-time clusters in point patterns of disease events. Its aim is to detect a cluster as soon as possible after its emergence, while keeping the rate of false alarms at a controlled level. The method is a modification of a previous proposal based on a local version of the Knox statistic, which examined a retrospective surveillance scenario, looking for the earliest time in the past at which a change could have been deemed to occur. We modify this method to handle the prospective case, thereby fixing the serious difficulties found by other authors. We evaluated the surveillance system in several scenarios, with and without emerging clusters, checking distributional assumptions and assessing the performance impact of different emergence times, shapes, extents, and intensities of the emerging clusters. Our conclusion is that the space-time surveillance system based on local Knox statistics is very efficient in its statistical properties, and it is appealing to epidemiologists and public health officials because it is simple to use and easily understandable. This makes it a promising candidate for practical use by public health agencies. Copyright © 2014 John Wiley & Sons, Ltd.
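The Knox statistic underlying this surveillance system counts event pairs that are close in both space and time. The sketch below computes the global version; the local version used in the paper restricts the count to pairs involving a focal event. The thresholds are analyst-chosen assumptions.

```python
import numpy as np

def knox_statistic(xy, t, ds, dt):
    """Global Knox statistic: the number of event pairs closer than
    ds in space and dt in time. xy is (n, 2) coordinates, t is (n,)
    event times; each unordered pair is counted once."""
    xy = np.asarray(xy, float)
    t = np.asarray(t, float)
    d_space = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    d_time = np.abs(t[:, None] - t[None, :])
    close = (d_space < ds) & (d_time < dt)
    iu = np.triu_indices(len(t), k=1)   # upper triangle: pairs i < j
    return int(close[iu].sum())
```

For three events of which only the first two are close in both dimensions, the statistic is 1; under space-time independence its null distribution is assessed by permuting the event times.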
Non-contact assessment of melanin distribution via multispectral temporal illumination coding
NASA Astrophysics Data System (ADS)
Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.
2015-03-01
Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone and protects against harmful UV effects; abnormal melanin distribution is often an indicator of melanoma. We propose a novel approach for non-contact assessment of melanin distribution via multispectral temporal illumination coding, estimating the two-dimensional melanin distribution from its absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements and to determine the melanin distribution in an observed region from the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of an individual's skin type. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
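The Beer-Lambert step can be sketched as a per-pixel least-squares inversion of the ambient-corrected reflectance. The extinction coefficients `eps` and the unit path length below are illustrative stand-ins for calibrated values, not the paper's parameters.

```python
import numpy as np

def melanin_map(reflectance, eps, path_length=1.0):
    """Per-pixel melanin concentration from ambient-corrected
    multispectral reflectance via Beer-Lambert:
        -log10(R_lambda) ~ eps_lambda * c * l.
    reflectance has shape (H, W, L) over L wavelengths; a closed-form
    least-squares fit along the wavelength axis recovers c."""
    absorb = -np.log10(np.clip(reflectance, 1e-6, 1.0))  # absorbance
    eps = np.asarray(eps, float)
    # c minimising sum_lambda (absorb - eps*c*l)^2, in closed form
    return (absorb @ eps) / (path_length * (eps @ eps))
```

Given reflectance synthesized from a known concentration map, the fit recovers that map exactly, which is a convenient sanity check before applying it to camera data.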
Side information and noise learning for distributed video coding using optical flow and clustering.
Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin; Forchhammer, Søren
2012-12-01
Distributed video coding (DVC) is a coding paradigm that exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and accuracy of noise modeling. This paper considers transform domain Wyner-Ziv (TDWZ) coding and proposes using optical flow to improve side information generation and clustering to improve the noise modeling. The optical flow technique is exploited at the decoder side to compensate for weaknesses of block-based methods, when using motion-compensation to generate side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded WZ frames. Different techniques are combined by calculating a number of candidate soft side information for low density parity check accumulate decoding. The proposed decoder side techniques for side information and noise learning (SING) are integrated in a TDWZ scheme. On test sequences, the proposed SING codec robustly improves the coding efficiency of TDWZ DVC. For WZ frames using a GOP size of 2, up to 4-dB improvement or an average (Bjøntegaard) bit-rate savings of 37% is achieved compared with DISCOVER.
Examination of nanoparticle dispersion using a novel GPU based radial distribution function code
NASA Astrophysics Data System (ADS)
Rosch, Thomas; Wade, Matthew; Phelan, Frederick
We have developed a novel GPU-based code that rapidly calculates the radial distribution function (RDF) for an entire system, with no cutoff, ensuring accuracy. On top of this code, we have built tools to calculate the second virial coefficient (B2) and the structure factor from the RDF, two properties that are directly related to the dispersion of nanoparticles in nanocomposite systems. We validate the RDF calculations by comparison with previously published results, and also show how our code, which takes into account bonding in polymeric systems, enables more accurate predictions of g(r) than the state-of-the-art GPU-based RDF codes currently available for these systems. In addition, our code reduces the computational time by approximately an order of magnitude compared to CPU-based calculations. We demonstrate the application of our toolset by examining a coarse-grained nanocomposite system and show how different surface energies between particle and polymer lead to different dispersion states and affect properties such as viscosity, yield strength, elasticity, and thermal conductivity.
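A reference CPU implementation of the no-cutoff, all-pairs RDF makes the computation concrete; this O(N²) pair loop is exactly what a GPU kernel would parallelise. The periodic-box and normalization conventions below are standard but are my assumptions, not the paper's code.

```python
import numpy as np

def rdf(positions, box, dr, r_max):
    """All-pairs radial distribution function g(r) with the periodic
    minimum-image convention and no neighbour cutoff. Returns bin
    centres and g(r), normalised against the ideal-gas expectation."""
    positions = np.asarray(positions, float)
    box = np.asarray(box, float)
    n = len(positions)
    rho = n / box.prod()                      # number density
    edges = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(edges) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image distances
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = 2.0 * hist / (n * rho * shell)        # each pair counted once
    return 0.5 * (edges[:-1] + edges[1:]), g
```

For an ideal-gas configuration, g(r) fluctuates around 1, a useful check of the normalization before computing B2 or the structure factor from it.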
Temperature and entropy of Schwarzschild de Sitter space-time
NASA Astrophysics Data System (ADS)
Shankaranarayanan, S.
2003-04-01
In the light of recent interest in quantum gravity in de Sitter space, we investigate semiclassical aspects of four-dimensional Schwarzschild de Sitter space-time using the method of complex paths. The standard semiclassical techniques (such as Bogoliubov coefficients and Euclidean field theory) have been useful to study quantum effects in space-times with single horizons; however, none of these approaches seem to work for Schwarzschild de Sitter space-time or, in general, for space-times with multiple horizons. We extend the method of complex paths to space-times with multiple horizons and obtain the spectrum of particles produced in these space-times. We show that the temperature of radiation in these space-times is proportional to the effective surface gravity—the inverse harmonic sum of surface gravity of each horizon. For the Schwarzschild de Sitter space-time, we apply the method of complex paths to three different coordinate systems—spherically symmetric, Painlevé, and Lemaître. We show that the equilibrium temperature in Schwarzschild de Sitter space-time is the harmonic mean of cosmological and event horizon temperatures. We obtain Bogoliubov coefficients for space-times with multiple horizons by analyzing the mode functions of the quantum fields near the horizons. We propose a new definition of entropy for space-times with multiple horizons, analogous to the entropic definition for space-times with a single horizon. We define entropy for these space-times to be inversely proportional to the square of the effective surface gravity. We show that this definition of entropy for Schwarzschild de Sitter space-time satisfies the D-bound conjecture.
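The temperature and entropy relations stated above can be written compactly. Normalization conventions (factors of 2π) vary across the literature, so this is a schematic transcription of the abstract's statements rather than the paper's exact formulas.

```latex
% Effective surface gravity: inverse harmonic sum over the horizons i
\frac{1}{\kappa_{\mathrm{eff}}} = \sum_{i} \frac{1}{\kappa_i},
\qquad T \propto \kappa_{\mathrm{eff}}
% Schwarzschild--de Sitter: equilibrium temperature from the
% cosmological (C) and event (E) horizon temperatures
\frac{1}{T_{\mathrm{eq}}} \propto \frac{1}{T_{\mathrm{C}}} + \frac{1}{T_{\mathrm{E}}}
% Proposed entropy for space-times with multiple horizons
S \propto \frac{1}{\kappa_{\mathrm{eff}}^{2}}
```

In the single-horizon limit these reduce to the usual proportionality between temperature and surface gravity.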
A distributed coding approach for stereo sequences in the tree structured Haar transform domain
NASA Astrophysics Data System (ADS)
Cancellaro, M.; Carli, M.; Neri, A.
2009-02-01
In this contribution, a novel method for distributed video coding for stereo sequences is proposed. The system encodes independently the left and right frames of the stereoscopic sequence. The decoder exploits the side information to achieve the best reconstruction of the correlated video streams. In particular, a syndrome coder approach based on a lifted Tree Structured Haar wavelet scheme has been adopted. The experimental results show the effectiveness of the proposed scheme.
DISTRIBUTED CONTAINER FAILURE MODELS FOR THE DUST-MS COMPUTER CODE.
SULLIVAN,T.; DE LEMOS,F.
2001-02-24
Improvements to the DUST-MS computer code have been made that permit simulation of distributed container failure rates. The new models permit instant failure of all containers within a computational volume, uniform failure of these containers over time, or a normal distribution in container failures. Incorporation of a distributed failure model requires wasteform releases to be calculated using a convolution integral. In addition, the models permit a unique time of emplacement for each modeled container and allow a fraction of the containers to fail at emplacement. Implementation of these models, verification testing, and an example problem comparing releases from a wasteform with a two-species decay chain as a function of failure distribution are presented in the paper.
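The convolution integral for a distributed failure model can be sketched numerically: the total release rate is the failure-time density convolved with the single-container release curve. The function names and the uniform/constant test case are illustrative, not DUST-MS internals.

```python
import numpy as np

def total_release(t_grid, failure_pdf, unit_release):
    """Release rate from a container population with distributed
    failure times, via R(t) = integral_0^t f(tau) r(t - tau) dtau,
    where f is the failure-time density and r the release rate of a
    single container that failed at time 0. Uses a discrete
    convolution on a uniform time grid."""
    dt = t_grid[1] - t_grid[0]
    f = failure_pdf(t_grid)
    r = unit_release(t_grid)
    # truncate the full convolution to the simulation window
    return np.convolve(f, r)[: len(t_grid)] * dt
```

With uniform failure over [0, T] and a constant per-container release rate k, the total release ramps linearly as k·t/T while containers are still failing, which the discrete convolution reproduces.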
Event-by-event study of space-time dynamics in flux-tube fragmentation
NASA Astrophysics Data System (ADS)
Wong, Cheuk-Yin
2017-07-01
In the semi-classical description of the flux-tube fragmentation process for hadron production and hadronization in high-energy e+e− annihilations and pp collisions, the rapidity-space-time ordering and the local conservation laws of charge, flavor, and momentum provide a set of powerful tools that may allow the reconstruction of the space-time dynamics of quarks and mesons in exclusive measurements of produced hadrons, on an event-by-event basis. We propose procedures to reconstruct the space-time dynamics from event-by-event exclusive hadron data to exhibit explicitly the ordered chain of hadrons produced in flux-tube fragmentation. As a supplementary tool, we infer the average space-time coordinates of the q-q̄ pair production vertices from the π− rapidity distribution data obtained by the NA61/SHINE Collaboration in pp collisions at √s = 6.3 to 17.3 GeV.
Space-time transformation sky brightness at a horizontal position of the sun
NASA Astrophysics Data System (ADS)
Galileiskii, Viktor P.; Elizarov, Alexey I.; Kokarev, Dmitrii V.; Morozov, Aleksandr M.
2015-11-01
This report discusses simulation results for the angular distribution of sky brightness in the case of molecular scattering in the atmosphere, in support of the study of space-time changes of this distribution during civil twilight.
Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C
2015-12-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research or, more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the ways in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
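A weighted negative binomial regression of the kind recommended above can be sketched as a plain maximum-likelihood fit; the simulated coding counts, the analysis weights, and the session-length offset are illustrative assumptions (the paper itself supplies SPSS and R syntax):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                         # predictor (e.g., a therapist behavior)
length = rng.uniform(20, 60, size=n)           # hypothetical session lengths (minutes)
mu_true = np.exp(0.5 + 0.8 * x) * (length / 40.0)
r_true = 2.0                                   # dispersion: Var = mu + mu**2 / r
y = rng.negative_binomial(r_true, r_true / (r_true + mu_true))
w = rng.uniform(0.5, 1.5, size=n)              # assumed observation weights

X = np.column_stack([np.ones(n), x])
offset = np.log(length / 40.0)                 # session length enters as an offset

def negloglik(params):
    beta, r = params[:2], np.exp(params[2])    # keep the dispersion positive
    mu = np.exp(X @ beta + offset)
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -np.sum(w * ll)                     # weighted negative log-likelihood

res = minimize(negloglik, np.zeros(3), method="BFGS")
beta_hat = res.x[:2]                           # recovers (0.5, 0.8) approximately
```

The offset keeps session length out of both the predictor and the outcome, which is the conflation problem the abstract warns about.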
Domain structure of black hole space-times
Harmark, Troels
2009-07-15
We introduce the domain structure for stationary black hole space-times. The domain structure lives on the submanifold of fixed points of the Killing vector fields. Depending on which Killing vector field has fixed points, the submanifold is naturally divided into domains. The domain structure provides invariants of the space-time, both topological and continuous. It is defined for any space-time dimension and any number of Killing vector fields. We examine the domain structure for asymptotically flat space-times and find a canonical form for the metric of such space-times. The domain structure generalizes the rod structure introduced for space-times with D-2 commuting Killing vector fields. We analyze in detail the domain structure for Minkowski space, the Schwarzschild-Tangherlini black hole and the Myers-Perry black hole in six and seven dimensions. Finally, we consider the possible domain structures for asymptotically flat black holes in six and seven dimensions.
Spatial correlation-based side information refinement for distributed video coding
NASA Astrophysics Data System (ADS)
Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin
2013-12-01
Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefited from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the video sequence's temporal correlation properties during the generation of side information (SI). In fact, decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as these groups are successively decoded, thus providing more accurate SI for the subsequent groups. It is shown that the proposed progressive SIR method leads to significant improvements over the DISCOVER DVC codec as well as other SIR schemes recently introduced in the literature.
ETRANS: an energy transport system optimization code for distributed networks of solar collectors
Barnhart, J.S.
1980-09-01
The optimization code ETRANS was developed at the Pacific Northwest Laboratory to design and estimate the costs associated with energy transport systems for distributed fields of solar collectors. The code uses frequently cited layouts for dish and trough collectors and optimizes them on a section-by-section basis. The optimal section design is that combination of pipe diameter and insulation thickness that yields the minimum annualized system-resultant cost. Among the quantities included in the costing algorithm are (1) labor and materials costs associated with initial plant construction, (2) operating expenses due to daytime and nighttime heat losses, and (3) operating expenses due to pumping power requirements. Two preliminary series of simulations were conducted to exercise the code. The results indicate that transport system costs for both dish and trough collector fields increase with field size and receiver exit temperature. Furthermore, dish collector transport systems were found to be much more expensive to build and operate than trough transport systems. ETRANS itself is stable and fast-running and shows promise of being a highly effective tool for the analysis of distributed solar thermal systems.
Complementarity between entanglement-assisted and quantum distributed random access code
NASA Astrophysics Data System (ADS)
Hameedi, Alley; Saha, Debashis; Mironowicz, Piotr; Pawłowski, Marcin; Bourennane, Mohamed
2017-05-01
Collaborative communication tasks such as random access codes (RACs) employing quantum resources have manifested great potential in enhancing information processing capabilities beyond the classical limitations. The two quantum variants of RACs, namely, the quantum random access code (QRAC) and the entanglement-assisted random access code (EARAC), have demonstrated equal prowess for a number of tasks. However, there do exist specific cases where one outperforms the other. In this article, we study a family of 3→1 distributed RACs [J. Bowles, N. Brunner, and M. Pawłowski, Phys. Rev. A 92, 022351 (2015)] and present its general construction of both the QRAC and the EARAC. We demonstrate that, depending on the function of inputs that is sought, if the QRAC achieves the maximal success probability then the EARAC fails to do so, and vice versa. Moreover, a tripartite Bell-type inequality associated with the EARAC variants reveals the genuine multipartite nonlocality exhibited by our protocol. We conclude with an experimental realization of the 3→1 distributed QRAC that achieves higher success probabilities than the maximum possible with EARACs for a number of tasks.
Space Time Clustering and the Permutation Moments of Quadratic Form.
Zhou, Yi-Hui; Mayhew, Gregory; Sun, Zhibin; Xu, Xiaolin; Zou, Fei; Wright, Fred A
2013-01-01
The Mantel and Knox space-time clustering statistics are popular tools to establish transmissibility of a disease and detect outbreaks. The most commonly used null distributional approximations may provide poor fits, and researchers often resort to direct sampling from the permutation distribution. However, the exact first four moments for these statistics are available, and Pearson distributional approximations are often effective. Thus, our first goal is to clarify the literature and to make these tools more widely available. In addition, by rewriting terms in the statistics we obtain the exact first four permutation moments for the most commonly used quadratic form statistics, which need not be positive definite. The extension of this work to quadratic forms greatly expands the utility of density approximations for these problems, including for high-dimensional applications, where the statistics must be extreme in order to exceed stringent testing thresholds. We demonstrate the methods using examples from the investigation of disease transmission in cattle, the association of a gene expression pathway with breast cancer survival, regional genetic association with cystic fibrosis lung disease, and hypothesis testing for smoothed local linear regression.
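The direct sampling from the permutation distribution mentioned above can be sketched for the Knox space-time statistic; the synthetic case coordinates and the closeness thresholds are assumptions (the paper's contribution is replacing such sampling with exact-moment density approximations):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
xy = rng.uniform(0, 10, size=(n, 2))           # synthetic case locations
t = rng.uniform(0, 100, size=n)                # synthetic case times

ds = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
dt = np.abs(t[:, None] - t[None, :])
iu = np.triu_indices(n, 1)                     # unordered pairs

def knox(dt_mat, delta=1.5, tau=10.0):
    """Count pairs close in both space (< delta) and time (< tau)."""
    return int(np.sum((ds[iu] < delta) & (dt_mat[iu] < tau)))

obs = knox(dt)
null = []
for _ in range(499):                           # permute times relative to locations
    p = rng.permutation(n)
    null.append(knox(dt[np.ix_(p, p)]))
pval = (1 + sum(v >= obs for v in null)) / (1 + len(null))
```

For these uniform (non-clustered) data the p-value is unremarkable; clustered data would push `obs` into the upper tail of the permutation distribution.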
Quinlan, D; Barany, G; Panas, T
2007-08-30
Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.
Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes
NASA Astrophysics Data System (ADS)
Tavallaei, Saeed Ebadi; Falahati, Abolfazl
Due to the low level of security in public key cryptosystems based on number theory, and fundamental difficulties such as "key escrow" in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. This is achieved by modifying the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of general block-code decoding. Using ECC also provides the capability of generating public keys with variable lengths, which is suitable for different applications. Given the decreasing security of cryptosystems based on number theory and the increasing lengths of their keys, the use of code-based cryptosystems seems unavoidable in the future.
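A toy sketch of the McEliece construction that the proposal modifies, using a Hamming(7,4) code over GF(2); real deployments use large Goppa codes, and the scrambler and permutation below are illustrative choices:

```python
import numpy as np

# Hamming(7,4): systematic generator G = [I | A] and parity check H = [A^T | I]
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])
H = np.hstack([A.T, np.eye(3, dtype=int)])

S = np.array([[1, 1, 0, 0],                    # invertible scrambler over GF(2);
              [0, 1, 0, 0],                    # this S is its own inverse mod 2
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
perm = np.array([6, 0, 3, 5, 1, 4, 2])         # secret codeword-position permutation

def encrypt(m, rng):
    codeword = (m @ S % 2) @ G % 2             # scramble the message, then encode
    c = codeword[perm]                         # permute positions
    c[rng.integers(7)] ^= 1                    # add one random error (t = 1 for Hamming)
    return c

def decrypt(c):
    v = np.empty(7, dtype=int)
    v[perm] = c                                # undo the permutation
    syn = H @ v % 2
    if syn.any():                              # syndrome equals a column of H
        v[np.flatnonzero((H.T == syn).all(axis=1))[0]] ^= 1
    return v[:4] @ S % 2                       # first 4 bits are m @ S; unscramble

rng = np.random.default_rng(0)
m = np.array([1, 0, 1, 1])
m_back = decrypt(encrypt(m, rng))              # recovers m
```

An attacker without `perm` and `S` faces a generic decoding problem, which is the NP-complete problem the abstract invokes.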
Interferential multispectral image compression based on distributed source coding
NASA Astrophysics Data System (ADS)
Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang
2008-08-01
Based on analyses of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences; the relative shift between two images is detected by a block-matching algorithm at the encoder. Our algorithm estimates the rate of each bitplane with the estimated side-information frame, and then adopts an ROI coding algorithm in which a rate-distortion lifting procedure is carried out in the rate-allocation stage. Using our algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in this paper can obtain up to 3 dB of gain compared with JPEG2000, and significantly reduces complexity and storage consumption compared with 3D-SPIHT, at the cost of a slight degradation in PSNR.
NASA Astrophysics Data System (ADS)
Lai, Hong; Orgun, Mehmet A.; Pieprzyk, Josef; Li, Jing; Luo, Mingxing; Xiao, Jinghua; Xiao, Fuyuan
2016-11-01
We propose an approach that achieves high-capacity quantum key distribution using Chebyshev-map values corresponding to Lucas numbers coding. In particular, we encode a key with the Chebyshev-map values corresponding to Lucas numbers, then use k-Chebyshev maps to achieve consecutive and flexible key expansion, and apply pre-shared classical information between Alice and Bob, together with fountain codes, for privacy amplification to secure the exchange of classical information via the classical channel. Consequently, our high-capacity protocol does not have the limitations imposed by orbital angular momentum and down-conversion bandwidths, and it meets the requirements for longer distances and lower error rates simultaneously.
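The Chebyshev-map values corresponding to Lucas numbers can be illustrated as follows; encoding key symbols as T_L(x0) for Lucas numbers L is a simplified reading of the protocol, and the seed x0 is an arbitrary assumption:

```python
import math

def chebyshev(k, x):
    """k-Chebyshev map T_k(x) = cos(k * arccos(x)) on [-1, 1]."""
    return math.cos(k * math.acos(x))

def lucas(n):
    """First n Lucas numbers: 2, 1, 3, 4, 7, 11, ..."""
    seq = [2, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

# Encode key symbols as Chebyshev-map values at Lucas-number orders
x0 = 0.3
code = [chebyshev(L, x0) for L in lucas(8)]
```

The semigroup property T_a(T_b(x)) = T_ab(x) is what makes Chebyshev maps usable for key agreement, since iterated applications commute.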
LineCast: line-based distributed coding and transmission for broadcasting satellite images.
Wu, Feng; Peng, Xiulian; Xu, Jizheng
2014-03-01
In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly with the line-scanning cameras that are widely adopted in orbiting satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by a linear least-squares estimator from the received data. The image line is then reconstructed by scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast allows a large number of receivers to reach qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to a 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction.
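The scalar modulo quantization with side-information decoding at the heart of LineCast can be sketched as follows; the step size and modulo alphabet are assumptions, and the real scheme additionally power-scales and transmits the coefficients in analog fashion:

```python
import numpy as np

STEP, M = 0.1, 16            # quantization step and modulo alphabet size (assumed)

def encode(x):
    """Transmit only the quantization index modulo M."""
    return np.round(x / STEP).astype(int) % M

def decode(r, side):
    """Among indices congruent to r mod M, pick the one nearest the side information."""
    q_side = np.round(side / STEP).astype(int)
    d = ((r - q_side) + M // 2) % M - M // 2   # congruent offset in [-M/2, M/2)
    return (q_side + d) * STEP

# Side information = previously decoded line; the current line differs by a
# small innovation, well within the modulo ambiguity range STEP * M / 2
rng = np.random.default_rng(0)
side = rng.normal(size=32)
x = side + rng.uniform(-0.3, 0.3, size=32)
x_hat = decode(encode(x), side)
```

Because only the residue mod M is sent, the rate is independent of the line's dynamic range; correctness relies on the side information being close enough to resolve the modulo ambiguity.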
Space-time correlations in urban sprawl.
Hernando, A; Hernando, R; Plastino, A
2014-02-06
Understanding demographic and migrational patterns constitutes a great challenge. Millions of individual decisions, motivated by economic, political, demographic, rational and/or emotional reasons, underlie the high complexity of demographic dynamics. Significant advances in quantitatively understanding such complexity have been registered in recent years, such as those involving the growth of cities, but many fundamental issues still defy comprehension. We present here compelling empirical evidence of a high level of regularity regarding time and spatial correlations in urban sprawl, unravelling patterns about the inertia in the growth of cities and their interaction with each other. By using one of the world's most exhaustive extant demographic databases, that of the Spanish Government's Institute INE, with records covering 111 years and (in 2011) 45 million people distributed among more than 8000 population nuclei, we show that the inertia of city growth has a characteristic time of 15 years, and its interaction with the growth of other cities has a characteristic distance of 80 km. Distance is shown to be the main factor that entangles two cities (60% of total correlations). The power of our current social theories is thereby enhanced.
NASA Astrophysics Data System (ADS)
Zhuravleva, G. N.; Nagornova, I. V.; Kondratov, A. P.; Bablyuk, E. B.; Varepo, L. G.
2017-08-01
Research on and modelling of the weatherability and environmental durability of multilayer polymer insulation of cables and pipelines with printed barcodes or colour identification information were performed. It was shown that interlayer printing of identification codes in distribution-pipeline insulation coatings provides high marking stability against light and atmospheric condensation, which allows remote damage monitoring. However, microbiological fouling of the upper polymer layer hampers remote identification of pipeline damage. The colour-difference values and density changes of printed PE and PVC insulation due to weathering and biological factors were determined.
Space-Time Characteristics of Rainfall Diurnal Variations
NASA Technical Reports Server (NTRS)
Yang, Song; Kummerow, Chris; Olson, Bill; Smith, Eric A.; Einaudi, Franco (Technical Monitor)
2001-01-01
The space-time features of the diurnal variation of precipitation are systematically investigated using Tropical Rainfall Measuring Mission (TRMM) precipitation products retrieved from the TRMM Microwave Imager (TMI), the precipitation radar (PR), and combined TMI/PR algorithms. Results demonstrate that the diurnal variability of precipitation is pronounced over tropical regions. The dominant feature of the rainfall diurnal cycle over ocean is a consistent rainfall peak in the early morning, while over land there is a consistent peak in the mid-to-late afternoon. Seasonal variation in the intensity of the rainfall diurnal cycle is clearly evident. Horizontal distributions of rainfall diurnal variations indicate a clear early-morning peak, with a secondary peak in the middle-to-late afternoon, in ocean rainfall at latitudes dominated by large-scale convergence and deep convection. There is an analogous early-morning peak in land rainfall, along with a stronger afternoon peak forced by surface heating. Amplitude analysis shows that the patterns of the rainfall diurnal cycle and their evolution closely follow the rainfall distribution pattern and its evolution. These results indicate that rainfall diurnal variations are strongly associated with large-scale convective systems and climate/weather systems. Phase studies clearly reveal the regional and seasonal features of diurnal rainfall activity. Further studies of convective and stratiform rainfall show different diurnal-cycle characteristics; their spatial and temporal variations indicate that the mechanisms driving rainfall diurnal variations vary with time and space.
Entropy of Movement Outcome in Space-Time.
Lai, Shih-Chiung; Hsieh, Tsung-Yu; Newell, Karl M
2015-07-01
Information entropy of the joint spatial and temporal (space-time) probability of discrete movement outcome was investigated in two experiments as a function of different movement strategies (space-time, space, and time instructional emphases), task goals (point-aiming and target-aiming) and movement speed-accuracy constraints. The variance of the movement spatial and temporal errors was reduced by instructional emphasis on the respective spatial or temporal dimension, but increased on the other dimension. The space-time entropy was lower in the target-aiming task than in the point-aiming task but did not differ between instructional emphases. However, the joint probabilistic measure of spatial and temporal entropy showed that spatial error is traded for timing error in tasks with space-time criteria and that the pattern of movement error depends on the dimension of the measurement process. The unified entropy measure of movement outcome in space-time reveals a new relation for the speed-accuracy trade-off.
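The joint space-time outcome entropy described above can be computed from a two-dimensional histogram of spatial and temporal errors; the bin count and the simulated error distributions are assumptions:

```python
import numpy as np

def spacetime_entropy(spatial_err, temporal_err, bins=10):
    """Shannon entropy (bits) of the joint space-time movement-outcome distribution."""
    counts, _, _ = np.histogram2d(spatial_err, temporal_err, bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

# Simulated outcomes: spatial error in mm, temporal error in ms
rng = np.random.default_rng(0)
h = spacetime_entropy(rng.normal(0, 5, 500), rng.normal(0, 30, 500))
```

A perfectly repeatable outcome has zero entropy; trading spatial for temporal variability reshapes the joint histogram even when either marginal entropy alone looks unchanged.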
Scripcaru, Gianina; Mateus, Ceu; Nunes, Carla
2017-05-10
The aim of this study is to identify the distribution by municipality of adverse drug events (ADE) in Portugal, including adverse drug reactions (ADR) and accidental poisoning by drugs (AP), based on clustering of municipality/year ADE rates. We also identify areas with different trends over time. We used a national dataset of public hospital discharges in Continental Portugal from 2004 to 2013. Events were identified based on codes from E930 to E949.9 (ADR) and from E850 to E858.9 (AP). Space-time clustering and spatial-variation-in-temporal-trends methods were applied over three different time frames: globally, by year, and grouped into two 5-year periods. A total of 9,320,076 patients were discharged within this period, with 133,688 patients (1.46%) having at least one ADE, 4% of them related to AP. Critical space-time clusters (p < 0.001) were identified in the municipalities of the Lisbon metropolitan area and the Centro region. The global rate increased at a 7.8% mean annual percentage change, with high space-time heterogeneity and clusters of variation in time trends (p < 0.001). For the whole period, 2004-2013, all clusters presented increasing trends; however, when analyzed by 5-year period, we identified two clusters with decreasing trends in 2004-2008. The impact of ADE is huge, with wide variations within the country and over time, and represents an increasing challenge. Future research using individual and contextual risk factors is urgently needed to understand this spatiotemporal variability in order to promote locally tailored and updated prevention actions.
NASA Astrophysics Data System (ADS)
Ioan, M.-R.
2016-08-01
In experiments involving ionizing radiation, precise knowledge of the parameters involved is a very important task. Some of these experiments involve electromagnetic ionizing radiation such as gamma rays and X-rays; others make use of energetic charged or uncharged small particles such as protons, electrons, and neutrons, and in other cases even larger accelerated particles such as helium or deuterium nuclei are used. In all these cases, the beam used to hit an exposed target must first be collimated and precisely characterized. In this paper, a novel method to determine the distribution of the collimated beam using Matlab code is proposed. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions must be determined, taking high-quality pictures of them, and then digitally processing the resulting images. This method also yields information regarding the doses absorbed in the volume of the exposed samples.
Satellites, space, time and the African trypanosomiases.
Rogers, D J
2000-01-01
The human and animal trypanosomiases of Africa provide unique challenges to epidemiologists because of the spatial and temporal scales over which variation in transmission takes place. This chapter describes how our descriptions of the different components of transmission, from the parasites to the affected hosts, eventually developed to include geographical dimensions. It then briefly mentions two key analytical techniques used in the application of multi-temporal remotely sensed imagery to the interpretation of field data; temporal Fourier analysis for data reduction, and a variety of discriminant analytical techniques to describe the distribution and abundance of vectors and diseases. Satellite data may be used both for biological, process-based models and for statistical descriptions of vector populations and disease transmission. Examples are given of models for the tsetse Glossina morsitans in the Yankari Game Reserve, Nigeria, and in The Gambia. In both sites the satellite derived index of Land Surface Temperature (LST) is the best correlate of monthly mortality rates and is used to drive tsetse population models. The Gambia model is then supplemented with a disease transmission component; the mean infection rates of the vectors and of local cattle are satisfactorily described by the model, as are the seasonal variations of infection in the cattle. High and low spatial resolution satellite data have been used in a number of statistical studies of land cover types and tsetse habitats. In addition multi-temporal data may be related to both the incidence and prevalence of trypanosomiasis. Analysis of past and recent animal and human trypanosomiasis data from south-east Uganda supports the suggestion of the importance of cattle as a reservoir of the human disease in this area; mean infection prevalences in both human and animal hosts rise and fall in a similar fashion over the same range of increasing vegetation index values. Monthly sleeping sickness case data
Distribution of SR protein exonic splicing enhancer motifs in human protein-coding genes.
Wang, Jinhua; Smith, Philip J; Krainer, Adrian R; Zhang, Michael Q
2005-01-01
Exonic splicing enhancers (ESEs) are pre-mRNA cis-acting elements required for splice-site recognition. We previously developed a web-based program called ESEfinder that scores any sequence for the presence of ESE motifs recognized by the human SR proteins SF2/ASF, SRp40, SRp55 and SC35 (http://rulai.cshl.edu/tools/ESE/). Using ESEfinder, we have undertaken a large-scale analysis of ESE motif distribution in human protein-coding genes. Significantly higher frequencies of ESE motifs were observed in constitutive internal protein-coding exons, compared with both their flanking intronic regions and with pseudo exons. Statistical analysis of ESE motif frequency distributions revealed a complex relationship between splice-site strength and increased or decreased frequencies of particular SR protein motifs. Comparison of constitutively and alternatively spliced exons demonstrated slightly weaker splice-site scores, as well as significantly fewer ESE motifs, in the alternatively spliced group. Our results underline the importance of ESE-mediated SR protein function in the process of exon definition, in the context of both constitutive splicing and regulated alternative splicing.
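The ESEfinder-style scoring of a sequence for ESE motifs can be sketched with a sliding-window position weight matrix scan; the 4-mer log-odds matrix and threshold below are hypothetical (the real SF2/ASF, SRp40, SRp55 and SC35 matrices are 6-8 nt and have their own thresholds):

```python
import numpy as np

# Hypothetical 4-mer log-odds matrix (rows: positions, columns: A, C, G, T)
PWM = np.array([[ 1.2, -0.5, -0.8, -1.0],
                [-0.9,  1.0, -0.3, -0.7],
                [-0.6, -0.4,  1.1, -1.2],
                [ 1.0, -1.1, -0.2, -0.9]])
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

def ese_hits(seq, pwm, threshold=2.0):
    """Return (position, score) for every window scoring at or above threshold."""
    w = pwm.shape[0]
    hits = []
    for i in range(len(seq) - w + 1):
        s = sum(pwm[j, BASE[seq[i + j]]] for j in range(w))
        if s >= threshold:
            hits.append((i, s))
    return hits

hits = ese_hits("TTACGATT", PWM)   # the embedded ACGA is the matrix's best motif
```

Counting such hits per exon versus per flanking intron is the kind of frequency comparison the abstract describes.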
Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng
2012-01-01
The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of brutal external environment conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without mass redundancy. However, due to the poor intermediate performance of LTCDS, a serious ‘cliff effect’ may appear during the decoding period, and source data are hard to recover from sink nodes before sufficient encoded packets are collected. In this paper, the influence of the coding degree distribution strategy on the ‘cliff effect’ is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the data in sensor network nodes exhibit a degree distribution that increases along with the predefined degree level, and persistent data packets can be submitted to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve data collection performance and decoding efficiency, while data persistence is not notably affected. PMID:23235451
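The coding degree distributions discussed above are variants of the standard robust soliton distribution used by LT codes; a generic sketch follows (PLTD-ALPHA modifies this idea with priorities, and the parameters c and delta are assumptions):

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution for an LT code over k source packets."""
    R = c * math.log(k / delta) * math.sqrt(k)
    spike = int(round(k / R))                     # location of the distribution spike
    # Ideal soliton component rho(d)
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Correction tau(d) that adds the spike and boosts low degrees
    tau = [0.0] * (k + 1)
    for d in range(1, spike):
        tau[d] = R / (d * k)
    tau[spike] = R * math.log(R / delta) / k
    Z = sum(rho) + sum(tau)
    return [(rho[d] + tau[d]) / Z for d in range(k + 1)]

dist = robust_soliton(100)    # dist[d] = probability an encoded packet has degree d
```

Each encoded packet XORs a number of source packets drawn from this distribution; shifting mass across degrees is exactly the lever the abstract says controls the 'cliff effect'.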
NASA Astrophysics Data System (ADS)
Li, Hanshan; Lei, Zhiyong
2013-01-01
To improve projectile coordinate measurement precision in fire measurement systems, this paper introduces the optical-fiber-coding fire measurement method and its principle, sets up the corresponding measurement model, and analyzes coordinate errors using the differential method. To study the projectile coordinate position distribution, the hypothesis-testing method of mathematical statistics is used to analyze the distribution law, firing dispersion, and the probability of a projectile hitting the object center. The results show that, at the given significance level, the exponential distribution is a reasonable fit for the projectile position distribution. Through experiment and calculation, the optical-fiber-coding fire measurement method is shown to be scientific and feasible, and yields accurate projectile coordinate positions.
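The hypothesis test for an exponential position distribution can be sketched with a Kolmogorov-Smirnov test on simulated impact radii; the data are synthetic, and fitting the scale from the same sample makes this simple version of the test conservative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
radii = rng.exponential(scale=2.0, size=200)   # simulated radial miss distances

scale_hat = radii.mean()                       # MLE of the exponential scale
stat, pval = stats.kstest(radii, "expon", args=(0, scale_hat))
# A large p-value means no evidence against the exponential model
# at the usual significance levels.
```

With real firing data one would test the measured radial distances from the object center in the same way.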
Flexible space-time process for seismic data
NASA Astrophysics Data System (ADS)
Adelfio, G.; Chiodi, M.
2009-04-01
Introduction Point processes are well studied objects in probability theory and a powerful tool in statistics for modeling and analyzing the distribution of real phenomena, such as the seismic one. Point processes can be specified mathematically in several ways, for instance, by considering the joint distributions of the counts of points in arbitrary sets or defining a complete intensity function. The conditional intensity function is a function of the point history and is itself a stochastic process depending on the past up to time t. In this paper some techniques to estimate the intensity function of space-time point processes are developed, by following semi-parametric approaches and diagnostic methods to assess their goodness of fit. In particular, because of its particularly adaptive properties to anomalous behavior in data, in this paper a nonparametric estimation approach is used to interpret dependence features of the seismic activity of a given area of observation; to justify the estimation approach, a diagnostic method for space-time point processes is also revised. Flexible modeling and diagnostics for point processes The definition of effective stochastic models to adequately describe the seismic activity of a fixed area is of great interest in seismology, since a reliable description of earthquake occurrence might suggest useful ideas on the mechanism of such a complex phenomenon. A number of statistical models have been proposed for representing the intensity function of earthquakes. The simpler models assume that earthquakes occur in space and time according to a stationary point process, such that the conditional rate becomes a constant. In seismology, however, the stationarity hypothesis might be acceptable only with respect to time, because epicenters usually display a substantial degree of spatial heterogeneity and clustering. Description of seismic events often requires the definition of more complex models than the stationary Poisson process and the
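A minimal nonparametric (kernel) estimate of a first-order spatial intensity, of the kind underlying such flexible approaches; the Gaussian kernel, the bandwidth, and the synthetic epicenter catalog are assumptions, and edge correction is omitted:

```python
import numpy as np

def kernel_intensity(events, gx, gy, bw=0.5):
    """Gaussian kernel estimate of the first-order intensity on a grid."""
    X, Y = np.meshgrid(gx, gy, indexing="ij")
    lam = np.zeros_like(X)
    for ex, ey in events:
        lam += np.exp(-((X - ex) ** 2 + (Y - ey) ** 2) / (2 * bw ** 2))
    return lam / (2 * np.pi * bw ** 2)          # normalize the 2-D Gaussian kernel

rng = np.random.default_rng(0)
events = rng.uniform(2, 8, size=(50, 2))        # synthetic epicenters in [2, 8]^2
gx = gy = np.arange(0, 10, 0.1)
lam = kernel_intensity(events, gx, gy)

# The estimated intensity integrates (approximately) to the number of events
total = lam.sum() * 0.1 * 0.1
```

Replacing the fixed bandwidth with one that adapts to local point density gives the adaptive behavior the abstract emphasizes.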
Gravitation theory in a fractal space-time
Agop, M.; Gottlieb, I.
2006-05-15
Assimilating physical space-time with a fractal, a general theory is built. For a fractal dimension D=2, the virtual geodesics of this space-time imply a generalized Schroedinger-type equation. Subsequently, a geometric formulation of gravitation theory on a fractal space-time is given. Then, a connection is introduced on a tangent bundle, and the connection coefficients, the Riemann curvature tensor, and the Einstein field equation are calculated. By means of a dilation operator, the equivalence of this model with quantum Einstein gravity follows.
Network Analyses for Space-Time High Frequency Wind Data
NASA Astrophysics Data System (ADS)
Laib, Mohamed; Kanevski, Mikhail
2017-04-01
Recently, network science has made important contributions to the analysis, modelling, and visualization of complex time series, and numerous methods have been proposed for constructing networks. This work studies spatio-temporal wind data using networks based on the Granger causality test. Furthermore, a visual comparison is carried out across several data frequencies and different sizes of the moving window. The main attention is paid to the temporal evolution of connectivity intensity. The Hurst exponent is applied to the resulting time series in order to explore whether there is long connectivity memory. The results reveal the space-time structure of wind data, and the approach can be applied to other environmental data. The dataset used presents a challenging case study: it consists of high-frequency (10-minute) wind data from 120 measuring stations in Switzerland over the period 2012-2013. The distribution of stations covers different geomorphological zones and elevation levels. The results are also compared with the Pearson correlation network.
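The Hurst-exponent check for long connectivity memory mentioned above can be sketched with classical rescaled-range (R/S) analysis. This is a minimal stdlib-only illustration, not the authors' pipeline; the white-noise input is just a stand-in for a connectivity time series (white noise has no long memory, so the estimate should sit near 0.5).

```python
import math
import random

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S)
    analysis: for several window sizes n, average R/S over windows, then
    fit log(R/S) ~ H * log(n) by least squares."""
    n_total = len(series)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n_total // 2:
        rs_vals = []
        for start in range(0, n_total - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            dev = [x - mean for x in chunk]
            # Cumulative deviation from the window mean
            cum, z = 0.0, []
            for d in dev:
                cum += d
                z.append(cum)
            r = max(z) - min(z)                              # range of cumulative sums
            s = math.sqrt(sum(d * d for d in dev) / size)    # window std dev
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            sizes.append(size)
            rs_means.append(sum(rs_vals) / len(rs_vals))
        size *= 2
    # Least-squares slope of log(R/S) against log(n) is the Hurst estimate
    xs = [math.log(n) for n in sizes]
    ys = [math.log(v) for v in rs_means]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

rng = random.Random(1)
white_noise = [rng.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst_rs(white_noise)
print(round(h, 2))  # near 0.5 for uncorrelated noise (small-sample bias pushes it slightly up)
```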
Unraveling the distributed neural code of facial identity through spatiotemporal pattern analysis.
Nestor, Adrian; Plaut, David C; Behrmann, Marlene
2011-06-14
Face individuation is one of the most impressive achievements of our visual system, and yet uncovering the neural mechanisms subserving this feat appears to elude traditional approaches to functional brain data analysis. The present study investigates the neural code of facial identity perception with the aim of ascertaining its distributed nature and informational basis. To this end, we use a sequence of multivariate pattern analyses applied to functional magnetic resonance imaging (fMRI) data. First, we combine information-based brain mapping and dynamic discrimination analysis to locate spatiotemporal patterns that support face classification at the individual level. This analysis reveals a network of fusiform and anterior temporal areas that carry information about facial identity and provides evidence that the fusiform face area responds with distinct patterns of activation to different face identities. Second, we assess the information structure of the network using recursive feature elimination. We find that diagnostic information is distributed evenly among anterior regions of the mapped network and that a right anterior region of the fusiform gyrus plays a central role within the information network mediating face individuation. These findings serve to map out and characterize a cortical system responsible for individuation. More generally, in the context of functionally defined networks, they provide an account of distributed processing grounded in information-based architectures.
Biscalar and Bivector Green's Functions in de Sitter Space Time
Narlikar, J. V.
1970-01-01
Biscalar and bivector Green's functions of wave equations are calculated explicitly in de Sitter space time. The calculation is performed by considering the electromagnetic field generated by the spontaneous creation of an electric charge. PMID:16591816
Judging the Space/Time Case in Parliamentary Debate.
ERIC Educational Resources Information Center
Williams, David E.; And Others
1996-01-01
Discusses criteria for judging space/time cases in parliamentary debate and comments on the controversy with regard to issues of appropriateness and adjudication. Presents four short responses to the points raised in this article. (PA)
Code of Federal Regulations, 2010 CFR
2010-04-01
May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? No. All trust personalty will be distributed in accordance with the American Indian ... (25 CFR 18.104, Indians, Bureau of Indian Affairs)
Code of Federal Regulations, 2011 CFR
2011-04-01
May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? No. All trust personalty will be distributed in accordance with the American Indian ... (25 CFR 18.104, Indians, Bureau of Indian Affairs)
Code of Federal Regulations, 2014 CFR
2014-04-01
May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? No. All trust personalty will be distributed in accordance with the American Indian ... (25 CFR 18.104, Indians, Bureau of Indian Affairs)
Code of Federal Regulations, 2012 CFR
2012-04-01
May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? No. All trust personalty will be distributed in accordance with the American Indian ... (25 CFR 18.104, Indians, Bureau of Indian Affairs)
Code of Federal Regulations, 2013 CFR
2013-04-01
May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? No. All trust personalty will be distributed in accordance with the American Indian ... (25 CFR 18.104, Indians, Bureau of Indian Affairs)
Space-time adaptive hierarchical model reduction for parabolic equations.
Perotto, Simona; Zilio, Alessandro
Surrogate solutions and surrogate models for complex problems in many fields of science and engineering represent an important recent research line towards the best trade-off between modeling reliability and computational efficiency. Among surrogate models, hierarchical model (HiMod) reduction provides an effective approach for phenomena characterized by a dominant direction in their dynamics. The HiMod approach yields 1D models naturally enhanced by the inclusion of the effect of the transverse dynamics: HiMod reduction couples a finite element approximation along the mainstream with a locally tunable modal representation of the transverse dynamics. In particular, we focus on the pointwise HiMod reduction strategy, where the modal tuning is performed at each finite element node. We formalize the pointwise HiMod approach in an unsteady setting by resorting to a model discontinuous in time, continuous and hierarchically reduced in space (c[M([Formula: see text])G(s)]-dG(q) approximation). The selection of the modal distribution and of the space-time discretization is performed automatically via an adaptive procedure based on an a posteriori analysis of the global error. The final outcome of this procedure is a table, named the HiMod lookup diagram, that sets the time partition and, for each time interval, the corresponding 1D finite element mesh together with the associated modal distribution. The results of the numerical verification confirm the robustness of the proposed adaptive procedure in terms of accuracy, sensitivity with respect to the goal quantity and the boundary conditions, and computational saving. Finally, the validation results in the groundwater experimental setting are promising. The extension of HiMod reduction to an unsteady framework represents a crucial step with a view to practical engineering applications. Moreover, the results of the validation phase confirm that the HiMod approximation is a viable approach.
Effect of error distribution in channel coding failure on MPEG wireless transmission
NASA Astrophysics Data System (ADS)
Robert, P. M.; Darwish, Ahmed M.; Reed, Jeffrey H.
1998-12-01
This paper examines the interaction between digital video and channel coding in a wireless communication system. Digital video is a high-bandwidth, computationally intensive application. The recent allocation of large tracts of spectrum by the FCC has made possible the design and implementation of personal wireless digital video devices for several applications, from personal communications to surveillance. A simulation tool was developed to explore the video/channel coding relationship. This tool simulates packet-based digital wireless transmission in various noise and interference environments. The basic communications system models the DAVIC (Digital Audio-Visual Council) layout for the LMDS (Local Multipoint Distribution Service) system and includes several error control algorithms and an MPEG-compliant packetizing algorithm. The Bit-Error-Rate (BER) is a basic metric used in digital communications system design. This work presents simulation results showing that BER is not a sufficient metric to predict video quality from channel parameters. Evidence is presented that the relative positioning of bit errors, regardless of their absolute positioning, and the relative occurrence of bit error bursts are the main factors that must be observed in a physical layer when designing a digital video wireless system.
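The claim that BER alone cannot predict video quality is easy to make concrete: two channels can produce the same number of bit errors with very different burst structure. The sketch below is a toy illustration, not the paper's simulator; the counts (1000 errors, 20-bit bursts) are arbitrary choices that contrast a memoryless channel with a fading channel at identical BER.

```python
import random

def burst_lengths(error_flags):
    """Lengths of consecutive runs of bit errors in a 0/1 error pattern."""
    runs, current = [], 0
    for e in error_flags:
        if e:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

n_bits = 100_000
rng = random.Random(42)

# Pattern A: 1000 isolated random bit errors (memoryless channel)
scattered = [0] * n_bits
for pos in rng.sample(range(n_bits), 1000):
    scattered[pos] = 1

# Pattern B: the same 1000 errors packed into 50 bursts of 20 bits (fading channel)
bursty = [0] * n_bits
for start in range(0, 50 * 2000, 2000):
    for k in range(20):
        bursty[start + k] = 1

# Identical BER, but very different burst statistics, hence different
# decoded-video quality once channel coding and packetization are applied
print(sum(scattered), sum(bursty))
print(max(burst_lengths(scattered)), max(burst_lengths(bursty)))
```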
Quadratic bulk viscosity and the topology of space time.
NASA Astrophysics Data System (ADS)
Wolf, C.
1997-12-01
By considering a homogeneous isotropic universe admitting quadratic bulk viscosity, the author shows that if the bulk viscosity coefficient is large, the effective topology of space-time attains a counterintuitive interpretation, in the sense that a positive-curvature space-time is ever-expanding. This is true for all cosmologies studied except in the case of small quadratic bulk viscosity (3γ+1-kβ ≥ 0, 3γ+1 > 0).
Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu
2014-01-01
We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550
The distribution and mutagenesis of short coding INDELs from 1,128 whole exomes.
Challis, Danny; Antunes, Lilian; Garrison, Erik; Banks, Eric; Evani, Uday S; Muzny, Donna; Poplin, Ryan; Gibbs, Richard A; Marth, Gabor; Yu, Fuli
2015-02-28
Identifying insertion/deletion polymorphisms (INDELs) with high confidence has been intrinsically challenging in short-read sequencing data. Here we report our approach for improving INDEL calling accuracy by using a machine learning algorithm to combine call sets generated with three independent methods, and by leveraging the strengths of each individual pipeline. Utilizing this approach, we generated a consensus exome INDEL call set from a large dataset generated by the 1000 Genomes Project (1000G), maximizing both the sensitivity and the specificity of the calls. This consensus exome INDEL call set features 7,210 INDELs, from 1,128 individuals across 13 populations included in the 1000 Genomes Phase 1 dataset, with a false discovery rate (FDR) of about 7.0%. In our study we further characterize the patterns and distributions of these exonic INDELs with respect to density, allele length, and site frequency spectrum, as well as the potential mutagenic mechanisms of coding INDELs in humans.
Photoplus: auxiliary information for printed images based on distributed source coding
NASA Astrophysics Data System (ADS)
Samadani, Ramin; Mukherjee, Debargha
2008-01-01
A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.
Röske, Kerstin; Foecking, Mark F; Yooseph, Shibu; Glass, John I; Calcutt, Michael J; Wise, Kim S
2010-07-13
-genomic shuffling. We describe novel features of PARCELs (Palindromic Amphipathic Repeat Coding ELements), a set of widely distributed repeat protein domains and coding sequences that were likely acquired through HGT by diverse unicellular microbes, further mobilized and diversified within genomes, and co-opted for expression in the membrane proteome of some taxa. Disseminated by multiple gene-centric vehicles, ORFs harboring these elements enhance accessory gene pools as part of the "mobilome" connecting genomes of various clades, in taxa sharing common niches.
Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code
NASA Astrophysics Data System (ADS)
Faghihi, F.; Mehdizadeh, S.; Hadad, K.
The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. The neutron fluence rate distribution versus energy is also calculated using the MCNP-4B code based on the ENDF/B-V library. The combined theoretical simulation and experimental measurements serve to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, measured by the NAA method and computed with the MCNP code, are compared.
NASA Astrophysics Data System (ADS)
Lai, Hong; Luo, Ming-Xing; Zhan, Cheng; Pieprzyk, Josef; Orgun, Mehmet A.
2017-09-01
We propose an improved coding method for quantum key distribution (QKD) protocols, based on a recently proposed protocol using Fibonacci-valued OAM entangled states. Specifically, we define a new class of Fibonacci-matrix coding and Fibonacci-matrix representation and show how they can be used to extend and improve the original protocols. Compared with the original protocols, our protocol not only greatly improves the encoding efficiency but also provides verifiability.
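The abstract does not spell out the Fibonacci-matrix construction, but such schemes build on the standard Fibonacci Q-matrix identity Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]] with Q = [[1, 1], [1, 0]]. Below is a minimal sketch of that identity via fast matrix exponentiation; it illustrates the underlying machinery only, not the protocol itself.

```python
def mat_mult(a, b):
    """2x2 integer matrix product."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def fib_matrix(n):
    """n-th power of the Fibonacci Q-matrix [[1, 1], [1, 0]] by binary
    exponentiation; the result is [[F(n+1), F(n)], [F(n), F(n-1)]]."""
    result = [[1, 0], [0, 1]]  # identity
    base = [[1, 1], [1, 0]]
    while n:
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    return result

m = fib_matrix(10)
print(m)        # [[89, 55], [55, 34]]
print(m[0][1])  # F(10) = 55
```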
Ricci collineation vectors in fluid space-times
Tsamparlis, M.; Mason, D.P.
1990-07-01
The properties of fluid space-times that admit a Ricci collineation vector (RCV) parallel to the fluid unit four-velocity vector u^a are briefly reviewed. These properties are expressed in terms of the kinematic quantities of the timelike congruence generated by u^a. The cubic equation derived by Oliver and Davis (Ann. Inst. Henri Poincaré 30, 339 (1979)) for the equation of state p = p(μ) of a perfect fluid space-time that admits an RCV, which does not degenerate to a Killing vector, is solved for physically realistic fluids. Necessary and sufficient conditions for a fluid space-time to admit a spacelike RCV parallel to a unit vector n^a orthogonal to u^a are derived in terms of the expansion, shear, and rotation of the spacelike congruence generated by n^a. Perfect fluid space-times are studied in detail and analogues of the results for timelike RCVs parallel to u^a are obtained. Properties of imperfect fluid space-times for which the energy flux vector q^a vanishes and n^a is a spacelike eigenvector of the anisotropic stress tensor π_ab are derived. Fluid space-times with anisotropic pressure are discussed as a special case of imperfect fluid space-times for which n^a is an eigenvector of π_ab.
A Copula Based Space-Time Rainfall Simulation Model
NASA Astrophysics Data System (ADS)
Aghakouchak, A.; Bárdossy, A.; Habib, E.
2008-05-01
Stochastically generated rainfall data are used as input to hydrological and meteorological models to assess model uncertainties and climate variability in water resources systems. Currently, there are well-defined methods to generate time series of rainfall data for a single point. However, hydrological and meteorological modeling over large scales requires high-resolution rainfall data to capture the temporal and spatial variability of rainfall, which is proven to affect the quality of hydrological predictions (Osborn and Reynolds, 1963; Osborn and Keppel, 1966; Rodda, 1967; Dawdy and Bergman, 1969; Seliga et al., 1992; Corradini and Singh, 1985; Obled et al., 1994; Troutman, 1983; Hamlin, 1983; Faures et al., 1995; Shah et al., 1996; Goodrich et al., 1995). In this paper a copula-based space-time rainfall simulation model is introduced for the simulation of two-dimensional rainfall fields based on observed radar data. In contrast with most rainfall simulation techniques, which describe the spatial dependence structure of rainfall fields with a covariance function or a variogram, we introduce spatial dependence without the influence of the marginal distribution by using copulas. Radar data from the state of Baden-Württemberg in Germany, with a temporal resolution of 5 min and a spatial resolution of 1 km², are used in this study. A Gaussian copula and a number of non-Gaussian copulas are used to describe the dependence structure of the radar rainfall data. For each radar image, realizations of radar rainfall patterns are simulated. The simulation technique used in this work preserves the spatial dependence structure as well as the temporal variability of the simulated fields, similar to the observed radar data. Each simulated realization is then used as input to a hydrological model, resulting in an ensemble of predicted runoff hydrographs. The main conclusions are: (a) copula techniques can be used to describe the spatial dependence structure of rainfall fields instead of a simple covariance
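The core copula idea above, separating the dependence structure from the marginal distribution, can be sketched in a few lines. The example below is a hypothetical bivariate stand-in for two neighbouring radar pixels, not the paper's model: a Gaussian copula with correlation `rho = 0.8` is paired with unit-mean exponential marginals chosen purely for illustration.

```python
import math
import random

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_pairs(n, rho, rng):
    """Sample n dependent pairs: correlate two standard normals with
    coefficient rho, map each through the normal CDF to uniforms (the
    Gaussian copula), then apply a mean-1 exponential marginal via its
    inverse CDF. Dependence comes only from the copula; the marginal
    distribution is a free, independent choice."""
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)
        # Inverse exponential CDF turns the uniforms into "rainfall depths"
        pairs.append((-math.log(1.0 - u1), -math.log(1.0 - u2)))
    return pairs

rng = random.Random(7)
sample = gaussian_copula_pairs(20000, rho=0.8, rng=rng)
mean_x = sum(a for a, _ in sample) / len(sample)  # close to the marginal mean of 1
print(round(mean_x, 2))
```

Swapping in a different copula family changes the joint behaviour (for example, tail dependence) while leaving the marginals untouched, which is exactly the flexibility the abstract describes.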
Space-Time Correlations and Dynamic Coupling in Turbulent Flows
NASA Astrophysics Data System (ADS)
He, Guowei; Jin, Guodong; Yang, Yue
2017-01-01
Space-time correlation is a staple method for investigating the dynamic coupling of spatial and temporal scales of motion in turbulent flows. In this article, we review the space-time correlation models in both the Eulerian and Lagrangian frames of reference, which include the random sweeping and local straining models for isotropic and homogeneous turbulence, Taylor's frozen-flow model and the elliptic approximation model for turbulent shear flows, and the linear-wave propagation model and swept-wave model for compressible turbulence. We then focus on how space-time correlations are used to develop time-accurate turbulence models for the large-eddy simulation of turbulence-generated noise and particle-laden turbulence. We briefly discuss their applications to two-point closures for Kolmogorov's universal scaling of energy spectra and to the reconstruction of space-time energy spectra from a subset of spatial and temporal signals in experimental measurements. Finally, we summarize the current understanding of space-time correlations and conclude with future issues for the field.
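Taylor's frozen-flow model mentioned above asserts that a turbulent pattern advected at mean speed U satisfies u(x, t) = u(x − Uτ, t − τ), so the space-time correlation peaks along the line r = Uτ. A minimal synthetic check of that property (the advected signal `f` and speed `U = 2` are invented for the example, not data from the review):

```python
import math

def correlation(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# A frozen pattern advected at constant speed U: u(x, t) = f(x - U * t)
U = 2.0
f = lambda s: math.sin(s) + 0.5 * math.sin(3.1 * s + 1.0)
xs = [0.05 * i for i in range(400)]
tau = 0.5

base = [f(x) for x in xs]          # snapshot at t = 0
def shifted(r):                    # snapshot at t = tau, displaced in space by r
    return [f(x + r - U * tau) for x in xs]

# The space-time correlation is maximal at the separation r = U * tau
best = max((correlation(base, shifted(r)), r)
           for r in (0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5))
print(best[1])  # 1.0, i.e. r = U * tau
```

For a genuinely turbulent field the peak correlation decays with τ as eddies evolve, which is what the elliptic-approximation and sweeping models in the review capture.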
Symmetries of asymptotically flat electrovacuum space-times and radiation
NASA Astrophysics Data System (ADS)
Bičák, J.; Pravdová, A.
1998-11-01
Symmetries compatible with asymptotic flatness and admitting gravitational and electromagnetic radiation are studied by using the Bondi-Sachs-van der Burg formalism. It is shown that in axially symmetric electrovacuum space-times in which at least locally a smooth null infinity in the sense of Penrose exists, the only second allowable symmetry is either the translational symmetry or the boost symmetry. Translationally invariant space-times with, in general, a straight "cosmic string" along the axis of symmetry are nonradiative although they can have a nonvanishing news function. The boost-rotation symmetric space-times are radiative. They describe "uniformly accelerated charged particles" or black holes which in general may also be rotating—the axial and an additional Killing vector are not assumed to be hypersurface orthogonal. The general functional forms of both gravitational and electromagnetic news functions, and of the mass aspect and total mass of asymptotically flat boost-rotation symmetric space-times at null infinity are obtained. The expressions for the mass are new even in the case of vacuum boost-rotation symmetric space-times with hypersurface orthogonal Killing vectors. In the Appendices some errors appearing in previous works are corrected.
Exploratory Space-Time Analyses of Rift Valley Fever in South Africa in 2008–2011
Métras, Raphaëlle; Porphyre, Thibaud; Pfeiffer, Dirk U.; Kemp, Alan; Thompson, Peter N.
2012-01-01
Background Rift Valley fever (RVF) is a zoonotic arbovirosis for which the primary hosts are domestic livestock (cattle, sheep and goats). RVF was first described in South Africa in 1950–1951. Mechanisms for short and long distance transmission have been hypothesised, but there is little supporting evidence. Here we describe RVF occurrence and spatial distribution in South Africa in 2008–11, and investigate the presence of a contagious process in order to generate hypotheses on the different mechanisms of transmission. Methodology/Principal Findings A total of 658 cases were extracted from World Animal Health Information Database. Descriptive statistics, epidemic curves and maps were produced. The space-time K-function was used to test for evidence of space-time interaction. Five RVF outbreak waves (one in 2008, two in 2009, one in 2010 and one in 2011) of varying duration, location and size were reported. About 70% of cases (n = 471) occurred in 2010, when the epidemic was almost country-wide. No strong evidence of space-time interaction was found for 2008 or the second wave in 2009. In the first wave of 2009, a significant space-time interaction was detected for up to one month and over 40 km. In 2010 and 2011 a significant intense, short and localised space-time interaction (up to 3 days and 15 km) was detected, followed by one of lower intensity (up to 2 weeks and 35 to 90 km). Conclusions/Significance The description of the spatiotemporal patterns of RVF in South Africa between 2008 and 2011 supports the hypothesis that during an epidemic, disease spread may be supported by factors other than active vector dispersal. Limitations of under-reporting and space-time K-function properties are discussed. Further spatial analyses and data are required to explain factors and mechanisms driving RVF spread. PMID:22953020
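The space-time K-function test used above has a simple cousin, the Knox test, which likewise asks whether case pairs that are close in space are unusually close in time. The sketch below is a generic Monte Carlo Knox test on synthetic data, not the authors' analysis; the cluster sizes and the distance and time cutoffs are arbitrary choices for the illustration.

```python
import random

def knox_statistic(events, d_space, d_time):
    """Count event pairs close in both space and time.
    events: list of (x, y, t); a pair is 'close' if its spatial distance
    is below d_space and its time separation is below d_time."""
    n, count = len(events), 0
    for i in range(n):
        xi, yi, ti = events[i]
        for j in range(i + 1, n):
            xj, yj, tj = events[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 < d_space ** 2 and abs(ti - tj) < d_time:
                count += 1
    return count

def knox_test(events, d_space, d_time, n_perm=199, rng=None):
    """Monte Carlo Knox test: permute event times over locations to break
    any space-time link, then report the fraction of permutations with at
    least as many close pairs as observed (a one-sided p-value)."""
    rng = rng or random.Random(0)
    observed = knox_statistic(events, d_space, d_time)
    times = [t for _, _, t in events]
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(times)
        permuted = [(x, y, t) for (x, y, _), t in zip(events, times)]
        if knox_statistic(permuted, d_space, d_time) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

# A clustered toy outbreak: cases near (0, 0) around t = 0, plus background
rng = random.Random(3)
cluster = [(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(30)]
background = [(rng.uniform(-20, 20), rng.uniform(-20, 20), rng.uniform(-50, 50))
              for _ in range(70)]
obs, p = knox_test(cluster + background, d_space=3.0, d_time=3.0)
print(obs, p)  # a small p-value indicates space-time interaction
```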
FLRW cosmology in Weyl-integrable space-time
Gannouji, Radouane; Nandan, Hemwati; Dadhich, Naresh
2011-11-01
We investigate the Weyl space-time extension of general relativity (GR) for studying the FLRW cosmology through focusing and defocusing of the geodesic congruences. We have derived the equations of evolution for expansion, shear and rotation in the Weyl space-time. In particular, we consider the Starobinsky modification, f(R) = R + βR² − 2Λ, of gravity in the Einstein-Palatini formalism, which turns out to reduce to the Weyl integrable space-time (WIST) with the Weyl vector being a gradient. The modified Raychaudhuri equation takes the form of a Hill-type equation, which is then analysed to study the formation of caustics. In this model it is possible to have a Big Bang singularity free cyclic Universe, but unfortunately the periodicity turns out to be extremely short.
Quantum field theory on a cosmological, quantum space-time
Ashtekar, Abhay; Kaminski, Wojciech; Lewandowski, Jerzy
2009-03-15
In loop quantum cosmology, Friedmann-LeMaitre-Robertson-Walker space-times arise as well-defined approximations to specific quantum geometries. We initiate the development of a quantum theory of test scalar fields on these quantum geometries. Emphasis is on the new conceptual ingredients required in the transition from classical space-time backgrounds to quantum space-times. These include a "relational time" à la Leibniz, the emergence of the Hamiltonian operator of the test field from the quantum constraint equation, and ramifications of the quantum fluctuations of the background geometry on the resulting dynamics. The familiar quantum field theory on classical Friedmann-LeMaitre-Robertson-Walker models arises as a well-defined reduction of this more fundamental theory.
A potential foundation for emergent space-time
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.; Bahreyni, Newshaw
2014-11-01
We present a novel derivation of both the Minkowski metric and Lorentz transformations from the consistent quantification of a causally ordered set of events with respect to an embedded observer. Unlike past derivations, which have relied on assumptions such as the existence of a 4-dimensional manifold, symmetries of space-time, or the constant speed of light, we demonstrate that these now familiar mathematics can be derived as the unique means to consistently quantify a network of events. This suggests that space-time need not be physical, but instead the mathematics of space and time emerges as the unique way in which an observer can consistently quantify events and their relationships to one another. The result is a potential foundation for emergent space-time.
The mortality rates and the space-time patterns of John Snow's cholera epidemic map.
Shiode, Narushige; Shiode, Shino; Rod-Thatcher, Elodie; Rana, Sanjay; Vinten-Johansen, Peter
2015-06-17
Snow's work on the Broad Street map is widely known as a pioneering example of spatial epidemiology. It lacks, however, two significant attributes required in contemporary analyses of disease incidence: the population at risk and the progression of the epidemic over time. Although this has been repeatedly noted in the literature, no systematic investigation of these two aspects was previously carried out. Using a series of historical documents, this study constructs its own data to revisit Snow's study, examining the mortality rate at each street location and the space-time pattern of the cholera outbreak. The study brings together records from a series of historical documents and prepares data on the estimated number of residents at each house location as well as space-time data on the victims; these are processed in GIS to facilitate the spatial-temporal analysis. Mortality rates and the space-time pattern in the victims' records are explored using Kernel Density Estimation and a network-based scan statistic, a recently developed method that detects significant concentrations of records, such as the dates and places of victims, with respect to their distance from others along the street network. The results are visualised in map form using a GIS platform. Data on mortality rates and the space-time distribution of the victims were collected from various sources and successfully merged and digitised, allowing the production of new map outputs and a new interpretation of the 1854 cholera outbreak in London, covering more cases than Snow's original report and adding new insights into their space-time distribution. They confirmed that areas in the immediate vicinity of the Broad Street pump indeed suffered from excessively high mortality rates, which had been suspected for the past 160 years but remained unconfirmed. No distinctive pattern was found in the space-time distribution of victims' locations. The high mortality rates identified around the
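Kernel Density Estimation, one of the two methods used above, is easy to illustrate in one dimension with a Gaussian kernel: the density surface is just a sum of bumps centred on the observations. The victim coordinates below are invented toy values (distances along a hypothetical street, clustered near a pump at x = 0), not Snow's data.

```python
import math

def gaussian_kde_1d(points, bandwidth):
    """Return a kernel density estimator over 'points' with a Gaussian
    kernel: f(x) = (1 / (n * h * sqrt(2*pi))) * sum_i exp(-((x - p_i)/h)^2 / 2)."""
    norm = 1.0 / (len(points) * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points)
    return density

# Toy victim locations clustered around a pump at x = 0, with two outliers
victims = [-0.4, -0.2, -0.1, 0.0, 0.05, 0.1, 0.3, 1.8, 2.2]
f = gaussian_kde_1d(victims, bandwidth=0.3)
print(round(f(0.0), 3), round(f(2.0), 3))  # the density peaks near the pump
```

The network-based scan statistic used in the study differs in that it measures distances along the street network rather than straight-line distances, but the density-of-cases intuition is the same.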
Space-Time Diffeomorphisms in Noncommutative Gauge Theories
NASA Astrophysics Data System (ADS)
Rosenbaum, Marcos; Vergara, J. David; Juarez, L. Román
2008-07-01
In previous work [Rosenbaum M. et al., J. Phys. A: Math. Theor. 40 (2007), 10367-10382] we have shown how, for canonical parametrized field theories, where space-time is placed on the same footing as the other fields in the theory, the representation of space-time diffeomorphisms provides a very convenient scheme for analyzing the induced twisted deformation of these diffeomorphisms, as a result of the space-time noncommutativity. However, for gauge field theories (and of course also for canonical geometrodynamics), where the Poisson brackets of the constraints explicitly depend on the embedding variables, this Poisson algebra cannot be connected directly with a representation of the complete Lie algebra of space-time diffeomorphisms, because not all the field variables turn out to have a dynamical character [Isham C.J., Kuchar K.V., Ann. Physics 164 (1985), 288-315, 316-333]. Nonetheless, such a homomorphic mapping can be recuperated by first modifying the original action and then adding additional constraints to the formalism in order to retrieve the original theory, as shown by Kuchar and Stone for the case of the parametrized Maxwell field in [Kuchar K.V., Stone S.L., Classical Quantum Gravity 4 (1987), 319-328]. Making use of a combination of all of these ideas, we are therefore able to apply our canonical reparametrization approach in order to derive the deformed Lie algebra of the noncommutative space-time diffeomorphisms, as well as to consider how gauge transformations act on the twisted algebras of gauge and particle fields, hopefully adding clarification to some outstanding issues in the literature concerning the symmetries of gauge theories in noncommutative space-times.
Quantum Detectors in Generic Non Flat FLRW Space-Times
NASA Astrophysics Data System (ADS)
Rabochaya, Yevgeniya; Zerbini, Sergio
2016-05-01
We discuss a quantum field theoretical approach in which a quantum probe is used to investigate the properties of generic non-flat FLRW space-times. The probe is identified with a conformally coupled massless scalar field defined on a space-time with a horizon, and the local properties are investigated by means of an Unruh-DeWitt detector and the evaluation of the regularized quantum fluctuations. In the case of de Sitter space, the coordinate independence of our results is checked, and the Gibbons-Hawking temperature is recovered. A possible generalization to an electromagnetic probe is also briefly indicated.
Probing dense granular materials by space-time dependent perturbations.
Kondic, L; Dybenko, O M; Behringer, R P
2009-04-01
The manner in which signals propagate through dense granular systems in both space and time is not well understood. In order to probe this process, we carry out discrete element simulations of the system response to excitations where we control the driving frequency and wavelength independently. Fourier analysis shows that properties of the signal depend strongly on the space-time scales of the perturbation. The features of the response provide a test bed for models that predict statistical and continuum space-time properties. We illustrate this connection between microscale physics and macroscale behavior by comparing the system response to a simple elastic model with damping.
Modal and temporal logics for abstract space-time structures
NASA Astrophysics Data System (ADS)
Uckelman, Sara L.; Uckelman, Joel
In the fourth century BC, the Greek philosopher Diodoros Chronos gave a temporal definition of necessity. Because it connects modality and temporality, this definition is of interest to philosophers working within branching time or branching space-time models. This definition of necessity can be formalized and treated within a logical framework. We give a survey of the several known modal and temporal logics of abstract space-time structures based on the real numbers and the integers, considering three different accessibility relations between spatio-temporal points.
Very Special Relativity and Noncommutative Space-Time
NASA Astrophysics Data System (ADS)
Sheikh-Jabbari, M. M.; Tureanu, A.
The Very Special Relativity (VSR) introduced by Cohen and Glashow [16] has a robust mathematical realization on noncommutative space-time, in particular on the noncommutative Moyal plane with light-like noncommutativity [35]. The realization is essentially connected to the twisted Poincaré algebra and its role as the symmetry of noncommutative space-time and the corresponding quantum field theories [11, 12]. In our setting, the VSR invariant theories are specified by a single deformation parameter, the noncommutativity scale Λ_{NC}. Preliminary analysis with the available data leads to Λ_{NC} ≥ 1-10 TeV.
Constructing infrared finite propagators in inflating space-time
Rajaraman, Arvind; Kumar, Jason; Leblond, Louis
2010-07-15
The usual (Bunch-Davies) Feynman propagator of a massless field is not well defined in an expanding universe due to the presence of infrared divergences. We propose a new propagator which yields IR finite answers to any correlation function. The key point is that in a de Sitter space-time there is an ambiguity in the zero mode of the propagator. This ambiguity can be used to cancel the apparent divergences which arise in some loop calculations in eternally (or semieternally) inflating space-time. We refer to this process as zero-mode modification. The residual ambiguity is fixed by observational measurement.
On distributed memory MPI-based parallelization of SPH codes in massive HPC context
NASA Astrophysics Data System (ADS)
Oger, G.; Le Touzé, D.; Guibert, D.; de Leffe, M.; Biddiscombe, J.; Soumagne, J.; Piccinali, J.-G.
2016-03-01
Most particle methods share the problem of high computational cost, and in order to satisfy the demands of solvers, currently available hardware technologies must be fully exploited. Two complementary technologies are now accessible. On the one hand, CPUs, which can be structured into a multi-node framework, allowing massive data exchanges through a high-speed network; in this case, each node usually comprises several cores available to perform multithreaded computations. On the other hand, GPUs, which are derived from graphics computing technologies and are able to perform highly multi-threaded calculations with hundreds of independent threads connected together through a common shared memory. This paper is primarily dedicated to the distributed-memory parallelization of particle methods, targeting several thousands of CPU cores. The experience gained clearly shows that parallelizing a particle-based code on a moderate number of cores can easily lead to acceptable scalability, whilst a scalable speedup on thousands of cores is much more difficult to obtain. The discussion revolves around speeding up particle methods as a whole, in a massive HPC context, by making use of the MPI library. We focus on one particular particle method, Smoothed Particle Hydrodynamics (SPH), one of the most widespread today in the literature as well as in engineering.
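The distributed-memory strategy described above can be illustrated schematically. The sketch below is a serial stand-in for an MPI slab decomposition (no actual message passing): particles are binned into per-rank spatial subdomains, and each rank's halo is the set of foreign particles within one smoothing length of its slab edges, i.e. the data that would travel over the network in a real SPH code. All names and numbers are illustrative, not taken from the paper.

```python
def decompose_1d(positions, n_ranks, domain_length, h):
    """Assign particles to rank-owned slabs and build halo (ghost) lists.

    positions:     particle x-coordinates in [0, domain_length)
    n_ranks:       number of MPI ranks being emulated
    h:             smoothing length; halos extend one h past each slab edge
    Returns (owned, halos): per-rank lists of particle indices.
    """
    slab = domain_length / n_ranks
    owned = [[] for _ in range(n_ranks)]
    halos = [[] for _ in range(n_ranks)]
    for i, x in enumerate(positions):
        r = min(int(x / slab), n_ranks - 1)   # clamp the right boundary
        owned[r].append(i)
    for r in range(n_ranks):
        lo, hi = r * slab, (r + 1) * slab
        for i, x in enumerate(positions):
            if i not in owned[r] and (lo - h) <= x < (hi + h):
                halos[r].append(i)            # would arrive via MPI messages
    return owned, halos

owned, halos = decompose_1d([0.1, 0.9, 1.1, 1.9, 2.5],
                            n_ranks=3, domain_length=3.0, h=0.2)
```

Because SPH interactions have compact support of radius ~h, only these thin halo layers need to be exchanged each step, which is what makes the method amenable to distributed memory in the first place.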
Real-time distributed video coding for 1K-pixel visual sensor networks
NASA Astrophysics Data System (ADS)
Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian
2016-07-01
Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
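The kind of performance model described can be sketched as an Amdahl-style estimate with an explicit per-processor overhead term standing in for the architecture-specific data-locality costs the abstract mentions. The functional form and the numbers below are illustrative assumptions, not the paper's actual model.

```python
def predicted_speedup(serial_fraction, p, overhead_per_proc=0.0):
    """Estimated speedup on p processors for a code whose serial_fraction
    cannot be parallelized, with an overhead term that grows with p
    (e.g. remote-memory and synchronization costs on a ccNUMA system).
    Times are normalized so the sequential run takes 1.0."""
    t_parallel = serial_fraction + (1 - serial_fraction) / p + overhead_per_proc * p
    return 1.0 / t_parallel

# With zero overhead the model reduces to Amdahl's law:
s = predicted_speedup(0.05, 16)                          # about 9.1x on 16 CPUs
# A small per-processor overhead erodes scalability:
s_ovh = predicted_speedup(0.05, 16, overhead_per_proc=0.005)
```

The gap between `s` and `s_ovh` is the effect the paper attributes to data-locality overhead: the directive-parallelized code only approaches the model's prediction when that term is minimized.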
The spatial distribution of fixed mutations within genes coding for proteins
NASA Technical Reports Server (NTRS)
Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.
1983-01-01
An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was a minuscule 10^-44. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
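The model comparison described above can be made concrete with a small likelihood calculation. Below, hypothetical per-site substitution counts (invented for illustration, deliberately overdispersed) are scored under a Poisson model and under a geometric model, the limiting form of the negative binomial named in the abstract, each fitted by its mean.

```python
import math

def poisson_loglik(counts):
    lam = sum(counts) / len(counts)                    # MLE of the Poisson mean
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

def geometric_loglik(counts):
    # Geometric on {0, 1, 2, ...}: P(k) = (1 - q) * q**k, with mean q / (1 - q)
    m = sum(counts) / len(counts)
    q = m / (1 + m)                                    # MLE of q from the mean
    return sum(math.log(1 - q) + k * math.log(q) for k in counts)

# Hypothetical substitution counts: many invariant sites plus a few hot spots,
# so the variance greatly exceeds the mean (the pattern the paper reports)
counts = [0, 0, 0, 0, 1, 0, 0, 2, 0, 9, 0, 0, 7, 0, 1]
```

For data like these the geometric log-likelihood exceeds the Poisson one, mirroring the paper's finding that equal-rate (Poisson) substitution fails while the geometric form fits.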
Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.
Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene
2016-11-01
Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics, including the rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns; however, heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation achieves significant speedup compared to its sequential counterpart. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of the uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
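The space-time kernel density at a query point can be sketched as a product of a spatial and a temporal kernel summed over the reported cases. The Epanechnikov-style kernels, bandwidths, and case records below are illustrative assumptions, not the study's exact implementation.

```python
def stkde(x, y, t, cases, hs, ht):
    """Space-time kernel density at location (x, y) and time t.

    cases:  list of (xi, yi, ti) reported events
    hs, ht: spatial and temporal bandwidths
    Uses Epanechnikov-type kernels, which vanish outside the bandwidth;
    that compact support is what a space-time domain decomposition exploits."""
    total = 0.0
    for xi, yi, ti in cases:
        ds = ((x - xi) ** 2 + (y - yi) ** 2) / hs ** 2
        dt = ((t - ti) / ht) ** 2
        if ds < 1.0 and dt < 1.0:
            total += (1 - ds) * (1 - dt)
    return total / (len(cases) * hs ** 2 * ht)

# Hypothetical dengue cases as (x km, y km, day):
cases = [(0.0, 0.0, 0.0), (0.1, 0.1, 1.0), (5.0, 5.0, 30.0)]
near = stkde(0.0, 0.0, 0.5, cases, hs=1.0, ht=7.0)
far = stkde(9.0, 9.0, 90.0, cases, hs=1.0, ht=7.0)
```

Because each case only influences a bounded space-time neighbourhood, the density grid can be cut into subdomains (padded by one bandwidth) and evaluated on separate processors, which is the essence of the parallel approach described.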
Event-by-Event Study of Space-Time Dynamics in Flux-Tube Fragmentation
Wong, Cheuk-Yin
2017-05-25
In the semi-classical description of the flux-tube fragmentation process for hadron production and hadronization in high-energy $e^+e^-$ annihilations and $pp$ collisions, the rapidity-space-time ordering and the local conservation laws of charge, flavor, and momentum provide a set of powerful tools that may allow the reconstruction of the space-time dynamics of quarks and mesons in exclusive measurements of produced hadrons, on an event-by-event basis. We propose procedures to reconstruct the space-time dynamics from event-by-event exclusive hadron data to exhibit explicitly the ordered chain of hadrons produced in flux-tube fragmentation. As a supplementary tool, we infer the average space-time coordinates of the $q$-$\bar q$ pair production vertices from the $\pi^-$ rapidity distribution data obtained by the NA61/SHINE Collaboration in $pp$ collisions at $\sqrt{s}$ = 6.3 to 17.3 GeV.
Inaccuracy, Uncertainty and the Space-Time Permutation Scan Statistic
Malizia, Nicholas
2013-01-01
The space-time permutation scan statistic (STPSS) is designed to identify hot (and cool) spots of space-time interaction within patterns of spatio-temporal events. While the method has been adopted widely in practice, there has been little consideration of the effect inaccurate and/or incomplete input data may have on its results. Given the pervasiveness of inaccuracy, uncertainty and incompleteness within spatio-temporal datasets and the popularity of the method, this issue warrants further investigation. Here, a series of simulation experiments using both synthetic and real-world data are carried out to better understand how deficiencies in the spatial and temporal accuracy as well as the completeness of the input data may affect results of the STPSS. The findings, while specific to the parameters employed here, reveal a surprising robustness of the method's results in the face of these deficiencies. As expected, the experiments illustrate that greater degradation of input data quality leads to greater variability in the results. Additionally, they show that weaker signals of space-time interaction are those most affected by the introduced deficiencies. However, in stark contrast to previous investigations into the impact of these input data problems on global tests of space-time interaction, this local metric is revealed to be only minimally affected by the degree of inaccuracy and incompleteness introduced in these experiments. PMID:23408930
The movement speed-accuracy relation in space-time.
Hsieh, Tsung-Yu; Liu, Yeou-Teh; Mayer-Kress, Gottfried; Newell, Karl M
2013-02-01
Two experiments investigated a new approach to decomposing the contributions of spatial and temporal constraints to an integrated single space-time performance score in the movement speed-accuracy relation of a line drawing task. The mean and variability of the space-time performance error score were lowest when the task space and time constraint contributions to the performance score were comparable (i.e., middle range of velocities). As the contribution of either space or time to the performance score became increasingly asymmetrical at lower and higher average velocities, the mean performance error score and its variability increased with a greater trade-off between spatial and temporal movement properties. The findings revealed a new U-shaped space-time speed-accuracy function for performance outcome in tasks that have both spatial and temporal demands. The traditional speed-accuracy functions for spatial error and temporal error considered independently map to this integrated space-time movement speed-accuracy function. Copyright © 2013 Elsevier B.V. All rights reserved.
Strong interaction model in DFR noncommutative space-time
NASA Astrophysics Data System (ADS)
Abreu, Everton M. C.; Neves, Mario Junior
2017-06-01
The Doplicher-Fredenhagen-Roberts (DFR) framework for noncommutative (NC) space-times is considered as an alternative approach to describe the physics of quantum gravity, for instance. In this formalism, the NC parameter, i.e. θ^{μν}, is promoted to a coordinate of a new extended space-time, so that we have a field theory in a space-time with extra spatial dimensions. This new coordinate has an associated canonical momentum, and the effects of new physics can emerge in the propagation of the fields along the extra dimension. In this paper, we introduce gauge invariance in the DFR NC space-time via the composite symmetry U⋆(N) × U⋆(N). We present the non-Abelian gauge symmetry in the DFR formalism and the consequences of this symmetry in the presence of such an extra dimension. The gauge symmetry in this DFR scenario can reveal new gauge fields attached to the θ-extra-dimension. As an application, we construct a unification of the strong interaction with electromagnetism, and a Higgs model to give masses to the NC bosons. We estimate their masses using some experimental constraints from QCD.
Fermions in a Kerr-Newman space-time
Dariescu, M.A.; Dariescu, C.; Gottlieb, I.
1995-10-01
The aim of this paper is to formulate the U(1)-gauge theory of fermions in the space-time described by a Kerr-Newman metric. The field equations take rather complicated forms, essentially different from those in Minkowskian space-time.
Confinement from gluodynamics in curved space-time
Gaete, Patricio; Spallucci, Euro
2008-01-15
We determine the static potential for a heavy quark-antiquark pair from gluodynamics in curved space-time. Our calculation is done within the framework of the gauge-invariant, path-dependent, variables formalism. The potential energy is the sum of a Yukawa and a linear potential, leading to the confinement of static charges.
Space-Time Fractional DKP Equation and Its Solution
NASA Astrophysics Data System (ADS)
Bouzid, N.; Merad, M.
2017-05-01
In this paper, a fractional Hamiltonian formulation for Duffin-Kemmer-Petiau (DKP) fields is presented and, as done in the framework of the Lagrangian formalism, the fractional DKP equation is deduced. The space-time fractional DKP equation is then solved for both the scalar and vectorial cases. The wave functions obtained are expressed in terms of the Mittag-Leffler function.
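For reference, the one-parameter Mittag-Leffler function in which such wave functions are expressed is the standard series (a textbook definition, not a detail taken from the paper itself):

```latex
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)}, \qquad \alpha > 0,
```

which reduces to the ordinary exponential $e^{z}$ for $\alpha = 1$, the limit in which a space-time fractional equation recovers its integer-order form.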
Parabosonic string and space-time non-commutativity
Seridi, M. A.; Belaloui, N.
2012-06-27
We investigate the para-quantum extension of the bosonic strings in a non-commutative space-time. We calculate the trilinear relations between the mass-center variables and the modes and we derive the Virasoro algebra where a new anomaly term due to the non-commutativity is obtained.
New theory of space-time and gravitation: Yilmaz's approach
Mizobuchi, Y.
1985-07-01
Space-time under the influence of a gravitational field is dealt with in accordance with the theory of Huseyin Yilmaz. The space-time structure is defined with the concept of local operational procedure of measurement as in the special theory of relativity. The local signal velocity of light is equal to the constant c as it is measured. In static cases, this new theory of space-time physics gives an exponential form for the metric. It is an exponential function of the gravitational potential difference. This functional structure shows a transitive group property of the metric, which is preserved in general cases. The gravitational field is quite analogous to other familiar fields in physics except for the curved space-time structure; it obeys a general Poisson equation and has a stress-energy tensor. The existence of the stress-energy tensor ensures the conservation law of energy-momentum and gives rise to the field-theoretic equation of motion. Thus the new theory has both relativistic and field-theoretic aspects.
Hermitian realizations of κ-Minkowski space-time
NASA Astrophysics Data System (ADS)
Kovačević, Domagoj; Meljanac, Stjepan; Samsarov, Andjelo; Škoda, Zoran
2015-01-01
General realizations, star products and plane waves for κ-Minkowski space-time are considered. A systematic construction of general Hermitian realizations is presented, with special emphasis on noncommutative plane waves and the Hermitian star product. A few examples are elaborated and possible physical applications are mentioned.
Joint space-time geostatistical model for air quality surveillance
NASA Astrophysics Data System (ADS)
Russo, A.; Soares, A.; Pereira, M. J.
2009-04-01
Air pollution and people's generalized concern about air quality are nowadays considered a global problem. Although the introduction of strict air pollution regulations has reduced pollution from industry and power stations, the growing number of cars on the road poses a new pollution problem. Considering the characteristics of atmospheric circulation and the residence times of certain pollutants in the atmosphere, a generalized and growing interest in air quality issues has led to an intensification of research and the publication of several articles with quite different levels of scientific depth. Like most natural phenomena, air quality can be seen as a space-time process, where space-time relationships usually have quite different characteristics and levels of uncertainty. As a result, the simultaneous integration of space and time is not an easy task to perform, and the problem is addressed by a variety of methodologies. The use of stochastic models and neural networks to characterize the space-time dispersion of air quality is becoming common practice. The main objective of this work is to produce an air quality model which allows forecasting critical concentration episodes of a certain pollutant by means of a hybrid approach based on the combined use of neural network models and stochastic simulations. A stochastic simulation of the spatial component with a space-time trend model is proposed to characterize critical situations, taking into account data from the past and a space-time trend from the recent past. To identify near-future critical episodes, predicted values from neural networks are used at each monitoring station. In this paper, we describe the design of a hybrid forecasting tool for ambient NO2 concentrations in Lisbon, Portugal.
2011-09-30
channel interference mitigation for underwater acoustic MIMO-OFDM. 3) Turbo equalization for OFDM modulated physical layer network coding. 4) Blind CFO... Localization and tracking of underwater physical systems. 7) NAMS: A networked acoustic modem system for underwater applications. 8) OFDM receiver design in... 3) Turbo Equalization for OFDM Modulated Physical Layer Network Coding. We have investigated a practical orthogonal frequency division multiplexing
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Qi, Feng; Du, Fei
2013-01-01
It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty and thus the reason for paucity of studies of infectious disease transmission at the micro scale arise from the lack of detailed individual mobility data. Previously in transportation and tourism research detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants and collaboration from the participants greatly affects the quality of data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, is not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an
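The trajectory segmentation step described above can be sketched as splitting a time-ordered point stream wherever the gap between consecutive fixes exceeds a threshold (e.g. the device lost signal or was switched off). The threshold and the (t, x, y) records below are hypothetical, and this is a stand-in for the ArcGIS-based workflow, not the authors' tool.

```python
def segment_trajectory(points, max_gap):
    """Split a time-ordered list of (t, x, y) GPS fixes into segments,
    starting a new segment whenever consecutive fixes are more than
    max_gap time units apart."""
    segments = []
    current = []
    for p in points:
        if current and p[0] - current[-1][0] > max_gap:
            segments.append(current)   # close the segment at the gap
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return segments

# Hypothetical fixes as (seconds, x, y), with a 9-minute gap mid-stream:
fixes = [(0, 0.0, 0.0), (30, 0.1, 0.0), (60, 0.2, 0.1),
         (600, 5.0, 5.0), (630, 5.1, 5.0)]
segs = segment_trajectory(fixes, max_gap=120)
```

Each resulting segment can then feed the later steps the abstract lists: activity-space characterization, density estimation, and visualization.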
Quantum gravity effects in Myers-Perry space-times
NASA Astrophysics Data System (ADS)
Litim, Daniel F.; Nikolakopoulos, Konstantinos
2014-04-01
We study quantum gravity effects for Myers-Perry black holes assuming that the leading contributions arise from the renormalization group evolution of Newton's coupling. Provided that gravity weakens following the asymptotic safety conjecture, we find that quantum effects lift a degeneracy of higher-dimensional black holes, and dominate over kinematical ones induced by rotation, particularly for small black hole mass, large angular momentum, and higher space-time dimensionality. Quantum-corrected space-times display inner and outer horizons, and show the existence of a black hole of smallest mass in any dimension. Ultra-spinning solutions no longer persist. Thermodynamic properties including temperature, specific heat, the Komar integrals, and aspects of black hole mechanics are studied as well. Observing a softening of the ring singularity, we also discuss the validity of classical energy conditions.
Chiral fermions on 2D curved space-times
NASA Astrophysics Data System (ADS)
Loran, Farhang
2017-06-01
The theory of free Majorana-Weyl spinors is the prototype of conformal field theory in two dimensions, in which the gravitational anomaly and the Weyl anomaly obstruct extending the flat space-time results to curved backgrounds. In this paper, we investigate a quantization scheme in which the short distance singularity in the two-point function of chiral fermions on a two-dimensional curved space-time is given by the Green's function corresponding to the classical field equation. We compute the singular term in the Green's function explicitly and observe that the short distance limit is not well-defined in general. We identify constraints on the geometry which are necessary to resolve this problem. On such special backgrounds, the theory has locally c = 1/2 conformal symmetry.
Space-time evolution and CMB anisotropies from quantum gravity
NASA Astrophysics Data System (ADS)
Hamada, Ken-Ji; Horata, Shinichi; Yukawa, Tetsuyuki
2006-12-01
We propose an evolutional scenario of the universe which starts from quantum states with conformal invariance, passing through the inflationary era, and then makes a transition to the conventional Einstein space-time. The space-time dynamics is derived from the renormalizable higher-derivative quantum gravity on the basis of a conformal gravity in four dimensions. Based on the linear perturbation theory in the inflationary background, we simulate evolutions of gravitational scalar, vector, and tensor modes, and evaluate the spectra at the transition point located at the beginning of the big bang. The obtained spectra cover the range of the primordial spectra for explaining the anisotropies in the homogeneous cosmic microwave background.
Measuring Space-Time Geometry over the Ages
Stebbins, Albert; /Fermilab
2012-05-01
Theorists are often told to express things in the 'observational plane'. One can do this for space-time geometry, considering 'visual' observations of matter in our universe by a single observer over time, with no assumptions about isometries, initial conditions, or any particular relation between matter and geometry, such as Einstein's equations. Using observables as coordinates naturally leads to a parametrization of space-time geometry in terms of other observables, which in turn prescribes an observational program to measure the geometry. Under the assumption of vorticity-free matter flow we describe this observational program, which includes measurements of gravitational lensing, proper motion, and redshift drift. Only 15% of the curvature information can be extracted without long time baseline observations, and this increases to 35% with observations that will take decades. The rest would likely require centuries of observations. The formalism developed is exact, non-perturbative, and more general than the usual cosmological analysis.
Space-time rainfall downscaling with multifractal models
NASA Astrophysics Data System (ADS)
Deidda, Roberto
2010-05-01
It is well known that rainfall fields display fluctuations in space and time that increase as the scale of observation decreases. Multifractal theory represents a solid base to characterize the scale-invariance properties observed in rainfall fields as well as to develop downscaling models able to reproduce observed statistics. The availability of such downscaling tools allows forecasting of floods in small basins by coupling meteorological and hydrological models working on different space-time grid resolutions. In this talk, multifractal theory will be reviewed, highlighting the most relevant aspects for rainfall downscaling (e.g., the concept of scale-invariance in rainfall fields displaying space-time self-similarity or self-affinity, and the role of orography). The main results of the scale-invariance analysis of rainfall retrieved by remote sensors will be discussed. Finally, the application of multifractal models for rainfall downscaling will be presented and some new ideas for ensemble verification will be proposed.
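The downscaling idea summarized above can be illustrated with a canonical multiplicative random cascade, the simplest multifractal generator: each coarse cell's rainfall mass is split between finer cells by random weights that conserve the total. The branching factor, weight distribution, and parameter values here are illustrative assumptions, not the specific model of the talk:

```python
import random

def cascade_downscale(coarse_rain, levels=3, beta=0.4, seed=1):
    """Disaggregate coarse-scale rainfall with a mass-conserving
    multiplicative cascade: each cell splits into two children with
    weights w and 1-w, E[w] = 0.5, so totals are preserved exactly.
    beta controls the intermittency (spread of the weights)."""
    random.seed(seed)
    field = list(coarse_rain)
    for _ in range(levels):
        finer = []
        for mass in field:
            w = 0.5 + random.uniform(-beta, beta)  # w in (0.5-beta, 0.5+beta)
            finer.extend([mass * w, mass * (1.0 - w)])
        field = finer
    return field
```

Three levels turn one coarse value into 2^3 = 8 fine-scale values whose variability grows with each level, mimicking the observed increase of fluctuations at smaller scales.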
Multifractal models for space-time rainfall downscaling
NASA Astrophysics Data System (ADS)
Deidda, Roberto
2013-04-01
It is well know that rainfall fields display fluctuations in space and time that increase as the scale of observation decreases. Multifractal theory represents a solid base to characterize scale-invariance properties observed in rainfall fields as well as to develop downscaling models able to reproduce observed statistics. The availability of such downscaling tools allows forecasting of floods in small basins by coupling meteorological and hydrological models working on different space-time grid resolution. In this talk multifractal theory will be reviewed highlighting the most relevant aspects for rainfall downscaling (e.g., the concept of scale-invariance in rainfall fields displaying space-time self-similarity or self-affinity, the role of orography). The main results of the scale-invariance analysis of rainfall retrieved by remote sensors will be discussed. Finally the application of multifractal models for rainfall downscaling will be presented and some new ideas for ensemble verification will be argued.
Space time ETAS models and an improved extension
NASA Astrophysics Data System (ADS)
Ogata, Yosihiko; Zhuang, Jiancang
2006-02-01
For sensitive detection of anomalous seismicity such as quiescence and activation in a given region, we need a suitable statistical reference model that represents a normal seismic activity in the region. The regional occurrence rate of the earthquakes is modeled as a function of previous activity, the specific form of which is based on empirical laws in time and space such as the modified Omori formula and the Utsu-Seki scaling law of aftershock area against magnitude, respectively. This manuscript summarizes the development of the epidemic type aftershock sequence (ETAS) model and proposes an extended version of the best fitted space-time model that was suggested in Ogata [Ogata, Y., 1998. Space-time point-process models for earthquake occurrences, Ann. Inst. Statist. Math., 50: 379-402.]. This model indicates significantly better fit to seismicity in various regions in and around Japan.
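The temporal core of the ETAS model described above is a conditional intensity: a constant background rate plus a modified-Omori aftershock contribution from every earlier event, scaled exponentially by magnitude. A minimal sketch follows; all parameter values are illustrative, not fitted values from the paper:

```python
import math

def etas_intensity(t, events, mu=0.01, K=0.05, alpha=1.2,
                   c=0.01, p=1.1, m0=3.0):
    """Temporal ETAS conditional intensity at time t.

    events: list of (t_i, m_i) pairs (occurrence time, magnitude).
    mu is the background rate; each past event adds a modified-Omori
    term K * exp(alpha*(m_i - m0)) * (t - t_i + c)^(-p).
    """
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

The intensity spikes immediately after a large event and decays as a power law back toward the background rate, which is exactly the reference behavior against which quiescence or activation anomalies are judged.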
Convexity and the Euclidean Metric of Space-Time
NASA Astrophysics Data System (ADS)
Kalogeropoulos, Nikolaos
2017-02-01
We address the question about the reasons why the "Wick-rotated", positive-definite, space-time metric obeys the Pythagorean theorem. An answer is proposed based on the convexity and smoothness properties of the functional spaces purporting to provide the kinematic framework of approaches to quantum gravity. We employ moduli of convexity and smoothness which are eventually extremized by Hilbert spaces. We point out the potential physical significance that functional analytical dualities play in this framework. Following the spirit of the variational principles employed in classical and quantum Physics, such Hilbert spaces dominate in a generalized functional integral approach. The metric of space-time is induced by the inner product of such Hilbert spaces.
The Verriest Lecture: color lessons from space, time and motion.
Shevell, Steven K
2012-02-01
The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time, or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time, and motion-color's colleagues-reveal the richness of chromatic neural processing. © 2012 Optical Society of America
The Verriest Lecture: Color lessons from space, time, and motion
Shevell, Steven K.
2012-01-01
The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time and motion – color’s colleagues – reveal the richness of chromatic neural processing. PMID:22330398
From computation to black holes and space-time foam.
Ng, Y J
2001-04-02
We show that quantum mechanics and general relativity limit the speed ν of a simple computer (such as a black hole) and its memory space I to I ν^2 ≲ t_P^(-2), where t_P is the Planck time. We also show that the lifetime of a simple clock and its precision are similarly limited. These bounds and the holographic bound originate from the same physics that governs the quantum fluctuations of space-time. We further show that these physical bounds are realized for black holes, yielding the correct Hawking black hole lifetime, and that space-time undergoes much larger quantum fluctuations than conventional wisdom claims, almost within range of detection with modern gravitational-wave interferometers.
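The bound I ν^2 ≲ t_P^(-2) is easy to evaluate numerically; the sketch below (with the standard value of the Planck time) shows how far an ordinary computer sits below it:

```python
T_PLANCK = 5.39e-44  # Planck time in seconds

def satisfies_ng_bound(nu, I):
    """Check the computational bound I * nu^2 <= t_P^(-2) for a device
    with I bits of memory switching at nu operations per second."""
    return I * nu ** 2 <= T_PLANCK ** -2
```

A laptop-scale machine (~1e13 bits at ~1e10 Hz) gives I ν^2 ≈ 1e33, more than fifty orders of magnitude below t_P^(-2) ≈ 3.4e86; only extreme devices such as black-hole computers saturate the bound.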
Micro-Macro Duality and Space-Time Emergence
Ojima, Izumi
2011-03-28
The microscopic origin of space-time geometry is explained on the basis of an emergence process associated with the condensation of an infinite number of microscopic quanta responsible for symmetry breakdown, which implements the basic essence of 'Quantum-Classical Correspondence' and of the forcing method in physical and mathematical contexts, respectively. From this viewpoint, the space-time dependence of physical quantities arises from the 'logical extension' that changes 'constant objects' into 'variable objects' by tagging the order parameters associated with the condensation onto the 'constant objects'; the logical direction here, from a value y to a domain variable x (to materialize the basic mechanism behind the Gel'fand isomorphism), is just opposite to that common in the usual definition of a function f : x -> f(x), from its domain variable x to a value y = f(x).
Cerenkov and transition radiation in space-time periodic media.
NASA Technical Reports Server (NTRS)
Elachi, C.
1972-01-01
The solution to the problem of determining the radiation emitted by a uniformly moving charged particle in a sinusoidally space-time periodic medium is obtained. The space-time periodicity can be considered as due to a strong pump wave and is expressed as a traveling-wave-type change in the dielectric constant or the plasma density. The solution covers also the limiting case of sinusoidally stratified media. The expression and spectrum of the radiated electromagnetic field are determined for different media: dielectric, isotropic and uniaxial plasma. Depending on the nature of the medium and the velocity of the particle, the radiated field is of the Cerenkov and/or transition type. The Brillouin diagram is used extensively in understanding and determining the nature, extent, and spectrum of the different modes of radiation, and a focusing effect is also studied.
k-Inflation in noncommutative space-time
NASA Astrophysics Data System (ADS)
Feng, Chao-Jun; Li, Xin-Zhou; Liu, Dao-Jun
2015-02-01
The power spectra of the scalar and tensor perturbations in the noncommutative k-inflation model are calculated in this paper. In this model, all the modes are created when the stringy space-time uncertainty relation is satisfied, and they are generated inside the sound/Hubble horizon during inflation for the scalar/tensor perturbations. It turns out that a linear term describing the noncommutative space-time effect contributes to the power spectra of the scalar and tensor perturbations. Confronting the general noncommutative k-inflation model with the latest results from Planck and BICEP2, and taking and as free parameters, we find that it is well consistent with observations. However, for the two specific models, i.e. the tachyon and DBI inflation models, it is found that the DBI model is not favored, while the tachyon model lies inside the contour, when the e-folding number is assumed to be around.
Effect of Heat on Space-Time Correlations in Jets
NASA Technical Reports Server (NTRS)
Bridges, James
2006-01-01
Measurements of space-time correlations of velocity, acquired in jets from acoustic Mach number 0.5 to 1.5 and static temperature ratios up to 2.7 are presented and analyzed. Previous reports of these experiments concentrated on the experimental technique and on validating the data. In the present paper the dataset is analyzed to address the question of how space-time correlations of velocity are different in cold and hot jets. The analysis shows that turbulent kinetic energy intensities, lengthscales, and timescales are impacted by the addition of heat, but by relatively small amounts. This contradicts the models and assumptions of recent aeroacoustic theory trying to predict the noise of hot jets. Once the change in jet potential core length has been factored out, most one- and two-point statistics collapse for all hot and cold jets.
Constructing AN Inhomogeneous Braneworld Through Space-Time Matching
NASA Astrophysics Data System (ADS)
Giang, Dan; Dyer, Charles C.
We attempt to construct the braneworld analog of the cheese slice universe, an inhomogeneous cosmology constructed from alternating layers of Kasner and FLRW space-times. This construction is possible in four dimensions and we find that the energy conditions can be satisfied in the braneworld context. However, an extension into the bulk becomes more problematic. We use a 3 + 1 + 1 decomposition inspired by the ADM decompositions to show that structure is required in the bulk to support an inhomogeneous brane.
Space-time adaptive wavelet methods for parabolic evolution problems
NASA Astrophysics Data System (ADS)
Schwab, Christoph; Stevenson, Rob
2009-09-01
With respect to space-time tensor-product wavelet bases, parabolic initial boundary value problems are equivalently formulated as bi-infinite matrix problems. Adaptive wavelet methods are shown to yield sequences of approximate solutions which converge at the optimal rate. In case the spatial domain is of product type, the use of spatial tensor product wavelet bases is proved to overcome the so-called curse of dimensionality, i.e., the reduction of the convergence rate with increasing spatial dimension.
Uniqueness of Kerr space-time near null infinity
Wu Xiaoning; Bai Shan
2008-12-15
We reexpress the Kerr metric in standard Bondi-Sachs coordinates near null infinity I^+. Using the uniqueness result of the characteristic initial value problem, we prove the Kerr metric is the only asymptotically flat, stationary, axially symmetric, type-D solution of the vacuum Einstein equation. The Taylor series of the Kerr space-time is expressed in terms of Bondi-Sachs coordinates, and the Newman-Penrose constants have been calculated.
Killing tensors in stationary and axially symmetric space-times
NASA Astrophysics Data System (ADS)
Vollmer, Andreas
2017-05-01
We discuss the existence of Killing tensors for certain (physically motivated) stationary and axially symmetric vacuum space-times. We show nonexistence of a nontrivial Killing tensor for a Tomimatsu-Sato metric (up to valence 7), for a C-metric (up to valence 9) and for a Zipoy-Voorhees metric (up to valence 11). The results are obtained by mathematically completely rigorous, nontrivial computer algebra computations with a huge number of equations involved in the problem.
Corrected Hawking Temperature in Snyder's Quantized Space-time
NASA Astrophysics Data System (ADS)
Ma, Meng-Sen; Liu, Fang; Zhao, Ren
2015-06-01
In the quantized space-time of Snyder, a generalized uncertainty relation and noncommutativity are both included. In this paper we analyze the possible form of the corrected Hawking temperature and derive it from both effects. It is shown that the corrected Hawking temperature has a form similar to that of the noncommutative-geometry-inspired Schwarzschild black hole, however with a requirement for the noncommutative parameter θ and the minimal length a.
Causality in noncommutative two-sheeted space-times
NASA Astrophysics Data System (ADS)
Franco, Nicolas; Eckstein, Michał
2015-10-01
We investigate the causal structure of two-sheeted space-times using the tools of Lorentzian spectral triples. We show that the noncommutative geometry of these spaces allows for causal relations between the two sheets. The computation is given in detail when the sheet is a 2- or 4-dimensional globally hyperbolic spin manifold. The conclusions are then generalised to a point-dependent distance between the two sheets resulting from the fluctuations of the Dirac operator.
Detecting space-time cancer clusters using residential histories
NASA Astrophysics Data System (ADS)
Jacquez, Geoffrey M.; Meliker, Jaymie R.
2007-04-01
Methods for analyzing geographic clusters of disease typically ignore the space-time variability inherent in epidemiologic datasets, do not adequately account for known risk factors (e.g., smoking and education) or covariates (e.g., age, gender, and race), and do not permit investigation of the latency window between exposure and disease. Our research group recently developed Q-statistics for evaluating space-time clustering in cancer case-control studies with residential histories. This technique relies on time-dependent nearest neighbor relationships to examine clustering at any moment in the life-course of the residential histories of cases relative to that of controls. In addition, in place of the widely used null hypothesis of spatial randomness, each individual's probability of being a case is instead based on his/her risk factors and covariates. Case-control clusters will be presented using residential histories of 220 bladder cancer cases and 440 controls in Michigan. In preliminary analyses of this dataset, smoking, age, gender, race and education were sufficient to explain the majority of the clustering of residential histories of the cases. Clusters of unexplained risk, however, were identified surrounding the business address histories of 10 industries that emit known or suspected bladder cancer carcinogens. The clustering of 5 of these industries began in the 1970's and persisted through the 1990's. This systematic approach for evaluating space-time clustering has the potential to generate novel hypotheses about environmental risk factors. These methods may be extended to detect differences in space-time patterns of any two groups of people, making them valuable for security intelligence and surveillance operations.
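A time-frozen, simplified sketch of the nearest-neighbour idea behind these Q-statistics: for each case, count what fraction of its k nearest neighbours are also cases (the actual method evaluates such relationships at every moment of the residential histories and conditions on individual risk factors, which this toy function does not):

```python
import math

def local_case_proportion(points, labels, k=3):
    """For each case (label 1), the fraction of its k nearest
    neighbours that are also cases. Values near 1 indicate local
    clustering of cases relative to controls (label 0)."""
    props = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        if lab != 1:
            continue
        dists = sorted(
            (math.hypot(p[0] - q[0], p[1] - q[1]), labels[j])
            for j, q in enumerate(points) if j != i
        )
        props.append(sum(nbr for _, nbr in dists[:k]) / k)
    return props
```

With cases tightly grouped and controls far away, every case's neighbourhood is entirely cases, giving proportions of 1.0.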
Class of Einstein-Maxwell-dilaton-axion space-times
Matos, Tonatiuh; Miranda, Galaxia; Sanchez-Sanchez, Ruben; Wiederhold, Petra
2009-06-15
We use the harmonic maps ansatz to find exact solutions of the Einstein-Maxwell-dilaton-axion (EMDA) equations. The solutions are harmonic maps invariant under the symplectic real group in four dimensions Sp(4,R) ≈ O(5). We find solutions of the EMDA field equations for the one- and two-dimensional subspaces of the symplectic group. In particular, to illustrate the method, we find space-times that generalize the Schwarzschild solution with dilaton, axion, and electromagnetic fields.
Review of software for space-time disease surveillance
2010-01-01
Disease surveillance makes use of information technology at almost every stage of the process, from data collection and collation, through to analysis and dissemination. Automated data collection systems enable near-real time analysis of incoming data. This context places a heavy burden on software used for space-time surveillance. In this paper, we review software programs capable of space-time disease surveillance analysis, and outline some of their salient features, shortcomings, and usability. Programs with space-time methods were selected for inclusion, limiting our review to ClusterSeer, SaTScan, GeoSurveillance and the Surveillance package for R. We structure the review around stages of analysis: preprocessing, analysis, technical issues, and output. Simulated data were used to review each of the software packages. SaTScan was found to be the best equipped package for use in an automated surveillance system. ClusterSeer is more suited to data exploration, and learning about the different methods of statistical surveillance. PMID:20226054
Relativistic helicity and link in Minkowski space-time
Yoshida, Z.; Kawazura, Y.; Yokoyama, T.
2014-04-15
A relativistic helicity has been formulated in the four-dimensional Minkowski space-time. Whereas the relativistic distortion of space-time violates the conservation of the conventional helicity, the newly defined relativistic helicity conserves in a barotropic fluid or plasma, dictating a fundamental topological constraint. The relation between the helicity and the vortex-line topology has been delineated by analyzing the linking number of vortex filaments which are singular differential forms representing the pure states of Banach algebra. While the dimension of space-time is four, vortex filaments link, because vorticities are primarily 2-forms and the corresponding 2-chains link in four dimension; the relativistic helicity measures the linking number of vortex filaments that are proper-time cross-sections of the vorticity 2-chains. A thermodynamic force yields an additional term in the vorticity, by which the vortex filaments on a reference-time plane are no longer pure states. However, the vortex filaments on a proper-time plane remain to be pure states, if the thermodynamic force is exact (barotropic), thus, the linking number of vortex filaments conserves.
Probabilistic space-time video modeling via piecewise GMM.
Greenspan, Hayit; Goldberger, Jacob; Mayer, Arnaldo
2004-03-01
In this paper, we describe a statistical video representation and modeling scheme. Video representation schemes are needed to segment a video stream into meaningful video-objects, useful for later indexing and retrieval applications. In the proposed methodology, unsupervised clustering via Gaussian mixture modeling extracts coherent space-time regions in feature space, and corresponding coherent segments (video-regions) in the video content. A key feature of the system is the analysis of video input as a single entity as opposed to a sequence of separate frames. Space and time are treated uniformly. The probabilistic space-time video representation scheme is extended to a piecewise GMM framework in which a succession of GMMs are extracted for the video sequence, instead of a single global model for the entire sequence. The piecewise GMM framework allows for the analysis of extended video sequences and the description of nonlinear, nonconvex motion patterns. The extracted space-time regions allow for the detection and recognition of video events. Results of segmenting video content into static versus dynamic video regions and video content editing are presented.
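The central idea of treating video as a single space-time entity can be sketched by giving every pixel an (x, y, t, intensity) feature vector and clustering those vectors; plain k-means stands in here for the paper's Gaussian mixture / EM fit, so this is an assumption-laden sketch of the representation, not the authors' algorithm:

```python
def cluster_space_time(features, k=2, iters=10):
    """Cluster (x, y, t, intensity) pixel features so coherent
    space-time regions emerge; space and time enter the feature
    vector on equal footing. Assumes k >= 2."""
    n = len(features)
    # spread the initial centers across the data
    centers = [list(features[i * (n - 1) // (k - 1)]) for i in range(k)]
    assign = [0] * n
    for _ in range(iters):
        for i, f in enumerate(features):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(f, centers[c])))
        for c in range(k):
            members = [features[i] for i in range(n) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign
```

A static dark region and a bright region elsewhere separate into two coherent space-time clusters, the analogue of the paper's video-regions.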
Experimental Constraints of the Exotic Shearing of Space-Time
Richardson, Jonathan William
2016-08-01
The Holometer program is a search for first experimental evidence that space-time has quantum structure. The detector consists of a pair of co-located 40-m power-recycled interferometers whose outputs are read out synchronously at 50 MHz, achieving sensitivity to spatially-correlated fluctuations in differential position on time scales shorter than the light-crossing time of the instruments. Unlike gravitational wave interferometers, which time-resolve transient geometrical disturbances in the spatial background, the Holometer is searching for a universal, stationary quantization noise of the background itself. This dissertation presents the final results of the Holometer Phase I search, an experiment configured for sensitivity to exotic coherent shearing fluctuations of space-time. Measurements of high-frequency cross-spectra of the interferometer signals obtain sensitivity to spatially-correlated effects far exceeding any previous measurement, in a broad frequency band extending to 7.6 MHz, twice the inverse light-crossing time of the apparatus. This measurement is the statistical aggregation of 2.1 petabytes of 2-byte differential position measurements obtained over a month-long exposure time. At 3-sigma significance, it places an upper limit on the coherence scale of spatial shear two orders of magnitude below the Planck length. The result demonstrates the viability of this novel spatially-correlated interferometric detection technique to reach unprecedented sensitivity to coherent deviations of space-time from classicality, opening the door for direct experimental tests of theories of relational quantum gravity.
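The statistical principle behind aggregating a month of synchronized 50 MHz data is segment-averaged cross-spectral estimation: uncorrelated instrument noise in the two channels averages toward zero while any shared component survives. A minimal sketch (segment length and normalization are illustrative, not the Holometer pipeline):

```python
import numpy as np

def averaged_cross_spectrum(a, b, seg=256):
    """Segment-averaged cross-spectrum of two synchronously sampled
    channels a and b. Cross-terms of independent noise cancel in the
    average; correlated content accumulates coherently."""
    n = len(a) // seg
    fa = np.fft.rfft(a[: n * seg].reshape(n, seg), axis=1)
    fb = np.fft.rfft(b[: n * seg].reshape(n, seg), axis=1)
    return (fa.conj() * fb).mean(axis=0) / seg
```

Burying a common sinusoid under independent unit-variance noise in both channels, the shared line stands far above the residual noise floor of the averaged cross-spectrum even though it is invisible in either channel's raw time series.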
Space-time epidemiology of Crimean-Congo hemorrhagic fever (CCHF) in Iran.
Ahmadkhani, Mohsen; Alesheikh, Ali Asghar; Khakifirouz, Sahar; Salehi-Vaziri, Mostafa
2017-09-18
Iran, as an endemic country of Crimean-Congo hemorrhagic fever (CCHF), has been suffering from severe health issues and substantial economic burdens imposed by the disease. We analyzed monthly and yearly spatial and temporal distributions of CCHF to better understand the epidemiology of the disease in Iran. A cross-sectional survey was performed on 1027 recorded cases between 2000 and 2014. Global Moran's I analysis was applied to statistically evaluate the spatial pattern of the disease. Additionally, spatial and space-time scan statistics were used to study the presence of possible spatial and space-time hotspots. Global Moran's I analysis proved that the incidence of the disease is strongly clustered in Iran (p<0.01). Purely spatial scan statistics identified that there were three clusters in the eastern, southern and western parts of the country. Through space-time analysis, we found that the highest incidence of CCHF occurred in the eastern parts of the country between 2006 and 2012. Monthly clusters, which include cities with lower (average) temperatures, had been occurring in relatively short periods. The distribution of CCHF incidence in this country is spatially and temporally clustered. The majority of the clusters emerged during the critical years of 2009 and 2013. Summer is the predominant period for the formation of CCHF clusters. Copyright © 2017 Elsevier GmbH. All rights reserved.
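The Global Moran's I statistic used above measures whether high-incidence locations neighbour other high-incidence locations. A minimal implementation with binary contiguity weights follows; the neighbourhood radius is an assumed illustrative parameter, not the weighting scheme of the study:

```python
import math

def morans_i(values, coords, band=1.5):
    """Global Moran's I with binary weights (w_ij = 1 when locations
    i and j are closer than `band`). Values near +1 indicate spatial
    clustering; values near 0 indicate spatial randomness."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = w_sum = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dist = math.hypot(coords[i][0] - coords[j][0],
                              coords[i][1] - coords[j][1])
            if dist < band:
                num += dev[i] * dev[j]
                w_sum += 1
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)
```

Two tight blocks of uniformly high and uniformly low incidence yield I = 1, the fully clustered extreme; significance in practice is judged against a permutation or normal-approximation null, as in the p<0.01 result reported above.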
Beutler, D.E.; Halbleib, J.A.; Knott, D.P.
1989-12-01
This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude.
A MAPLE Package for Energy-Momentum Tensor Assessment in Curved Space-Time
Murariu, Gabriel; Praisler, Mirela
2010-01-21
One of the most interesting problems that remains unsolved since the birth of the General Theory of Relativity (GR) is energy-momentum localization. All our reflections are within the Lagrange formalism of field theory. The concept of the energy-momentum tensor for gravitational interactions has a long history, and there have been various attempts to find a generally accepted expression. This paper is dedicated to the investigation of the energy-momentum problem in the theory of General Relativity. We use the Einstein [1], Landau-Lifshitz [2], Bergmann-Thomson [3] and Møller [4] prescriptions to evaluate the energy-momentum distribution. In order to handle the huge volume of computation, and aiming at a general approach for different space-time configurations, a MAPLE application for studying the energy-momentum tensor was built. In the second part of the paper, comparative results are presented for two space-time configurations.
Møller Energy-Momentum Prescription for a Locally Rotationally Symmetric Space-Time
NASA Astrophysics Data System (ADS)
Aydogdu, Oktay
The energy distribution in the Locally Rotationally Symmetric (LRS) Bianchi type II space-time is obtained by considering the Møller energy-momentum definition in both Einstein's theory of general relativity and the teleparallel theory of relativity. The energy distribution, which includes both the matter and gravitational field, is found to be zero in both of these different gravitation theories. This result agrees with previous works of Cooperstock and Israelit, Rosen, Johri et al., Banerjee and Sen, Vargas, and Aydogdu and Salti. Our result — the total energy of the universe is zero — supports the viewpoints of Albrow and Tryon.
Space-time adaptive solution of inverse problems with the discrete adjoint method
NASA Astrophysics Data System (ADS)
Alexe, Mihai; Sandu, Adrian
2014-08-01
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is left to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and practicality for the quantum key distribution scheme involved.
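The puncturing idea can be illustrated on a toy code. A real LDPC code has thousands of bits; here a (7,4) Hamming parity-check matrix stands in for it, and the puncturing pattern is hypothetical, chosen only to show how removing transmitted bits raises the code rate:

```python
import numpy as np

# Stand-in parity-check matrix: a (7,4) Hamming code, far smaller than a real LDPC code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(H, c):
    """Zero syndrome <=> codeword c satisfies every parity check."""
    return H.dot(c) % 2

n, k = 7, 4
punctured = [6]                        # hypothetical pattern: bit 6 is never transmitted
rate_before = k / n                    # 4/7
rate_after = k / (n - len(punctured))  # 4/6: puncturing raises the effective rate

c = np.array([1, 1, 1, 1, 1, 1, 1])    # the all-ones word is a valid codeword here
print(syndrome(H, c))                  # → [0 0 0]
# The receiver re-inserts punctured positions as erasures and lets the
# iterative decoder recover them from the surviving parity checks.
```

The scheme described above additionally tunes which bits are punctured so that the leaked information is concentrated in positions the eavesdropper never observes.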
Algorithmic characterization results for the Kerr-NUT-(A)dS space-time. I. A space-time approach
NASA Astrophysics Data System (ADS)
Paetz, Tim-Torben
2017-04-01
We provide an algorithm to check whether a given vacuum space-time (M, g) admits a Killing vector field with respect to which the Mars-Simon tensor vanishes. In particular, we obtain an algorithmic procedure to check whether (M, g) is locally isometric to a member of the Kerr-NUT-(A)dS family. A particular emphasis will be devoted to the Kerr-(A)dS case.
NASA Astrophysics Data System (ADS)
Lovejoy, S.; de Lima, I. P.
2015-12-01
Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out and that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behavior as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long-range memory and spatial correlations. We test factorization and the model with the help of three centennial, global-scale precipitation products that we analyze jointly in space-time.
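The "homogenization" property can be demonstrated numerically. In this sketch a lognormal factor stands in for the multifractal climate cascade and white noise stands in for the fractional Gaussian noise (both crude simplifications of the actual SLIMM components); dividing each site by the standard deviation of its anomalies removes the multiplicative spatial modulation exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_times = 5, 1000
sigma = np.exp(rng.normal(size=n_sites))   # stand-in for the multifractal climate cascade
g = rng.normal(size=(n_sites, n_times))    # stand-in for fractional Gaussian noise in time
field = sigma[:, None] * g                 # factorized space-time model: sigma(x) * g(x, t)

# "Homogenization": normalizing by each site's anomaly std removes sigma(x),
# leaving statistically identical residuals at every location
normed = field / field.std(axis=1, keepdims=True)
print(normed.std(axis=1))                  # → all ones
```

In the full model the temporal stand-in would carry long-range memory (a negative fluctuation exponent), but the factorization argument is unchanged.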
Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F
2013-10-15
We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The proposed, highly integrated solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single-mode fiber using only a single narrowband laser source.
Space-time investigation of the effects of fishing on fish populations.
Ono, Kotaro; Shelton, Andrew O; Ward, Eric J; Thorson, James T; Feist, Blake E; Hilborn, Ray
2016-03-01
Species distribution models (SDMs) are important statistical tools for obtaining ecological insight into species-habitat relationships and providing advice for natural resource management. Many SDMs have been developed over the past decades, with a focus on space- and more recently, time-dependence. However, most of these studies have been on terrestrial species and applications to marine species have been limited. In this study, we used three large spatio-temporal data sources (habitat maps, survey-based fish density estimates, and fishery catch data) and a novel space-time model to study how the distribution of fishing may affect the seasonal dynamics of a commercially important fish species (Pacific Dover sole, Microstomus pacificus) off the west coast of the USA. Dover sole showed a large scale change in seasonal and annual distribution of biomass, and its distribution shifted from mid-depth zones to inshore or deeper waters during late summer/early fall. In many cases, the scale of fishery removal was small compared to these broader changes in biomass, suggesting that seasonal dynamics were primarily driven by movement and not by fishing. The increasing availability of appropriate data and space-time modeling software should facilitate extending this work to many other species, particularly those in marine ecosystems, and help tease apart the role of growth, natural mortality, recruitment, movement, and fishing on spatial patterns of species distribution in marine systems.
Is there a space-time continuum in olfaction?
Leon, Michael; Johnson, Brett A
2009-07-01
The coding of olfactory stimuli across a wide range of organisms may rely on fundamentally similar mechanisms in which a complement of specific odorant receptors on olfactory sensory neurons respond differentially to airborne chemicals to initiate the process by which specific odors are perceived. The question that we address in this review is the role of specific neurons in mediating this sensory system (an identity code) relative to the role that temporally specific responses across many neurons play in producing an olfactory perception (a temporal code). While information coded in specific neurons may be converted into a temporal code, it is also possible that temporal codes exist in the absence of response specificity for any particular neuron or subset of neurons. We review the data supporting these ideas, and we discuss the research perspectives that could help to reveal the mechanisms by which odorants become perceptions.
NASA Astrophysics Data System (ADS)
Lake, Kayll
2010-12-01
The title immediately brings to mind a standard reference of almost the same title [1]. The authors are quick to point out the relationship between these two works: they are complementary. The purpose of this work is to explain what is known about a selection of exact solutions. As the authors state, it is often much easier to find a new solution of Einstein's equations than it is to understand it. Even at first glance it is very clear that great effort went into the production of this reference. The book is replete with beautifully detailed diagrams that reflect deep geometric intuition. In many parts of the text there are detailed calculations that are not readily available elsewhere. The book begins with a review of basic tools that allows the authors to set the notation. Then follows a discussion of Minkowski space with an emphasis on the conformal structure and applications such as simple cosmic strings. The next two chapters give an in-depth review of de Sitter space and then anti-de Sitter space. Both chapters contain a remarkable collection of useful diagrams. The standard model in cosmology these days is the ΛCDM model and whereas the chapter on the Friedmann-Lemaître-Robertson-Walker space-times contains much useful information, I found the discussion of the currently popular Λ representation rather too brief. After a brief but interesting excursion into electrovacuum, the authors consider the Schwarzschild space-time. This chapter does mention the Swiss cheese model but the discussion is too brief and certainly dated. Space-times related to Schwarzschild are covered in some detail and include not only the addition of charge and the cosmological constant but also the addition of radiation (the Vaidya solution). Just prior to a discussion of the Kerr space-time, static axially symmetric space-times are reviewed. Here one can find a very interesting discussion of the Curzon-Chazy space-time. The chapter on rotating black holes is rather brief and, for
Re-examination of globally flat space-time.
Feldman, Michael R
2013-01-01
In the following, we offer a novel approach to modeling the observed effects currently attributed to the theoretical concepts of "dark energy," "dark matter," and "dark flow." Instead of assuming the existence of these theoretical concepts, we take an alternative route and choose to redefine what we consider to be inertial motion as well as what constitutes an inertial frame of reference in flat space-time. We adopt none of the features of our current cosmological models except for the requirement that special and general relativity be local approximations within our revised definition of inertial systems. Implicit in our ideas is the assumption that at "large enough" scales one can treat objects within these inertial systems as point-particles having an insignificant effect on the curvature of space-time. We then proceed under the assumption that time and space are fundamentally intertwined such that time- and spatial-translational invariance are not inherent symmetries of flat space-time (i.e., observable clock rates depend upon both relative velocity and spatial position within these inertial systems) and take the geodesics of this theory in the radial Rindler chart as the proper characterization of inertial motion. With this commitment, we are able to model solely with inertial motion the observed effects expected to be the result of "dark energy," "dark matter," and "dark flow." In addition, we examine the potential observable implications of our theory in a gravitational system located within a confined region of an inertial reference frame, subsequently interpreting the Pioneer anomaly as support for our redefinition of inertial motion. As well, we extend our analysis into quantum mechanics by quantizing for a real scalar field and find a possible explanation for the asymmetry between matter and antimatter within the framework of these redefined inertial systems.
New Efficient Sparse Space Time Algorithms for Superparameterization on Mesoscales
Xing, Yulong; Majda, Andrew J.; Grabowski, Wojciech W.
2009-12-01
Superparameterization (SP) is a large-scale modeling system with explicit representation of small-scale and mesoscale processes provided by a cloud-resolving model (CRM) embedded in each column of a large-scale model. New efficient sparse space-time algorithms based on the original idea of SP are presented. The large-scale dynamics are unchanged, but the small-scale model is solved in a reduced spatially periodic domain to save the computation cost following a similar idea applied by one of the authors for aquaplanet simulations. In addition, the time interval of integration of the small-scale model is reduced systematically for the same purpose, which results in a different coupling mechanism between the small- and large-scale models. The new algorithms have been applied to a stringent two-dimensional test suite involving moist convection interacting with shear with regimes ranging from strong free and forced squall lines to dying scattered convection as the shear strength varies. The numerical results are compared with the CRM and original SP. It is shown here that for all of the regimes of propagation and dying scattered convection, the large-scale variables such as horizontal velocity and specific humidity are captured in a statistically accurate way (pattern correlations above 0.75) based on space-time reduction of the small-scale models by a factor of 1/3; thus, the new efficient algorithms for SP result in a gain of roughly a factor of 10 in efficiency while retaining a statistical accuracy on the large-scale variables. Even the models with 1/6 reduction in space-time with a gain of 36 in efficiency are able to distinguish between propagating squall lines and dying scattered convection with a pattern correlation above 0.6 for horizontal velocity and specific humidity. These encouraging results suggest the possibility of using these efficient new algorithms for limited-area mesoscale ensemble forecasting.
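The quoted efficiency gains follow directly from shrinking the embedded cloud-resolving model's spatial domain and integration interval. A back-of-envelope sketch (my arithmetic on the stated reduction factors, not the authors' cost model, which would also include large-scale-model overhead):

```python
def sp_speedup(space_frac, time_frac):
    """Cost of the embedded cloud-resolving model scales with the product of the
    retained space fraction and time fraction; the speedup is the inverse."""
    return 1.0 / (space_frac * time_frac)

print(sp_speedup(1/3, 1/3))  # ≈ 9, i.e. "roughly a factor of 10" quoted above
print(sp_speedup(1/6, 1/6))  # ≈ 36, matching the quoted gain of 36
```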
Harmonic Analysis on the Space-Time Gauge Continuum
NASA Astrophysics Data System (ADS)
Bleecker, David D.
1983-06-01
The classical Kaluza-Klein unified field theory has previously been extended to unify and geometrize gravitational and gauge fields, through a study of the geometry of a bundle space P over space-time. Here, we examine the physical relevance of the Laplace operator on the complex-valued functions on P. The spectrum and eigenspaces are shown (via the Peter-Weyl theorem) to determine the possible masses of any type of particle field. In the Euclidean case, we prove that zero-mass particles necessarily come in infinite families. Also, lower bounds on masses of particles of a given type are obtained in terms of the curvature of P.
Space-Time, Phenomenology, and the Picture Theory of Language
NASA Astrophysics Data System (ADS)
Grelland, Hans Herlof
To assess Minkowski's introduction of space-time into relativity, the case is made for the view that abstract language and mathematics carry meaning not only through their connections with observation but as pictures of facts. This view is contrasted with the more traditional intuitionism of Hume, Mach, and Husserl. Einstein's attempt at a conceptual reconstruction of space and time, as well as Husserl's analysis of the loss of meaning in science through increasing abstraction, is analysed. Wittgenstein's picture theory of language is used to explain how meaning is conveyed by abstract expressions, with Minkowski space as a case study.
MAPLE Procedures For Boson Fields System On Curved Space - Time
Murariu, Gabriel
2007-04-23
Systems of interacting boson fields have been an important subject in recent years. From the problem of dark matter to the study of boson stars, boson fields are involved. In the general configuration, we consider a Klein-Gordon-Maxwell-Einstein field system for a complex scalar field minimally coupled to a gravitational one. The need to study a larger number of space-time configurations, and the huge volume of computation required for each particular situation, are the reasons for building a set of MAPLE procedures for this kind of system.
Canonical quantization of general relativity in discrete space-times.
Gambini, Rodolfo; Pullin, Jorge
2003-01-17
It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang.
Naked singularities in higher dimensional Vaidya space-times
Ghosh, S. G.; Dadhich, Naresh
2001-08-15
We investigate the end state of the gravitational collapse of a null fluid in higher-dimensional space-times. Both naked singularities and black holes are shown to develop as the final outcome of the collapse. The naked singularity spectrum in a collapsing Vaidya region (4D) gets covered with the increase in dimensions, and hence higher dimensions favor a black hole over a naked singularity. The cosmic censorship conjecture will be fully respected for a space of infinite dimension.
Gauge invariant perturbations of Petrov type D space-times
NASA Astrophysics Data System (ADS)
Whiting, Bernard; Shah, Abhay
2016-03-01
The Regge-Wheeler and Zerilli equations are satisfied by gauge invariant perturbations of the Schwarzschild black hole geometry. Both the perturbation of the imaginary part of Ψ2 (a component of the Weyl curvature), and its time derivative, are gauge invariant and solve the Regge-Wheeler equation with different sources. The Ψ0 and Ψ4 perturbations of the Weyl curvature are not only gauge, but also tetrad, invariant. We explore the framework in which these results hold, and consider what generalizations may extend to the Kerr geometry, and presumably to Petrov type D space-times in general. NSF Grants PHY 1205906 and 1314529, ERC (EU) FP7 Grant 304978.
Founding Gravitation in 4D Euclidean Space-Time Geometry
Winkler, Franz-Guenter
2010-11-24
The Euclidean interpretation of special relativity suggested by the author is a formulation of special relativity in ordinary 4D Euclidean space-time geometry. The natural and geometrically intuitive generalization of this view involves variations of the speed of light (depending on location and direction) and a Euclidean principle of general covariance. In this article, a gravitation model by Jan Broekaert, which implements a view of relativity theory in the spirit of Lorentz and Poincaré, is reconstructed and shown to fulfill the principles of the Euclidean approach after an appropriate reinterpretation.
Modeling of space-time focusing of localized nondiffracting pulses
NASA Astrophysics Data System (ADS)
Zamboni-Rached, Michel; Besieris, Ioannis M.
2016-10-01
In this paper we develop a method capable of modeling the space-time focusing of nondiffracting pulses. These pulses can possess arbitrary peak velocities and, in addition to being resistant to diffraction, can have their peak intensities and focusing positions chosen a priori. More specifically, we can choose multiple locations (spatial ranges) of space and time focalization; also, the pulse intensities can be chosen in advance. The pulsed wave solutions presented here can have very interesting applications in many different fields, such as free-space optical communications, remote sensing, medical apparatus, etc.
Particle propagation and effective space-time in gravity's rainbow
NASA Astrophysics Data System (ADS)
Garattini, Remo; Mandanici, Gianluca
2012-01-01
Based on the results obtained in our previous study on gravity’s rainbow, we determine the quantum corrections to the space-time metric for the Schwarzschild and the de Sitter background, respectively. We analyze how quantum fluctuations alter these metrics, inducing modifications on the propagation of test particles. Significantly enough, we find that quantum corrections can become relevant not only for particles approaching the Planck energy but, due to the one-loop contribution, even for low-energy particles as far as Planckian length scales are considered. We briefly compare our results with others obtained in similar studies and with the recent experimental OPERA announcement of superluminal neutrino propagation.
Scalar field equation in Robertson-Walker space-time.
NASA Astrophysics Data System (ADS)
Zecca, A.
1997-06-01
The quantization of the scalar field is reconsidered in some of its basic elements in the context of the Robertson-Walker space-time. The generalized Klein-Gordon equation is integrated by first separating variables in the usual way. The orthonormal mode solutions are determined by explicit integration of the resulting angular and radial equations and by standard properties of the time equation. The time evolution given by the standard cosmological model is briefly discussed.
Schwinger Effect in a Robertson-Walker Space-Time
NASA Astrophysics Data System (ADS)
Haouat, S.; Chekireb, R.
2012-06-01
The problem of particle creation from vacuum in a flat Robertson-Walker space-time in the presence of a varying electric field is studied. The Klein-Gordon equation is solved exactly when the scale factor is a(η) = A + B tanh(λη). The canonical method based on the Bogoliubov transformation is applied. The pair creation probability and the number density of created particles are calculated. The particular case of a radiation-dominated universe is considered, where the total probability is written as a Schwinger-like series. It is shown that the electric field amplifies gravitational particle creation.
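For the closely related, exactly solvable case in which the squared mode frequency itself has the tanh profile, \(\omega^2(\eta) = k^2 + m^2(A + B\tanh\lambda\eta)\) (the textbook Bernard-Duncan model; identifying the paper's scale-factor dependence with exactly this profile is an assumption here), the Bogoliubov coefficients take the standard closed form:

```latex
\omega_{\text{in}} = \sqrt{k^{2} + m^{2}(A - B)}, \qquad
\omega_{\text{out}} = \sqrt{k^{2} + m^{2}(A + B)}, \qquad
\omega_{\pm} = \tfrac{1}{2}\bigl(\omega_{\text{out}} \pm \omega_{\text{in}}\bigr),
\\[6pt]
|\alpha_{k}|^{2} = \frac{\sinh^{2}(\pi\omega_{+}/\lambda)}
  {\sinh(\pi\omega_{\text{in}}/\lambda)\,\sinh(\pi\omega_{\text{out}}/\lambda)}, \qquad
|\beta_{k}|^{2} = \frac{\sinh^{2}(\pi\omega_{-}/\lambda)}
  {\sinh(\pi\omega_{\text{in}}/\lambda)\,\sinh(\pi\omega_{\text{out}}/\lambda)}.
```

The number density of created pairs in mode \(k\) is \(n_k = |\beta_k|^2\), and the unitarity relation \(|\alpha_k|^2 - |\beta_k|^2 = 1\) serves as a consistency check on any such calculation.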
Kinematical properties of topologically nontrivial models of space-time
NASA Astrophysics Data System (ADS)
Konstantinov, M. Y.
1992-12-01
Using simple geometric models of wormholes as examples, we analyze the influence of the boundary conditions, which arise as a result of sewing together the inner and outer spaces, on the causal structure of space-time and clock synchronization. It is shown that the relativity principle cannot be applied to the motion of the wormhole in the outer space. We demonstrate that it is impossible to dynamically transform a wormhole into a time machine. It is noted that the considered models are counterexamples to a number of statements concerning causality violation.
Time adaptive variational integrators: A space-time geodesic approach
NASA Astrophysics Data System (ADS)
Nair, Sujit
2012-02-01
The goal of this paper is to show that the space-time geodesic approach of classical mechanics can be used to generate a time adaptive variational integration scheme. The only assumption we make is that the Lagrangian for the system is in a separable form. The geometric structure which is preserved in the integration scheme is made explicit, and the algorithm is illustrated with simulations for a compact case, a non-compact case, a chaotic system which arises as a perturbation of an integrable system, and the figure-eight solution of the three-body problem.
Mode locking with a compensated space-time astigmatism
Christov, I.P.; Stoev, V.D.; Murnane, M.M.; Kapteyn, H.C.
1995-10-15
We present what is to our knowledge the first full spatial-plus-temporal model of a self-mode-locked titanium-doped sapphire laser. The self-consistent evolution of the pulse toward steady state imposes strong space-time focusing in the crystal, where both the space and time foci are located. This combined focusing significantly improves the discrimination properties of the nonlinear resonator for shorter pulses and reduces the transient stage of pulse formation. Our theoretical results are in very good agreement with experiment. © 1995 Optical Society of America.
Dirac equation on coordinate dependent noncommutative space-time
NASA Astrophysics Data System (ADS)
Kupriyanov, V. G.
2014-05-01
In this paper we discuss classical aspects of spinor field theory on the coordinate dependent noncommutative space-time. The noncommutative Dirac equation describing spinning particle in an external vector field and the corresponding action principle are proposed. The specific choice of a star product allows us to derive a conserved noncommutative probability current and to obtain the energy-momentum tensor for free noncommutative spinor field. Finally, we consider a free noncommutative Dirac fermion and show that if the Poisson structure is Lorentz-covariant, the standard energy-momentum dispersion relation remains valid.
Video stabilization using space-time video completion
NASA Astrophysics Data System (ADS)
Voronin, V.; Frantc, V.; Marchuk, V.; Shrayfel, I.; Gapon, N.; Agaian, S.
2016-05-01
This paper proposes a video stabilization method that uses space-time video completion for effective reconstruction of static and dynamic textures instead of frame cropping. The proposed method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. We propose to use a set of descriptors that encapsulate the information on the periodic motion of objects necessary to reconstruct missing/corrupted frames. The background is filled in by extending spatial texture synthesis techniques using a set of 3D patches. Experimental results demonstrate the effectiveness of the proposed method in the task of full-frame video stabilization.
K speckle: space-time correlation function of doubly scattered light in an imaging system.
Li, Dayan; Kelly, Damien P; Sheridan, John T
2013-05-01
The scattering of coherent monochromatic light at an optically rough surface, such as a diffuser, produces a speckle field, which is usually described by reference to its statistical properties. For example, the real and imaginary parts of a fully developed speckle field can be modeled as a random circular Gaussian process. When such a speckle field is used to illuminate a second diffuser, the statistics of the resulting doubly scattered field are in general no longer Gaussian, but rather follow a K distribution. In this paper we determine the space-time correlation function of such a doubly scattered speckle field that has been imaged by a single lens system. A space-time correlation function is derived that contains four separate terms; similar to the Gaussian case it contains an average DC term and a fluctuating AC term. However, in addition there are two terms that are related to contributions from each of the diffusers independently. We examine how our space-time correlation function varies as the diffusers are rotated at different speeds and as the point spread function of the imaging system is changed. A series of numerical simulations are used to confirm different aspects of the theoretical analysis. We then finish with a discussion of our results and some potential applications, including controlling spatial coherence and speckle reduction.
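The heavier-than-Gaussian statistics can be checked numerically. A common toy model for a doubly scattered amplitude is the pointwise product of two independent circular Gaussian fields, whose magnitude is K-distributed; this sketch uses that model, not the paper's full imaging geometry:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

def circ_gauss(n):
    """Unit-power circular complex Gaussian samples (fully developed speckle)."""
    return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

single = np.abs(circ_gauss(N)) ** 2                   # singly scattered intensity
double = np.abs(circ_gauss(N) * circ_gauss(N)) ** 2   # doubly scattered (K-like)

# Normalized second moment E[I^2]/E[I]^2: 2 for Gaussian speckle,
# 4 for the product model, reflecting its heavier tail (higher contrast)
c1 = (single ** 2).mean() / single.mean() ** 2
c2 = (double ** 2).mean() / double.mean() ** 2
print(c1, c2)  # approximately 2 and 4
```

The elevated contrast of the doubly scattered field is exactly the non-Gaussian signature whose space-time correlations the paper characterizes.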
Energy Density Associated with the Bianchi Type-II Space-Time
NASA Astrophysics Data System (ADS)
Aydogdu, O.; Salti, M.
2006-01-01
We calculate the total energy distribution (due to both matter and fields, including gravitation) associated with locally rotationally symmetric (LRS) Bianchi type-II space-times, using the Bergmann-Thomson energy-momentum complex in both general relativity and teleparallel gravity. We find that the energy density in these different gravitation theories vanishes at all times. This result is the same as that obtained by one of the present authors, who solved the problem of finding the energy-momentum in LRS Bianchi type-II space-times by using the energy-momentum complexes of Einstein and of Landau and Lifshitz. The results of this paper are also consistent with those given in the previous works of Cooperstock and Israelit, Rosen, Johri et al., Banerjee-Sen, Vargas, and Salti et al. In this paper, we perform the calculations for a non-diagonal expanding space-time to determine whether the Bergmann-Thomson energy-momentum prescription is consistent with the other formulations. (We previously considered diagonal and expanding space-time models.) Our result supports the viewpoints of Albrow and Tryon.
Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-01
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, “real-time” coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories. PMID:25633597
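The Signal Processing viewpoint described above (fix the encoders to linear functions, then derive the optimal linear decoder) can be sketched with a toy scalar-source example. All values here (number of sensors, variances, power budget) are assumed for illustration, not taken from the survey.

```python
import numpy as np

# Toy linear-coding WSN: N sensors observe a scalar source in noise, scale
# (amplify) their observations to meet a per-sensor power budget, and a
# fusion center applies the optimal linear (LMMSE) decoder.
rng = np.random.default_rng(0)
N = 5
var_s, var_v, var_w = 1.0, 0.1, 0.2   # source, sensing noise, channel noise
P = 1.0                               # per-sensor transmit power budget

a = np.sqrt(P / (var_s + var_v)) * np.ones(N)   # linear encoders

T = 20_000
s = rng.normal(0.0, np.sqrt(var_s), T)
x = s[None, :] + rng.normal(0.0, np.sqrt(var_v), (N, T))      # observations
y = a[:, None] * x + rng.normal(0.0, np.sqrt(var_w), (N, T))  # channel output

# LMMSE decoder weights w solve C_yy w = C_ys.
C_ys = var_s * a
C_yy = np.outer(a, a) * var_s + np.diag(a**2 * var_v + var_w)
w = np.linalg.solve(C_yy, C_ys)
mse = np.mean((s - w @ y) ** 2)
print(f"LMMSE distortion with {N} linearly coded sensors: {mse:.4f}")
```

The achievable distortion is then studied, as the survey notes, by parametric optimization over the encoder gains under the power constraint.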
Modeling the uncertainty associated with the observation scale of space/time natural processes
NASA Astrophysics Data System (ADS)
Lee, S.; Serre, M.
2005-12-01
In many mapping applications of spatiotemporally distributed hydrological processes, traditional space/time Geostatistics approaches have played a significant role in estimating a variable of interest at unsampled locations. Measured values are usually sparsely located over space and time because of the difficulty and cost of obtaining data. In some cases, the data for the hydrological variable of interest may have been collected at different temporal or spatial observation scales. Even though mixing data measured at different space/time scales may alleviate the sparsity of the available data, it essentially disregards the scale effect on estimation results. The importance of the scale effect must be recognized, since a variable displays different physical properties depending on the spatial or temporal scale at which it is observed. In this study we develop a mathematical framework to derive the conditional Probability Density Function (PDF) of a variable at the local scale given an observation of that variable at a larger spatial or temporal scale, which properly models the uncertainty associated with the different observation scales of space/time natural processes. The developed framework allows us to efficiently mix data observed at a variety of scales by accounting for the data uncertainty associated with each observation scale present; it therefore generates soft data that are rigorously assimilated in the Bayesian Maximum Entropy (BME) method of modern Geostatistics to increase the accuracy of the map at the scale of interest. We investigate the proposed approach with synthetic case studies involving observations of a space/time process at a variety of temporal and spatial scales. These case studies demonstrate the power of the proposed approach by leading to a set of maps with a noticeable increase in mapping accuracy over classical approaches that do not account for scale effects. Hence the proposed approach will be useful for a wide variety of mapping applications.
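The scale-conditional PDF idea can be sketched for a jointly Gaussian case: the coarse datum is a block average of fine-scale values, and the conditional mean and variance of each local value follow from the covariances. The exponential covariance model, grid, and observed value below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Conditional (soft-data) PDF of local values given one block-average datum.
def cov(h, sill=1.0, a=10.0):
    """Exponential covariance as a function of separation h."""
    return sill * np.exp(-np.abs(h) / a)

x = np.linspace(0.0, 4.0, 5)            # fine-grid points in one coarse block
m = x.size
C = cov(x[:, None] - x[None, :])        # fine-scale covariance matrix

w = np.full(m, 1.0 / m)                 # block-averaging weights
var_blk = w @ C @ w                     # Var[Z_bar]
cov_loc = C @ w                         # Cov[Z_i, Z_bar]

# For a zero-mean Gaussian field, Z_i | Z_bar = z_obs is N(mu_c, var_c).
z_obs = 0.8
mu_c = cov_loc / var_blk * z_obs
var_c = np.diag(C) - cov_loc**2 / var_blk
print("conditional means:    ", np.round(mu_c, 3))
print("conditional variances:", np.round(var_c, 3))
```

In a BME workflow, each conditional N(mu_c, var_c) would serve as soft data at the local estimation scale.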
Modeling velocity space-time correlations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2016-11-01
Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer. In this context, it is important to derive simple physics-based models. The so-called Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis is used, with an additional mean wind velocity, to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. In this presentation, first results of a comparison with large-eddy simulations will be presented, and the potential of the approach for characterizing the power-output fluctuations of wind farms will be discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
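The convolution structure stated above can be sketched numerically: the space-time correlation is the spatial correlation averaged over Gaussian random sweeping displacements (U + u)t. The Gaussian spatial correlation and all parameter values (length scale, mean speed, sweeping rms) are assumed for illustration, not fitted to any wind-farm data.

```python
import numpy as np

# Random-sweeping sketch of a velocity space-time correlation R(r, t).
L = 100.0       # integral length scale [m] (assumed)
U = 8.0         # mean advection velocity [m/s] (assumed)
sig_u = 1.5     # rms of the random sweeping velocity [m/s] (assumed)

def R_spatial(r):
    """Gaussian spatial correlation of the streamwise velocity."""
    return np.exp(-r**2 / (2.0 * L**2))

def R_spacetime(r, t, n=2001):
    """Average R_spatial(r - (U+u) t) over the Gaussian sweeping velocity u,
    i.e. the convolution of the spatial correlation with a temporal kernel."""
    u = np.linspace(-6.0 * sig_u, 6.0 * sig_u, n)
    du = u[1] - u[0]
    p = np.exp(-u**2 / (2.0 * sig_u**2)) / np.sqrt(2.0 * np.pi * sig_u**2)
    return np.sum(R_spatial(r - (U + u) * t) * p) * du

# Temporal decorrelation at a fixed point (r = 0):
for t in (0.0, 5.0, 15.0):
    print(f"t = {t:4.1f} s   R(0, t) = {R_spacetime(0.0, t):.3f}")
```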
Canonical quantum gravity on noncommutative space-time
NASA Astrophysics Data System (ADS)
Kober, Martin
2015-06-01
In this paper canonical quantum gravity on noncommutative space-time is considered. The corresponding generalized classical theory is formulated by using the Moyal star product, which enables the representation of the field quantities depending on noncommuting coordinates by generalized quantities depending on usual coordinates. But not only the classical theory has to be generalized in analogy to other field theories. Besides, the necessity arises to replace the commutator between the gravitational field operator and its canonical conjugated quantity by a corresponding generalized expression on noncommutative space-time. Accordingly the transition to the quantum theory has also to be performed in a generalized way and leads to extended representations of the quantum theoretical operators. If the generalized representations of the operators are inserted to the generalized constraints, one obtains the corresponding generalized quantum constraints including the Hamiltonian constraint as dynamical constraint. After considering quantum geometrodynamics under incorporation of a coupling to matter fields, the theory is transferred to the Ashtekar formalism. The holonomy representation of the gravitational field as it is used in loop quantum gravity opens the possibility to calculate the corresponding generalized area operator.
Space time neural networks for tether operations in space
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles
1993-01-01
A space shuttle flight scheduled for 1992 will attempt to prove the feasibility of operating tethered payloads in Earth orbit. Due to the interaction between the Earth's magnetic field and current pulsing through the tether, the tethered system may exhibit a circular transverse oscillation referred to as the 'skiprope' phenomenon. Effective damping of skiprope motion depends on rapid and accurate detection of skiprope magnitude and phase. Because of non-linear dynamic coupling, the satellite attitude behavior has characteristic oscillations during the skiprope motion. Since the satellite attitude motion has many other perturbations, the relationship between the skiprope parameters and the attitude time history is very involved and non-linear. We propose a Space-Time Neural Network implementation for filtering satellite rate gyro data to rapidly detect and predict skiprope magnitude and phase. Training and testing of the skiprope detection system will be performed using a validated Orbital Operations Simulator and Space-Time Neural Network software developed in the Software Technology Branch at NASA's Lyndon B. Johnson Space Center.
What makes space-time interactions in human vision asymmetrical?
Homma, Chizuru T; Ashida, Hiroshi
2015-01-01
The interaction of space and time affects the perception of extents: (1) the longer the exposure duration, the longer the line length is perceived to be, and vice versa; (2) the shorter the line length, the shorter the exposure duration is perceived to be. Previous studies have shown that space-time interactions in human vision are asymmetrical: spatial cognition has a larger effect on temporal cognition than vice versa (Merritt et al., 2010). What makes the interactions asymmetrical? In this study, participants were asked either to judge the exposure duration of lines that differed in length or to judge the lengths of lines presented with different exposure times; that is, to judge the task-relevant stimulus extent while the task-irrelevant stimulus extent also varied. Paired spatial and temporal tasks, in which the ranges of task-relevant and task-irrelevant stimulus values were common, were conducted. Our hypothesis was that an imbalance in the saliency of spatial and temporal information causes the asymmetrical space-time interaction. To assess saliency, task difficulty was rated: if the saliency of the relevant stimuli is high, the difficulty of the discrimination task should be low, and vice versa. The saliency of the irrelevant stimuli in one task should be reflected in the difficulty of the other task in the pair: if the saliency of the irrelevant stimuli is high, the difficulty of the paired task should be low, and vice versa. The results support our hypothesis: spatial cognition asymmetrically affected temporal cognition when the difficulty of the temporal task was significantly higher than that of the spatial task.
Space-time reference with an optical link
NASA Astrophysics Data System (ADS)
Berceau, P.; Taylor, M.; Kahn, J.; Hollberg, L.
2016-07-01
We describe a concept for realizing a high performance space-time reference using a stable atomic clock in a precisely defined orbit and synchronizing the orbiting clock to high-accuracy atomic clocks on the ground. The synchronization would be accomplished using a two-way lasercom link between ground and space. The basic approach is to take advantage of the highest-performance cold-atom atomic clocks at national standards laboratories on the ground and to transfer that performance to an orbiting clock that has good stability and that serves as a ‘frequency-flywheel’ over time-scales of a few hours. The two-way lasercom link would also provide precise range information and thus precise orbit determination. With a well-defined orbit and a synchronized clock, the satellite could serve as a high-accuracy space-time reference, providing precise time worldwide, a valuable reference frame for geodesy, and independent high-accuracy measurements of GNSS clocks. Under reasonable assumptions, a practical system would be able to deliver picosecond timing worldwide and millimeter orbit determination, and could serve as an enabling subsystem for other proposed space-gravity missions, which are briefly reviewed.
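The synchronization arithmetic behind such a two-way link can be illustrated with the generic two-way time-transfer identity (textbook relations under a symmetric-path assumption; this is not the mission's actual processing chain, and the offset, delay, and turnaround values are assumed):

```python
# Two-way time transfer: ground records t1 (transmit) and t4 (receive) on its
# clock; the satellite records t2 (receive) and t3 (transmit) on its clock.
C = 299_792_458.0                 # speed of light [m/s]

true_offset = 3.2e-9              # satellite clock ahead of ground [s]
true_delay = 2.4e-3               # one-way light time [s] (assumed geometry)

t1 = 0.0
t2 = t1 + true_delay + true_offset
t3 = t2 + 1.0e-3                  # turnaround time on the satellite
t4 = t3 + true_delay - true_offset

offset = ((t2 - t1) - (t4 - t3)) / 2.0   # symmetric path delay cancels
delay = ((t2 - t1) + (t4 - t3)) / 2.0    # clock offset cancels
print(f"recovered offset = {offset * 1e9:.2f} ns, "
      f"one-way range = {delay * C / 1e3:.1f} km")
```

The same exchange yields both quantities at once, which is why a single lasercom link can provide clock synchronization and precise ranging simultaneously.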
Brain system for mental orientation in space, time, and person
Peer, Michael; Salomon, Roy; Goldberg, Ilan; Blanke, Olaf; Arzy, Shahar
2015-01-01
Orientation is a fundamental mental function that processes the relations between the behaving self to space (places), time (events), and person (people). Behavioral and neuroimaging studies have hinted at interrelations between processing of these three domains. To unravel the neurocognitive basis of orientation, we used high-resolution 7T functional MRI as 16 subjects compared their subjective distance to different places, events, or people. Analysis at the individual-subject level revealed cortical activation related to orientation in space, time, and person in a precisely localized set of structures in the precuneus, inferior parietal, and medial frontal cortex. Comparison of orientation domains revealed a consistent order of cortical activity inside the precuneus and inferior parietal lobes, with space orientation activating posterior regions, followed anteriorly by person and then time. Core regions at the precuneus and inferior parietal lobe were activated for multiple orientation domains, suggesting also common processing for orientation across domains. The medial prefrontal cortex showed a posterior activation for time and anterior for person. Finally, the default-mode network, identified in a separate resting-state scan, was active for all orientation domains and overlapped mostly with person-orientation regions. These findings suggest that mental orientation in space, time, and person is managed by a specific brain system with a highly ordered internal organization, closely related to the default-mode network. PMID:26283353
Introducing the Dimensional Continuous Space-Time Theory
NASA Astrophysics Data System (ADS)
Martini, Luiz Cesar
2013-04-01
This article is an introduction to a new theory. The name of the theory is justified by its dimensional description of the continuous space-time of matter, energy, and empty space, which gathers all the real things that exist in the universe. The theory presents itself as a consolidation of the classical, quantum, and relativity theories. A basic equation that describes the formation of the Universe, relating time, space, matter, energy, and movement, is deduced. The four fundamental physical constants (the speed of light in empty space, the gravitational constant, Boltzmann's constant, and Planck's constant), as well as the masses of the fundamental particles, the electrical charges, the energies, empty space, and time, are also obtained from this basic equation. This theory provides a new vision of the Big Bang and of how the galaxies, stars, black holes, and planets were formed. Based on it, it is possible to have a full comprehension of the wave-particle duality, which is an intrinsic characteristic of matter and energy, to comprehend the formation of orbitals, and to derive the equations of atomic orbits. It presents a singular comprehension of the relativity of mass, length, and time. It is demonstrated that continuous space-time is three-dimensional, inelastic, and temporally instantaneous, eliminating the possibility of spatial folds, slot space, wormholes, time travel, and parallel universes. It is shown that many concepts, like dark matter and the strong forces that hypothetically keep the cohesion of atomic nucleons, are without sense.
Rolling tachyon in Anti-de Sitter space-time
NASA Astrophysics Data System (ADS)
Israël, Dan; Rabinovici, Eliezer
2007-01-01
We study the decay of the unstable D-particle in three-dimensional anti-de Sitter space-time using worldsheet boundary conformal field theory methods. We test the open string completeness conjecture in a background for which the phase space available is only field-theoretic. This could present a serious challenge to the claim. We compute the emission of closed strings in the AdS3 × S3 × T4 background from the knowledge of the exact corresponding boundary state we construct. We show that the energy stored in the brane is mainly converted into very excited long strings. The energy stored in short strings and in open string pair production is much smaller and finite for any value of the string coupling. We find no "missing energy" problem. We compare our results to those obtained for a decay in flat space-time and to a background in the presence of a linear dilaton. Some remarks on holographic aspects of the problem are made.
Relativistic space-time positioning: principles and strategies
NASA Astrophysics Data System (ADS)
Tartaglia, Angelo
2013-11-01
Starting from the description of space-time as a curved four-dimensional manifold, null Gaussian coordinate systems appropriate for relativistic positioning will be discussed. Different approaches and strategies will be reviewed, implementing the null coordinates with both continuous and pulsating electromagnetic signals. In particular, methods based on purely local measurements of proper time intervals between pulses will be expounded, and the various possible sources of uncertainty will be analyzed. As sources of pulses, both artificial and natural emitters will be considered. The latter discussion will concentrate on either radio- or X-ray-emitting pulsars, weighing advantages and drawbacks. As for artificial emitters, various solutions will be presented, from satellites orbiting the Earth to broadcasting devices carried both by spacecraft and by celestial bodies of the solar system. In general, the accuracy of the positioning is expected to be limited, besides the instabilities and drift of the sources, by the precision of the local clock, but in any case on long journeys accumulated systematic errors will tend to become dominant. The problem can be kept under control by using a high level of redundancy in the procedure for calculating the coordinates of the receiver and by mixing a number of different and complementary strategies. Finally, various possibilities for doing fundamental physics experiments by means of space-time topography techniques will be presented and discussed briefly.
Space-Time Coding Using Algebraic Number Theory for Broadband Wireless Communications
2008-05-31
order 2. (III) Robust phase unwrapping, the Chinese Remainder Theorem (CRT), and their applications in SAR imaging of moving targets: (i) a sharpened dynamic ... for signal processing; motivated by the phase unwrapping algorithm, we then obtained a robust CRT. (iii) New SAR techniques for fast and slowly moving target imaging and location: we obtained a non-uniform antenna array synthetic aperture radar (NUAA-SAR), in which an antenna array is arranged
An Insight into Space-Time Block Codes using Hurwitz-Radon Families of Matrices
2008-01-01
A Systematic Approach to Design of Space-Time Block Coded MIMO Systems
2006-06-01
A simulation parameter 'MonteCarlo' determines how many Monte Carlo iterations will be used for the simulation; typically, twelve Monte Carlo iterations are chosen. Representative settings from the simulation code: MonteCarlo = 1 (number of runs to be simulated); 100 total bits transmitted for each SNR value; EbNo_db = 0:5:30 (SNR values in dB); and the total-bit-errors array is reset for all SNR values at the start of each Monte Carlo iteration (for run = 1:MonteCarlo).
Many-to-Many Communications via Space-Time Network Coding (PREPRINT)
2010-01-01
are derived. In addition, an asymptotic SER approximation is also provided, which is shown to be tight at high signal-to-noise ratio (SNR). Finally, the analytical ... A matched filtering operation is applied on each of the received signals y_{j,i}, in the form of (sqrt(P_{s_j}) h*_{j,i} / N_0) y_{j,i}. Therefore, the SNR at the output
Multipoint-to-Point and Point-to-Multipoint Space-Time Network Coding
2010-03-01
Visceral leishmaniasis in the state of Sao Paulo, Brazil: spatial and space-time analysis
Cardim, Marisa Furtado Mozini; Guirado, Marluci Monteiro; Dibo, Margareth Regina; Chiaravalloti, Francisco
2016-01-01
ABSTRACT OBJECTIVE To perform both space and space-time evaluations of visceral leishmaniasis in humans in the state of Sao Paulo, Brazil. METHODS The population considered in the study comprised autochthonous cases of visceral leishmaniasis and deaths resulting from it in Sao Paulo between 1999 and 2013. The analysis considered the western region of the state as its study area. Thematic maps were created to show the dissemination of visceral leishmaniasis in humans by municipality. The spatial analysis tools Kernel and Kernel ratio were used to obtain, respectively, the distribution of cases and deaths and the distribution of incidence and mortality. Scan statistics were used to identify spatial and space-time clusters of cases and deaths. RESULTS The visceral leishmaniasis cases in humans during the studied period occurred in the western portion of Sao Paulo, and their territorial extension mainly followed the eastbound course of the Marechal Rondon highway. The incidences were characterized as two sequences of concentric ellipses of decreasing intensity: the first and more intense one had its epicenter in the municipality of Castilho (where the Marechal Rondon highway crosses the border of the state of Mato Grosso do Sul) and the second in Bauru. Mortality behaved similarly to incidence. The spatial and space-time clusters of cases coincided with the two areas of highest incidence. Both space-time clusters identified, even without coinciding in time, began three years after the first human cases were detected and had the same duration, that is, six years. CONCLUSIONS The expansion of visceral leishmaniasis in Sao Paulo has been taking place in an eastbound direction, highlighting the role of highways, especially Marechal Rondon, in this process. The space-time analysis detected that the disease occurred in cycles, in different spaces and time periods. These findings, if considered, may
Proper conformal Killing vectors in static plane symmetric space-times
NASA Astrophysics Data System (ADS)
Hussain, T.; Khan, S.; Bokhari, A. H.; Khan, G. A.
2017-04-01
Conformal Killing vectors (CKVs) in static plane symmetric space-times were recently studied by Saifullah and Yazdan, who concluded by remarking that static plane symmetric space-times do not admit any proper CKV except in the case where these space-times are conformally flat. We present some non-conformally flat static plane symmetric space-time metrics admitting proper CKVs. For these space-times, we also investigate a special type of CKVs, known as inheriting CKVs.
Extreme wave analysis in the space-time domain: from observations to applications
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Alves, Jose-Henrique; Benetazzo, Alvise; Bergamasco, Filippo; Carniel, Sandro; Chao, Yung Y.; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro
2016-04-01
To this end, analytical directional spectra that explicitly depend upon the wind forcing (e.g. Pierson-Moskowitz or JONSWAP frequency spectra, combined with a cos^2 directional distribution) have been integrated to provide kinematic and geometric parameters of the sea state as a function of the wind speed and fetch length. Then, the SWAN numerical wave model has been modified in order to compute kinematic and geometric properties of the sea state, and run under different wave-current conditions and bathymetric gradients. In doing so, it has been possible to estimate the contributions to the variation of space-time extremes due to the wind inputs, to the current speed, and to depth gradients. Weather forecasting applications consist of using spectra simulated by wave forecasting models to compute space-time extremes. In this context, we have recently implemented the space-time extremes computation (according to the second-order Fedele model) within the WAVEWATCH III numerical wave model. New output products (i.e. the maximum expected crest and wave heights) have been validated using space-time stereo-photogrammetric measurements, proving the concept that powerful tools providing space-time extreme forecasts over extended domains may be developed for applications beneficial to the marine community.
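As a sketch of how spectral moments feed an expected-maximum estimate, the snippet below integrates a Pierson-Moskowitz frequency spectrum and applies the classical time-domain Rayleigh result for the expected maximum of N crests. The wind speed, constants, and peak-frequency formula are assumed illustrative values; the space-time models discussed above extend this by accounting for the area of the observed region, which raises the expected maximum.

```python
import numpy as np

# Spectral moments of a Pierson-Moskowitz spectrum and a classical
# expected-maximum crest estimate for a 3-hour sea state.
g, alpha = 9.81, 0.0081
U = 15.0                                  # wind speed [m/s] (assumed)
fp = 0.13 * g / U                         # approximate peak frequency [Hz]

f = np.linspace(0.5 * fp, 6.0 * fp, 4000)
df = f[1] - f[0]
S = alpha * g**2 * (2 * np.pi) ** -4 * f ** -5 * np.exp(-1.25 * (fp / f) ** 4)

m0 = np.sum(S) * df                       # variance of the surface elevation
m2 = np.sum(f**2 * S) * df
Hs = 4.0 * np.sqrt(m0)                    # significant wave height [m]
Tz = np.sqrt(m0 / m2)                     # mean zero-crossing period [s]
N = 3 * 3600 / Tz                         # number of waves in a 3-h sea state

# Expected maximum of N Rayleigh-distributed crests (Gumbel correction term).
b = np.sqrt(2.0 * np.log(N))
eta_max = np.sqrt(m0) * (b + 0.5772 / b)
print(f"Hs = {Hs:.2f} m, Tz = {Tz:.2f} s, expected max crest = {eta_max:.2f} m")
```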
The application of the phase space time evolution method to electron shielding
NASA Technical Reports Server (NTRS)
Cordaro, M. C.; Zucker, M. S.
1972-01-01
A computer technique for treating the motion of charged and neutral particles, called the phase space time evolution (PSTE) method, was developed. This technique employs the computer's bookkeeping capacity to keep track of the time development of a phase space distribution of particles. The method was applied to a study of the penetration of electrons: a 1 MeV beam of electrons normally incident on a semi-infinite slab of aluminum was used. Results of the calculation were compared with Monte Carlo calculations and experimental results. Time-dependent PSTE electron penetration results for the same problem are presented.
Direct space-time observation of pulse tunneling in an electromagnetic band gap
Doiron, Serge; Hache, Alain; Winful, Herbert G.
2007-08-15
We present space-time-resolved measurements of electromagnetic pulses tunneling through a coaxial electromagnetic band gap structure. The results show that during the tunneling process the field distribution inside the barrier is an exponentially decaying standing wave whose amplitude increases and decreases as it slowly follows the temporal evolution of the input pulse. At no time is a pulse maximum found inside the barrier, and hence the transmitted peak is not the incident peak that has propagated to the exit. The results support the quasistatic interpretation of tunneling dynamics and confirm that the group delay is not the traversal time of the input pulse peak.
The Space-Time Asymmetry Research (STAR) program
NASA Astrophysics Data System (ADS)
Buchman, Sasha
Stanford University, NASA Ames, and international partners propose the Space-Time Asymmetry Research (STAR) program, a series of three Science and Technology Development Missions, which will probe the fundamental relationships between space, time, and gravity. What is the nature of space-time? Is space truly isotropic? Is the speed of light truly isotropic? If not, what is its direction and location dependency? What are the answers beyond Einstein? How will gravity and the standard model ultimately be combined? The first mission, STAR-1, will measure the absolute anisotropy of the velocity of light to one part in 10^17, derive the Kennedy-Thorndike (KT) coefficient to 7x10^-10 (a 150-fold improvement over modern ground measurements), derive the Michelson-Morley (MM) coefficient to 10^-11 (confirming the ground measurements), and derive the coefficients of Lorentz violation in the Standard Model Extension (SME) in the range 7x10^-17 to 10^-13 (an order of magnitude improvement over ground measurements). The follow-on missions will achieve a factor of 100 higher sensitivities. The core instruments are high-stability optical cavities and high-accuracy gas spectroscopy frequency standards using the "NICE-OHMS" technique. STAR-1 is accomplished with a fully redundant instrument flown on a standard-bus, spin-stabilized spacecraft with a mission lifetime of two years. Spacecraft and instrument have a total mass of less than 180 kg and consume less than 200 W of power. STAR-1 would launch in 2015 as a secondary payload into a 650 km sun-synchronous orbit. We describe the STAR-1 mission in detail and the STAR series in general, with a focus on how each mission will build on the development and success of the previous missions, methodically enhancing both the capabilities of the STAR instrument suite and our understanding of this important field. By coupling state-of-the-art scientific instrumentation with proven and cost-effective small satellite technology in an environment
On the relative energy associated with space-times of diagonal metrics
NASA Astrophysics Data System (ADS)
Korunur, Murat; Salti, Mustafa; Havare, Ali
2007-05-01
In order to evaluate the energy distribution (due to matter and fields including gravitation) associated with a space-time model of generalized diagonal metric, we consider the Einstein, Bergmann--Thomson and Landau--Lifshitz energy and/or momentum definitions both in Einstein's theory of general relativity and the teleparallel gravity (the tetrad theory of gravitation). We find same energy distribution using Einstein and Bergmann--Thomson formulations, but we also find that the energy--momentum prescription of Landau--Lifshitz disagree in general with these definitions. We also give eight different well-known space-time models as examples, and considering these models and using our results, we calculate the energy distributions associated with them. Furthermore, we show that for the Bianchi Type-I models all the formulations give the same result. This result agrees with the previous works of Cooperstock--Israelit, Rosen, Johri et al, Banerjee--Sen, Xulu, Vargas and Salti et al and supports the viewpoints of Albrow and Tryon.
He, Liang; Sieberer, Lukas M; Diehl, Sebastian
2017-02-24
We find a first-order transition driven by the strength of nonequilibrium conditions in one-dimensional driven open condensates. Associated with this transition is a new stable nonequilibrium phase, space-time vortex turbulence, whose vortex density and quasiparticle distribution show strongly nonthermal behavior. Below the transition, we identify a new time scale associated with noise-activated unbound space-time vortices, beyond which the temporal coherence function changes from a Kardar-Parisi-Zhang-type subexponential decay to a disordered exponential decay. Experimental realization of the nonequilibrium vortex-turbulent phase is facilitated in driven open condensates with a large diffusion rate.
A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers
NASA Astrophysics Data System (ADS)
Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair
We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple-access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays with multiple elements. To estimate the higher-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with multiple-access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be detected naturally by path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially reduces computational complexity at the expense of only slight performance degradation.
A space-time discretization procedure for wave propagation problems
NASA Technical Reports Server (NTRS)
Davis, Sanford
1989-01-01
Higher-order compact algorithms are developed for the numerical simulation of wave propagation by using the concept of a discrete dispersion relation. The dispersion relation is the imprint of any linear operator in space-time. The discrete dispersion relation is derived from the continuous dispersion relation by examining the process by which locally plane waves propagate through a chosen grid. The exponential structure of the discrete dispersion relation suggests an efficient splitting of convective and diffusive terms for dissipative waves. Fourth- and eighth-order convection schemes are examined that involve only three or five spatial grid points. These algorithms are subject to the same restrictions that govern the use of dispersion relations in the construction of asymptotic expansions to nonlinear evolution equations. A new eighth-order scheme is developed that is exact for Courant numbers of 1, 2, 3, and 4. Examples are given of a pulse and a step wave with a small amount of physical diffusion.
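The discrete-dispersion-relation idea can be illustrated on the simplest possible case. The sketch below is not the paper's compact scheme: it substitutes a plane wave into first-order upwind advection (a stencil and parameter values chosen here purely for illustration) and extracts the numerical frequency ω(k), whose real part approximates the exact relation ω = ck and whose negative imaginary part measures numerical dissipation.

```python
import cmath

def upwind_amplification(k, c, dx, dt):
    """Amplification factor g = exp(-i*w*dt) of first-order upwind advection."""
    nu = c * dt / dx                       # Courant number
    return 1.0 - nu * (1.0 - cmath.exp(-1j * k * dx))

def discrete_omega(k, c, dx, dt):
    """Numerical frequency w(k) recovered from g = exp(-i*w*dt)."""
    g = upwind_amplification(k, c, dx, dt)
    return 1j * cmath.log(g) / dt

c, dx, dt = 1.0, 0.1, 0.05                 # Courant number nu = 0.5
k = 0.3                                    # a well-resolved wavenumber
w = discrete_omega(k, c, dx, dt)
print(w.real, w.imag)                      # Re(w) ~ c*k; Im(w) < 0 (dissipation)
```

At Courant number 1 the upwind amplification factor reduces to exp(-i*k*dx), i.e. the scheme is exact, a low-order analogue of the exactness at integer Courant numbers noted in the abstract.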
Entanglement, space-time and the Mayer-Vietoris theorem
NASA Astrophysics Data System (ADS)
Patrascu, Andrei T.
2017-06-01
Entanglement appears to be a fundamental building block of quantum gravity, leading to new principles underlying the nature of quantum space-time. One such principle is the ER-EPR duality. While supported by our present intuition, a proof is far from obvious. In this article I present a first step towards such a proof, originating in what is known to algebraic topologists as the Mayer-Vietoris theorem. The main result of this work is the re-interpretation, in terms of quantum information theory, of the various morphisms arising when the Mayer-Vietoris theorem is used to assemble a torus-like topology from more basic subspaces of the torus, resulting in a quantum entangler gate (Hadamard and c-NOT).
The twistor approach to space-time structures
NASA Astrophysics Data System (ADS)
Penrose, Roger
2005-11-01
An outline of twistor theory is presented. Initial motivations (from 1963) are given for this type of non-local geometry, as an intended scheme for unifying quantum theory and space-time structure. Basic twistor geometry and algebra is exhibited, and it is shown that this provides a complex-manifold description of classical (spinning) massless particles. Simple quantum commutation rules lead to a concise representation of massless particle wavefunctions, in terms of contour integrals or (more profoundly) holomorphic 1st cohomology. Non-linear versions give elegant representations of anti-self-dual Einstein (or Yang-Mills) fields, describing left-handed non-linear gravitons (or Yang-Mills particles). A brief outline of the current status of the "googly problem" is provided, whereby the right-handed particles would also be incorporated.
Space-time CFTs from the Riemann sphere
NASA Astrophysics Data System (ADS)
Adamo, Tim; Monteiro, Ricardo; Paulos, Miguel F.
2017-08-01
We consider two-dimensional chiral, first-order conformal field theories governing maps from the Riemann sphere to the projective light cone inside Minkowski space — the natural setting for describing conformal field theories in two fewer dimensions. These theories have a SL(2) algebra of local bosonic constraints which can be supplemented by additional fermionic constraints depending on the matter content of the theory. By computing the BRST charge associated with gauge fixing these constraints, we find anomalies which vanish for specific target space dimensions. These critical dimensions coincide precisely with those for which (biadjoint) cubic scalar theory, gauge theory and gravity are classically conformally invariant. Furthermore, the BRST cohomology of each theory contains vertex operators for the full conformal multiplets of single field insertions in each of these space-time CFTs. We give a prescription for the computation of three-point functions, and compare our formalism with the scattering equations approach to on-shell amplitudes.
Multipole structure of current vectors in curved space-time
NASA Astrophysics Data System (ADS)
Harte, Abraham I.
2007-01-01
A method is presented which allows the exact construction of conserved (i.e., divergence-free) current vectors from appropriate sets of multipole moments. Physically, such objects may be taken to represent the flux of particles or electric charge inside some classical extended body. Several applications are discussed. In particular, it is shown how to easily write down the class of all smooth and spatially bounded currents with a given total charge. This implicitly provides restrictions on the moments arising from the smoothness of physically reasonable vector fields. We also show that requiring all of the moments to be constant in an appropriate sense is often impossible. This likely limits the applicability of the Ehlers-Rudolph-Dixon notion of quasirigid motion. A simple condition is also derived that allows currents to exist in two different space-times with identical sets of multipole moments (in a natural sense).
Fermion wave-mechanical operators in curved space-time
Cocke, W.J.; Lloyd-Hart, M.
1990-09-15
In the context of a general wave-mechanical formalism, we derive explicit forms for the Hamiltonian, kinetic energy, and momentum operators for a massive fermion in curved space-time. In the two-spinor representation, the scalar products of state vectors are conserved under the Dirac equation, but the time-development Hamiltonian is in general not Hermitian for a nonstatic metric. A geodesic normal coordinate system provides an economical framework in which to interpret the results. We apply the formalism to a closed Robertson-Walker metric, for which we find the eigenvalues and eigenfunctions of the kinetic energy density. The angular momentum parts turn out to be simpler than in the usual four-spinor representation, and the radial parts involve Jacobi polynomials.
Curved Space-Times by Crystallization of Liquid Fiber Bundles
NASA Astrophysics Data System (ADS)
Hélein, Frédéric; Vey, Dimitri
2017-01-01
Motivated by the search for a Hamiltonian formulation of the Einstein equations of gravity which depends only minimally on a choice of coordinates and not at all on a choice of gauge, we develop a multisymplectic formulation on the total space of the principal bundle of orthonormal frames on the 4-dimensional space-time. This leads quite naturally to a new theory which takes place on 10-dimensional manifolds. The fields are pairs ((α, ω), π), where (α, ω) is a 1-form with coefficients in the Lie algebra of the Poincaré group and π is an 8-form with coefficients in the dual of this Lie algebra. The dynamical equations derive from a simple variational principle and imply that the 10-dimensional manifold looks locally like the total space of a fiber bundle over a 4-dimensional base manifold. Moreover, this base manifold inherits a metric and a connection which are solutions of a system of Einstein-Cartan equations.
Representations of space, time, and number in neonates.
de Hevia, Maria Dolores; Izard, Véronique; Coubart, Aurélie; Spelke, Elizabeth S; Streri, Arlette
2014-04-01
A rich concept of magnitude, in its numerical, spatial, and temporal forms, is a central foundation of mathematics, science, and technology, but the origins of and developmental relations among the abstract concepts of number, space, and time are debated. Are the representations of these dimensions and their links tuned by extensive experience, or are they readily available from birth? Here, we show that, at the beginning of postnatal life, 0- to 3-day-old neonates reacted to a simultaneous increase (or decrease) in spatial extent and in duration or numerical quantity, but they did not react when the magnitudes varied in opposite directions. The findings provide evidence that representations of space, time, and number are systematically interrelated at the start of postnatal life, before acquisition of language and cultural metaphors, and before extensive experience with the natural correlations between these dimensions.
Wormhole in higher-dimensional space-time
NASA Astrophysics Data System (ADS)
Shinkai, Hisa-aki; Torii, Takashi
2015-04-01
We introduce our recent studies on wormholes, especially their stability, in higher-dimensional space-time, both in general relativity and in Gauss-Bonnet gravity. We derived the Ellis-type wormhole solution in n-dimensional general relativity and found the existence of an unstable mode in its linear perturbation analysis. We also evolved it numerically in a dual-null coordinate system and confirmed its instability. The wormhole throat changes into black hole horizons for a (relatively) positive energy input, while it changes into inflationary expansion for a (relatively) negative energy input. If we add Gauss-Bonnet terms (higher-curvature correction terms in gravity), then the wormhole tends to expand (or change into a black hole) if the coupling constant α is positive (negative), and such bifurcation of the throat horizon is observed earlier in higher dimensions.
Space-time statistics for decision support to smart farming.
Stein, A; Hoosbeek, M R; Sterk, G
1997-01-01
This paper summarizes statistical procedures which are useful for precision farming at different scales. Three topics are addressed: spatial comparison of scenarios for land use, analysis of data in the space-time domain, and sampling in space and time. The first study compares six scenarios for nitrate leaching to ground water. Disjunctive cokriging reduces the computing time by 80% without loss of accuracy. The second study analyses wind erosion during four storms in a field in Niger measured with 21 devices. We investigated the use of temporal replicates to overcome the lack of spatial data. The third study analyses the effects of sampling in space and time for soil nutrient data in a Southwest African field. We concluded that statistical procedures are indispensable for decision support to smart farming.
Space-Time Event Sparse Penalization for Magneto-/Electroencephalography
Bolstad, Andrew; Van Veen, Barry; Nowak, Robert
2009-01-01
This article presents a new spatio-temporal method for M/EEG source reconstruction based on the assumption that only a small number of events, localized in space and/or time, are responsible for the measured signal. Each space-time event is represented using a basis function expansion which reflects the most relevant (or measurable) features of the signal. This model of neural activity leads naturally to a Bayesian likelihood function which balances the model fit to the data with the complexity of the model, where the complexity is related to the number of included events. A novel Expectation-Maximization algorithm which maximizes the likelihood function is presented. The new method is shown to be effective on several MEG simulations of neurological activity as well as data from a self-paced finger tapping experiment. PMID:19457366
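The sparse-event idea in the abstract above lends itself to a compact numerical illustration. The sketch below is not the authors' Bayesian EM method: it swaps in plain orthogonal matching pursuit as a simpler greedy stand-in, with a random dictionary, sizes, and event locations all invented for the demonstration, to show the core idea of explaining a measurement with a small number of space-time basis events.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, n_events):
    """Greedy recovery of a sparse coefficient vector x with y ~ A @ x
    (orthogonal matching pursuit, a stand-in for the paper's EM scheme)."""
    residual, support = y.copy(), []
    for _ in range(n_events):
        # pick the dictionary column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares refit on the currently selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

m, n = 60, 200                            # measurements, basis functions
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
x_true = np.zeros(n)
x_true[[5, 80, 150]] = [2.0, -1.5, 1.0]   # three sparse "events"
y = A @ x_true
x_hat = omp(A, y, n_events=3)
print(np.nonzero(x_hat)[0])               # indices of the recovered events
```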
Inoue, Akira; Futakuchi, Masanobu; Yagi, Makoto; Mitsutake, Toru; Morooka, Shinichi
1995-12-01
Void fraction measurement tests for boiling water reactor (BWR) simulated nuclear fuel assemblies have been conducted using an X-ray computed tomography scanner. There are two types of fuel assemblies with respect to water rods: one fuel assembly has two water rods; the other has one large water rod. The effects of the water rods on radial void fraction distributions are measured within the fuel assemblies. The results show that the water rod type does not make a large difference in the void fraction distribution. The subchannel analysis codes COBRA/BWR and THERMIT-2 were compared with subchannel-averaged void fractions. The prediction accuracy of COBRA/BWR and THERMIT-2 for the subchannel-averaged void fraction was Δα = −3.6%, σ = 4.8% and Δα = −4.1%, σ = 4.5%, respectively, where Δα is the average of the difference between measured and calculated values. The subchannel analysis codes are highly applicable for the prediction of a two-phase flow distribution within BWR fuel assemblies.
Deconstructing events: The neural bases for space, time, and causality
Kranjec, Alexander; Cardillo, Eileen R.; Lehet, Matthew; Chatterjee, Anjan
2013-01-01
Space, time, and causality provide a natural structure for organizing our experience. These abstract categories allow us to think relationally in the most basic sense; understanding simple events require one to represent the spatial relations among objects, the relative durations of actions or movements, and links between causes and effects. The present fMRI study investigates the extent to which the brain distinguishes between these fundamental conceptual domains. Participants performed a one-back task with three conditions of interest (SPACE, TIME and CAUSALITY). Each condition required comparing relations between events in a simple verbal narrative. Depending on the condition, participants were instructed to either attend to the spatial, temporal, or causal characteristics of events, but between participants, each particular event relation appeared in all three conditions. Contrasts compared neural activity during each condition against the remaining two and revealed how thinking about events is deconstructed neurally. Space trials recruited neural areas traditionally associated with visuospatial processing, primarily bilateral frontal and occipitoparietal networks. Causality trials activated areas previously found to underlie causal thinking and thematic role assignment, such as left medial frontal, and left middle temporal gyri, respectively. Causality trials also produced activations in SMA, caudate, and cerebellum; cortical and subcortical regions associated with the perception of time at different timescales. The TIME contrast however, produced no significant effects. This pattern, indicating negative results for TIME trials, but positive effects for CAUSALITY trials in areas important for time perception, motivated additional overlap analyses to further probe relations between domains. The results of these analyses suggest a closer correspondence between time and causality than between time and space. PMID:21861674
Deconstructing events: the neural bases for space, time, and causality.
Kranjec, Alexander; Cardillo, Eileen R; Schmidt, Gwenda L; Lehet, Matthew; Chatterjee, Anjan
2012-01-01
Space, time, and causality provide a natural structure for organizing our experience. These abstract categories allow us to think relationally in the most basic sense; understanding simple events requires one to represent the spatial relations among objects, the relative durations of actions or movements, and the links between causes and effects. The present fMRI study investigates the extent to which the brain distinguishes between these fundamental conceptual domains. Participants performed a 1-back task with three conditions of interest (space, time, and causality). Each condition required comparing relations between events in a simple verbal narrative. Depending on the condition, participants were instructed to either attend to the spatial, temporal, or causal characteristics of events, but between participants each particular event relation appeared in all three conditions. Contrasts compared neural activity during each condition against the remaining two and revealed how thinking about events is deconstructed neurally. Space trials recruited neural areas traditionally associated with visuospatial processing, primarily bilateral frontal and occipitoparietal networks. Causality trials activated areas previously found to underlie causal thinking and thematic role assignment, such as left medial frontal and left middle temporal gyri, respectively. Causality trials also produced activations in SMA, caudate, and cerebellum; cortical and subcortical regions associated with the perception of time at different timescales. The time contrast, however, produced no significant effects. This pattern, indicating negative results for time trials but positive effects for causality trials in areas important for time perception, motivated additional overlap analyses to further probe relations between domains. The results of these analyses suggest a closer correspondence between time and causality than between time and space.
Spherically Symmetric Space Time with Regular de Sitter Center
NASA Astrophysics Data System (ADS)
Dymnikova, Irina
We formulate the requirements which lead to the existence of a class of globally regular solutions of the minimally coupled GR equations asymptotically de Sitter at the center.
Expanding space-time and variable vacuum energy
NASA Astrophysics Data System (ADS)
Parmeggiani, Claudio
2017-08-01
The paper describes a cosmological model which contemplates the presence of a vacuum energy varying, very slightly (now), with time. The constant part of the vacuum energy generated, some 6 Gyr ago, a deceleration/acceleration transition of the metric expansion; so now, in an aged Universe, the expansion is inexorably accelerating. The varying part of the vacuum energy is instead assumed to be responsible for an acceleration/deceleration transition which occurred about 14 Gyr ago; this transition has a dynamic origin: it is a consequence of the general relativistic Einstein-Friedmann equations. Moreover, the vacuum energy (constant and variable) is here related to the zero-point energy of some quantum fields (scalar, vector, or spinor); these fields are necessarily described in a general relativistic way: their structure depends on the space-time metric, typically non-flat. More precisely, the commutators of the (quantum field) creation/annihilation operators are here assumed to depend on the local value of the space-time metric tensor (and eventually of its curvature); furthermore, these commutators rapidly decrease for high momentum values and they reduce to the standard ones for a flat metric. In this way, the theory is "gravitationally" regularized; in particular, the zero-point (vacuum) energy density has a well-defined value and, for a non-static metric, depends on the (cosmic) time. Note that this varying vacuum energy can be negative (Fermi fields) and that a change of its sign typically leads to a minimum for the metric expansion factor (a "bounce").
Emergent space-time via a geometric renormalization method
NASA Astrophysics Data System (ADS)
Rastgoo, Saeed; Requardt, Manfred
2016-12-01
We present a purely geometric renormalization scheme for metric spaces (including uncolored graphs), which consists of a coarse graining and a rescaling operation on such spaces. The coarse graining is based on the concept of quasi-isometry, which yields a sequence of discrete coarse grained spaces each having a continuum limit under the rescaling operation. We provide criteria under which such sequences do converge within a superspace of metric spaces, or may constitute the basin of attraction of a common continuum limit, which hopefully may represent our space-time continuum. We discuss some of the properties of these coarse grained spaces as well as their continuum limits, such as scale invariance and metric similarity, and show that different layers of space-time can carry different distance functions while being homeomorphic. Important tools in this analysis are the Gromov-Hausdorff distance functional for general metric spaces and the growth degree of graphs or networks. The whole construction is in the spirit of the Wilsonian renormalization group (RG). Furthermore, we introduce a physically relevant notion of dimension on the spaces of interest in our analysis, which, e.g., for regular lattices reduces to the ordinary lattice dimension. We show that this dimension is stable under the proposed coarse graining procedure as long as the latter is sufficiently local, i.e., quasi-isometric, and discuss the conditions under which this dimension is an integer. We comment on the possibility that the limit space may turn out to be fractal in case the dimension is noninteger. At the end of the paper we briefly mention the possibility that our network carries a translocal far order that leads to the concept of wormhole spaces and a scale dependent dimension if the coarse graining procedure is no longer local.
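Two of the abstract's ingredients, the growth degree of a graph and a quasi-isometric coarse graining, can be caricatured on a finite lattice. The toy sketch below is not the paper's renormalization scheme: the grid size, radii, and the greedy ball-covering contraction are illustrative choices, used only to show that a crude quasi-isometric coarse graining leaves a growth-degree (dimension) estimate roughly stable.

```python
import math
from collections import deque

def grid_graph(n):
    """Adjacency of an n x n grid graph (a toy discrete metric space)."""
    adj = {(i, j): set() for i in range(n) for j in range(n)}
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < n and j + dj < n:
                    adj[(i, j)].add((i + di, j + dj))
                    adj[(i + di, j + dj)].add((i, j))
    return adj

def ball(adj, v, r):
    """Vertices within graph distance r of v (breadth-first search)."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == r:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def growth_degree(adj, v, r1, r2):
    """Estimate the exponent d in |B_r(v)| ~ r^d from two ball volumes."""
    return math.log(len(ball(adj, v, r2)) / len(ball(adj, v, r1))) / math.log(r2 / r1)

def coarse_grain(adj):
    """Contract a greedy covering by radius-1 balls (a crude quasi-isometry)."""
    rep = {}
    for v in adj:                          # assign each vertex to a ball centre
        if v not in rep:
            for w in ball(adj, v, 1):
                rep.setdefault(w, v)
    coarse = {c: set() for c in set(rep.values())}
    for v, nbrs in adj.items():
        for w in nbrs:
            if rep[v] != rep[w]:
                coarse[rep[v]].add(rep[w])
                coarse[rep[w]].add(rep[v])
    return coarse, rep

g = grid_graph(41)
d_fine = growth_degree(g, (20, 20), 4, 16)
cg, rep = coarse_grain(g)
d_coarse = growth_degree(cg, rep[(20, 20)], 2, 8)
print(round(d_fine, 2), round(d_coarse, 2))  # rough estimates of the lattice dimension 2
```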
Worm domains and Fefferman space-time singularities
NASA Astrophysics Data System (ADS)
Barletta, Elisabetta; Dragomir, Sorin; Peloso, Marco M.
2017-10-01
Let W be a smoothly bounded worm domain in C^2 and let A = Null(L_θ) be the set of Levi-flat points on the boundary ∂W of W. We study the relationship between the pseudohermitian geometry of the strictly pseudoconvex locus M = ∂W ∖ A and the theory of space-time singularities associated to the Fefferman metric F_θ on the total space of the canonical circle bundle S^1 → C(M) → M with projection π. Given any point (0, w0) ∈ A, we show that every lift Γ(φ) ∈ C(M), 0 ≤ φ − log|w0|^2 < π/2, of the circle Γ_{w0} : r = 2 cos[log|w0|^2 − φ] in M runs into a curvature singularity of Fefferman's space-time (C(M), F_θ). We show that Σ = π^{-1}(Γ_{w0}) is a Lorentzian real surface in (C(M), F_θ) such that the immersion ι : Σ ↪ C(M) has a flat normal connection. Consequently, there is a natural isometric immersion j : O(Σ) → O(C(M), Σ) between the total spaces of the principal bundles of Lorentzian frames O(1,1) → O(Σ) → Σ and adapted Lorentzian frames O(1,1) × O(2) → O(C(M), Σ) → Σ, endowed with Schmidt metrics, descending to a map of bundle completions which maps the b-boundary of Σ into the adapted bundle boundary of C(M), i.e. j(Σ̇) ⊂ ∂_adt C(M).
Harvey, R.W.; Chan, V.S.
1996-12-31
Runaway of electrons to high energy during plasma disruptions occurs due to large induced toroidal electric fields which tend to maintain the toroidal plasma current, in accord with Lenz's law. This has been observed in many tokamaks. Within the closed flux surfaces, the bounce-averaged CQL3D Fokker-Planck code is well suited to obtain the resulting electron distributions, nonthermal contributions to electrical conductivity, and runaway rates. The time-dependent, 2D-in-momentum-space (p_∥ and p_⊥) distributions are calculated on a radial array of noncircular flux surfaces, including bounce-averaging of the Fokker-Planck equation to account for toroidal trapping effects. In the steady state, the resulting distributions represent a balance between the applied toroidal electric field, relativistic Coulomb collisions, and synchrotron radiation. The code can be run in a mode where the electrons are sourced at low velocity and run off the high-velocity edge of the computational mesh, giving runaway rates at steady state. At small minor radius, the results closely match previous results reported by Kulsrud et al. It is found that the runaway rate has a strong dependence on inverse aspect ratio ε, decreasing by a factor of ≈5 as ε increases from 0.0 to 0.3. The code can also be run with a radial diffusion and pinching term, simulating radial transport with plasma pinching to maintain a given density profile. Results show a transport reduction of runaways in the plasma center and an enhancement towards the edge due to the electrons from the plasma center. Avalanching of runaways due to a knock-on electron source is being included.
Ralchenko, Yu.; Abdallah, J. Jr.; Colgan, J.; Fontes, C. J.; Foster, M.; Zhang, H. L.; Bar-Shalom, A.; Oreg, J.; Bauche, J.; Bauche-Arnoult, C.; Bowen, C.; Faussurier, G.; Chung, H.-K.; Hansen, S. B.; Lee, R. W.; Scott, H.; Gaufridy de Dortan, F. de; Poirier, M.; Golovkin, I.; Novikov, V.
2009-09-10
We present calculations of ionization balance and radiative power losses for tungsten in magnetic fusion plasmas. The simulations were performed within the framework of the Non-Local Thermodynamic Equilibrium (NLTE) Code Comparison Workshops, utilizing several independent collisional-radiative models. The calculations generally agree with each other; however, a clear disagreement with experimental ionization distributions is found at low temperatures (below about 2 keV).
Performance evaluation of space-time-frequency spreading for MIMO OFDM-CDMA systems
NASA Astrophysics Data System (ADS)
Dahman, Haysam; Shayan, Yousef
2011-12-01
In this article, we propose a multiple-input multiple-output, orthogonal frequency-division multiplexing, code-division multiple-access (MIMO OFDM-CDMA) scheme. The main objective is to provide extra flexibility in user multiplexing and data rate adaptation, offering higher system throughput and better diversity gains. This is done by spreading over all the signal domains; i.e., space-time-frequency spreading is employed to transmit users' signals. The flexibility to spread over all three domains allows us to independently spread users' data, to maintain increased system throughput, and to achieve higher diversity gains. We derive new accurate approximations for the probability of symbol error and the signal-to-interference-plus-noise ratio (SINR) for a zero-forcing (ZF) receiver. This study and simulation results show that MIMO OFDM-CDMA is capable of achieving diversity gains significantly larger than those of the conventional 2-D CDMA OFDM and MIMO MC-CDMA schemes.
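The ZF-receiver part of such an analysis rests on a standard closed form that is easy to check numerically. The sketch below is the generic narrowband MIMO result, not the article's OFDM-CDMA derivation, and the antenna counts and noise level are arbitrary: the per-stream post-ZF SNR is Es / (N0 · [(HᴴH)⁻¹]_kk), which is compared against a Monte Carlo measurement of the equalizer's noise amplification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative setup: 2 transmit streams, 4 receive antennas,
# unit symbol energy Es = 1, noise power N0 = 0.1, Rayleigh channel.
nt, nr, n0 = 2, 4, 0.1
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# Closed-form post-ZF SNR per stream: 1 / (N0 * [(H^H H)^{-1}]_kk).
G = np.linalg.inv(H.conj().T @ H)
snr_formula = 1.0 / (n0 * np.real(np.diag(G)))

# Monte Carlo check: pass pure noise through the ZF equalizer W = pinv(H)
# and measure the per-stream amplified noise power.
W = np.linalg.pinv(H)
noise = np.sqrt(n0 / 2) * (rng.standard_normal((nr, 200000))
                           + 1j * rng.standard_normal((nr, 200000)))
noise_power = np.mean(np.abs(W @ noise) ** 2, axis=1)
snr_mc = 1.0 / noise_power
print(snr_formula, snr_mc)
```

The agreement follows from W Wᴴ = (HᴴH)⁻¹ for the ZF (pseudo-inverse) equalizer; SER approximations for ZF are then typically built on the distribution of these per-stream SNRs.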
Coherent states for FLRW space-times in loop quantum gravity
Magliaro, Elena; Perini, Claudio; Marciano, Antonino
2011-02-15
We construct a class of coherent spin-network states that capture properties of curved space-times of the Friedmann-Lemaître-Robertson-Walker type on which they are peaked. The data coded by a coherent state are associated to a cellular decomposition of a spatial (t = const) section with a dual graph given by the complete five-vertex graph, though the construction can be easily generalized to other graphs. The labels of coherent states are complex SL(2,C) variables, one for each link of the graph, and are computed through a smearing process starting from a continuum extrinsic and intrinsic geometry of the canonical surface. The construction covers both Euclidean and Lorentzian signatures; in the Euclidean case and in the limit of flat space we reproduce the simplicial 4-simplex semiclassical states used in spin foams.
Lovejoy, S; de Lima, M I P
2015-07-01
Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents: implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behavior as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long-range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global-scale precipitation products that we analyze jointly in space and in time.
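The temporal half of the simplified model, a Gaussian noise with a negative fluctuation exponent, can be sketched numerically. This is a toy spectral synthesis, not the SLIMM fitting procedure: the exponent H = -0.25 and all sizes are illustrative, not fitted to any precipitation product. A Haar-like fluctuation estimate then confirms that the rms fluctuation decreases with lag, i.e. averages converge.

```python
import numpy as np

rng = np.random.default_rng(2)

def power_law_noise(n, beta, rng):
    """Real Gaussian-like series with power spectrum E(f) ~ f^(-beta)
    (random-phase spectral synthesis; a toy stand-in for exact fGn)."""
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)       # |X(f)| ~ f^(-beta/2), zero mean
    phases = np.exp(2j * np.pi * rng.random(len(f)))
    x = np.fft.irfft(amp * phases, n)
    return x / x.std()

n = 2 ** 14
H = -0.25                                  # macroweather-like fluctuation exponent
x = power_law_noise(n, beta=1 + 2 * H, rng=rng)

def rms_fluctuation(x, lag):
    """Haar-like fluctuation: mean of the 2nd half of each window of length
    `lag` minus the mean of the 1st half; rms should scale ~ lag^H."""
    seg = x[: len(x) // lag * lag].reshape(-1, lag)
    half = lag // 2
    return np.std(seg[:, half:].mean(1) - seg[:, :half].mean(1))

f_small, f_large = rms_fluctuation(x, 8), rms_fluctuation(x, 512)
print(f_small, f_large)                    # decreasing with lag since H < 0
```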
Development of a δf code for studying the effect of non-Maxwellian velocity distributions on SRS.
NASA Astrophysics Data System (ADS)
Brunner, S.; Valeo, E.; Krommes, J. A.
2000-10-01
It has been shown that non-Maxwellian velocity distributions resulting from non-classical heating and transport in laser-fusion-type plasmas can significantly affect the linear kinetic response of such systems (B. B. Afeyan et al., Phys. Rev. Lett. 80, 2322 (1998)). In particular, for the electron plasma waves (EPW), the reduction in their Landau damping may have a strong effect on the gain of stimulated Raman scattering (SRS). We are presently developing a δf code that should enable the simulation of the fully nonlinear evolution of SRS while accurately taking account of the critical non-Maxwellian tails of the background distributions. Different techniques developed for carrying out nonlocal transport simulations (S. Brunner, E. Valeo, and J. A. Krommes, Phys. Plasmas 7, 2810 (2000)) will be used to provide the backgrounds to these microinstability simulations.
Paiva, I; Oliveira, C; Trindade, R; Portugal, L
2005-01-01
Radioactive sealed sources are in use worldwide in different fields of application. When no further use is foreseen for these sources, they become spent or disused sealed sources and are subject to a specific waste management scheme. Portugal has a Radioactive Waste Interim Storage Facility where spent or disused sealed sources are conditioned in a cement matrix inside concrete drums, following the geometrical disposition of a grid. The gamma dose values around each grid depend on the enclosed activity and radionuclides of each drum, as well as on the distribution of the drums in the various layers of the grid. This work proposes a method based on Monte Carlo simulation using the MCNPX code to estimate the best drum arrangement through the optimisation of the dose distribution in a grid. Measured dose rate values at 1 m distance from the surface of the chosen optimised grid were used to validate the corresponding computational grid model.
Code Optimization for the Choi-Williams Distribution for ELINT Applications
2009-12-01
Author: Phillip E. Pace. 98 pages. Subject terms: Choi-Williams Distribution, Signal Processing, Algorithm Optimization, C Programming, Low Probability of Intercept (LPI).
ERIC Educational Resources Information Center
Evans, Michael A.; Feenstra, Eliot; Ryon, Emily; McNeill, David
2011-01-01
Our research aims to identify children's communicative strategies when faced with the task of solving a geometric puzzle in CSCL contexts. We investigated how to identify and trace "distributed cognition" in problem-solving interactions based on discursive cohesion to objects, participants, and prior discursive content, and geometric and…
Space-time analysis of snow cover change in the Romanian Carpathians (2001-2016)
NASA Astrophysics Data System (ADS)
Micu, Dana; Cosmin Sandric, Ionut
2017-04-01
Snow cover is recognized as an essential climate variable, highly sensitive to the ongoing climate warming, which plays an important role in regulating mountain ecosystems. Evidence from the existing weather stations located above 800 m over the last 50 years points out that the climate of the Romanian Carpathians is visibly changing, showing an ongoing and consistent warming process. Quantifying and attributing the changes in snow cover on various spatial and temporal scales have great environmental and socio-economic importance for this mountain region. The study reveals the inter-seasonal changes in the timing and distribution of snow cover across the Romanian Carpathians by combining gridded snow data (CARPATCLIM dataset, 1961-2010) and remote sensing data (2001-2016) in a specific space-time assessment at the regional scale. The geostatistical approach applied in this study, based on a GIS hotspot analysis, takes advantage of all the dimensions in the datasets in order to understand the space-time trends in this climate variable at the monthly time scale. The MODIS AQUA and TERRA images available from 2001 to 2016 have been processed using ArcGIS for Desktop and the Python programming language. All the images were masked out with the Carpathians boundary. Only the pixels with snow have been retained for analysis. The regional trends in snow cover distribution and timing have been analysed using a space-time cube in ArcGIS for Desktop, in accordance with Esri documentation, applying the Mann-Kendall trend test to every location with data as an independent bin time-series test. The study also aimed to assess the location of emerging hotspots of snow cover change in the Carpathians. These hotspots have been calculated using the Getis-Ord Gi* statistic for each bin, using the Hot Spot Analysis implemented in ArcGIS for Desktop. On the regional scale, snow cover appears highly sensitive to the decreasing trends in air temperatures and land surface temperatures, combined with the decrease in
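The per-location trend testing described in this abstract can be sketched as follows; this simplified version omits the tie correction in the variance and the space-time-cube binning that the ArcGIS implementation performs, and the sample series is invented for illustration:

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and normalized Z score.

    S > 0 indicates an increasing trend, S < 0 a decreasing one; |Z| > 1.96
    is significant at the 5% level (no tie correction in this sketch).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum of signs of all pairwise differences x[j] - x[i], j > i
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance without tie correction
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A mostly shrinking snow-cover-days series gives a significant negative trend.
snow_days = [140, 133, 135, 128, 122, 118, 120, 110, 105, 99]
s, z = mann_kendall(snow_days)   # s = -41, z below the -1.96 threshold
```

In the emerging-hotspot workflow, this test is run independently on each space-time-cube location, and the Getis-Ord Gi* statistic then classifies the spatial clustering of the per-bin values.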
Observation of quantum particles on a large space-time scale
NASA Astrophysics Data System (ADS)
Landau, L. J.
1994-10-01
A quantum particle observed on a sufficiently large space-time scale can be described by means of classical particle trajectories. The joint distribution for large-scale multiple-time position and momentum measurements on a nonrelativistic quantum particle moving freely in R v is given by straight-line trajectories with probabilities determined by the initial momentum-space wavefunction. For large-scale toroidal and rectangular regions the trajectories are geodesics. In a uniform gravitational field the trajectories are parabolas. A quantum counting process on free particles is also considered and shown to converge in the large-space-time limit to a classical counting process for particles with straight-line trajectories. If the quantum particle interacts weakly with its environment, the classical particle trajectories may undergo random jumps. In the random potential model considered here, the quantum particle evolves according to a reversible unitary one-parameter group describing elastic scattering off static randomly distributed impurities (a quantum Lorentz gas). In the large-space-time weak-coupling limit a classical stochastic process is obtained with probability one and describes a classical particle moving with constant speed in straight lines between random jumps in direction. The process depends only on the ensemble value of the covariance of the random field and not on the sample field. The probability density in phase space associated with the classical stochastic process satisfies the linear Boltzmann equation for the classical Lorentz gas, which, in the limit h→0, goes over to the linear Landau equation. Our study of the quantum Lorentz gas is based on a perturbative expansion and, as in other studies of this system, the series can be controlled only for small values of the rescaled time and for Gaussian random fields. The discussion of classical particle trajectories for nonrelativistic particles on a macroscopic spacetime scale applies also to
New Space-Time Metaphors Foster New Nonlinguistic Representations.
Hendricks, Rose K; Boroditsky, Lera
2017-07-01
What is the role of language in constructing knowledge? In this article, we ask whether learning new relational language can create new ways of thinking. In Experiment 1, we taught English speakers to talk about time using new vertical linguistic metaphors, saying things like "breakfast is above dinner" or "breakfast is below dinner" (depending on condition). In Experiment 2, rather than teaching people new metaphors, we relied on the left-right representations of time that our American college student participants have already internalized through a lifetime of visuospatial experience reading and writing text from left to right. In both experiments, we asked whether the representations (whether newly acquired from metaphor or acquired over many years of visuospatial experience) are susceptible to verbal interference. We found that (a) learning new metaphors created new space-time associations that could be detected in a nonlinguistic implicit association task; (b) these newly learned representations were not susceptible to verbal interference; and (c) with respect to both verbal and visual interference, representations newly learned from linguistic metaphor behaved just like those on the left-right axis that our participants had acquired through years of visuospatial experience. Taken together, these results suggest that learning new relational language can be a powerful tool in constructing new representations and expanding our cognitive repertoire. Copyright © 2017 Cognitive Science Society, Inc.
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptive such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost.
Video painting with space-time-varying style parameters.
Kagaya, Mizuki; Brendel, William; Deng, Qingqing; Kesterson, Todd; Todorovic, Sinisa; Neill, Patrick J; Zhang, Eugene
2011-01-01
Artists use different means of stylization to control the focus on different objects in the scene. This allows them to portray complex meaning and achieve certain artistic effects. Most prior work on painterly rendering of videos, however, uses only a single painting style, with fixed global parameters, irrespective of objects and their layout in the images. This often leads to inadequate artistic control. Moreover, brush stroke orientation is typically assumed to follow an everywhere continuous directional field. In this paper, we propose a video painting system that accounts for the spatial support of objects in the images or videos, and uses this information to specify style parameters and stroke orientation for painterly rendering. Since objects occupy distinct image locations and move relatively smoothly from one video frame to another, our object-based painterly rendering approach is characterized by style parameters that coherently vary in space and time. Space-time-varying style parameters enable more artistic freedom, such as emphasis/de-emphasis, increase or decrease of contrast, exaggeration or abstraction of different objects in the scene in a temporally coherent fashion.
Horizons versus singularities in spherically symmetric space-times
Bronnikov, K. A.; Elizalde, E.; Odintsov, S. D.; Zaslavskii, O. B.
2008-09-15
We discuss different kinds of Killing horizons possible in static, spherically symmetric configurations and recently classified as 'usual', 'naked', and 'truly naked' ones depending on the near-horizon behavior of transverse tidal forces acting on an extended body. We obtain the necessary conditions for the metric to be extensible beyond a horizon in terms of an arbitrary radial coordinate and show that all truly naked horizons, as well as many of those previously characterized as naked and even usual ones, do not admit an extension and therefore must be considered as singularities. Some examples are given, showing which kinds of matter are able to create specific space-times with different kinds of horizons, including truly naked ones. Among them are fluids with negative pressure and scalar fields with a particular behavior of the potential. We also discuss horizons and singularities in Kantowski-Sachs spherically symmetric cosmologies and present horizon regularity conditions in terms of an arbitrary time coordinate and proper (synchronous) time. It turns out that horizons of orders 2 and higher occur in infinite proper times in the past or future, but one-way communication with regions beyond such horizons is still possible.
Beyond Peaceful Coexistence: The Emergence of Space, Time and Quantum
NASA Astrophysics Data System (ADS)
Licata, Ignazio
A physical theory consists of a formal structure and one or more interpretations. The latter can come out from cultural and cognitive tension going far beyond any sound operational pact between theoretical constructs and empirical data. We have no reason to doubt the consistency and efficacy of syntaxes if properly used in the right range. The formal side of Physics has grown in a strongly connected and stratified way through an almost autopoietic, self-dual procedure (let's think of the extraordinary success of the gauge theories), whereas the foundational debate is still blustering about the two pillars of such a monumental construction: general relativity (GR) and quantum mechanics (QM), which still appear to be greatly incompatible, stalled in a limited peaceful coexistence between local causality in space-time and quantum non-locality [1]. The formidable challenges waiting for us beyond the Standard Model seem to require a new semantic consistency [2] within the two theories, so as to build a new way to look at them, to work and to relate them...
Augmenting synthetic aperture radar with space time adaptive processing
NASA Astrophysics Data System (ADS)
Riedl, Michael; Potter, Lee C.; Ertin, Emre
2013-05-01
Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when Doppler shift places moving target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space time adaptive processing (STAP) while constraining the down-link data rate to that of a single antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.
A Space/Time Dynamically Adaptive Method for Multiscale Problems
NASA Astrophysics Data System (ADS)
Grenga, Temistocle; Zikoski, Zachary; Paolucci, Samuel; Valorani, Mauro
2011-11-01
Systems of partial differential equations (PDEs) describing problems that are multiscale in space and time are computationally very expensive to solve. In order to overcome the challenges related to both thin spatial layers and temporal stiffness we propose the use of a wavelet adaptive multilevel representation (WAMR) in space and an adaptive model reduction method (G-Scheme) in time. The multilevel structure of the algorithm provides a simple way to adapt computational refinements to local demands of the solution. High resolution computations are performed only in spatial regions where sharp transitions occur, while the G-Scheme is an explicit solver developed for stiff problems which is built upon a local decomposition of the dynamics into three subspaces involving slow, active and fast time scales. Only the modes in the active subspace are integrated numerically; the others are approximated asymptotically. Subsequently, the original problem not only becomes substantially smaller, but more importantly non-stiff. Combining the WAMR technique with the G-Scheme yields a time-accurate solution of a prescribed accuracy with a much smaller number of space-time degrees of freedom. While the computational scheme can be used to solve a wide class of stiff PDE problems, we will illustrate its use in the solution of the Navier-Stokes equations in reactive flows.
On the space-time evolution of a cholera epidemic
NASA Astrophysics Data System (ADS)
Bertuzzo, E.; Azaele, S.; Maritan, A.; Gatto, M.; Rodriguez-Iturbe, I.; Rinaldo, A.
2008-01-01
We study how river networks, acting as environmental corridors for pathogens, affect the spreading of cholera epidemics. Specifically, we compare epidemiological data from the real world with the space-time evolution of infected individuals predicted by a theoretical scheme based on reactive transport of infective agents through a biased network portraying actual river pathways. The data pertain to a cholera outbreak in South Africa which started in 2000 and affected in particular the KwaZulu-Natal province. The epidemic lasted for 2 years and involved about 140,000 confirmed cholera cases. Hydrological and demographic data have also been carefully considered. The theoretical tools relate to recent advances in hydrochory, migration fronts, and infection spreading and are novel in that nodal reactions describe the dynamics of cholera. Transport through network links provides the coupling of the nodal dynamics of infected people, who are assumed to reside at the nodes. This proves a realistic scheme. We argue that the theoretical scheme is remarkably capable of predicting actual outbreaks and, indeed, that network structures play a controlling role in the actual, rather anisotropic propagation of infections, in analogy to spreading of species or to migration processes that also use rivers as ecological corridors.
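A toy version of the reactive-transport idea, nodal SIR-type dynamics coupled by biased downstream transport of pathogen concentration along a chain of river nodes, can be sketched as follows; the functional forms, rate values, and one-dimensional topology are illustrative assumptions, not the authors' calibrated network model:

```python
import numpy as np

# Nodal epidemic dynamics (S = susceptible, I = infected fractions) driven by
# a pathogen concentration B that is produced by infected people and carried
# downstream along a chain of river nodes with a biased transport fraction.
n_nodes, steps = 20, 200
beta, mu, ell, dt = 0.3, 0.1, 0.2, 0.1   # infection, recovery, transport, step
S = np.ones(n_nodes)
I = np.zeros(n_nodes)
B = np.zeros(n_nodes)
I[0] = 0.01                              # outbreak seeded at the upstream node
S[0] -= 0.01

for _ in range(steps):
    force = beta * B / (1 + B)           # saturating dose-response infection rate
    dS = -force * S
    dI = force * S - mu * I
    B_new = B + dt * (I - 0.5 * B)       # pathogen shed by I, decaying in water
    # biased downstream transport: fraction ell moves to the next node
    B_new[1:] += ell * B_new[:-1]
    B_new[:-1] *= (1 - ell)
    S += dt * dS
    I += dt * dI
    B = B_new

peak_node = int(np.argmax(I))            # where infection is largest at end
```

Even this crude chain shows the anisotropic, corridor-driven propagation the abstract describes: infection intensity travels preferentially downstream from the seeded node.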
Langevin Dynamics with Space-Time Periodic Nonequilibrium Forcing
NASA Astrophysics Data System (ADS)
Joubaud, R.; Pavliotis, G. A.; Stoltz, G.
2015-01-01
We present results on the ballistic and diffusive behavior of the Langevin dynamics in a periodic potential that is driven away from equilibrium by a space-time periodic driving force, extending some of the results obtained by Collet and Martinez in (J Math Biol 56(6):765-792, 2008). In the hyperbolic scaling, a nontrivial average velocity can be observed even if the external forcing vanishes on average. More surprisingly, an average velocity in the direction opposite to the forcing may develop at the linear response level, a phenomenon called negative mobility. The diffusive limit of the non-equilibrium Langevin dynamics is also studied using the general methodology of central limit theorems for additive functionals of Markov processes. To apply this methodology, which is based on the study of appropriate Poisson equations, we extend recent results on pointwise estimates of the resolvent of the generator associated with the Langevin dynamics. Our theoretical results are illustrated by numerical simulations of a two-dimensional system.
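A minimal Euler-Maruyama simulation of such a dynamics, a one-dimensional periodic potential with a space-time periodic forcing that vanishes on average, shows how the hyperbolic-scaling average velocity q(T)/T is estimated; the potential, forcing, and parameter values are illustrative assumptions, not the paper's two-dimensional system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama integration of the underdamped Langevin dynamics
#   dq = p dt
#   dp = (-V'(q) + F(q, t) - gamma * p) dt + sqrt(2 * gamma / beta) dW
# with periodic potential V(q) = -cos(q) and space-time periodic forcing
# F(q, t) = A * cos(q - omega * t), which vanishes on average.
gamma, beta, A, omega = 1.0, 1.0, 0.5, 1.0
dt, n_steps = 1e-3, 100_000

q, p = 0.0, 0.0
for step in range(n_steps):
    t = step * dt
    force = -np.sin(q) + A * np.cos(q - omega * t) - gamma * p
    p += force * dt + np.sqrt(2 * gamma / beta * dt) * rng.standard_normal()
    q += p * dt

# Hyperbolic-scaling estimate of the average velocity.
v_avg = q / (n_steps * dt)
```

Long trajectories (and averaging over many realizations) would be needed before the sign of `v_avg` says anything about negative mobility; this sketch only shows the estimator.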
Fundamental radar properties: hidden variables in space-time
NASA Astrophysics Data System (ADS)
Gabriel, Andrew K.
2002-05-01
A derivation of the properties of pulsed radiative imaging systems is presented with examples drawn from conventional, synthetic aperture, and interferometric radar. A geometric construction of the space and time components of a radar observation yields a simple underlying structural equivalence among many of the properties of radar, including resolution, range ambiguity, azimuth aliasing, signal strength, speckle, layover, Doppler shifts, obliquity and slant range resolution, finite antenna size, atmospheric delays, and beam- and pulse-limited configurations. The same simple structure is shown to account for many interferometric properties of radar: height resolution, image decorrelation, surface velocity detection, and surface deformation measurement. What emerges is a simple, unified description of the complex phenomena of radar observations. The formulation comes from fundamental physical concepts in relativistic field theory, of which the essential elements are presented. In the terminology of physics, radar properties are projections of hidden variables-curved worldlines from a broken symmetry in Minkowski space-time-onto a time-serial receiver.
Conformal quantum mechanics and holography in noncommutative space-time
NASA Astrophysics Data System (ADS)
Gupta, Kumar S.; Harikumar, E.; Zuhair, N. S.
2017-09-01
We analyze the effects of noncommutativity in conformal quantum mechanics (CQM) using the κ-deformed space-time as a prototype. Up to the first order in the deformation parameter, the symmetry structure of the CQM algebra is preserved but the coupling in a canonical model of the CQM gets deformed. We show that the boundary conditions that ensure a unitary time evolution in the noncommutative CQM can break the scale invariance, leading to a quantum mechanical scaling anomaly. We calculate the scaling dimensions of the two and three point functions in the noncommutative CQM, which are shown to be deformed. The AdS2/CFT1 duality for the CQM suggests that the corresponding correlation functions in the holographic duals are modified. In addition, the Breitenlohner-Freedman bound also picks up a noncommutative correction. The strongly attractive regime of a canonical model of the CQM exhibits quantum instability. We show that the noncommutativity softens this singular behaviour, and its implications for the corresponding holographic duals are discussed.
Computationally efficient ASIC implementation of space-time block decoding
NASA Astrophysics Data System (ADS)
Cavus, Enver; Daneshrad, Babak
2002-12-01
In this paper, we describe a computationally efficient ASIC design that leads to a highly power- and area-efficient implementation of a space-time block decoder compared to a direct implementation of the original algorithm. Our study analyzes alternative methods of evaluating as well as implementing the previously reported maximum likelihood algorithms (Tarokh et al. 1998) for a more favorable hardware design. In our previous study (Cavus et al. 2001), after defining some intermediate variables at the algorithm level, highly computationally efficient decoding approaches, namely the sign and double-sign methods, were developed and their effectiveness illustrated for 2x2, 8x3 and 8x4 systems using BPSK, QPSK, 8-PSK, or 16-QAM modulation. In this work, alternative architectures for the decoder implementation are investigated and a low-computation implementation is proposed. The techniques applied at the higher algorithm and architectural levels lead to a substantial simplification of the hardware architecture and significantly reduced power consumption. The proposed architecture is being fabricated in a TSMC 0.18 μm process.
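The decoupling that makes low-computation ML decoding possible is easiest to see for the Alamouti 2x1 code, the simplest member of the space-time block code family of Tarokh et al. that the abstract builds on; the channel gains and noise level below are illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Alamouti 2x1 space-time block code: ML decoding reduces to linear combining
# followed by independent per-symbol decisions instead of a joint search.
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
s1, s2 = qpsk[0], qpsk[3]                  # transmitted symbol pair
h1, h2 = 0.8 - 0.4j, 0.3 + 0.9j            # channel gains, known at the receiver

# Two received samples: slot 1 sends [s1, s2], slot 2 sends [-s2*, s1*]
n1, n2 = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
r1 = h1 * s1 + h2 * s2 + n1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2

# Linear combining decouples the two symbols: s1_hat ~ (|h1|^2 + |h2|^2) * s1
g = abs(h1)**2 + abs(h2)**2
s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)

# Independent nearest-constellation-point decisions
d1 = qpsk[np.argmin(np.abs(g * qpsk - s1_hat))]
d2 = qpsk[np.argmin(np.abs(g * qpsk - s2_hat))]
```

The intermediate-variable tricks of the sign and double-sign methods go further by exploiting the structure of these decoupled metrics, which is what simplifies the hardware.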
Space-Time Analysis of Crime Patterns in Central London
NASA Astrophysics Data System (ADS)
Cheng, T.; Williams, D.
2012-07-01
Crime continues to cast a shadow over citizen well-being in big cities today, while also imposing huge economic and social costs. Timely understanding of how criminality emerges and how crime patterns evolve is crucial to anticipating crime, dealing with it when it occurs and developing public confidence in the police service. Every day, about 10,000 crime incidents are reported by citizens, recorded and geo-referenced in the London Metropolitan Police Service Computer Aided Dispatch (CAD) database. The unique nature of this dataset allows the patterns to be explored at particularly fine temporal granularity and at multiple spatial resolutions. This study provides a framework for the exploratory spatio-temporal analysis of crime patterns that combines visual inquiry tools (interactive animations, space-time cubes and map matrices) with cluster analysis (spatial-temporal scan statistics and the self-organizing map). This framework is tested on the CAD dataset for the London Borough of Camden in March 2010. Patterns of crime through space and time are discovered and the clustering methods were evaluated on their ability to facilitate the discovery and interpretation of these patterns.
Travelling waves in expanding spatially homogeneous space-times
NASA Astrophysics Data System (ADS)
Alekseev, George
2015-04-01
Some classes of the so-called ‘travelling wave’ solutions of Einstein and Einstein-Maxwell equations in general relativity and of dynamical equations for massless bosonic fields in string gravity in four and higher dimensions are presented. Similarly to the well known plane-fronted waves with parallel rays (pp-waves), these travelling wave solutions may depend on arbitrary functions of a null coordinate which determine the arbitrary profiles and polarizations of the waves. However, in contrast with pp-waves, these waves do not admit the null Killing vector fields and can exist in some curved (expanding and spatially homogeneous) background space-times, where these waves propagate in certain directions without any scattering. Mathematically, some of these classes of solutions arise as the fixed points of Kramer-Neugebauer transformations for hyperbolic integrable reductions of the above mentioned field equations or, in other cases, after imposing the ansatz that these waves do not change the part of the spatial metric transverse to the direction of wave propagation. It is worth noting that the strikingly simple forms of all the solutions presented prospectively make possible the consideration of the nonlinear interaction of these waves with the background curvature and singularities, as well as the collision of such wave pulses with solitons or with each other in the backgrounds where such travelling waves may exist.
Implicit Space-Time Conservation Element and Solution Element Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Himansu, Ananda; Wang, Xiao-Yen
1999-01-01
Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
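The amplification factors named in the closing sentences can be checked numerically; the sketch below uses the standard textbook forms for Crank-Nicolson (pure diffusion) and Leapfrog (pure advection with a spectral spatial derivative), with illustrative parameter values:

```python
import numpy as np

# Fourier-mode amplification factors (standard textbook expressions):
#   Crank-Nicolson, pure diffusion u_t = nu * u_xx:
#       g = (1 - r) / (1 + r),  with r = nu * k^2 * dt / 2
#   Leapfrog, pure advection u_t + a * u_x = 0:
#       roots of g^2 + 2j*c*g - 1 = 0,  with c = a * k * dt
nu, a, k, dt = 0.1, 1.0, 2.0, 0.05

r = nu * k**2 * dt / 2
g_cn = (1 - r) / (1 + r)           # |g_cn| < 1: dissipative, unconditionally stable

c = a * k * dt
g_lf = np.roots([1, 2j * c, -1])   # |g| = 1 for |c| <= 1: neutral, no dissipation
```

The contrast is the point of the abstract: the Leapfrog factor has unit modulus (zero numerical dissipation in the inviscid limit), while the Crank-Nicolson factor damps modes only through the physical viscosity.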
Wireless MIMO switching: distributed zero-forcing and MMSE relaying using network coding
NASA Astrophysics Data System (ADS)
Wang, Miao; Wang, Fanggang; Zhong, Zhangdui
2013-12-01
In a switching problem, a one-to-one mapping from the inputs to the outputs is conducted according to a switch pattern, i.e., a permutation matrix. In this paper, we investigate a wireless switching problem, in which a group of single-antenna relays acts together as a multiple-input-multiple-output (MIMO) switch to carry out distributed precode-and-forward. All users transmit simultaneously to the MIMO switch in the uplink, and then the MIMO switch precodes the received signals and broadcasts in the downlink. Ideally, each user could receive its desired signal from one other user with no or little interference from other users. Self-interference is allowed in the received signals, as it can be canceled when each user has the channel gain of its self-interference. We propose two distributed relaying schemes based on two widely adopted criteria, i.e., zero-forcing relaying and minimum mean square error (MMSE) relaying. For the distributed zero-forcing relaying, we further propose a message passing approach, with which the proposed zero-forcing relaying achieves significant throughput gain with little attendant overhead. We also claim that the proposed MMSE relaying achieves even larger throughput at the expense of a larger amount of message passing. Simulation results validate the throughput gains of the proposed relaying schemes.
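The two criteria are easiest to contrast in a centralized toy setting, plain zero-forcing versus MMSE equalization of a small MIMO channel; this illustrates the criteria only, not the paper's distributed precode-and-forward schemes or their message passing, and the channel model is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate BPSK symbols sent through a 4x4 MIMO channel with the two criteria.
n = 4
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
H = np.eye(n) + 0.15 * G                               # well-conditioned channel
x = np.sign(rng.standard_normal(n)).astype(complex)    # BPSK symbols (+/-1)
sigma2 = 0.01
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = H @ x + noise

# Zero-forcing: invert the channel, amplifying noise where H is weak
x_zf = np.linalg.pinv(H) @ y

# MMSE: regularize the inverse by the noise variance
x_mmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n), H.conj().T @ y)

bits_zf = np.sign(x_zf.real)
bits_mmse = np.sign(x_mmse.real)
```

At low SNR or for ill-conditioned channels the MMSE regularization term `sigma2 * np.eye(n)` is what keeps noise amplification in check, which is the throughput trade-off the abstract refers to.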
Diverse and pervasive subcellular distributions for both coding and long noncoding RNAs
Wilk, Ronit; Hu, Jack; Blotsky, Dmitry; Krause, Henry M.
2016-01-01
In a previous analysis of 2300 mRNAs via whole-mount fluorescent in situ hybridization in cellularizing Drosophila embryos, we found that 70% of the transcripts exhibited some form of subcellular localization. To see whether this prevalence is unique to early Drosophila embryos, we examined ∼8000 transcripts over the full course of embryogenesis and ∼800 transcripts in late third instar larval tissues. The numbers and varieties of new subcellular localization patterns are both striking and revealing. In the much larger cells of the third instar larva, virtually all transcripts observed showed subcellular localization in at least one tissue. We also examined the prevalence and variety of localization mechanisms for >100 long noncoding RNAs. All of these were also found to be expressed and subcellularly localized. Thus, subcellular RNA localization appears to be the norm rather than the exception for both coding and noncoding RNAs. These results, which have been annotated and made available on a recompiled database, provide a rich and unique resource for functional gene analyses, some examples of which are provided. PMID:26944682
A global conformal extension theorem for perfect fluid Bianchi space-times
Luebbe, Christian; Tod, Paul
2008-12-15
A global extension theorem is established for isotropic singularities in polytropic perfect fluid Bianchi space-times. When an extension is possible, the limiting behaviour of the physical space-time near the singularity is analysed.
Wohlin, Åsa
2015-03-21
The distribution of codons in the nearly universal genetic code is a long discussed issue. At the atomic level, the numeral series 2x² (x = 5-0) lies behind electron shells and orbitals. Numeral series appear in formulas for spectral lines of hydrogen. The question here was if some similar scheme could be found in the genetic code. A table of 24 codons was constructed (synonyms counted as one) for 20 amino acids, four of which have two different codons. An atomic mass analysis was performed, built on common isotopes. It was found that a numeral series 5 to 0 with exponent 2/3, times 10², revealed detailed congruency with codon-grouped amino acid side-chains, simultaneously with the division on atom kinds, further with main 3rd base groups, backbone chains and with codon-grouped amino acids in relation to their origin from glycolysis or the citrate cycle. Hence, it is proposed that this series in a dynamic way may have guided the selection of amino acids into codon domains. Series with simpler exponents also showed noteworthy correlations with the atomic mass distribution on main codon domains; especially the 2x²-series times a factor 16 appeared as a conceivable underlying level, both for the atomic mass and charge distribution. Furthermore, it was found that atomic mass transformations between numeral systems, possibly interpretable as dimension degree steps, connected the atomic mass of codon bases with codon-grouped amino acids and with the exponent 2/3-series in several astonishing ways. Thus, it is suggested that they may be part of a deeper reference system.
MCNP(TM) Release 6.1.1 beta: Creating and Testing the Code Distribution
Cox, Lawrence J.; Casswell, Laura
2014-06-12
This report documents the preparations for and testing of the production release of MCNP6™1.1 beta through RSICC at ORNL. It addresses tests on supported operating systems (Linux, MacOSX, Windows) with the supported compilers (Intel, Portland Group and gfortran). Verification and Validation test results are documented elsewhere. This report does not address in detail the overall packaging of the distribution. Specifically, it does not address the nuclear and atomic data collection, the other included software packages (MCNP5, MCNPX and MCNP6) and the collection of reference documents.
Memory integration constructs maps of space, time, and concepts.
Morton, Neal W; Sherrill, Katherine R; Preston, Alison R
2017-10-01
Recent evidence demonstrates that new events are learned in the context of their relationships to existing memories. Within the hippocampus and medial prefrontal cortex, related memories are represented by integrated codes that connect events experienced at different times and places. Integrated codes form the basis of spatial, temporal, and conceptual maps of experience. These maps represent information that goes beyond direct experience and support generalization behaviors that require knowledge be used in new ways. The degree to which an individual memory is integrated into a coherent map is determined by its spatial, temporal, and conceptual proximity to existing knowledge. Integration is observed over a wide range of scales, suggesting that memories contain information about both broad and fine-grained contexts.
A divide-and-conquer method for space-time series prediction
NASA Astrophysics Data System (ADS)
Deng, Min; Yang, Wentao; Liu, Qiliang; Zhang, Yunfei
2017-01-01
Space-time series can be partitioned into a space-time smooth component and a space-time rough component, which represent characteristics at different scales. However, most existing methods for space-time series prediction address the series as a whole and do not consider the interaction between the smooth and rough components during prediction. This can reduce prediction accuracy, because the interaction between the two components may make one of them dominant, thus masking the behavior of the other. Therefore, a divide-and-conquer method for space-time prediction is proposed in this paper. First, the observed fine-grained data are decomposed into two components: coarse-grained data and the residual terms of the fine-grained data. These two components are then modeled separately. Finally, the predicted values of the fine-grained data are obtained by integrating the predicted values of the coarse-grained data with the residual terms. Experimental results on two different groups of space-time series demonstrate the effectiveness of the divide-and-conquer method.
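A minimal sketch of the divide-and-conquer idea for a single time series: a moving average stands in for the smooth model and naive extrapolations stand in for the real component models. All names and modeling choices here are hypothetical, not the paper's.

```python
import numpy as np

def divide_and_conquer_forecast(series, window=5):
    """Hypothetical sketch: split the series into a smooth (coarse-grained)
    component and a rough residual, forecast each separately, then add the
    forecasts back together."""
    series = np.asarray(series, dtype=float)
    # Smooth component: centered moving average (window shrinks at edges).
    smooth = np.array([series[max(0, i - window):i + window + 1].mean()
                       for i in range(len(series))])
    rough = series - smooth                       # residual (rough) component
    smooth_pred = 2 * smooth[-1] - smooth[-2]     # linear trend extrapolation
    rough_pred = rough[-window:].mean()           # recent-mean persistence
    return smooth_pred + rough_pred               # integrate the two forecasts

prediction = divide_and_conquer_forecast([1, 2, 3, 4, 5, 6])
print(prediction)
```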
High-Order Space-Time Methods for Conservation Laws
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2013-01-01
Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p^2.) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.
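The ODE limit of the implicit scheme, the two-stage Radau IIA collocation method, can be sketched for a scalar linear test problem. The Butcher coefficients below are the standard Radau IIA ones; the space-time machinery of the paper is not reproduced.

```python
import numpy as np

# Butcher tableau of the two-stage Radau IIA collocation method (order 3),
# the ODE integrator the abstract says the implicit scheme reduces to.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def radau_iia_step(lam, y, h):
    """One step for the scalar linear test problem y' = lam*y.
    The stage slopes satisfy k = lam*(y + h*A@k), a 2x2 linear system."""
    k = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    return y + h * (b @ k)

# Integrate y' = -y, y(0) = 1 up to t = 1; exact answer is exp(-1).
y, h = 1.0, 0.1
for _ in range(10):
    y = radau_iia_step(-1.0, y, h)
print(y)  # close to exp(-1) ≈ 0.3679
```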
Space-time ambiguity functions for electronically scanned ISR applications
NASA Astrophysics Data System (ADS)
Swoboda, John; Semeter, Joshua; Erickson, Philip
2015-05-01
Electronically steerable array (ESA) technology has recently been applied to incoherent scatter radar (ISR) systems. These arrays allow for pulse-to-pulse steering of the antenna beam to collect data in a three-dimensional region. This is in direct contrast to dish-based antennas, where ISR acquisition is limited at any one time to observations in a two-dimensional slice. This new paradigm allows for more flexibility in the measurement of ionospheric plasma parameters. Multiple ESA-based ISR systems currently operate in the high-latitude region, where the ionosphere is highly variable in both space and time. Because of the highly dynamic nature of the ionosphere in this region, it is important to differentiate between measurement-induced artifacts and the true behavior of the plasma. Often, three-dimensional ISR data produced by ESA systems are fitted in a spherical coordinate space and the parameters are then interpolated to a Cartesian grid, potentially introducing error and impacting the reconstructions of the plasma parameters. To take advantage of the new flexibility inherent in ESA systems, we present a new way of analyzing ISR observations through use of the space-time ambiguity function. This new measurement ambiguity function allows us to pose the ISR observational problem as a linear inverse problem whose goal is the estimation of the time-domain lags of the intrinsic plasma autocorrelation function used for parameter fitting. The framework allows us to explore the impact of nonuniformity in plasma parameters in both time and space. We discuss examples of possible artifacts in high-latitude situations and possible ways of reducing them and improving the quality of data products from electronically steerable ISRs.
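A toy version of such a regularized linear inversion, with a generic smoothing kernel standing in for the actual space-time ambiguity operator. All quantities here are hypothetical stand-ins, not the paper's forward model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x_true = np.exp(-((np.arange(n) - 25.0) ** 2) / 40.0)   # smooth "plasma" profile
# Toy forward operator: a row-normalized Gaussian smoothing kernel.
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-((i - j) ** 2) / 8.0)
A /= A.sum(axis=1, keepdims=True)
y = A @ x_true + 0.01 * rng.standard_normal(n)           # smeared, noisy data

lam = 1e-3                                               # Tikhonov weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```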
Space-time dynamics estimation from space mission tracking data
NASA Astrophysics Data System (ADS)
Dirkx, D.; Noomen, R.; Visser, P. N. A. M.; Gurvits, L. I.; Vermeersen, L. L. A.
2016-03-01
Aims: Many physical parameters that can be estimated from space mission tracking data influence both the translational dynamics and proper time rates of observers. These different proper time rates cause a variability of the time transfer observable beyond that caused by their translational (and rotational) dynamics. With the near-future implementation of transponder laser ranging, these effects will become increasingly important, and will require a re-evaluation of the common data analysis practice of using a priori time ephemerides, which is the goal of this paper. Methods: We develop a framework for the simultaneous estimation of the initial translational state and the initial proper time of an observer, with the goal of facilitating robust tracking data analysis from next-generation space missions carrying highly accurate clocks and tracking equipment. Using our approach, the influence of physical parameters on both translational and time dynamics are considered at the same level in the analysis, and mutual correlations between the signatures of the two are automatically identified. We perform a covariance analysis using our proposed method with simulated laser data from Earth-based stations to both a Mars and Mercury lander. Results: Using four years of tracking data for the Mars lander simulations, we find a difference between our results using the simultaneous space-time dynamics estimation and the classical analysis technique (with an a priori time ephemeris) of around 0.1% in formal errors and correlation coefficients. For a Mercury lander this rises to around 1% for a one-month mission and 10% for a four-year mission. By means of Monte Carlo simulations, we find that using an a priori time ephemeris of representative accuracy will result in estimation errors that are orders of magnitude above the formal error when processing highly accurate laser time transfer data.
Space-time structure of long ocean swell fields
NASA Astrophysics Data System (ADS)
Delpey, Matthias T.; Ardhuin, Fabrice; Collard, Fabrice; Chapron, Bertrand
2010-12-01
The space-time structure of long-period ocean swell fields is investigated, with particular attention given to features in the direction orthogonal to the propagation direction. This study combines space-borne synthetic aperture radar (SAR) data with numerical model hindcasts and time series recorded by in situ instruments. In each data set the swell field is defined by a common storm source. The correlation of swell height time series is very high along a single great circle path, with a time shift given by the deep water dispersion relation of the dominant swells. This correlation is also high for locations situated on different great circles in entire ocean basins. Given the Earth radius R, we define the distance from the source Rα and the transversal angle β so that α and β would equal the colatitude and longitude for a storm centered on the North Pole. Outside of land influence, the swell height field at time t, Hss(α, β, t), is well approximated by a function Hss,0(t - Rα/Cg)/? times another function r2(β), where Cg is a representative group speed. Here r2(β) derived from SAR data is very broad, with a width at half maximum that is larger than 70°, and varies significantly from storm to storm. Land shadows introduce further modifications, so that in general r2 is a function of β and α. This separation of variables and the smoothness of the Hss field allow the estimation of the full field of Hss from sparse measurements, such as wave mode SAR data, combined with one time series, such as that provided by a single buoy. A first crude estimation of a synthetic Hss field based on this principle already shows that swell hindcasts and forecasts can be improved by assimilating such synthetic observations.
Holism and life manifestations: molecular and space-time biology.
Krecek, J
2010-01-01
Appeals by philosophers to look for new concepts in the sciences have met with a weak response. Limited attention is paid to the relation between synthetic and analytic approaches in solving problems of biology. An attempt is presented to open a discussion on a possible role for holism. The term "life manifestations" is used in accordance with phenomenology. Multicellular creatures maintain a milieu intérieur to keep an aqueous milieu intracellulaire in order to transform the energy of nutrients into a form utilizable for driving cellular life manifestations. The milieu intérieur enables these cellular manifestations to be integrated into the life manifestations of the whole multicellular creature. This integration depends on the uniqueness and uniformity of the genome of cells, and on their mutual recognition and adherence. The processes of ontogenetic development represent the natural mode of integration of cellular life manifestations. Functional systems of multicellular creatures are established by the organization of integrable cells through a wide range of developmental processes. Starting from the division of the zygote, the new being displays all the properties of a whole creature, although its life manifestations vary. Therefore, the whole organism is not only more than its parts, as holism supposes, but also more than the developmental stages of its life manifestations. Implicitly, the units of the whole multicellular creature are molecular and developmental events rather than the cells per se. Holism, bearing in mind the existence of molecular and space-time biology, could become a guide in the search for a new way of combining analytical and synthetic reasoning in biology.
Understanding human activity patterns based on space-time-semantics
NASA Astrophysics Data System (ADS)
Huang, Wei; Li, Songnian
2016-11-01
Understanding human activity patterns plays a key role in various applications in an urban environment, such as transportation planning and traffic forecasting, urban planning, public health and safety, and emergency response. Most existing studies in modeling human activity patterns mainly focus on spatiotemporal dimensions and lack consideration of the underlying semantic context. In fact, what people do and discuss at certain places, which indicates what is happening at those places, cannot be simply neglected, because it is the root of human mobility patterns. We believe that the geo-tagged semantic context, representing what individuals do and discuss at a place and at a specific time, drives the formation of specific human activity patterns. In this paper, we aim to model human activity patterns not only based on space and time but also with consideration of the associated semantics, and attempt to prove the hypothesis that similar mobility patterns may have different motivations. We develop a spatiotemporal-semantic model to quantitatively express human activity patterns based on topic models, leading to an analysis of space, time and semantics. A case study is conducted using Twitter data in Toronto based on our model. By computing the similarities between users in terms of spatiotemporal pattern, semantic pattern and spatiotemporal-semantic pattern, we find that only a small number of users (2.72%) have very similar activity patterns, while the majority (87.14%) show different activity patterns (i.e., similar spatiotemporal patterns and different semantic patterns, similar semantic patterns and different spatiotemporal patterns, or different in both). The share of users with very similar activity patterns decreases by 56.41% once semantic information is incorporated into the corresponding spatiotemporal patterns, which quantitatively supports the hypothesis.
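The pattern comparison step can be illustrated with cosine similarity between activity vectors. The numbers below are hypothetical; the paper works with topic-model representations of real Twitter data.

```python
import numpy as np

def pattern_similarity(u, v):
    """Cosine similarity between two users' activity vectors (for example,
    topic proportions over space-time bins; names and numbers here are
    illustrative, not the paper's data)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two users visiting similar space-time bins but discussing different topics:
spatiotemporal_sim = pattern_similarity([5, 3, 0, 1], [4, 3, 1, 1])
semantic_sim = pattern_similarity([0.9, 0.1, 0.0], [0.1, 0.1, 0.8])
print(round(spatiotemporal_sim, 2), round(semantic_sim, 2))  # high vs. low
```

Users like this pair would count as "similar spatiotemporal patterns and different semantic patterns" in the abstract's taxonomy.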
Space-time LAI variability in Northern Puglia (Italy) from SPOT VGT data.
Balacco, Gabriella; Figorito, Benedetto; Tarantino, Eufemia; Gioia, Andrea; Iacobellis, Vito
2015-07-01
The vegetation space-time variability during 1999-2010 in the North of the Apulian region (Southern Italy) was analysed using SPOT VEGETATION (VGT) sensor data. Three bands of VEGETATION (RED, NIR and SWIR) were used to implement the vegetation index named reduced simple ratio (RSR) to derive leaf area index (LAI). The monthly average LAI is an indicator of biomass and canopy cover, while the difference between the annual maximum and minimum LAI is an indicator of annual leaf turnover. The space-time distribution of LAI at the catchment scale was analysed over the examined period to detect the consistency of vegetation dynamics in the study area. A diffuse increase of LAI was observed in the examined years that cannot be directly explained only in terms of increasing water availability. Thus, in order to explain such a general behaviour in terms of climatic factors, the analysis was performed upon stratification of land cover classes, focusing on the most widespread species: forest and wheat. An interesting ascending-descending behaviour was observed in the relationship between inter-annual increments of maximum LAI and rainfall, and in particular, a strong negative correlation was found when the rainfall amount in January and February exceeded a critical threshold of about 100 mm.
World-sheet stability, space-time horizons and cosmic censorship
NASA Astrophysics Data System (ADS)
Pollock, M. D.
2014-11-01
Previously, we have analyzed the stability and supersymmetry of the heterotic superstring world sheet in the background Friedmann space-time generated by a perfect fluid with energy density ρ and pressure p = (γ - 1)ρ. The world sheet is tachyon-free within the range 2/3 ≤ γ ≤ ∞, and globally supersymmetric in the Minkowski-space limit ρ = ∞, or when γ = 2/3, which is the equation of state for stringy matter and corresponds to the Milne universe, which expands along its apparent horizon. Here, this result is discussed in greater detail, particularly with regard to the question of horizon structure, cosmic censorship, the TCP theorem, and local world-sheet supersymmetry. Also, we consider the symmetric background space-time generated by a static, electrically (or magnetically) charged matter distribution of total mass M and charge Q, and containing a radially directed macroscopic string. We find that the effective string mass m satisfies the inequality m^2 ≥ 0, signifying stability, provided that M ≥ |Q|, which corresponds to the Reissner-Nordström black hole. The case of marginal string stability, m^2 = 0, is the extremal solution M = |Q|, which was shown by Gibbons and Hull to be supersymmetric, and has a marginal horizon. If M < |Q|, the horizon disappears, m^2 < 0, and the string becomes unstable.
Adjusting for population shifts and covariates in space-time interaction tests.
Schmertmann, Carl P
2015-09-01
Statistical tests for epidemic patterns use measures of space-time event clustering, and look for high levels of clustering that are unlikely to appear randomly if events are independent. Standard approaches, such as Knox's (1964, Applied Statistics 13, 25-29) test, are biased when the spatial distribution of population changes over time, or when there is space-time interaction in important background variables. In particular, the Knox test is too sensitive to coincidental event clusters in such circumstances, and too likely to raise false alarms. Kulldorff and Hjalmars (1999, Biometrics 55, 544-552) proposed a variant of Knox's test to control for bias caused by population shifts. In this article, I demonstrate that their test is also generally biased, in an unknown direction. I suggest an alternative approach that accounts for exposure shifts while also conditioning on the observed spatial and temporal margins of event counts, as in the original Knox test. The new approach uses Metropolis sampling of permutations, and is unbiased under more general conditions. I demonstrate how the new method can also include controls for the clustering effects of covariates. © 2015, The International Biometric Society.
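The underlying Knox statistic is simple to compute. A plain sketch follows; the Metropolis permutation machinery the article proposes is not included.

```python
import numpy as np

def knox_statistic(x, y, t, ds, dt):
    """Knox space-time interaction statistic: the number of event pairs
    within distance ds in space AND within dt in time."""
    x, y, t = (np.asarray(a, dtype=float) for a in (x, y, t))
    count = 0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if np.hypot(x[i] - x[j], y[i] - y[j]) <= ds and abs(t[i] - t[j]) <= dt:
                count += 1
    return count

# Three events: two coincide closely in space and time, one is far off in both.
pairs = knox_statistic([0, 0.1, 50], [0, 0.1, 50], [0, 1, 100], ds=1.0, dt=2.0)
print(pairs)  # 1
```

A test then compares the observed count against its distribution under permutations of the time labels, which is exactly where the biases discussed in the abstract arise.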
Exploring space-time structure of human mobility in urban space
NASA Astrophysics Data System (ADS)
Sun, J. B.; Yuan, J.; Wang, Y.; Si, H. B.; Shan, X. M.
2011-03-01
Understanding human mobility in urban space benefits the planning and provision of municipal facilities and services. Due to the high penetration of cell phones, mobile cellular networks provide information on urban dynamics with a large spatial extent and continuous temporal coverage in comparison with traditional approaches. The original data investigated in this paper were collected by cellular networks in a southern city of China, recording the population distribution by dividing the city into thousands of pixels. The space-time structure of urban dynamics is explored by applying Principal Component Analysis (PCA) to the original data, from temporal and spatial perspectives between which there is a dual relation. Based on the results of the analysis, we have discovered four underlying rules of urban dynamics: low intrinsic dimensionality, three categories of common patterns, dominance of periodic trends, and temporal stability. This implies that the space-time structure can be captured well by remarkably few predictable temporal or spatial periodic patterns, and that the structure unearthed by PCA evolves stably over time. All these features play a critical role in applications of forecasting and anomaly detection.
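The PCA step can be sketched on synthetic data: a shared daily rhythm across pixels yields the "low intrinsic dimensionality" the abstract reports. All numbers below are illustrative, not the study's cellular-network data.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 7)                      # one week of hourly counts
daily = np.sin(2 * np.pi * hours / 24)         # shared daily rhythm
# 200 "pixels", each carrying the rhythm at a different amplitude, plus noise.
pixels = np.outer(daily, rng.uniform(0.5, 2.0, size=200))
pixels += 0.05 * rng.standard_normal(pixels.shape)

X = pixels - pixels.mean(axis=0)               # center each pixel's series
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)                # variance explained per component
print(f"first component explains {explained[0]:.1%} of the variance")
```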
NASA Astrophysics Data System (ADS)
Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.
2007-07-01
Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). The resulting Monte Carlo dose distributions are generally found to be in good correspondence with the calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or to include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation-induced activity.
Implementation of polarization-coded free-space BB84 quantum key distribution
NASA Astrophysics Data System (ADS)
Kim, Y.-S.; Jeong, Y.-C.; Kim, Y.-H.
2008-06-01
We report on the implementation of a Bennett-Brassard 1984 quantum key distribution protocol over a free-space optical path on an optical table. Attenuated laser pulses and Pockels cells driven by a pseudorandom number generator are employed to prepare polarization-encoded photons. A sifted-key generation rate of 23.6 kbit/s and a quantum bit error rate (QBER) of 3% were demonstrated at an average photon number per pulse of μ = 0.16. This QBER is sufficiently low to extract final secret keys from the shared sifted keys via error correction and privacy amplification. We also tested the long-distance capability of our system by adding optical losses to the quantum channel and found that the QBER remains the same regardless of the loss.
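The sifting and QBER logic of BB84 can be illustrated with a toy simulation. The parameters below are illustrative, not the experiment's hardware figures.

```python
import random

def bb84_sift(n_pulses, error_prob=0.03, seed=1):
    """Toy BB84 sifting: random bit/basis choices on Alice's side, random
    measurement bases on Bob's; mismatched bases are discarded, and each
    kept bit flips with probability error_prob (a stand-in channel QBER)."""
    rng = random.Random(seed)
    sifted = errors = 0
    for _ in range(n_pulses):
        alice_bit, alice_basis = rng.randint(0, 1), rng.randint(0, 1)
        bob_basis = rng.randint(0, 1)
        if alice_basis != bob_basis:
            continue                          # removed during sifting
        bob_bit = alice_bit ^ (rng.random() < error_prob)
        sifted += 1
        errors += (bob_bit != alice_bit)
    return sifted, errors / sifted

sifted, qber = bb84_sift(100_000)
print(sifted, round(qber, 4))  # about half the pulses survive; QBER near 0.03
```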
Electrical analysis of wideband and distributed windows using time-dependent field codes
NASA Astrophysics Data System (ADS)
Shang, C. C.; Caplan, M.; Nickel, H. U.; Thumm, M.
1993-09-01
Windows, which provide the barrier that maintains the vacuum envelope in a microwave tube, are critical components in high-average-power microwave sources, especially at millimeter wavelengths. As RF power levels approach the range of hundreds of kilowatts to 1 MW (CW), the window assembly experiences severe thermal and mechanical stresses. Depending on the source, the bandwidth of the window may be less than 1 GHz for gyrotron oscillators, or up to approximately 20 GHz for the FOM Institute's fast-tunable free-electron maser. The bandwidth requirements give rise to a number of window configurations whose common goal is locally distributed heat dissipation. In order to better understand the transmission and RF properties of these microwave structures, the authors use detailed time-dependent field solvers.
Nonextensive statistics in stringy space-time foam models and entangled meson states
Mavromatos, Nick E.; Sarkar, Sarben
2009-05-15
The possibility of the generation of nonextensive statistics, in the sense of Tsallis, due to space-time foam is discussed within the context of a particular kind of foam in string/brane theory, the D-particle foam model. The latter involves pointlike brane defects (D-particles), which provide the topologically nontrivial foamy structures of space-time. A stochastic Langevin equation for the velocity recoil of D-particles can be derived from the pinched approximation for a sum over genera in the calculation of the partition function of a bosonic string in the presence of heavy D-particles. The string coupling in standard perturbation theory is related to the exponential of the expectation of the dilaton. Inclusion of fluctuations of the dilaton itself and uncertainties in the string background will then necessitate fluctuations in g_s. The fluctuation in the string coupling in the sum over genera typically leads to a generic structure of the Langevin equation where the coefficient of the noise term fluctuates owing to its dependence on the string coupling g_s. The positivity of g_s leads naturally to a stochastic modeling of its distribution with a χ distribution. This then rigorously implies a Tsallis-type nonextensive or, more generally, a superstatistics distribution for the recoil velocity of D-particles. As a concrete and physically interesting application, we provide a rigorous estimate of an ω-like effect, pertinent to CPT-violating modifications of the Einstein-Podolsky-Rosen correlators in entangled states of neutral kaons. In the case of D-particle foam fluctuations, which respect the Lorentz symmetry of the vacuum on average, we find that the ω effect may be within the range of sensitivity of future meson factories.
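The superstatistics mechanism (a conditionally Gaussian variable whose noise amplitude itself fluctuates, here with a chi distribution) can be illustrated numerically. This is a statistical sketch only, not the string-theoretic derivation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
# Amplitude with a chi distribution (as posited for the fluctuating string
# coupling g_s), normalized so the mixed variance is 1.
g = np.sqrt(rng.chisquare(df=3, size=n) / 3)
v_super = g * rng.standard_normal(n)   # Gaussian given g -> heavy-tailed overall
v_gauss = rng.standard_normal(n)       # fixed amplitude, for comparison

def excess_kurtosis(x):
    x = x - x.mean()
    return float(np.mean(x**4) / np.mean(x**2) ** 2 - 3.0)

kurt_gauss = excess_kurtosis(v_gauss)
kurt_super = excess_kurtosis(v_super)
print(kurt_gauss, kurt_super)  # near 0 for Gaussian; clearly positive for mixture
```

The positive excess kurtosis of the mixture is the fingerprint of the heavy-tailed, Tsallis-like distribution the abstract describes.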
A space-time multiscale modelling of Earth's gravity field variations
NASA Astrophysics Data System (ADS)
Wang, Shuo; Panet, Isabelle; Ramillien, Guillaume; Guilloux, Frédéric
2017-04-01
The mass distribution within the Earth varies over a wide range of spatial and temporal scales, generating variations in the Earth's gravity field in space and time. These variations are monitored by satellites such as the GRACE mission, with a spatial resolution of 400 km and a temporal resolution of 10 days to 1 month. They are expressed in the form of gravity field models, often with a fixed spatial or temporal resolution. The analysis of these models allows us to study mass transfers within the Earth system. Here, we have developed space-time multi-scale models of the gravity field, in order to optimize the estimation of gravity signals resulting from local processes at different spatial and temporal scales, and to adapt the time resolution of the model to its spatial resolution according to the satellites' sampling. For that, we first build a 4D wavelet family combining spatial Poisson wavelets with temporal Haar wavelets. Then, we set up a regularized inversion of inter-satellite gravity potential differences in a Bayesian framework to estimate the model parameters. To build the prior, we develop a spectral analysis, localized in time and space, of geophysical models of mass transport and the associated gravity variations. Finally, we test our approach on the reconstruction of space-time variations of the gravity field due to hydrology. We first consider a global distribution of observations along the orbit, from a simplified synthetic hydrology signal comprising only annual variations at large spatial scales. Then, we consider a regional distribution of observations in Africa, and a larger number of spatial and temporal scales. We test the influence of an imperfect prior and discuss our results.
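The temporal ingredient, a Haar wavelet split into coarse and detail coefficients, can be sketched in a few lines (one level only; the spatial Poisson wavelets are not reproduced here).

```python
import numpy as np

def haar_decompose(signal):
    """One level of the Haar wavelet transform: pairwise averages give the
    coarse (large temporal scale) part, pairwise half-differences the
    detail (fine scale) part. Requires an even-length signal."""
    signal = np.asarray(signal, dtype=float)
    pairs = signal.reshape(-1, 2)
    coarse = pairs.mean(axis=1)
    detail = (pairs[:, 0] - pairs[:, 1]) / 2
    return coarse, detail

coarse, detail = haar_decompose([4.0, 2.0, 5.0, 7.0])
print(coarse, detail)  # [3. 6.] [ 1. -1.]
```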
Variable continental distribution of polymorphisms in the coding regions of DNA-repair genes.
Mathonnet, Géraldine; Labuda, Damian; Meloche, Caroline; Wambach, Tina; Krajinovic, Maja; Sinnett, Daniel
2003-01-01
DNA-repair pathways are critical for maintaining the integrity of the genetic material by protecting against mutations due to exposure-induced damage or replication errors. Polymorphisms in the corresponding genes may be relevant in genetic epidemiology by modifying individual cancer susceptibility or therapeutic response. We report data on the population distribution of potentially functional variants in the XRCC1, APEX1, ERCC2, ERCC4, hMLH1, and hMSH3 genes among groups representing individuals of European, Middle Eastern, African, Southeast Asian and North American descent. The data indicate little interpopulation differentiation at some of these polymorphisms and typical FST values ranging from 10 to 17% at others. Low FST was observed in APEX1 and hMSH3 exon 23 in spite of their relatively high minor allele frequencies, which could suggest the effect of balancing selection. In XRCC1, hMSH3 exon 21, and hMLH1, Africa clusters either with the Middle East and Europe or with Southeast Asia, which could be related to the demographic history of human populations, whereby human migrations and genetic drift rather than selection would account for the observed differences.
Spinor waves in a space-time lattice (II)
NASA Astrophysics Data System (ADS)
Wouthuysen, S. A.
1994-02-01
In a previous note, an exceptional space-time lattice was found by a roundabout heuristic process. This process was far from convincing; here a more translucent characterization of the lattice is presented. A cornerstone is the consideration of pairs of reciprocal lattices, together with the basic symmetry (S_4) of the metric tensor. The basic requirement is that one member of a pair of reciprocal lattices contains the other as a sublattice. One preferred lattice is discussed in some detail; it contains three copies of its reciprocal lattice, and it is the simplest example satisfying the requirements. The expression of the metric tensor in terms of the lattice generators suggests a possible topology on the lattice. By means of this topology, propagation of spinor waves can be formulated. This proposed—the simplest—propagation mechanism is inhibited, though, by the fact that the three sublattices are required to carry the two types of spinors alternately. This inhibition can be lifted by introducing a second type of elementary propagation, to next-nearest neighbors. If the inhibition is only feebly lifted, the result is particles with mass small compared to the inverse of the lattice constant, presumably the Planck mass. Including the propagation to next-nearest neighbors leads to spinor waves with six components, two for each sublattice. In the long-wavelength limit four of them obey a massive Dirac equation, while the remaining two obey a Weyl equation. These considerations conceivably provide a root for the lack of parity invariance in nature, and for the joint occurrence of pairs of massive and massless spinor waves. The construction, furthermore, allows one to accommodate just three different families of spinor waves of this type. Extension of the above arguments outside the realm of the long-wavelength limit forcibly makes the lattice concept independent of the original continuous Minkowski spacetime: the latter is no longer
NASA Astrophysics Data System (ADS)
Leblois, Etienne; Creutin, Jean-Dominique
2013-06-01
Space-time rainfall simulation is useful for studying questions such as the propagation of rainfall-measurement uncertainty in hydrological modeling. This study adapts a classical Gaussian field simulation technique, the turning-band method (TBM), in order to produce sequences of rainfall fields satisfying three key features of actual precipitation systems: (i) the skewed point distribution and the space-time structure of nonzero rainfall (NZR); (ii) the average probability and the space-time structure of intermittency; and (iii) a prescribed advection field. The acronym of our simulator is SAMPO, for Simulation of Advected Mesoscale Precipitations and their Occurrence. SAMPO assembles various theoretical developments available in the literature. The concept of backtrajectories introduces an arbitrary a priori advection field into the heart of the TBM. The transformation of TBM outputs into rainfall fields with the desired structure is controlled using a Chebyshev-Hermite polynomial expansion. Treating the intermittency as a binary process statistically independent of the NZR process allows the use of a common algorithm for both processes. The 3-D simulation with a space-time anisotropy captures important details of the precipitation kinematics, summarized by the Taylor velocity of both NZR and intermittency. A case study based on high-resolution weather radar data serves as an example of model inference. Illustrative simulations revisit some classical questions about rainfall variography, such as the influence of advection or intermittency. They also show the combined role of Taylor's and advection velocities.
Xu, Jinhua; Yang, Zhiyong; Tsien, Joe Z.
2010-01-01
Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes. PMID:21209963
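The saliency-as-improbability idea can be sketched in a few lines. This is a toy stand-in, not the authors' model: the filter matrix W below is random rather than learned by ICA, and a fixed Laplacian prior replaces the context-mediated PDs estimated from natural scenes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_filters, patch_dim = 64, 64                  # 8x8 image patches
W = rng.normal(size=(n_filters, patch_dim))    # stand-in for learned ICA filters

def saliency(patch, W, scale=1.0):
    """Saliency as improbability: -sum log p(s) of filter responses s = W @ patch,
    here with a Laplace prior p(s) = exp(-|s|/scale) / (2*scale)."""
    s = W @ patch.ravel()
    return np.sum(np.abs(s) / scale + np.log(2 * scale))

flat = np.zeros((8, 8))                        # uniform patch: weak responses
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0     # step edge: strong, improbable responses
s_flat, s_edge = saliency(flat, W), saliency(edge, W)
```

In this toy setup the edge patch scores higher than the uniform patch, matching the intuition that improbable filter responses signal salient structure.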
NASA Astrophysics Data System (ADS)
Johnson, Sarah J.; Lance, Andrew M.; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Ralph, T. C.; Symul, Thomas
2017-02-01
The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel, an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates.
2015-09-30
Mathematical modeling of space-time variations in acoustic transmission and scattering from schools of swim bladder fish
Feuillade, Christopher (Instituto de Física, Pontificia Universidad Católica de Chile)
This report covers the activities, accomplishments, and publications of this project, which was part of the ONR Fish Acoustics Basic Research Challenge. Approved for public release; distribution is unlimited.
Universal space-time scaling symmetry in the dynamics of bosons across a quantum phase transition.
Clark, Logan W; Feng, Lei; Chin, Cheng
2016-11-04
The dynamics of many-body systems spanning condensed matter, cosmology, and beyond are hypothesized to be universal when the systems cross continuous phase transitions. The universal dynamics are expected to satisfy a scaling symmetry of space and time with the crossing rate, inspired by the Kibble-Zurek mechanism. We test this symmetry based on Bose condensates in a shaken optical lattice. Shaking the lattice drives condensates across an effectively ferromagnetic quantum phase transition. After crossing the critical point, the condensates manifest delayed growth of spin fluctuations and develop antiferromagnetic spatial correlations resulting from the sub-Poisson distribution of the spacing between topological defects. The fluctuations and correlations are invariant in scaled space-time coordinates, in support of the scaling symmetry of quantum critical dynamics.
Space-time properties of Gram-Schmidt vectors in classical Hamiltonian evolution.
Green, Jason R; Jellinek, Julius; Berry, R Stephen
2009-12-01
Not all tangent space directions play equivalent roles in the local chaotic motions of classical Hamiltonian many-body systems. These directions are numerically represented by basis sets of mutually orthogonal Gram-Schmidt vectors, whose statistical properties may depend on the chosen phase space-time domain of a trajectory. We examine the degree of stability and localization of Gram-Schmidt vector sets simulated with trajectories of a model three-atom Lennard-Jones cluster. Distributions of finite-time Lyapunov exponent and inverse participation ratio spectra formed from short-time histories reveal that ergodicity begins to emerge on different time scales for trajectories spanning different phase-space regions, in a narrow range of total energy and history length. Over a range of history lengths, the most localized directions were typically the most unstable and corresponded to atomic configurations near potential landscape saddles.
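Gram-Schmidt re-orthonormalization of tangent vectors is the standard (Benettin-style) route to finite-time Lyapunov spectra; the sketch below applies it to the Hénon map rather than the Lennard-Jones cluster studied in the paper, purely for compactness:

```python
import numpy as np

def lyapunov_spectrum(n_steps=5000, a=1.4, b=0.3):
    """Finite-time Lyapunov exponents of the Henon map via repeated
    QR (Gram-Schmidt) re-orthonormalization of the tangent basis."""
    x, y = 0.1, 0.1
    Q = np.eye(2)                                    # orthonormal tangent vectors
    sums = np.zeros(2)
    for _ in range(n_steps):
        J = np.array([[-2 * a * x, 1.0],             # Jacobian of (x, y) -> (1 - a x^2 + y, b x)
                      [b, 0.0]])
        x, y = 1 - a * x * x + y, b * x
        Q, R = np.linalg.qr(J @ Q)                   # Gram-Schmidt step
        sums += np.log(np.abs(np.diag(R)))           # accumulate local stretching rates
    return sums / n_steps

lam = lyapunov_spectrum()
```

The diagonal of R records the stretching along each Gram-Schmidt direction; short-time histories of these increments are exactly the finite-time exponent distributions discussed in the abstract.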
Carslake, David; Bennett, Malcolm; Hazel, Sarah; Telfer, Sandra; Begon, Michael
2006-04-07
There have been virtually no studies of 'who acquires infection from whom' in wildlife populations, but patterns of transmission within and between different classes of host are likely to be reflected in the spatiotemporal distribution of infection among those host classes. Here, we use a modified form of K-function analysis to test for space-time interaction among bank voles and wood mice infectious with cowpox virus. There was no evidence for transmission between the two host species, supporting previous evidence that they act as separate reservoirs for cowpox. Among wood mice, results suggested that transmission took place primarily between individuals of the opposite sex, raising the possibility that cowpox is sexually transmitted in this species. Results for bank voles indicated that infected females might be a more important source of infection to either sex than are males. The suggestion of different modes of transmission in the two species is itself consistent with the apparent absence of transmission between species.
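A simplified relative of the modified K-function analysis is the Knox-style permutation test sketched below (the toy data and distance thresholds are illustrative assumptions): count case pairs that are close in both space and time, then compare against random permutations of the event times:

```python
import numpy as np

def knox_test(xy, t, ds=1.0, dt=7.0, n_perm=999, seed=0):
    """Knox-style test for space-time interaction: count case pairs close
    in both space and time, then build a null by permuting the times."""
    rng = np.random.default_rng(seed)
    d_s = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    iu = np.triu_indices(len(t), k=1)            # each unordered pair once
    near_s = d_s[iu] < ds

    def stat(times):
        d_t = np.abs(times[:, None] - times[None, :])[iu]
        return np.sum(near_s & (d_t < dt))

    obs = stat(t)
    perms = [stat(rng.permutation(t)) for _ in range(n_perm)]
    p = (1 + sum(s >= obs for s in perms)) / (n_perm + 1)
    return obs, p

# Toy data: two spatial clusters whose members are also close in time.
xy = np.array([[0, 0], [0.1, 0], [5, 5], [5.1, 5]], float)
t = np.array([0.0, 1.0, 100.0, 101.0])
obs, p = knox_test(xy, t)
```

Restricting the pair counts to within- or between-class pairs (species, sex) gives the kind of class-specific transmission comparison the study performs.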
Renormalized stress tensor in Kerr space-time: Numerical results for the Hartle-Hawking vacuum
Duffy, Gavin; Ottewill, Adrian C.
2008-01-15
We show that the pathology which afflicts the Hartle-Hawking vacuum on the Kerr black hole space-time can be regarded as due to rigid rotation of the state with the horizon in the sense that, when the region outside the speed-of-light surface is removed by introducing a mirror, there is a state with the defining features of the Hartle-Hawking vacuum. In addition, we show that, when the field is in this state, the expectation value of the energy-momentum stress tensor measured by an observer close to the horizon and rigidly rotating with it corresponds to that of a thermal distribution at the Hawking temperature rigidly rotating with the horizon.
Ghodoosian, N.
1984-05-01
An analytical model leading to the pressure distribution on the cross section of a Darrieus Rotor blade (airfoil) has been constructed. The model is based on inviscid flow theory, and the contribution of the nonsteady wake vortices was neglected. The analytical model was translated into a computer code in order to study a variety of boundary conditions encountered by the rotating blades of the Darrieus Rotor. Results indicate that, for a pitching airfoil, lift can be adequately approximated by the Kutta-Joukowski forces, despite notable deviations in the pressure distribution on the airfoil. These deviations are most significant at the upwind half of the Darrieus Rotor, where higher lift is accompanied by increased adverse pressure gradients. The effect of pitching on lift can be approximated by a linear shift in the angle of attack proportional to the blade angular velocity. Tabulation of the fluid velocity about the pitching-only NACA 0015 allowed the principle of superposition to be used to determine the fluid velocity about a translating and pitching airfoil.
Francis, Stephen S.; Selvin, Steve; Yang, Wei; Buffler, Patricia A.; Wiemels, Joseph L.
2012-01-01
The town of Fallon within Churchill County, Nevada exhibited an unusually high incidence of childhood leukemia during the years 1997–2003. We examined the temporal and spatial patterning of the leukemia case homes in comparison to the distribution of the general population at risk, other cancer incidence, and features of land use. Leukemia cases were predominantly diagnosed during the early to mid summer, exhibiting a seasonal bias. Leukemia cases lived outside of the “developed/urban” area of Fallon, predominantly in the “agriculture/pasture” region of Churchill County, circumscribing downtown Fallon. This pattern was different from the distribution of the underlying population (p-value < 0.01) and different from the distribution of other cancers, which were evenly distributed when compared to the population (p-value = 0.74). The unusual space-time patterning of childhood leukemia is consistent with the involvement of an infectious disease. A possible mode of transmission for such an infectious disease is by means of a vector, and mosquitoes are abundant in Churchill County outside of the urban area of Fallon. This region harbors a US Navy base, and a temporally concordant increase in military wide childhood leukemia rates suggests the base a possible source of the virus. Taken together, our current understanding of the etiology of childhood leukemia, the rural structure combined with temporal and geospatial patterning of these leukemia cases, and the high degree of population mixing in Fallon, suggest a possible infectious cause. PMID:21352818
Alton, Gillian D; Pearl, David L; Bateman, Ken G; McNab, Bruce; Berke, Olaf
2013-11-18
Abattoir condemnation data show promise as a rich source of data for syndromic surveillance of both animal and zoonotic diseases. However, inherent characteristics of abattoir condemnation data can bias results from space-time cluster detection methods for disease surveillance, and may need to be accounted for using various adjustment methods. The objective of this study was to compare the space-time scan statistics with different abilities to control for covariates and to assess their suitability for food animal syndromic surveillance. Four space-time scan statistic models were used including: animal class adjusted Poisson, space-time permutation, multi-level model adjusted Poisson, and a weighted normal scan statistic using model residuals. The scan statistics were applied to monthly bovine pneumonic lung and "parasitic liver" condemnation data from Ontario provincial abattoirs from 2001-2007. The number and space-time characteristics of identified clusters often varied between space-time scan tests for both "parasitic liver" and pneumonic lung condemnation data. While there were some similarities between isolated clusters in space, time and/or space-time, overall the results from space-time scan statistics differed substantially depending on the covariate adjustment approach used. Variability in results among methods suggests that caution should be used in selecting space-time scan methods for abattoir surveillance. Furthermore, validation of different approaches with simulated or real outbreaks is required before conclusive decisions can be made concerning the best approach for conducting surveillance with these data.
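The core of a space-time permutation scan statistic, one of the four models compared, can be sketched on a small region-by-month count table (the toy counts and the restriction to single-region cylinders are my simplifications; the real method also assesses significance by Monte Carlo permutation of the data):

```python
import numpy as np

def st_permutation_scan(counts):
    """Space-time permutation scan over a (regions x months) count table:
    for each single region and month interval, compare the observed count
    with the expectation implied by the row and column margins."""
    total = counts.sum()
    best = (0.0, None)
    for r in range(counts.shape[0]):
        for t0 in range(counts.shape[1]):
            for t1 in range(t0, counts.shape[1]):
                obs = counts[r, t0:t1 + 1].sum()
                exp = counts[r].sum() * counts[:, t0:t1 + 1].sum() / total
                if obs > exp > 0:
                    llr = obs * np.log(obs / exp) - (obs - exp)  # Poisson log-likelihood ratio
                    if llr > best[0]:
                        best = (llr, (r, t0, t1))
    return best

counts = np.array([[1, 1, 9, 8, 1],    # region 0 spikes in months 2-3
                   [2, 1, 1, 2, 2],
                   [1, 2, 1, 1, 2]])
llr, cluster = st_permutation_scan(counts)
```

Because the expectation is built from the margins alone, this variant needs no population-at-risk denominator, which is exactly why it is attractive for condemnation data; the covariate-adjusted Poisson variants in the study replace the margin-based expectation with model-based expected counts.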
NASA Astrophysics Data System (ADS)
Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis
2010-11-01
Statistical methods, including block-entropy-based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine or pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of the block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of the block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order; this linearity has been shown in the case of the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of the Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long-rangeness in the coding or noncoding alternation in
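Block entropy itself is straightforward to compute; the sketch below estimates H(n) from sliding windows and applies it to a Cantor-like binary substitution sequence of the kind the paper uses as a reference case (the particular substitution rule shown is an illustrative choice):

```python
import numpy as np
from collections import Counter

def block_entropy(seq, n):
    """Shannon entropy H(n), in bits, of length-n blocks of a symbol
    sequence, counted with a sliding window."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    p = np.array(list(counts.values()), float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

# Cantor-like binary string: substitute 1 -> 101 and 0 -> 000, repeatedly.
s = "1"
for _ in range(5):
    s = "".join("101" if c == "1" else "000" for c in s)

H = [block_entropy(s, n) for n in range(1, 8)]
```

Plotting H against log n (semilogarithmic scale) for such a sequence, and comparing with shuffled surrogates, reproduces the type of diagnostic the abstract describes.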
Space-Time Processing for Tactical Mobile Ad Hoc Networks
2009-08-01
The overall performance attainable by a mobile ad hoc network depends fundamentally on the MIMO channel characteristics. During the past year the ... propagation environment and an aperture within which the antennas must reside [1]. However, achieving these characteristics is difficult if not ... determine the current distribution of Aperture 1 using the Covariance Method and compute the diversity gain obtained using the radiation characteristics
Black holes in loop quantum gravity: the complete space-time.
Gambini, Rodolfo; Pullin, Jorge
2008-10-17
We consider the quantization of the complete extension of the Schwarzschild space-time using spherically symmetric loop quantum gravity. We find an exact solution corresponding to the semiclassical theory. The singularity is eliminated but the space-time still contains a horizon. Although the solution is known partially numerically and therefore a proper global analysis is not possible, a global structure akin to a singularity-free Reissner-Nordström space-time including a Cauchy horizon is suggested.
Establishing more truth in space-time integration of surface turbulent heat fluxes
NASA Astrophysics Data System (ADS)
Gulev, Sergey; Belyaev, Konstantin
2016-04-01
Space-time integration of surface turbulent heat fluxes is important for obtaining area-averaged budget estimates and for producing climatologies of surface fluxes. Uncertainties in the integration or averaging of fluxes in space and in time are especially high when the data are sparse, as in the case of information from Voluntary Observing Ships (VOS), which are characterized by inhomogeneous sampling density in contrast to NWP products and satellite data sets. In order to minimize the sampling impact on local and larger-scale surface flux averages, we suggest an approach based upon analysis of surface fluxes in the coordinates of steering parameters (vertical surface temperature and humidity gradients on one hand, and wind speed on the other). These variables are distributed according to the Modified Fisher-Tippett (MFT) distribution (temperature and humidity gradients) and the Weibull distribution (wind speed), which imply a 2-dimensional distribution for the fluxes. Since the fluxes in these coordinates are determined in a unique manner (within a chosen bulk transfer algorithm), they can be easily integrated in the space of the 2-dimensional distribution in order to get averaged values dependent on the parameters of the MFT and Weibull distributions. Conceptually, the approach is similar to that which oceanographers apply when analysing volumetric T,S-diagrams of water mass properties. We developed an algorithm for applying this approach and also provided an analysis of integrated surface fluxes for different regions of the North Atlantic for which heat flux estimates can be obtained from oceanographic cross-sections. The analysis was performed for the last 5 decades. 2-dimensional diagrams also make it possible to analyse the temporal variability of integrated surface fluxes in the dimension of steering parameters and to further compare estimates with changes in the ocean heat content.
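The integration-in-parameter-space idea can be illustrated with the wind-speed coordinate alone: average a bulk flux formula against a fitted Weibull density by quadrature, rather than averaging sparse flux samples directly (the bulk coefficients and Weibull parameters below are illustrative assumptions, not values from the study):

```python
import numpy as np

def weibull_pdf(u, k, c):
    """Weibull density for the wind-speed steering parameter."""
    return (k / c) * (u / c) ** (k - 1) * np.exp(-((u / c) ** k))

def mean_sensible_flux(dT, k=2.0, c=8.0, rho=1.2, cp=1004.0, ch=1.2e-3):
    """Average the bulk sensible heat flux rho*cp*Ch*u*dT over the fitted
    wind-speed distribution by quadrature in the steering-parameter space."""
    u = np.linspace(0.01, 60.0, 4000)
    flux = rho * cp * ch * u * dT                     # bulk formula at fixed dT
    y = flux * weibull_pdf(u, k, c)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u))  # trapezoid rule

q = mean_sensible_flux(dT=2.0)
```

The full method would integrate over the 2-dimensional MFT-Weibull distribution of temperature/humidity gradients and wind speed; the sketch collapses this to the one-dimensional wind-speed marginal for clarity.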
Super-nodal methods for space-time kinetics
NASA Astrophysics Data System (ADS)
Mertyurek, Ugur
The purpose of this research has been to develop an advanced Super-Nodal method to reduce the run time of 3-D core neutronics models, such as in the NESTLE reactor core simulator and FORMOSA nuclear fuel management optimization codes. Computational performance of the neutronics model is increased by reducing the number of spatial nodes used in the core modeling. However, as the number of spatial nodes decreases, the error in the solution increases. The Super-Nodal method reduces the error associated with the use of coarse nodes in the analyses by providing a new set of cross sections and ADFs (Assembly Discontinuity Factors) for the new nodalization. These so-called homogenization parameters are obtained by employing a consistent collapsing technique. During this research a new type of singularity, namely the "fundamental mode singularity", is addressed in the ANM (Analytical Nodal Method) solution. The "Coordinate Shifting" approach is developed as a method to address this singularity. Also, the "Buckling Shifting" approach is developed as an alternative and more accurate method to address the zero-buckling singularity, which is a more common and well-known singularity problem in the ANM solution. In the course of addressing the treatment of these singularities, an effort was made to provide better and more robust results from the Super-Nodal method by developing several new methods for determining the transverse leakage and the collapsed diffusion coefficient, which generally are the two main approximations in the ANM methodology. Unfortunately, the proposed new transverse leakage and diffusion coefficient approximations failed to provide a consistent improvement to the current methodology. However, improvement in the Super-Nodal solution is achieved by updating the homogenization parameters at several time points during a transient. The update is achieved by employing a refinement technique similar to pin-power reconstruction. A simple error analysis based on the relative
Analysis and modeling of space-time organization of remotely sensed soil moisture
NASA Astrophysics Data System (ADS)
Chang, Dyi-Huey
The characterization and modeling of the spatial variability of soil moisture is an important problem for various hydrological, ecological, and atmospheric processes. A compact representation of the interdependencies among soil moisture distribution, mean soil moisture, soil properties and topography is necessary. This study attempts to provide such a compact representation using two complementary approaches. In the first approach, we develop a stochastic framework to evaluate the influence of spatial variability in topography and soil physical properties, and of mean soil moisture, on the spatial distribution of soil moisture. First, topography appears to have dominant control on soil moisture distribution when the area is dominated by coarse-texture soil or by mixed soil with a small correlation scale for topography (i.e., small lambda_Z). Second, soil properties are likely to have dominant control on soil moisture distribution for fine-texture soil or for mixed soil with large lambda_Z. Finally, both topography and soil properties appear to have similar control for medium-texture soil with moderate values of lambda_Z. In the second approach, we explore recent developments in Artificial Neural Networks (ANN) to develop nonparametric space-time relationships between soil moisture and readily available remotely sensed surface variables. We have used remotely sensed brightness temperature data from a single drying cycle of the Washita '92 Experiment and two different ANN architectures (Feed-Forward Neural Network (FFNN), Self-Organizing Map (SOM)) to classify soil types into three categories. The results show that the FFNN yields better classification accuracy (about 80%) than the SOM (about 70% accuracy). Our attempt to classify soil types into more than three categories resulted in about 50% accuracy when a FFNN was used, and even lower accuracy when a SOM was used. To classify soil into more than three groups and to explore the limits of classification accuracy, this study suggests the use of
Gust Acoustic Response of a Single Airfoil Using the Space-Time CE/SE Method
NASA Technical Reports Server (NTRS)
Scott, James (Technical Monitor); Wang, X. Y.; Chang, S. C.; Himansu, A.; Jorgenson, P. C. E.
2003-01-01
A 2D parallel Euler code based on the space-time conservation element and solution element (CE/SE) method is validated by solving benchmark problem I in Category 3 of the Third CAA Workshop. This problem concerns the acoustic field generated by the interaction of a convected harmonic vortical gust with a single airfoil. Three gust frequencies, two gust configurations, and three airfoil geometries are considered. Numerical results at both near and far fields are presented and compared with the analytical solutions, with solutions from the frequency-domain solver GUST3D, and with solutions from a time-domain high-order Discontinuous Spectral Element Method (DSEM). It is shown that the CE/SE solutions agree well with the GUST3D solution for the lowest frequency, while there are discrepancies between the CE/SE and GUST3D solutions for higher frequencies. However, the CE/SE solution is in good agreement with the DSEM solution at these higher frequencies. This demonstrates that the CE/SE method can produce accurate results for CAA problems involving complex geometries by using unstructured meshes.
Manz, Stephanie; Casandruc, Albert; Zhang, Dongfang; Zhong, Yinpeng; Loch, Rolf A; Marx, Alexander; Hasegawa, Taisuke; Liu, Lai Chung; Bayesteh, Shima; Delsim-Hashemi, Hossein; Hoffmann, Matthias; Felber, Matthias; Hachmann, Max; Mayet, Frank; Hirscht, Julian; Keskin, Sercan; Hada, Masaki; Epp, Sascha W; Flöttmann, Klaus; Miller, R J Dwayne
2015-01-01
The long-held objective of directly observing atomic motions during the defining moments of chemistry has been achieved based on ultrabright electron sources that have given rise to a new field of atomically resolved structural dynamics. This class of experiments requires not only sub-atomic spatial resolution combined with temporal resolution on the 100 femtosecond time scale, but also brightness approaching single-shot atomic resolution conditions. The brightness condition reflects the fact that chemistry generally leads to irreversible changes in structure under the experimental conditions, and that the nanoscale thin samples needed for electron structural probes pose upper limits on the available sample, or "film", for atomic movies. Even in the case of reversible systems, the degree of excitation and thermal effects require the brightest sources possible for a given space-time resolution in order to observe the structural changes above background. Further progress in the field, particularly toward the study of biological systems and solution reaction chemistry, requires increased brightness and spatial coherence, as well as an ability to tune the electron scattering cross-section to meet sample constraints. The electron bunch density or intensity depends directly on the magnitude of the extraction field for photoemitted electron sources and on the electron energy distribution in the transverse and longitudinal planes of electron propagation. This work examines the fundamental limits to optimizing these parameters based on relativistic electron sources using re-bunching cavity concepts that are now capable of achieving 10 femtosecond time scale resolution to capture the fastest nuclear motions. This analysis is given for both diffraction and real-space imaging of structural dynamics, in which diffraction methods offer several orders of magnitude higher space-time resolution. The first experimental results from the Relativistic Electron Gun for Atomic
NASA Astrophysics Data System (ADS)
Bagheri, Zahra; Davoudifar, Pantea; Rastegarzadeh, Gohar; Shayan, Milad
2017-03-01
In this paper, we used the CORSIKA code to understand the characteristics of cosmic-ray-induced showers at extremely high energy as a function of energy, detector distance to the shower axis, number and density of secondary charged particles, and the nature of the particle producing the shower. Based on the standard properties of the atmosphere, the lateral and longitudinal development of the shower for photons and electrons has been investigated. Fluorescent light has been collected by the detector for proton, helium, oxygen, silicon, calcium and iron primary cosmic rays at different energies. We have thus obtained the number of electrons per unit area, the distance to the shower axis, the shape function of particle density, the percentage of fluorescent light, the lateral distribution of energy dissipated in the atmosphere and the visual field angle of the detector, as well as the size of the shower image. We have also shown that the location of the highest percentage of fluorescence light is directly proportional to the atomic number of the element, and that as the distance from the shower axis increases, the shape function of particle density decreases sharply. In the early stages of development, the shower axis is far from the detector and the visual field angle is small; then, as the shower moves toward the Earth, the angle increases. Overall, at higher energies the fluorescent-light method is more efficient. The paper provides standard calibration lines for high-energy showers which can be used to determine the nature of the particles.
Karpievitch, Yuliya V; Almeida, Jonas S
2006-01-01
Background: Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results: mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion: Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
NASA Technical Reports Server (NTRS)
Ghorai, S. K.
1983-01-01
The purpose of this project was to use a one-dimensional discrete ordinates transport code called ANISN in order to determine the energy-angle-spatial distribution of neutrons in a 6-foot cube rock box which houses a D-T neutron generator at its center. The project was two-fold. The first phase of the project involved adaptation of the ANISN code written for an IBM 360/75/91 computer to the UNIVAC system at JSC. The second phase of the project was to use the code with proper geometry, source function and rock material composition in order to determine the neutron flux distribution around the rock box when a 14.1 MeV neutron generator placed at its center is activated.
NASA Technical Reports Server (NTRS)
Otto, John C.
1993-01-01
This paper describes the parallel version of the three-dimensional, chemically reacting, computational fluid dynamics (CFD) code, SPARK. This work was performed on the Intel iPSC/860-based parallel computers. The SPARK code utilizes relatively simple explicit numerical algorithms, but models complex chemical reactions. The code solves the equations over a regular structured mesh so a simple domain decomposition is used to assign work to the individual processors. The explicit nature of the algorithm, combined with the computational intensity of the chemistry calculations, results in a very low communication-to-computation ratio when compared to typical CFD codes. The efficiency of the parallel code is examined and shown to be about 65 percent when the problem size is scaled with the number of processors. Two low-angle wall-jet injection cases are solved to demonstrate the capability of the parallel code for solving large problems efficiently.
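The domain decomposition described above, over a regular structured mesh, amounts to splitting the grid into contiguous blocks of nearly equal size, one per processor. A minimal sketch of that kind of 1-D block decomposition (the function name is illustrative, not from the SPARK source):

```python
# Hypothetical sketch of a 1-D block domain decomposition over a
# regular structured mesh; names are illustrative, not from SPARK.

def decompose_1d(n_cells: int, n_procs: int):
    """Split n_cells into n_procs contiguous blocks, spreading the
    remainder over the first few processors."""
    base, extra = divmod(n_cells, n_procs)
    blocks = []
    start = 0
    for rank in range(n_procs):
        size = base + (1 if rank < extra else 0)
        blocks.append((start, start + size))  # half-open [start, end)
        start += size
    return blocks

# Example: 100 cells over 8 processors -> blocks of 13 or 12 cells.
print(decompose_1d(100, 8))
```

Because each block only exchanges boundary data with its neighbors, a layout like this keeps the communication-to-computation ratio low, consistent with the scaling reported in the abstract.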
Threshold exceedance risk assessment in complex space-time systems
NASA Astrophysics Data System (ADS)
Angulo, José M.; Madrid, Ana E.; Romero, José L.
2015-04-01
Environmental and health impact risk assessment studies most often involve analysis and characterization of complex spatio-temporal dynamics. Recent developments in this context are addressed, among other objectives, to proper representation of structural heterogeneities, heavy-tailed processes, long-range dependence, intermittency, scaling behavior, etc. Extremal behaviour related to spatial threshold exceedances can be described in terms of geometrical characteristics and distribution patterns of excursion sets, which are the basis for construction of risk-related quantities, such as in the case of evolutionary study of 'hotspots' and long-term indicators of occurrence of extremal episodes. Derivation of flexible techniques, suitable for both the application under general conditions and the interpretation on singularities, is important for practice. Modern risk theory, a developing discipline motivated by the need to establish solid general mathematical-probabilistic foundations for rigorous definition and characterization of risk measures, has led to the introduction of a variety of classes and families, ranging from some conceptually inspired by specific fields of applications, to some intended to provide generality and flexibility to risk analysts under parametric specifications, etc. Quantile-based risk measures, such as Value-at-Risk (VaR), Average Value-at-Risk (AVaR), and generalization to spectral measures, are of particular interest for assessment under very general conditions. In this work, we study the application of quantile-based risk measures in the spatio-temporal context in relation to certain geometrical characteristics of spatial threshold exceedance sets. In particular, we establish a closed-form relationship between VaR, AVaR, and the expected value of threshold exceedance areas and excess volumes. Conditional simulation allows us, by means of empirical global and local spatial cumulative distributions, the derivation of various statistics of
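The quantile-based risk measures named above have simple empirical counterparts: VaR is a quantile of the loss distribution and AVaR is the mean loss beyond that quantile. A minimal sketch on synthetic data (the function name and sample distribution are illustrative assumptions, not from the study):

```python
import numpy as np

# Sketch of empirical quantile-based risk measures: Value-at-Risk
# (VaR) as a quantile of the losses, and Average Value-at-Risk (AVaR,
# also called expected shortfall) as the mean loss beyond that
# quantile. The loss sample here is synthetic.

def var_avar(losses, alpha: float = 0.95):
    """Return (VaR_alpha, AVaR_alpha) for a sample of losses."""
    losses = np.asarray(losses, dtype=float)
    var = float(np.quantile(losses, alpha))
    tail = losses[losses >= var]          # losses at or beyond VaR
    return var, float(tail.mean())

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
v, a = var_avar(losses, 0.95)
print(v, a)  # AVaR is always at least as large as VaR
```

In the spatio-temporal setting of the abstract, the "losses" would be replaced by geometrical characteristics of threshold exceedance sets, such as excursion areas or excess volumes.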
Influences of the MJO on the space-time organization of tropical convection
NASA Astrophysics Data System (ADS)
Dias, Juliana; Sakaeda, Naoko; Kiladis, George N.; Kikuchi, Kazuyoshi
2017-08-01
The fact that the Madden-Julian Oscillation (MJO) is characterized by large-scale patterns of enhanced tropical rainfall has been widely recognized for decades. However, the precise nature of any two-way feedback between the MJO and the properties of smaller-scale organization that makes up its convective envelope is not well understood. Satellite estimates of brightness temperature are used here as a proxy for tropical rainfall, and a variety of diagnostics are applied to determine the degree to which tropical convection is affected either locally or globally by the MJO. To address the multiscale nature of tropical convective organization, the approach ranges from space-time spectral analysis to an object-tracking algorithm. In addition to the intensity and distribution of global tropical rainfall, the relationship between the MJO and other tropical processes such as convectively coupled equatorial waves, mesoscale convective systems, and the diurnal cycle of tropical convection is also analyzed. The main findings of this paper are that, aside from the well-known increase in rainfall activity across scales within the MJO convective envelope, the MJO does not favor any particular scale or type of organization, and there is no clear signature of the MJO in terms of the globally integrated distribution of brightness temperature or rainfall.
Rigid covariance as a natural extension of Painlevé-Gullstrand space-times: gravitational waves
NASA Astrophysics Data System (ADS)
Jaén, Xavier; Molina, Alfred
2017-08-01
The group of rigid motions is considered to guide the search for a natural system of space-time coordinates in General Relativity. This search leads us to a natural extension of the space-times that support Painlevé-Gullstrand synchronization. As an interesting example, here we describe a system of rigid coordinates for the cross mode of gravitational linear plane waves.
Theorizing Space-Time Relations in Education: The Concept of Chronotope
ERIC Educational Resources Information Center
Ritella, Giuseppe; Ligorio, Maria Beatrice; Hakkarainen, Kai
2016-01-01
Due to ongoing cultural-historical transformations, the space-time of learning is radically changing, and theoretical conceptualizations are needed to investigate how such evolving space-time frames can function as a ground for learning. In this article, we argue that the concept of chronotope--from Greek chronos and topos, meaning time and…
NASA Astrophysics Data System (ADS)
Singla, Komal; Gupta, R. K.
2017-05-01
In Paper I [Singla, K. and Gupta, R. K., J. Math. Phys. 57, 101504 (2016)], Lie symmetry method is developed for time fractional systems of partial differential equations. In this article, the Lie symmetry approach is proposed for space-time fractional systems of partial differential equations and applied to study some well-known physically significant space-time fractional nonlinear systems successfully.
On the Weyl and Ricci tensors of Generalized Robertson-Walker space-times
NASA Astrophysics Data System (ADS)
Mantica, Carlo Alberto; Molinari, Luca Guido
2016-10-01
We prove theorems about the Ricci and the Weyl tensors on Generalized Robertson-Walker space-times of dimension n ≥ 3. In particular, we show that the concircular vector introduced by Chen decomposes the Ricci tensor as a perfect fluid term plus a term linear in the contracted Weyl tensor. The Weyl tensor is harmonic if and only if it is annihilated by Chen's vector, and any of the two conditions is necessary and sufficient for the Generalized Robertson-Walker (GRW) space-time to be a quasi-Einstein (perfect fluid) manifold. Finally, the general structure of the Riemann tensor for Robertson-Walker space-times is given, in terms of Chen's vector. In n = 4, a GRW space-time with harmonic Weyl tensor is a Robertson-Walker space-time.
Drift and diffusion in movement adaptation to space-time constraints.
Liu, Yeou-Teh; Hsieh, Tsung-Yu; Newell, Karl M
2013-10-01
Recent studies have shown more than one time scale of change in the movement dynamics of practice. Here, we decompose the drift and diffusion dynamics in adaptation to performing discrete aiming movements with different space-time constraints. Participants performed aiming movements on a graphics drawing board to a point target at 5 different space-time weightings on the task outcome. The drift was stronger the shorter the time constraint whereas noise was U-shaped across the space-time conditions. The drift and diffusion of adaptation in discrete aiming movements varied as a function of the space-time constraints on performance outcome and the spatial, temporal, or space-time measure of performance outcome. The findings support the postulation that the time scale of movement adaptation is task dependent.
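A crude way to separate the two components discussed above is through the conditional moments of the trial-to-trial increments: the mean increment estimates the drift and the increment variance the diffusion. The sketch below is illustrative only (synthetic data and simplified estimators, not the authors' analysis):

```python
import numpy as np

# Illustrative sketch: estimating drift and diffusion from a trial
# series x_t of performance outcomes via the first two moments of the
# trial-to-trial increments. Simplified estimators; synthetic data.

def drift_diffusion(x):
    """Return (drift, diffusion) estimates from increments x[t+1]-x[t]."""
    dx = np.diff(np.asarray(x, dtype=float))
    drift = dx.mean()            # systematic per-trial change
    diffusion = dx.var() / 2.0   # stochastic (noise) component
    return drift, diffusion

rng = np.random.default_rng(0)
# Synthetic adaptation series: exponential approach to a target plus noise.
x = 10.0 * np.exp(-0.1 * np.arange(200)) + rng.normal(0, 0.2, 200)
d, D = drift_diffusion(x)
print(f"drift={d:.3f}, diffusion={D:.3f}")
```

A negative drift with positive diffusion, as in this toy series, corresponds to the persistent adaptation-plus-noise picture described in the abstract.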
A space-time discontinuous Galerkin method for the incompressible Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Rhebergen, Sander; Cockburn, Bernardo; van der Vegt, Jaap J. W.
2013-01-01
We introduce a space-time discontinuous Galerkin (DG) finite element method for the incompressible Navier-Stokes equations. Our formulation can be made arbitrarily high-order accurate in both space and time and can be directly applied to deforming domains. Different stabilizing approaches are discussed which ensure stability of the method. A numerical study is performed to compare the effect of the stabilizing approaches, to show the method's robustness on deforming domains and to investigate the behavior of the convergence rates of the solution. Recently we introduced a space-time hybridizable DG (HDG) method for incompressible flows [S. Rhebergen, B. Cockburn, A space-time hybridizable discontinuous Galerkin method for incompressible flows on deforming domains, J. Comput. Phys. 231 (2012) 4185-4204]. We will compare numerical results of the space-time DG and space-time HDG methods. This constitutes the first comparison between DG and HDG methods.
Eikmeier, Verena; Hoppe, Dorothée; Ulrich, Rolf
2015-03-01
Previous studies reported a space-time congruency effect on response time, supporting the notion that people's thinking about time is grounded in their spatial sensorimotor experience. According to a strong view of metaphoric mapping, the congruency effect should be larger for responses that differ in their spatial features than for responses that lack such differences. In contrast, a weaker version of this account posits that the grounding of time is based on higher-level spatial concepts. In this case, response mode should not modulate the size of the space-time congruency effect. In order to assess these predictions, participants in this study responded to temporal stimuli either manually or vocally. Response mode did not modulate the space-time congruency effect which supports the weaker view of metaphoric mapping suggesting that this effect emerges at a higher cognitive level. Copyright © 2014 Elsevier B.V. All rights reserved.
Smolyaninov, Igor I; Smolyaninova, Vera N; Smolyaninov, Alexei I
2015-08-28
In the presence of an external magnetic field, cobalt nanoparticle-based ferrofluid forms a self-assembled hyperbolic metamaterial. The wave equation, which describes propagation of extraordinary light inside the ferrofluid, exhibits 2+1 dimensional Lorentz symmetry. The role of time in the corresponding effective three-dimensional Minkowski space-time is played by the spatial coordinate directed along the periodic nanoparticle chains aligned by the magnetic field. Here, we present a microscopic study of point, linear, planar and volume defects of the nanoparticle chain structure and demonstrate that they may exhibit strong similarities with such Minkowski space-time defects as magnetic monopoles, cosmic strings and the recently proposed space-time cloaks. Experimental observations of such defects are described. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Solano, Rubén; Gómez-Barroso, Diana; Simón, Fernando; Lafuente, Sarah; Simón, Pere; Rius, Cristina; Gorrindo, Pilar; Toledo, Diana; Caylà, Joan A
2014-05-01
A retrospective, space-time study of whooping cough cases reported to the Public Health Agency of Barcelona, Spain between the years 2000 and 2011 is presented. It is based on 633 individual whooping cough cases and the 2006 population census from the Spanish National Statistics Institute, stratified by age and sex at the census tract level. Cluster identification was attempted using space-time scan statistic assuming a Poisson distribution and restricting temporal extent to 7 days and spatial distance to 500 m. Statistical calculations were performed with Stata 11 and SatScan and mapping was performed with ArcGis 10.0. Only clusters showing statistical significance (P <0.05) were mapped. The most likely cluster identified included five census tracts located in three neighbourhoods in central Barcelona during the week from 17 to 23 August 2011. This cluster included five cases compared with the expected level of 0.0021 (relative risk = 2436, P <0.001). In addition, 11 secondary significant space-time clusters were detected with secondary clusters occurring at different times and localizations. Spatial statistics is felt to be useful by complementing epidemiological surveillance systems through visualizing excess in the number of cases in space and time and thus increase the possibility of identifying outbreaks not reported by the surveillance system.
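The space-time scan statistic used above (as implemented in SaTScan) ranks candidate space-time cylinders by a Poisson log-likelihood ratio. A minimal sketch of that ratio, plugging in the counts reported for the most likely cluster (the function name is illustrative):

```python
import math

# Sketch of the Poisson log-likelihood ratio underlying Kulldorff's
# space-time scan statistic; the candidate-cylinder counts below are
# taken from the abstract, everything else is illustrative.

def poisson_llr(c: int, E: float, C: int) -> float:
    """LLR for a cylinder with c observed cases and E expected cases,
    out of C total cases; zero unless the cylinder has an excess."""
    if c == 0 or c <= E:
        return 0.0
    llr = c * math.log(c / E)
    if C > c:
        llr += (C - c) * math.log((C - c) / (C - E))
    return llr

# Most likely cluster: 5 observed vs 0.0021 expected, of 633 total cases.
print(round(poisson_llr(5, 0.0021, 633), 1))
```

In practice the statistic is maximized over all cylinders within the chosen limits (here 7 days and 500 m), and significance is assessed by Monte Carlo replication under the null.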
Jutla, Antarpreet S.; Akanda, Ali S.; Islam, Shafiqul
2012-01-01
Cholera bacteria exhibit strong association with coastal plankton. Characterization of space-time variability of chlorophyll, a surrogate for plankton abundance, in Northern Bay of Bengal is an essential first step to develop any methodology for predicting cholera outbreaks in the Bengal Delta region using remote sensing. This study quantifies the space-time distribution of chlorophyll, using data from SeaWiFS, in the Bay of Bengal region using ten years of satellite data. Variability of chlorophyll at daily scale, irrespective of spatial averaging, resembles white noise. At a monthly scale, chlorophyll shows distinct seasonality and chlorophyll values are significantly higher close to the coast than in the offshore regions. At pixel level (9 km) on monthly scale, on the other hand, chlorophyll does not exhibit much persistence in time. With increased spatial averaging, temporal persistence of chlorophyll increases and lag one autocorrelation stabilizes around 0.60 for 1296 km2 or larger areal averages. In contrast to the offshore regions, spatial analyses of chlorophyll suggest that only the coastal region has a stable correlation length of 100 km. Presence (absence) of a correlation length in the coastal (offshore) regions indicates that the two regions may have two separate processes controlling the production of phytoplankton. This study puts a lower limit on space-time averaging of satellite measured plankton at 1296 km2-monthly scale to establish relationships with cholera incidence in Bengal Delta. PMID:22544976
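The lag-one autocorrelation used above as the persistence measure is just the correlation between a series and itself shifted by one step. A short sketch on synthetic series (not the study's code or data), contrasting white noise with a strongly persistent series:

```python
import numpy as np

# Sketch (not the study's code) of the lag-one autocorrelation used to
# quantify temporal persistence of spatially averaged chlorophyll.

def lag1_autocorr(x) -> float:
    """Pearson correlation between the series and itself lagged by one."""
    x = np.asarray(x, dtype=float)
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

rng = np.random.default_rng(1)
white = rng.normal(size=120)      # white-noise-like series: near-zero lag-1
persistent = np.cumsum(white)     # strongly persistent series: lag-1 near 1
print(lag1_autocorr(white), lag1_autocorr(persistent))
```

Spatial averaging suppresses the white-noise component, which is why the abstract reports the lag-one autocorrelation stabilizing only once the averaging area is large enough.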
Natural world physical, brain operational, and mind phenomenal space-time.
Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H
2010-06-01
Concepts of space and time are widely developed in physics. However, there is a considerable lack of biologically plausible theoretical frameworks that can demonstrate how space and time dimensions are implemented in the activity of the most complex life-system - the brain with a mind. Brain activity is organized both temporally and spatially, thus representing space-time in the brain. Critical analysis of recent research on the space-time organization of the brain's activity pointed to the existence of so-called operational space-time in the brain. This space-time is limited to the execution of brain operations of differing complexity. During each such brain operation a particular short-term spatio-temporal pattern of integrated activity of different brain areas emerges within related operational space-time. At the same time, to have a fully functional human brain one needs to have a subjective mental experience. Current research on the subjective mental experience offers detailed analysis of space-time organization of the mind. According to this research, subjective mental experience (subjective virtual world) has definitive spatial and temporal properties similar to many physical phenomena. Based on systematic review of the propositions and tenets of brain and mind space-time descriptions, our aim in this review essay is to explore the relations between the two. To be precise, we would like to discuss the hypothesis that via the brain operational space-time the mind subjective space-time is connected to otherwise distant physical space-time reality.
2013-01-01
Background Abattoir condemnation data show promise as a rich source of data for syndromic surveillance of both animal and zoonotic diseases. However, inherent characteristics of abattoir condemnation data can bias results from space-time cluster detection methods for disease surveillance, and may need to be accounted for using various adjustment methods. The objective of this study was to compare the space-time scan statistics with different abilities to control for covariates and to assess their suitability for food animal syndromic surveillance. Four space-time scan statistic models were used including: animal class adjusted Poisson, space-time permutation, multi-level model adjusted Poisson, and a weighted normal scan statistic using model residuals. The scan statistics were applied to monthly bovine pneumonic lung and “parasitic liver” condemnation data from Ontario provincial abattoirs from 2001–2007. Results The number and space-time characteristics of identified clusters often varied between space-time scan tests for both “parasitic liver” and pneumonic lung condemnation data. While there were some similarities between isolated clusters in space, time and/or space-time, overall the results from space-time scan statistics differed substantially depending on the covariate adjustment approach used. Conclusions Variability in results among methods suggests that caution should be used in selecting space-time scan methods for abattoir surveillance. Furthermore, validation of different approaches with simulated or real outbreaks is required before conclusive decisions can be made concerning the best approach for conducting surveillance with these data. PMID:24246040
Transient Analyses for a Molten Salt Transmutation Reactor Using the Extended SIMMER-III Code
Wang, Shisheng; Rineiski, Andrei; Maschek, Werner; Ignatiev, Victor
2006-07-01
Recent developments extending the capabilities of the SIMMER-III code for dealing with transients and accidents in Molten Salt Reactors (MSRs) are presented. These extensions refer to the movable precursor modeling within the space-time dependent neutronics framework of SIMMER-III, to the molten salt flow modeling, and to new equations of state for various salts. An important new SIMMER-III feature is that the space-time distribution of the various precursor families with different decay constants can be computed and taken into account in neutron/reactivity balance calculations and, if necessary, visualized. The system is coded and tested for a molten salt transmuter. This new feature is also of interest in core disruptive accidents of fast reactors when the core melts and the molten fuel is redistributed. (authors)
NASA Astrophysics Data System (ADS)
Conrad, Clinton P.; Selway, Kate; Hirschmann, Marc M.; Ballmer, Maxim D.; Wessel, Paul
2017-07-01
Although partial melt in the asthenosphere is important geodynamically, geophysical constraints on its abundance remain ambiguous. We use a database of seamounts detected using satellite altimetry to constrain the temporal history of erupted asthenospheric melt. We find that intraplate volcanism on young seafloor (<60 Ma) equates to a 20 m thick layer spread across the seafloor. If these seamounts tap partial melt within a 20 km thick layer beneath the ridge flanks, they indicate extraction of an average melt fraction of 0.1%. If they source thinner layers or more laterally restricted domains, larger melt fractions are required. Increased seamount volumes for older lithosphere suggest either more active ridge flank volcanism during the Cretaceous or additional recent melt eruption on older seafloor. Pacific basin age constraints suggest that both processes are important. Our results indicate that small volumes of partial melt may be prevalent in the upper asthenosphere across ocean basins.
Fine-tuning the space, time, and host distribution of mycobacteria in wildlife
2011-01-01
Background We describe the diversity of two kinds of mycobacteria isolates, environmental mycobacteria and Mycobacterium bovis collected from wild boar, fallow deer, red deer and cattle in Doñana National Park (DNP, Spain), analyzing their association with temporal, spatial and environmental factors. Results High diversity of environmental mycobacteria species and M. bovis typing patterns (TPs) were found. When assessing the factors underlying the presence of the most common types of both environmental mycobacteria and M. bovis TPs in DNP, we evidenced (i) host species differences in the occurrence, (ii) spatial structuration and (iii) differences in the degree of spatial association of specific types between host species. Co-infection of a single host by two M. bovis TPs occurred in all three wild ungulate species. In wild boar and red deer, isolation of one group of mycobacteria occurred more frequently in individuals not infected by the other group. While only three TPs were detected in wildlife between 1998 and 2003, up to 8 different ones were found during 2006-2007. The opposite was observed in cattle. Belonging to an M. bovis-infected social group was a significant risk factor for mycobacterial infection in red deer and wild boar, but not for fallow deer. M. bovis TPs were usually found closer to water marshland than MOTT. Conclusions The diversity of mycobacteria described herein is indicative of multiple introduction events and a complex multi-host and multi-pathogen epidemiology in DNP. Significant changes in the mycobacterial isolate community may have taken place, even in a short time period (1998 to 2007). Aspects of host social organization should be taken into account in wildlife epidemiology. Wildlife in DNP is frequently exposed to different species of non-tuberculous, environmental mycobacteria, which could interact with the immune response to pathogenic mycobacteria, although the effects are unknown. 
This research highlights the suitability of molecular typing for surveys at small spatial and temporal scales. PMID:21288321
Space-time distribution of ignimbrite volcanism in the southern SMO: From Eocene to Pliocene
NASA Astrophysics Data System (ADS)
Nieto-Obregon, J.; Aguirre-Diaz, G. J.
2004-12-01
A distinct variation in the age of the ignimbrites of the Sierra Madre Occidental (SMO) is observed in the southern portion, which includes the area between Tepic, Nayarit (-105° W) and Aguascalientes, Ags (-102° W). Older, high-grade ignimbrites are Eocene and occur as scattered outcrops. These are in turn covered by a widespread and voluminous sequence of high-grade ignimbrites and silicic to intermediate lavas that ranges in age from Middle Oligocene to Middle Miocene. The peak of this ignimbrite volcanism was at about 21 Ma to 22 Ma, but there is evidence showing that it initiated since about 30 Ma and ended at about 17.5 Ma. This ignimbrite and lava sequence is in turn covered by another series of lavas, predominantly mafic to intermediate, in the southern part of the area. This latest volcanism represents the initiation of the Mexican Volcanic Belt. Ignimbrite volcanism apparently initiated at the NE part of the study area, and migrated to the SW with time, that is from the area Presa Calles to the valley of Bolaños. Isotopic ages reported on these rocks, cluster in various groups reflecting the time evolution of volcanism. Rocks older than 30 Ma tend to occur on the raised blocks of Sierra de El Laurel and Northern Sierra de Morones, in the eastern part of the area. The interval from 30 to 20 Ma comprises a discontinuous set of ages that are concentrated in the blocks of Southern Sierra de Morones, Tlaltenango, Bolaños and the area around Cinco Minas-San Pedro Analco-Hostotipaquillo. An apparent gap of ages occurs between 12 to 18 Ma, followed by a predominantly mafic volcanism scattered mainly to the south of the area, that represents the transition of SMO to MVB. Finally mafic volcanism of the MVB of 3 to 4 Ma is present in the south, in the area excavated on the vicinity of Rio Grande de Santiago. 
A similar migration pattern has been reported in general for the whole SMO by Aguirre-Diaz and Labarthe-Hernandez (2003), from NE Chihuahua to SW Nayarit between ca. 50 Ma to 18 Ma. Thus, in this study we confirm this pattern at a more local scale. These authors also mention that such large ignimbrite units may have been produced by the extrusion of pyroclastic material through linear conduits. In the Aguascalientes area, we have found linear fissure vents for the local ignimbrites, confirming this interpretation in this area.
Visceral leishmaniasis in the state of Sao Paulo, Brazil: spatial and space-time analysis.
Cardim, Marisa Furtado Mozini; Guirado, Marluci Monteiro; Dibo, Margareth Regina; Chiaravalloti, Francisco
2016-08-11
To perform both space and space-time evaluations of visceral leishmaniasis in humans in the state of Sao Paulo, Brazil. The population considered in the study comprised autochthonous cases of visceral leishmaniasis and deaths resulting from it in Sao Paulo, between 1999 and 2013. The analysis considered the western region of the state as its studied area. Thematic maps were created to show visceral leishmaniasis dissemination in humans in the municipality. Spatial analysis tools Kernel and Kernel ratio were used to respectively obtain the distribution of cases and deaths and the distribution of incidence and mortality. Scan statistics were used in order to identify spatial and space-time clusters of cases and deaths. The visceral leishmaniasis cases in humans, during the studied period, were observed to occur in the western portion of Sao Paulo, and their territorial extension mainly followed the eastbound course of the Marechal Rondon highway. The incidences were characterized as two sequences of concentric ellipses of decreasing intensities. The first and more intense one was found to have its epicenter in the municipality of Castilho (where the Marechal Rondon highway crosses the border of the state of Mato Grosso do Sul) and the second one in Bauru. Mortality was found to have a similar behavior to incidence. The spatial and space-time clusters of cases were observed to coincide with the two areas of highest incidence. Both the space-time clusters identified, even without coinciding in time, started three years after the human cases were detected and had the same duration, that is, six years. The expansion of visceral leishmaniasis in Sao Paulo has been taking place in an eastbound direction, focusing on the role of highways, especially Marechal Rondon, in this process. The space-time analysis detected the disease occurred in cycles, in different spaces and time periods. These findings, if considered, may contribute to the adoption of actions that aim to
Space-time quantitative source apportionment of soil heavy metal concentration increments.
Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei
2017-04-01
Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences in the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region. However, the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Then, spatial overlay analysis was used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, the principal component analysis combined with the multiple linear regression method were employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural activities sources. Using STK and SOA to obtain the spatial distribution of heavy metal concentration increments in soils. Using PCA-MLR to quantify the source apportionment of soil heavy metal concentration increments. Copyright © 2017
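The PCA-MLR source apportionment mentioned above can be sketched in a few lines: extract principal components from the standardized concentration matrix, regress the total load on the component scores, and read relative source contributions off the regression coefficients. The data and mixing model below are synthetic, not the Wuhan measurements:

```python
import numpy as np

# Hedged sketch of PCA followed by multiple linear regression (PCA-MLR)
# for source apportionment. Synthetic two-source data; all names and
# mixing coefficients are illustrative assumptions.

rng = np.random.default_rng(0)
n = 200
industry = rng.lognormal(0.0, 0.5, n)   # hypothetical source strengths
traffic = rng.lognormal(0.0, 0.5, n)
# Metal concentrations as mixtures of the two sources plus noise.
X = np.column_stack([
    2.0 * industry + 0.3 * traffic,     # e.g. a Cd-like metal
    1.5 * industry + 0.2 * traffic,     # e.g. a Zn-like metal
    0.2 * industry + 1.8 * traffic,     # e.g. a Pb-like metal
]) + rng.normal(0, 0.05, (n, 3))

# PCA on standardized concentrations, via SVD.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T                   # keep the two leading components

# Regress total metal load on the component scores.
total = X.sum(axis=1)
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, total, rcond=None)
contrib = np.abs(coef[1:]) / np.abs(coef[1:]).sum()
print("relative source contributions:", np.round(contrib, 2))
```

In the study, the dependent variable would be the concentration increment rather than the raw load, and the retained components would be interpreted against known industrial, agricultural and traffic markers.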
Optimal design of hydraulic head monitoring networks using space-time geostatistics
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Júnez-Ferreira, H. E.
2013-05-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, at each step, selects the space-time point minimizing a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer, with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that, of the existing monitoring program consisting of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
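The combination of a static Kalman filter with sequential variance minimization can be sketched as a greedy selection loop over candidate measurement points. The covariance model, error variance, and point count below are hypothetical, not the aquifer data.

```python
import numpy as np

def greedy_network_design(P, r, n_select):
    """Sequentially select measurement points that minimize the total
    estimation variance, applying a scalar static Kalman update after
    each choice. P: prior space-time covariance; r: measurement error
    variance; n_select: number of points to choose."""
    P = P.copy()
    chosen = []
    candidates = list(range(P.shape[0]))
    for _ in range(n_select):
        best, best_var = None, np.inf
        for i in candidates:
            gain = P[:, i] / (P[i, i] + r)
            var = np.trace(P - np.outer(gain, P[i, :]))
            if var < best_var:
                best, best_var = i, var
        chosen.append(best)
        candidates.remove(best)
        gain = P[:, best] / (P[best, best] + r)
        P = P - np.outer(gain, P[best, :])   # Kalman covariance update
    return chosen, P

# Toy exponential space-time covariance over 5 candidate points
d = np.abs(np.subtract.outer(np.arange(5.0), np.arange(5.0)))
P0 = np.exp(-d / 2.0)
sel, Ppost = greedy_network_design(P0, r=0.1, n_select=2)
```

Points left unselected after the variance gain flattens out are the "redundant" ones in the sense of the paper.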
Space-time clusters of breast cancer using residential histories: A Danish case–control study
2014-01-01
Background A large proportion of breast cancer cases are thought to be related to environmental factors. Identification of specific geographical areas with high risk (clusters) may give clues to potential environmental risk factors. The aim of this study was to investigate whether clusters of breast cancer existed in space and time in Denmark, using 33 years of residential histories. Methods We conducted a population-based case–control study of 3138 female cases from the Danish Cancer Registry, diagnosed with breast cancer in 2003, and two independent control groups of 3138 women each, randomly selected from the Civil Registration System. Residential addresses of cases and controls from 1971 to 2003 were collected from the Civil Registration System and geo-coded. Q-statistics were used to identify space-time clusters of breast cancer. All analyses were carried out with both control groups, and for 66% of the study population we also conducted analyses adjusted for individual reproductive factors and area-level socioeconomic indicators. Results In the crude analyses a cluster in the northern suburbs of Copenhagen was consistently found throughout the study period (1971–2003) with both control groups. When analyses were adjusted for individual reproductive factors and area-level socioeconomic indicators, the cluster area became smaller and less evident. Conclusions The breast cancer cluster area that persisted after adjustment might be explained by factors that were not accounted for, such as alcohol consumption and use of hormone replacement therapy. However, we cannot exclude environmental pollutants as a contributing cause, but no pollutants specific to this area seem obvious. PMID:24725434
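A local Q-statistic of the kind used here counts, for each case, how many of its k nearest neighbours are also cases. The following single-time-slice sketch uses invented coordinates and case labels; the actual method evaluates this over residential histories at many time points.

```python
import numpy as np

def local_q(coords, is_case, k):
    """Jacquez-style local Q: for each case, the number of cases among
    its k nearest neighbours (single time slice for illustration)."""
    n = len(coords)
    q = np.zeros(n, dtype=int)
    for i in range(n):
        if not is_case[i]:
            continue
        d = np.linalg.norm(coords - coords[i], axis=1)
        d[i] = np.inf                    # exclude self
        nn = np.argsort(d)[:k]           # k nearest neighbours
        q[i] = int(is_case[nn].sum())
    return q

# Three clustered cases near the origin, two controls and one isolated
# case elsewhere (all values invented)
coords = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
is_case = np.array([1, 1, 1, 0, 0, 1], bool)
print(local_q(coords, is_case, k=2))  # → [2 2 2 0 0 0]
```

Significance is then assessed by comparing these counts with their distribution under random relabelling of cases and controls.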
Simple linear technique for the measurement of space-time coupling in ultrashort optical pulses.
Dorrer, Christophe; Walmsley, Ian A
2002-11-01
We demonstrate a simple sensitive linear technique that quantifies the spatiotemporal coupling in the electric field of an ultrashort optical pulse. The space-time uniformity of the field can be determined with only time-stationary filters and square-law integrating detectors, even if it is impossible to measure the temporal electric field in this way. A degree of spatiotemporal uniformity is defined and can be used with the demonstrated diagnostic to quantify space-time coupling. Experimental measurements of space-time coupling due to linear and nonlinear focusing, refraction, and diffraction are presented.
The Oppenheimer-Snyder space-time with a cosmological constant
NASA Astrophysics Data System (ADS)
Nakao, Ken-Ichi
1992-10-01
We investigate the Oppenheimer-Snyder space-time with a positive cosmological constant Λ. The interior of the dust sphere is described by the closed Friedmann-Robertson-Walker space-time, while the exterior is the Schwarzschild-de Sitter space-time. Due to the cosmological constant Λ, when the gravitational mass M0 of the dust sphere is very large, there is no collapsing solution with a de Sitter-like asymptotic region which expands exponentially in the frame of the expanding universe. This fact suggests that a very large initial inhomogeneity does not necessarily lead to the failure of the cosmic no-hair conjecture.
Separability of Gravitational Perturbations in Generalized Kerr-NUT-de Sitter Space-Time
NASA Astrophysics Data System (ADS)
Oota, Takeshi; Yasui, Yukinori
Generalized Kerr-NUT-de Sitter space-time is the most general space-time which admits a rank-2 closed conformal Killing-Yano tensor. It contains the higher-dimensional Kerr-de Sitter black holes with partially equal angular momenta. We study the separability of gravitational perturbations in the generalized Kerr-NUT-de Sitter space-time. We show that a certain type of tensor perturbations admits the separation of variables. The linearized perturbation equations for the Einstein condition are transformed into the ordinary differential equations of Fuchs type.
Theoretical analysis of Casimir and thermal Casimir effect in stationary space-time
NASA Astrophysics Data System (ADS)
Zhang, Anwei
2017-10-01
We investigate the Casimir effect as well as the thermal Casimir effect for a pair of perfectly conducting parallel plates placed in a general stationary space-time background. It is found that the Casimir energy is influenced by the 00-component of the metric and the corresponding quantity in the dragging frame. We give a scheme to renormalize the thermal correction to the free energy in curved space-time. It is shown that the thermal corrections to the Casimir thermodynamic quantities depend not only on the proper temperature and proper geometrical parameters of the plates, but also on the determinant of the space-time metric.
Exponential rational function method for space-time fractional differential equations
NASA Astrophysics Data System (ADS)
Aksoy, Esin; Kaplan, Melike; Bekir, Ahmet
2016-04-01
In this paper, the exponential rational function method is applied to obtain analytical solutions of the space-time fractional Fokas equation, the space-time fractional Zakharov-Kuznetsov-Benjamin-Bona-Mahony equation, and the space-time fractional coupled Burgers' equations. As a result, some exact solutions are successfully established. The fractional complex transform is used to convert the fractional differential equations into ordinary differential equations. The fractional derivatives are described in Jumarie's modified Riemann-Liouville sense. The exact solutions obtained by the proposed method indicate that the approach is easy to implement and effective.
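For reference, Jumarie's modified Riemann-Liouville derivative and the fractional complex transform that reduces such equations to ordinary differential equations can be written as follows; k and c denote generic transform constants, not values from the paper.

```latex
% Jumarie's modified Riemann-Liouville derivative (0 < \alpha < 1)
D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt}
  \int_0^t (t-\xi)^{-\alpha} \bigl( f(\xi) - f(0) \bigr)\, d\xi .

% Fractional complex transform: u(x,t) = U(\xi) with
\xi = \frac{k\, x^{\beta}}{\Gamma(1+\beta)} + \frac{c\, t^{\alpha}}{\Gamma(1+\alpha)},
\qquad
D_t^{\alpha} u = c\, U'(\xi), \qquad D_x^{\beta} u = k\, U'(\xi),

% so a space-time fractional PDE in u(x,t) becomes an ODE in U(\xi).
```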
An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers
Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung
2011-01-01
In this paper, we describe an approach to integrating a Space-Time GIS data model with a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS with high performance computing platforms.
Mobile phone usage in complex urban systems: a space-time, aggregated human activity study
NASA Astrophysics Data System (ADS)
Tranos, Emmanouil; Nijkamp, Peter
2015-04-01
The present study aims to demonstrate the importance of digital data for investigating space-time dynamics of aggregated human activity in urban systems. Such dynamics can be monitored and modelled using data from mobile phone operators regarding mobile telephone usage. Using such an extensive dataset from the city of Amsterdam, this paper introduces space-time explanatory models of aggregated human activity patterns. Various modelling experiments and results are presented, which demonstrate that mobile telephone data are a good proxy of the space-time dynamics of aggregated human activity in the city.
NASA Technical Reports Server (NTRS)
Nerushev, Alexander F.; Vasiliev, Victor I.
1994-01-01
An analysis of specific features of space-time variations of ozone in tropical areas was performed on the basis of the results of special expedition studies in the Atlantic and Pacific in 1987-1990 and the data of observations at the stations of the world ozonometric network over a 25-year period. The existence of a cause-and-effect relation has been revealed between the processes determining tropical cyclone (TC) development and specific features of variations of the total content of ozone (TCO) and the vertical distribution of ozone (VDO) in the regions of TC action. Characteristic features of day-to-day and daily variations of TCO during TC development have been found. On the periphery of a developing TC, 1-4 days before it reaches the storm stage, TCO increases, on average, by 5-8 percent, and a substantial increase in the concentration of ozone occurs in the middle and upper troposphere. The most probable physical mechanisms relating the observed specific features of ozone variations to TC evolution have been suggested. A hypothesis on the possibility of using ozone as an indicator for early prediction of TC development has been substantiated.
Earthquake Declustering via a Nearest-Neighbor Approach in Space-Time-Magnitude Domain
NASA Astrophysics Data System (ADS)
Zaliapin, I. V.; Ben-Zion, Y.
2016-12-01
We propose a new method for earthquake declustering based on nearest-neighbor analysis of earthquakes in space-time-magnitude domain. The nearest-neighbor approach was recently applied to a variety of seismological problems that validate the general utility of the technique and reveal the existence of several different robust types of earthquake clusters. Notably, it was demonstrated that clustering associated with the largest earthquakes is statistically different from that of small-to-medium events. In particular, the characteristic bimodality of the nearest-neighbor distances that helps separating clustered and background events is often violated after the largest earthquakes in their vicinity, which is dominated by triggered events. This prevents using a simple threshold between the two modes of the nearest-neighbor distance distribution for declustering. The current study resolves this problem hence extending the nearest-neighbor approach to the problem of earthquake declustering. The proposed technique is applied to seismicity of different areas in California (San Jacinto, Coso, Salton Sea, Parkfield, Ventura, Mojave, etc.), as well as to the global seismicity, to demonstrate its stability and efficiency in treating various clustering types. The results are compared with those of alternative declustering methods.
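The nearest-neighbor proximity underlying this kind of analysis combines rescaled time, distance, and magnitude for each pair of events; the sketch below uses a Baiesi-Paczuski-style proximity of the form eta = t * r^d * 10^(-b*m), with illustrative parameter values and a toy catalog rather than the authors' exact formulation.

```python
import numpy as np

def nearest_neighbor_proximity(t, x, y, m, b=1.0, d=1.6):
    """For each event j, the minimum space-time-magnitude proximity
    eta_ij = dt_ij * r_ij**d * 10**(-b*m_i) over all earlier events i,
    and the index of the parent event attaining it."""
    n = len(t)
    eta = np.full(n, np.inf)     # first event has no parent
    parent = np.full(n, -1)
    for j in range(1, n):
        dt = t[j] - t[:j]                          # inter-event times
        r = np.hypot(x[j] - x[:j], y[j] - y[:j])   # epicentral distances
        r[r == 0] = 1e-6                           # guard zero distance
        e = dt * r**d * 10.0**(-b * m[:j])
        k = int(np.argmin(e))
        eta[j], parent[j] = e[k], k
    return eta, parent

# Toy catalog: a large event at t=1.0 followed closely by a nearby event
t = np.array([0.0, 1.0, 1.01, 5.0])
x = np.array([0.0, 10.0, 10.1, 50.0])
y = np.zeros(4)
m = np.array([3.0, 6.0, 3.5, 3.0])
eta, parent = nearest_neighbor_proximity(t, x, y, m)
# Event 2 links to the large event 1 with a very small eta (clustered)
```

Declustering then amounts to thresholding or otherwise classifying these eta values, which is precisely where the bimodality issue described in the abstract arises.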
Space-time resolved measurements of spontaneous magnetic fields in laser-produced plasma
Pisarczyk, T.; Chodukowski, T.; Kalinowska, Z.; Borodziuk, S.; Gus'kov, S. Yu.; Dudzak, R.; Dostal, J.; Krousky, E.; Ullschmied, J.; Hrebicek, J.; Medrik, T.; Golasowski, J.; Pfeifer, M.; Skala, J.; Demchenko, N. N.; Korneev, Ph.; Kalal, M.; Renner, O.; Smid, M.; Pisarczyk, P.
2015-10-15
The first space-time resolved spontaneous magnetic field (SMF) measurements realized on the Prague Asterix Laser System are presented. The SMF was generated as a result of single laser beam (1.315 μm) interaction with massive planar targets made of materials with various atomic numbers (plastic and Cu). The measured SMFs confirmed an azimuthal geometry, and their maximum amplitude reached 10 MG at a laser energy of 250 J for both target materials. It was demonstrated that the spatial distributions of these fields are associated with the character of the ablative plasma expansion, which clearly depends on the target material. To measure the SMF, the Faraday effect was employed, which causes rotation of the polarization vector of the linearly polarized diagnostic beam. The rotation angle was determined together with the phase shift using a novel design of a two-channel polaro-interferometer. To obtain sufficiently high temporal resolution, the polaro-interferometer was irradiated by a Ti:Sa laser pulse with a wavelength of 808 nm and a pulse duration of 40 fs. The results of the measurements were compared with theoretical analysis.
NASA Astrophysics Data System (ADS)
Panthou, G.; Vischel, T.; Lebel, T.; Quantin, G.; Molinié, G.
2014-07-01
Intensity-duration-area-frequency (IDAF) curves are increasingly in demand for characterizing the severity of storms and for designing hydraulic structures. Their computation requires inferring areal rainfall distributions over the range of space-time scales that are the most relevant for hydrological studies at catchment scale. In this study, IDAF curves are computed for the first time in West Africa, based on the data provided by the AMMA-CATCH Niger network, composed of 30 recording rain gauges that have operated since 1990 over a 16,000 km2 area in southwest Niger. The IDAF curves are obtained by separately considering the time (IDF) and space (Areal Reduction Factor - ARF) components of the extreme rainfall distribution. Annual maximum intensities are extracted for resolutions between 1 and 24 h in time and from point (rain gauge) to 2500 km2 in space. The IDF model used is based on the concept of scale invariance (simple scaling), which allows the normalization of the different temporal resolutions of the maxima series, to which a global GEV is fitted. This parsimonious framework allows the concept of dynamic scaling to be used to describe the ARF. The results show that coupling simple scaling in space and time with a dynamical scaling relating space and time allows the effect of space-time aggregation on the distribution of extreme rainfall to be modeled satisfactorily.
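The simple-scaling normalization step can be illustrated on synthetic annual maxima; the scaling exponent, Gumbel parameters, and record length below are invented, and in the actual framework a single GEV would then be fitted to the pooled, renormalized sample (e.g. with scipy.stats.genextreme.fit).

```python
import numpy as np

rng = np.random.default_rng(0)
durations = np.array([1.0, 3.0, 6.0, 12.0, 24.0])   # hours
n_true = 0.7                                        # hypothetical exponent

# Synthetic annual-maximum intensities obeying simple scaling:
# I(D) = I(1h) * D**(-n), for 40 synthetic "years"
base = rng.gumbel(loc=30.0, scale=8.0, size=(40, 1))
maxima = base * durations**(-n_true)

# Estimate the scaling exponent from the log-log slope of the mean maxima
slope = np.polyfit(np.log(durations), np.log(maxima.mean(axis=0)), 1)[0]
n_hat = -slope

# Renormalize all durations to a common scale; a single ("global") GEV
# can then be fitted to this pooled sample
pooled = (maxima * durations**n_hat).ravel()
print(round(n_hat, 2))  # → 0.7
```

The payoff of the simple-scaling assumption is parsimony: one exponent plus one GEV replaces a separate fit per duration.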
Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.
Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica
2017-07-05
We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within
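A recovery-based error indicator of the kind used to guide such adaptation can be sketched in one dimension: compare the raw piecewise gradient with a smoothed (recovered) gradient and flag cells where they disagree. This is a generic Zienkiewicz-Zhu-style sketch on a toy mesh, not the paper's estimator.

```python
import numpy as np

def recovery_error_indicator(x, u):
    """Recovery-based indicator on a 1-D mesh: average elementwise
    gradients to the nodes, interpolate back to element midpoints, and
    measure the mismatch. Large values flag cells for refinement."""
    h = np.diff(x)
    grad = np.diff(u) / h                        # elementwise gradient
    nodal = np.zeros(len(x))
    nodal[1:-1] = 0.5 * (grad[:-1] + grad[1:])   # recovered nodal gradient
    nodal[0], nodal[-1] = grad[0], grad[-1]
    recovered = 0.5 * (nodal[:-1] + nodal[1:])   # back to element midpoints
    return np.abs(recovered - grad) * h          # per-element indicator

x = np.linspace(0.0, 1.0, 11)
u = np.where(x < 0.5, x, 10.0 * x - 4.5)   # kink at x = 0.5 (sharp front)
eta = recovery_error_indicator(x, u)
# The largest indicator values cluster around the kink, i.e. where a
# concentration front would demand refinement
```

The paper's two strategies differ in whether such an indicator is driven by concentration gradients alone or by concentration and velocity gradients together.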
Space-Time Controls on Carbon Sequestration Over Large-Scale Amazon Basin
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Cooper, Harry J.; Gu, Jiujing; Grose, Andrew; Norman, John; daRocha, Humberto R.; Starr, David O. (Technical Monitor)
2002-01-01
A major research focus of the LBA Ecology Program is an assessment of the carbon budget and the carbon sequestering capacity of the large-scale forest-pasture system that dominates the Amazonia landscape, and its time-space heterogeneity manifest in carbon fluxes across the large-scale Amazon basin ecosystem. Quantification of these processes requires a combination of in situ measurements, remotely sensed measurements from space, and a realistically forced hydrometeorological model coupled to a carbon assimilation model, capable of simulating details within the surface energy and water budgets along with the principal modes of photosynthesis and respiration. Here we describe the results of an investigation concerning the space-time controls of carbon sources and sinks distributed over the large-scale Amazon basin. The results are derived from a carbon-water-energy budget retrieval system for the large-scale Amazon basin, which uses a coupled carbon assimilation-hydrometeorological model as an integrating system, forced by both in situ meteorological measurements and remotely sensed radiation fluxes and precipitation retrievals from a combination of GOES, SSM/I, TOMS, and TRMM satellite measurements. We briefly discuss validation of (a) retrieved surface radiation fluxes and precipitation, based on 30-min averaged surface measurements taken at Ji-Parana in Rondonia and Manaus in Amazonas, and (b) modeled carbon fluxes, based on tower CO2 flux measurements taken at Reserva Jaru, Manaus, and Fazenda Nossa Senhora. The space-time controls on carbon sequestration are partitioned into sets of factors classified by: (1) above-canopy meteorology, (2) incoming surface radiation, (3) precipitation interception, and (4) indigenous stomatal processes varied over the different land covers of pristine rainforest, partially and fully logged rainforests, and pasture lands. These are the principal meteorological, thermodynamical, hydrological, and biophysical
Matsumoto, Masaki; Yamanaka, Tsuneyasu; Hayakawa, Nobuhiro; Iwai, Satoshi; Sugiura, Nobuyuki
2015-03-01
This paper describes the Basic Radionuclide vAlue for Internal Dosimetry (BRAID) code, which was developed to calculate the time-dependent activity distribution in each organ and tissue characterised by the biokinetic compartmental models provided by the International Commission on Radiological Protection (ICRP). Translocation from one compartment to the next is taken to be governed by first-order kinetics, formulated as a system of first-order differential equations. In the source program of this code, the conservation equations are solved for the mass balance that describes the transfer of a radionuclide between compartments. This code is applicable to the evaluation of the radioactivity of nuclides in an organ or tissue without modification of the source program. It can also easily handle revisions of the biokinetic model or the application of a user-defined model, because the code is designed so that all information on the biokinetic model structure is imported from an input file. Sample calculations are performed with the ICRP model, and the results are compared with analytic solutions using simple models. It is suggested that this code provides sufficient results for dose estimation and the interpretation of monitoring data.
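First-order compartmental kinetics of this kind reduce to a linear ODE system dA/dt = M A, where M holds the transfer coefficients. The two-compartment sketch below uses hypothetical coefficients, not an ICRP model, and solves the system via eigendecomposition (the matrix exponential).

```python
import numpy as np

# Two-compartment first-order kinetics (hypothetical coefficients, 1/day):
#   dA1/dt = -k1*A1            (compartment 1, e.g. blood)
#   dA2/dt =  k1*A1 - k2*A2    (compartment 2, with loss rate k2)
k1, k2 = 0.5, 0.1
M = np.array([[-k1,  0.0],
              [ k1, -k2]])
A0 = np.array([1.0, 0.0])      # unit activity deposited in compartment 1

def activity(t):
    """Solve dA/dt = M A by eigendecomposition: A(t) = V exp(w t) V^-1 A0."""
    w, V = np.linalg.eig(M)
    c = np.linalg.solve(V, A0)
    return (V @ (c * np.exp(w * t))).real

A = activity(3.0)
# A[0] matches the closed form exp(-k1*t); A[1] follows the Bateman-type
# solution k1/(k2-k1) * (exp(-k1*t) - exp(-k2*t))
```

The same machinery extends to the many-compartment ICRP models: only the matrix M, read from an input file in BRAID's case, changes.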
Influence of the input database in detecting fire space-time clusters
NASA Astrophysics Data System (ADS)
Pereira, Mário; Costa, Ricardo; Tonini, Marj; Vega Orozco, Carmen; Parente, Joana
2015-04-01
Fire incidence variability is influenced by local environmental variables such as topography, land use, vegetation and weather conditions. These induce a cluster pattern in the distribution of fire events. The space-time permutation scan statistics (STPSS) method developed by Kulldorff et al. (2005) and implemented in the SaTScanTM software (http://www.satscan.org/) has proved able to detect space-time clusters in many different fields, even when using incomplete and/or inaccurate input data. Nevertheless, the dependence of the STPSS method on the characteristics of different datasets describing the same environmental phenomenon has not yet been studied. In this sense, the objective of this study is to assess the robustness of the STPSS for detecting real clusters using different input datasets and to explain the obtained results. This study takes advantage of the existence of two very different official fire datasets currently available for Portugal, both provided by the Institute for the Conservation of Nature and Forests. The first one is the aggregated Portuguese Rural Fire Database (PRFD) (Pereira et al., 2011), which is based on ground measurements and provides detailed information about the ignition and extinction date/time and the area burnt by each fire in forest, scrub and agricultural areas. However, in the PRFD, the location of each fire is indicated by the name of the smallest administrative unit (the parish) where the ignition occurred. Consequently, since the application of the STPSS requires the geographic coordinates of the events, the centroids of the parishes were used. The second fire dataset is the national mapping of burnt areas (NMBA), which is based on satellite measurements and delivered in shapefile format. The NMBA provides detailed spatial information (the shape and size of each fire), but the temporal information is restricted to the year of occurrence. Besides these differences, the two datasets cover different periods, they
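At the core of the STPSS is a Poisson log-likelihood ratio comparing the observed count in a space-time cylinder with the count expected from the spatial and temporal marginals alone. The sketch below uses invented parish-by-month counts; the full method scans over many cylinders and assesses significance by Monte Carlo permutation.

```python
import numpy as np

def cylinder_llr(counts, zone, window):
    """Observed count and log-likelihood ratio for one space-time cylinder
    under the space-time permutation scan statistic. counts: locations x
    time intervals; zone/window: lists of row/column indices."""
    C = counts.sum()
    c = counts[np.ix_(zone, window)].sum()                    # observed
    mu = counts[zone, :].sum() * counts[:, window].sum() / C  # expected
    if c <= mu:
        return 0.0            # only excesses are of interest
    return c * np.log(c / mu) + (C - c) * np.log((C - c) / (C - mu))

# Toy fire counts: rows = parishes, columns = months (invented values)
counts = np.array([[9, 1, 1],
                   [1, 1, 1],
                   [1, 1, 1]])
llr = cylinder_llr(counts, zone=[0], window=[0])  # excess in parish 0, month 0
```

Because expectations are built from the data's own marginals, the statistic needs no population-at-risk data, which is what makes it robust to the dataset differences studied here.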
2001-08-30
Questionnaires documenting the frequency with which mobile code is used in web-enabled courseware programming were sent to points of contact at 51 DoD academic agencies and 13 Learning Management System vendors. Eighteen surveys were returned, and only one-third of the respondents indicated that they used...
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding.
Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address the probabilistic model distortion caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding. The proposed mechanism thus improves coding performance under various application conditions.
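A CU-distribution probability model enables early termination of the depth search: depths the model deems unlikely for the current QP and content-change state are simply not tested. The sketch below is deliberately simplified and hypothetical; the table values, state labels, and threshold are invented, not the paper's model.

```python
# Hypothetical probability of each CU depth (0..3) being chosen,
# conditioned on (QP, content-change state); values are invented.
prob_table = {
    (32, "low"):  [0.60, 0.30, 0.08, 0.02],
    (32, "high"): [0.20, 0.30, 0.30, 0.20],
}

def should_test_depth(prob_table, qp, cc, depth, threshold=0.05):
    """Skip rate-distortion testing of a CU depth whose predicted
    likelihood falls below a threshold -- the kind of pruning decision
    a CU-distribution model enables."""
    return prob_table[(qp, cc)][depth] >= threshold

print(should_test_depth(prob_table, 32, "low", 3))   # → False (pruned)
print(should_test_depth(prob_table, 32, "high", 3))  # → True
```

The probability-update step described in the abstract would refresh these table entries as the content changes, keeping the pruning decisions accurate.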
Space-time stick-breaking processes for small area disease cluster estimation.
Hossain, Md Monir; Lawson, Andrew B; Cai, Bo; Choi, Jungsoon; Liu, Jihong; Kirby, Russell S
2013-03-01
We propose a space-time stick-breaking process for the disease cluster estimation. The dependencies for spatial and temporal effects are introduced by using space-time covariate dependent kernel stick-breaking processes. We compared this model with the space-time standard random effect model by checking each model's ability in terms of cluster detection of various shapes and sizes. This comparison was made for simulated data where the true risks were known. For the simulated data, we have observed that space-time stick-breaking process performs better in detecting medium- and high-risk clusters. For the real data, county specific low birth weight incidences for the state of South Carolina for the years 1997-2007, we have illustrated how the proposed model can be used to find grouping of counties of higher incidence rate.
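Covariate-dependent kernel stick-breaking weights of the general form used in such models can be sketched as follows: each stick-break is damped by a kernel measuring how close a space-time point is to that component's knot. The Gaussian kernel, knot locations, and bandwidth are illustrative assumptions.

```python
import numpy as np

def kernel_stick_breaking_weights(s, knots, V, lam=1.0):
    """Covariate-dependent stick-breaking weights:
    pi_k(s) = w_k(s) * V_k * prod_{j<k} (1 - w_j(s) * V_j),
    with Gaussian kernels w_k(s) centred at space-time knots."""
    w = np.exp(-np.sum((s - knots) ** 2, axis=1) / lam)
    pieces = w * V
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - pieces[:-1])))
    return pieces * remaining

rng = np.random.default_rng(1)
knots = rng.uniform(0, 1, size=(10, 3))   # (x, y, t) knot locations
V = rng.beta(1.0, 1.0, size=10)           # stick-breaking beta draws
s = np.array([0.5, 0.5, 0.5])             # one space-time point
pi = kernel_stick_breaking_weights(s, knots, V)
# Weights are nonnegative and sum to at most one; nearby points get
# similar weights, which is what induces space-time dependence
```

Because the weights vary smoothly with s, clusters of elevated risk emerge wherever neighbouring counties and years share dominant mixture components.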