Science.gov

Sample records for distributed space-time codes

  1. Power Allocation Strategies for Distributed Space-Time Codes in Amplify-and-Forward Mode

    NASA Astrophysics Data System (ADS)

    Maham, Behrouz; Hjørungnes, Are

    2009-12-01

    We consider a wireless relay network with Rayleigh fading channels and apply distributed space-time coding (DSTC) in amplify-and-forward (AF) mode. It is assumed that the relays have statistical channel state information (CSI) of the local source-relay channels, while the destination has full instantaneous CSI of the channels. It turns out that, combined with the minimum SNR based power allocation in the relays, AF DSTC results in a new opportunistic relaying scheme, in which the best relay is selected to retransmit the source's signal. Furthermore, we have derived the optimum power allocation between two cooperative transmission phases by maximizing the average received SNR at the destination. Next, assuming M-PSK and M-QAM modulations, we analyze the performance of cooperative diversity wireless networks using AF opportunistic relaying. We also derive an approximate formula for the symbol error rate (SER) of AF DSTC. Assuming the use of full-diversity space-time codes, we derive two power allocation strategies minimizing the approximate SER expressions, for constrained transmit power. Our analytical results have been confirmed by simulation results, using full-rate, full-diversity distributed space-time codes.
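
    The opportunistic-relaying behaviour described above can be illustrated in a few lines. The sketch below is an illustrative assumption, not the paper's exact minimum-SNR power-allocation rule: it draws Rayleigh-fading link SNRs and picks the single relay with the best instantaneous amplify-and-forward end-to-end SNR, using the standard two-hop SNR expression; function and variable names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def af_end_to_end_snr(snr_sr, snr_rd):
    """Standard approximation of the end-to-end SNR of a two-hop
    amplify-and-forward link (source-relay and relay-destination SNRs)."""
    return snr_sr * snr_rd / (snr_sr + snr_rd + 1.0)

def pick_best_relay(n_relays=4, avg_snr_db=10.0):
    """Draw Rayleigh-fading link SNRs and return the index of the relay
    maximizing the instantaneous AF end-to-end SNR."""
    avg_snr = 10 ** (avg_snr_db / 10)
    # Exponentially distributed instantaneous SNRs (Rayleigh fading power).
    snr_sr = rng.exponential(avg_snr, n_relays)
    snr_rd = rng.exponential(avg_snr, n_relays)
    snr_eq = af_end_to_end_snr(snr_sr, snr_rd)
    return int(np.argmax(snr_eq)), snr_eq

best, snr_eq = pick_best_relay()
print(f"best relay: {best}, end-to-end SNRs: {np.round(snr_eq, 2)}")
```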

  2. Weighted adaptively grouped multilevel space time trellis codes

    NASA Astrophysics Data System (ADS)

    Jain, Dharmvir; Sharma, Sanjay

    2015-05-01

In existing grouped multilevel space-time trellis codes (GMLSTTCs), the groups of transmit antennas are predefined, and the transmit power is equally distributed across all transmit antennas. When the channel parameters are perfectly known at the transmitter, an adaptive antenna grouping and beamforming scheme can achieve better performance by optimally grouping the transmit antennas and properly weighting the transmitted signals based on the available channel information. In this paper, we present a new code designed by combining GMLSTTCs, adaptive antenna grouping and beamforming using the channel state information at the transmitter (CSIT), henceforth referred to as weighted adaptively grouped multilevel space time trellis codes (WAGMLSTTCs). The CSIT is used to adaptively group the transmitting antennas and provide a beamforming scheme by allocating different powers to the transmit antennas. Simulation results show that WAGMLSTTCs provide an improvement in error performance of 2.6 dB over GMLSTTCs.

  3. Space-Time Code Designs for Broadband Wireless Communications

    DTIC Science & Technology

    2005-03-01

Decoding Algorithms (i). Fast iterative decoding algorithms for lattice-based space-time coded MIMO systems and single antenna vector OFDM systems: We... Information Theory, vol. 49, p. 313, Jan. 2003. 5. G. Fan and X.-G. Xia, "Wavelet-Based Texture Analysis and Synthesis Using Hidden Markov Models," IEEE... PSK, and CPM signals, lattice-based space-time codes, and unitary differential space-time codes for large numbers of transmit antennas. We want to

  4. Differential Cooperative Communications with Space-Time Network Coding

    DTIC Science & Technology

    2010-01-01

The received signal at U_n in the m-th time slot of Phase I is y^k_mn = √(P_t) g^k_mn v^k_m + w^k_mn, (1) where P_t is the power constraint of the user nodes, w... rate (SER) at U_n for the symbols from U_m is p_mn, and the β_mn are independent Bernoulli random variables with distribution P(β_mn) = 1 − p_mn for β_mn = 1, and p_mn for β_mn = 0. (17) The SER for M-QAM modulation can be expressed as [12] p_mn = F_2(1 + b_q γ_mn / sin² θ), (18) where b_q = b_QAM/2 = 3/(2(M+1)) and γ_mn

  5. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    DTIC Science & Technology

    2014-06-01

Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), Generalized time-reversed space-time block codes (GTR-STBC) 16... Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per-subcarrier basis in frequency selective fading, it is not a viable

  6. Differential Space-Time Coding Scheme Using Star Quadrature Amplitude Modulation Method

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Xu, DaZhuan; Bi, Guangguo

    2006-12-01

Differential space-time coding (DSTC) has received much interest as it obviates the requirement of channel state information at the receiver while maintaining the desired properties of space-time coding techniques. In this paper, by introducing the star quadrature amplitude modulation (star QAM) method, two kinds of multiple-amplitude DSTC schemes are proposed. One is based on the differential unitary space-time coding (DUSTC) scheme, and the other is based on the differential orthogonal space-time coding (DOSTC) scheme. Corresponding bit-error-rate (BER) performance and coding-gain analyses are given, respectively. Through multiple-amplitude modulation, the proposed schemes avoid the performance loss that conventional DSTC schemes based on phase-shift keying (PSK) modulation suffer at high spectral efficiency. Compared with conventional PSK-based DSTC schemes, the developed schemes have higher spectral efficiency, by carrying information not only in the phases but also in the amplitudes, and higher coding gain. Moreover, the first scheme supports low-complexity differential modulation and different code rates and can be applied to any number of transmit antennas, while the second scheme has a simple decoder and a high code rate in the case of 3 and 4 antennas. The simulation results show that our schemes achieve lower BER than conventional DUSTC and DOSTC schemes.

  7. Novel space-time trellis codes for free-space optical communications using transmit laser selection.

    PubMed

    García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2015-09-21

In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is proposed here for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model in which the effects of beam width, detector size and jitter variance are considered. Obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results are further demonstrated to confirm the analytical results.

  8. Performance of a space-time block coded code division multiple access system over Nakagami-m fading channels

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Dong, Tao; Xu, Dazhuan; Bi, Guangguo

    2010-09-01

    By introducing an orthogonal space-time coding scheme, multiuser code division multiple access (CDMA) systems with different space time codes are given, and corresponding system performance is investigated over a Nakagami-m fading channel. A low-complexity multiuser receiver scheme is developed for space-time block coded CDMA (STBC-CDMA) systems. The scheme can make full use of the complex orthogonality of space-time block coding to simplify the high decoding complexity of the existing scheme. Compared to the existing scheme with exponential decoding complexity, it has linear decoding complexity. Based on the performance analysis and mathematical calculation, the average bit error rate (BER) of the system is derived in detail for integer m and non-integer m, respectively. As a result, a tight closed-form BER expression is obtained for STBC-CDMA with an orthogonal spreading code, and an approximate closed-form BER expression is attained for STBC-CDMA with a quasi-orthogonal spreading code. Simulation results show that the proposed scheme can achieve almost the same performance as the existing scheme with low complexity. Moreover, the simulation results for average BER are consistent with the theoretical analysis.

  9. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.

  10. Alamouti-Type Space-Time Coding for Free-Space Optical Communication with Direct Detection

    NASA Astrophysics Data System (ADS)

    Simon, M. K.; Vilnrotter, V.

    2003-11-01

In optical communication systems employing direct detection at the receiver, intensity modulations such as on-off keying (OOK) or pulse-position modulation (PPM) are commonly used to convey the information. Consider the possibility of applying space-time coding in such a scenario, using, for example, an Alamouti-type coding scheme [1]. Implicit in the Alamouti code is the fact that the modulation that defines the signal set is such that it is meaningful to transmit and detect both a signal and its negative. While modulations such as phase-shift keying (PSK) and quadrature amplitude modulation (QAM) naturally fall into this class, OOK and PPM do not, since the signal polarity (phase) would not be detected at the receiver. We investigate a modification of the Alamouti code for use with such modulations that has the same desirable properties as the conventional Alamouti code but does not require transmitting the negative of a signal.
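
    For reference, the conventional Alamouti scheme discussed above transmits (s1, s2) from the two antennas in the first symbol period and (-s2*, s1*) in the second, which is exactly where the need to transmit a signal's negative arises. The sketch below is a generic textbook implementation of that conventional scheme (not the modified code proposed in the record), showing the encoding and the linear combining at a single-antenna receiver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two QPSK symbols and a 2x1 flat-fading channel (one receive antenna).
s = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)

# Alamouti encoding: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*).
# The second slot requires transmitting the *negative* of a symbol, which is
# what breaks down for intensity modulations such as OOK or PPM.
x_slot1 = np.array([s[0], s[1]])
x_slot2 = np.array([-np.conj(s[1]), np.conj(s[0])])

r1 = h @ x_slot1   # received sample, slot 1 (noise omitted for clarity)
r2 = h @ x_slot2   # received sample, slot 2

# Linear combining recovers each symbol scaled by |h1|^2 + |h2|^2.
s1_hat = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
s2_hat = np.conj(h[1]) * r1 - h[0] * np.conj(r2)
gain = np.sum(np.abs(h) ** 2)
print(np.allclose([s1_hat, s2_hat], gain * s))  # True
```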

  11. A novel repetition space-time coding scheme for mobile FSO systems

    NASA Astrophysics Data System (ADS)

    Li, Ming; Cao, Yang; Li, Shu-ming; Yang, Shao-wen

    2015-03-01

Considering the influence of stronger atmospheric turbulence, larger pointing errors and a highly dynamic link on the transmission performance of mobile multiple-input multiple-output (MIMO) free space optics (FSO) communication systems, this paper establishes a channel model for the mobile platform. Based on the combination of Alamouti space-time coding and time hopping ultra-wide band (TH-UWB) communications, a novel repetition space-time coding (RSTC) method for mobile 2×2 free-space optical communications with pulse position modulation (PPM) is developed. In particular, two decoding methods, equal gain combining (EGC) maximum likelihood detection (MLD) and correlation matrix detection (CMD), are derived. When a quasi-static fading and weak turbulence channel model is considered, simulation results show that whether the channel state information (CSI) is known or not, the coded system achieves significantly better symbol error rate (SER) performance than the uncoded one. In other words, transmit diversity can be achieved while conveying the information only through the time delays of the modulated signals transmitted from different antennas. CMD has almost the same signal-combining effect as maximal ratio combining (MRC). However, when the channel correlation increases, the SER performance of the coded 2×2 system degrades significantly.

  12. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea

    2010-04-01

    The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  13. Lab-on-a-chip flow cytometer employing color-space-time coding.

    PubMed

    Cho, Sung Hwan; Qiao, Wen; Tsai, Frank S; Yamashita, Kenichi; Lo, Yu-Hwa

    2010-08-30

We describe a fluorescent detection technique for a lab-on-a-chip flow cytometer. Fluorescent emission is encoded into a time-dependent signal as a fluorescent cell or bead traverses a waveguide array with integrated spatial filters and color filters. Different from conventional colored filters with well-defined transmission spectral windows, the integrated color filters are designed to have broad transmission characteristics, similar to the red-green-blue photoreceptors in the retina of the human eye. This unique design allows us to detect multiple fluorescent colors with only three color filters based on the technique of color-space-time coding, using only a single photomultiplier tube or avalanche photodetector.

  14. Two Novel Space-Time Coding Techniques Designed for UWB MISO Systems Based on Wavelet Transform

    PubMed Central

    Zaki, Amira Ibrahim; El-Khamy, Said E.

    2016-01-01

In this paper two novel space-time coding multi-input single-output (STC MISO) schemes, designed especially for Ultra-Wideband (UWB) systems, are introduced. The proposed schemes are referred to as wavelet space-time coding (WSTC) schemes. The WSTC schemes are based on two types of multiplexing, spatial and wavelet domain multiplexing. In WSTC schemes, four symbols are transmitted on the same UWB transmission pulse with the same bandwidth, symbol duration, and number of transmitting antennas as the conventional STC MISO scheme. The mother wavelet (MW) is selected to be highly correlated with the transmitted pulse shape and such that the multiplexed signal has almost the same spectral characteristics as those of the original UWB pulse. The two WSTC techniques increase the data rate to four times that of the conventional STC. The first WSTC scheme increases the data rate with a simple combination process. The second scheme achieves the increase in the data rate with a less complex receiver and better performance than the first scheme, due to the spatial diversity introduced by the structure of its transmitter and receiver. The two schemes use Rake receivers to collect the energy in the dense multipath channel components. The simulation results show that the proposed WSTC schemes have better performance than the conventional scheme, in addition to increasing the data rate to four times that of the conventional STC scheme. PMID:27959939

  15. A multi-layer VLC imaging system based on space-time trace-orthogonal coding

    NASA Astrophysics Data System (ADS)

    Li, Peng-Xu; Yang, Yu-Hong; Zhu, Yi-Jun; Zhang, Yan-Yu

    2017-02-01

    In visible light communication (VLC) imaging systems, different properties of data are usually demanded for transmission with different priorities in terms of reliability and/or validity. For this consideration, a novel transmission scheme called space-time trace-orthogonal coding (STTOC) for VLC is proposed in this paper by taking full advantage of the characteristics of time-domain transmission and space-domain orthogonality. Then, several constellation designs for different priority strategies subject to the total power constraint are presented. One significant advantage of this novel scheme is that the inter-layer interference (ILI) can be eliminated completely and the computation complexity of maximum likelihood (ML) detection is linear. Computer simulations verify the correctness of our theoretical analysis, and demonstrate that both transmission rate and error performance of the proposed scheme greatly outperform the conventional multi-layer transmission system.

  16. General relativistic radiative transfer code in rotating black hole space-time: ARTIST

    NASA Astrophysics Data System (ADS)

    Takahashi, Rohta; Umemura, Masayuki

    2017-02-01

We present a general relativistic radiative transfer code, ARTIST (Authentic Radiative Transfer In Space-Time), that is a perfectly causal scheme to pursue the propagation of radiation with absorption and scattering around a Kerr black hole. The code explicitly solves the invariant radiation intensity along null geodesics in the Kerr-Schild coordinates, and therefore properly includes light bending, Doppler boosting, frame dragging, and gravitational redshifts. The notable aspect of ARTIST is that it conserves the radiative energy with high accuracy, and is not subject to numerical diffusion, since the transfer is solved on long characteristics along null geodesics. We first solve the wavefront propagation around a Kerr black hole that was originally explored by Hanni. This demonstrates repeated wavefront collisions, light bending, and causal propagation of radiation with the speed of light. We show that the decay rate of the total energy of wavefronts near a black hole is determined solely by the black hole spin in late phases, in agreement with analytic expectations. As a result, ARTIST turns out to correctly solve the general relativistic radiation fields until late phases of t ≈ 90M. We also explore the effects of absorption and scattering, and apply this code to a photon wall problem and an orbiting hotspot problem. All the simulations in this study are performed in the equatorial plane around a Kerr black hole. ARTIST is the first step toward realizing general relativistic radiation hydrodynamics.

  17. Wireless Network Cocast: Cooperative Communications with Space-Time Network Coding

    DTIC Science & Technology

    2011-04-21

the transform-based STNC for different numbers of user nodes (N = 2 and N = 3), QPSK and 16-QAM modulation, and (a) (M = 1) and (b) (M = 2)... and (b) 16-QAM modulations... Performance comparison between the proposed STNC scheme and a scheme employing... distributed Alamouti code for N = 2 and M = 2, (a) QPSK and (b) 16-QAM modulations... A multi-source wireless network

  18. A two-level space-time color-coding method for 3D measurements using structured light

    NASA Astrophysics Data System (ADS)

    Xue, Qi; Wang, Zhao; Huang, Junhui; Gao, Jianmin; Qi, Zhaoshuai

    2015-11-01

Color-coding methods have significantly improved the measurement efficiency of structured light systems. However, some problems, such as color crosstalk and chromatic aberration, decrease the measurement accuracy of the system. A two-level space-time color-coding method is thus proposed in this paper. The method, which includes a space-code level and a time-code level, is shown to be reliable and efficient. The influence of chromatic aberration is completely mitigated when using this method. Additionally, a self-adaptive windowed Fourier transform is used to eliminate all color crosstalk components. Theoretical analyses and experiments have shown that the proposed coding method solves the problems of color crosstalk and chromatic aberration effectively, and that it guarantees high measurement accuracy, very close to that obtained with monochromatic coded patterns.

  19. Low Complexity Receiver Based Space-Time Codes for Broadband Wireless Communications

    DTIC Science & Technology

    2011-01-31

STBC family is a combination/overlay between orthogonal STBC and Toeplitz codes, which could be viewed as a generalization of overlapped Alamouti... codes (OAC) and Toeplitz codes recently proposed in the literature. It is shown that the newly proposed STBC may outperform the existing codes when... mixed asynchronous signals in the first time-slot by a Toeplitz matrix, and then broadcasts them back to the terminals in the second time-slot. A

  20. Energy Distributions in Szekeres Type I and II Space-Times

    NASA Astrophysics Data System (ADS)

Aygün, S.; Aygün, M.; Tarhan, I.

    2006-10-01

In this study, in the context of general relativity, we consider the Einstein, Bergmann-Thomson, Møller and Landau-Lifshitz energy-momentum definitions, and we compute the total energy distribution (due to matter and fields, including gravitation) of the universe based on Szekeres class I and class II space-times. We show that the Einstein and Bergmann-Thomson definitions of the energy-momentum complexes give the same results, while the Møller and Landau-Lifshitz energy-momentum definitions do not give the same results for Szekeres class II space. The Einstein, Bergmann-Thomson and Møller definitions of the energy-momentum complexes give similar results in Szekeres class I space-time.

  1. Computer code for space-time diagnostics of nuclear safety parameters

    SciTech Connect

    Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.

    2012-07-01

The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of reactor cores and databases for RBMK-1000, on the basis of analytical methods for the interrelation of nuclear safety parameters. The code algorithms are based on the analysis of deviations between the physically obtained values and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between the measured and calculated signals indicate a mismatch between the behavior of the physical device and that of its simulator. The diagnostics system can solve the following problems: identification of the occurrence and time of inconsistent results, localization of failures, and identification and quantification of the causes of the inconsistencies. These problems can be effectively solved only when the computer code works in real-time mode, which leads to increased requirements on code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1, is used for the neutron-physical calculation in the computer code ECRAN 3D. (authors)

  2. UNIX code management and distribution

    SciTech Connect

    Hung, T.; Kunz, P.F.

    1992-09-01

We describe a code management and distribution system based on tools freely available for UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS-mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site, such that remote developers are true peers in the code development process.

  3. Detection of overall space-time clustering in a non-uniformly distributed population. DiMe Study Group.

    PubMed

    Ranta, J; Pitkäniemi, J; Karvonen, M; Virtala, E; Rusanen, J; Colpaert, A; Naukkarinen, A; Tuomilehto, J

    1996-12-15

    We developed a test statistic based on an approach of Whittemore et al. (1987) to detect space-time clustering for non-infectious diseases. We extended the spatial test of Whittemore et al. by deriving conditional probabilities for Poisson distributed random variables. To combine spatial and time distances we defined a distance matrix D, where dij is the distance between the ith and jth cell in a three-dimensional space-time grid. Spatial and temporal components are controlled by a weight. By altering the weight, both marginal tests and the intermediate test can be reached. Allowing a continuum from a pure spatial to a pure temporal test, the best result will be gained by trying different weights, because the occurrence of a disease might show some temporal and some spatial tendency to cluster. We examined the behaviour of the test statistic by simulating different distributions for cases and the population. The test was applied to the incidence data of insulin-dependent diabetes mellitus in Finland. This test could be used in the analysis of data which are localized according to map co-ordinates, by addresses or postcodes. This information is important when using the Geographical Information System (GIS) technology to compute the pairwise distances needed for the proposed test.
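
    A minimal sketch of the combined space-time distance matrix mentioned above: the weighted sum d_ij = w * (spatial distance) + (1 - w) * (temporal distance) below is an illustrative assumption about how the weight enters, not necessarily the exact metric of the published test, but it reproduces the stated behaviour that the two extremes of the weight give the purely spatial and purely temporal tests.

```python
import numpy as np

def space_time_distance_matrix(xy, t, w):
    """Pairwise distances d_ij over a set of cells with map coordinates
    `xy` (n x 2) and times `t` (n,), mixing space and time by weight w.

    w = 1 gives a purely spatial matrix, w = 0 a purely temporal one;
    intermediate weights give the intermediate tests."""
    d_space = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    d_time = np.abs(t[:, None] - t[None, :])
    return w * d_space + (1.0 - w) * d_time

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])   # cell coordinates
t = np.array([1990.0, 1991.0, 1994.0])                 # cell times
print(space_time_distance_matrix(xy, t, w=0.5))
```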

  4. Seismicity along the Main Marmara Fault, Turkey: from space-time distribution to repeating events

    NASA Astrophysics Data System (ADS)

    Schmittbuhl, Jean; Karabulut, Hayrullah; Lengliné, Olivier; Bouchon, Michel

    2016-04-01

The North Anatolian Fault (NAF) poses a significant hazard for the large cities surrounding the Marmara Sea region, particularly the megalopolis of Istanbul. Indeed, the NAF presently hosts a long unruptured segment below the Sea of Marmara. This seismic gap is approximately 150 km long and corresponds to the Main Marmara Fault (MMF). The seismicity along the MMF below the Marmara Sea is analyzed here during the 2007-2012 period to provide insight into the recent evolution of this important regional seismic gap. High-precision locations show that seismicity varies strongly along strike and depth, providing fine details of the fault behavior that are inaccessible from geodetic inversions. The activity clusters strongly at the regions of transition between basins. The Central basin shows significant seismicity located below the shallow locking depth inferred from GPS measurements. Its b-value is low and the average seismic slip is high. Interestingly, we also found several long-term repeating earthquakes in this domain. Using a template matching technique, we identified two new families of repeaters: a first family that typically belongs to aftershock sequences, and a second family of long-lasting repeaters with a multi-month recurrence period. All observations are consistent with deep creep of this segment. On the contrary, the Kumburgaz basin at the center of the fault shows sparse seismicity with the hallmarks of a locked segment. In the eastern Marmara Sea, the seismicity distribution along the Princes Island segment in the Cinarcik basin is consistent with the geodetic locking depth of 10 km and a low contribution to the regional seismic energy release. The assessment of the locked segment areas provides an estimate of about 7.3 for the magnitude of the main forthcoming event, assuming that the rupture will not extend significantly into creeping domains.

  5. Typical BWR/4 MSIV closure ATWS analysis using RAMONA-3B code with space-time neutron kinetics

    SciTech Connect

    Neymotin, L.; Saha, P.

    1984-01-01

A best-estimate analysis of a typical BWR/4 MSIV closure ATWS has been performed using the RAMONA-3B code with three-dimensional neutron kinetics. All safety features, namely, the safety and relief valves, recirculation pump trip, high pressure safety injections and the standby liquid control system (boron injection), were assumed to work as designed. No other operator action was assumed. The results show a strong spatial dependence of reactor power during the transient. After the initial peak of pressure and reactor power, the reactor vessel pressure oscillated between the relief valve set points, and the reactor power oscillated between 20 and 50% of the steady state power until the hot shutdown condition was reached at approximately 1400 seconds. The suppression pool bulk water temperature at this time was predicted to be approximately 96 °C (205 °F). In view of code performance and reasonable computer running time, the RAMONA-3B code is recommended for further best-estimate analyses of ATWS-type events in BWRs.

  6. Link-Adaptive Distributed Coding for Multisource Cooperation

    NASA Astrophysics Data System (ADS)

    Cano, Alfonso; Wang, Tairan; Ribeiro, Alejandro; Giannakis, Georgios B.

    2007-12-01

    Combining multisource cooperation and link-adaptive regenerative techniques, a novel protocol is developed capable of achieving diversity order up to the number of cooperating users and large coding gains. The approach relies on a two-phase protocol. In Phase 1, cooperating sources exchange information-bearing blocks, while in Phase 2, they transmit reencoded versions of the original blocks. Different from existing approaches, participation in the second phase does not require correct decoding of Phase 1 packets. This allows relaying of soft information to the destination, thus increasing coding gains while retaining diversity properties. For any reencoding function the diversity order is expressed as a function of the rank properties of the distributed coding strategy employed. This result is analogous to the diversity properties of colocated multi-antenna systems. Particular cases include repetition coding, distributed complex field coding (DCFC), distributed space-time coding, and distributed error-control coding. Rate, diversity, complexity and synchronization issues are elaborated. DCFC emerges as an attractive choice because it offers high-rate, full spatial diversity, and relaxed synchronization requirements. Simulations confirm analytically established assessments.

  7. The effect of structural design parameters on FPGA-based feed-forward space-time trellis coding-orthogonal frequency division multiplexing channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-08-01

    Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.

  8. Colour cyclic code for Brillouin distributed sensors

    NASA Astrophysics Data System (ADS)

    Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne

    2015-09-01

    For the first time, a colour cyclic coding (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code presents an additional gain of √2 while keeping the same number of sequences as for a colour coding. A comparison with a standard BOTDA sensor is realized and validates the theoretical coding gain.

  9. Distribution Coding in the Visual Pathway

    PubMed Central

    Sanderson, A. C.; Kozak, W. M.; Calvert, T. W.

    1973-01-01

    Although a variety of types of spike interval histograms have been reported, little attention has been given to the spike interval distribution as a neural code and to how different distributions are transmitted through neural networks. In this paper we present experimental results showing spike interval histograms recorded from retinal ganglion cells of the cat. These results exhibit a clear correlation between spike interval distribution and stimulus condition at the retinal ganglion cell level. The averaged mean rates of the cells studied were nearly the same in light as in darkness whereas the spike interval histograms were much more regular in light than in darkness. We present theoretical models which illustrate how such a distribution coding at the retinal level could be “interpreted” or recorded at some higher level of the nervous system such as the lateral geniculate nucleus. Interpretation is an essential requirement of a neural code which has often been overlooked in modeling studies. Analytical expressions are derived describing the role of distribution coding in determining the transfer characteristics of a simple interaction model and of a lateral inhibition network. Our work suggests that distribution coding might be interpreted by simply interconnected neural networks such as relay cell networks, in general, and the primary thalamic sensory nuclei in particular. PMID:4697235

  10. Robust entanglement distribution via quantum network coding

    NASA Astrophysics Data System (ADS)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-10-01

Many protocols of quantum information processing, like quantum key distribution or measurement-based quantum computation, ‘consume’ entangled quantum states during their execution. When participants are located at distant sites, these resource states need to be distributed. Due to transmission losses, quantum repeaters become necessary for large distances (e.g. ≳300 km). Here we generalize the concept of the graph state repeater to D-dimensional graph states and to repeaters that can perform basic measurement-based quantum computations, which we call quantum routers. This processing of data at intermediate network nodes is called quantum network coding. We describe how a scheme to distribute general two-colourable graph states via quantum routers with network coding can be constructed from classical linear network codes. The robustness of the distribution of graph states against outages of network nodes is analysed by establishing a link to stabilizer error correction codes. Furthermore, we show that for any stabilizer error correction code there exists a corresponding quantum network code with similar error correcting capabilities.

  11. Performance Analysis of Decode-and-Forward with Cooperative Diversity and Alamouti Cooperative Space-Time Coding in Clustered Multihop Wireless Networks

    DTIC Science & Technology

    2008-09-01

Layered Space-Time Architecture, CSI Channel State Information, CSIR Channel State Information at Receiver, C-DIV Cooperative Diversity, C-STC... is the capability for the relay terminal RMTji to decide, based on channel state information at reception (CSIR), whether or not it is helpful for

  12. The analysis of space-time structure in QCD vacuum II: Dynamics of polarization and absolute X-distribution

    SciTech Connect

    Alexandru, Andrei; Draper, Terrence; Horvath, Ivan; Streuer, Thomas

    2011-08-15

Highlights: We propose a method to compute the polarization for a multi-dimensional random distribution. We apply the method to the eigenmodes of the Dirac operator in pure glue QCD. We compute the chiral polarization for these modes and study its scale dependence. We find that in a finite volume there is a scale where the polarization tendency changes. We study the continuum limit of this chiral polarization scale. Abstract: We propose a framework for quantitative evaluation of dynamical tendency for polarization in an arbitrary random variable that can be decomposed into a pair of orthogonal subspaces. The method uses measures based on comparisons of given dynamics to its counterpart with statistically independent components. The formalism of previously considered X-distributions is used to express the aforementioned comparisons, in effect putting the former approach on solid footing. Our analysis leads to the definition of a suitable correlation coefficient with clear statistical meaning. We apply the method to the dynamics induced by pure-glue lattice QCD in local left-right components of overlap Dirac eigenmodes. It is found that, in finite physical volume, there exists a non-zero physical scale in the spectrum of eigenvalues such that eigenmodes at smaller (fixed) eigenvalues exhibit convex X-distribution (positive correlation), while at larger eigenvalues the distribution is concave (negative correlation). This chiral polarization scale thus separates a regime where dynamics enhances chirality relative to statistical independence from a regime where it suppresses it, and gives an objective definition to the notion of 'low' and 'high' Dirac eigenmode. We propose to investigate whether the polarization scale remains non-zero in the infinite volume limit, in which case it would represent a new kind of low energy scale in QCD.

  13. The analysis of space-time structure in QCD vacuum II: Dynamics of polarization and absolute X-distribution

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Draper, Terrence; Horváth, Ivan; Streuer, Thomas

    2011-08-01

    We propose a framework for quantitative evaluation of dynamical tendency for polarization in an arbitrary random variable that can be decomposed into a pair of orthogonal subspaces. The method uses measures based on comparisons of given dynamics to its counterpart with statistically independent components. The formalism of previously considered X-distributions is used to express the aforementioned comparisons, in effect putting the former approach on solid footing. Our analysis leads to the definition of a suitable correlation coefficient with clear statistical meaning. We apply the method to the dynamics induced by pure-glue lattice QCD in local left-right components of overlap Dirac eigenmodes. It is found that, in finite physical volume, there exists a non-zero physical scale in the spectrum of eigenvalues such that eigenmodes at smaller (fixed) eigenvalues exhibit convex X-distribution (positive correlation), while at larger eigenvalues the distribution is concave (negative correlation). This chiral polarization scale thus separates a regime where dynamics enhances chirality relative to statistical independence from a regime where it suppresses it, and gives an objective definition to the notion of "low" and "high" Dirac eigenmode. We propose to investigate whether the polarization scale remains non-zero in the infinite volume limit, in which case it would represent a new kind of low energy scale in QCD.

  14. Distributed transform coding via source-splitting

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2012-12-01

Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach, however, requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.

  15. Quantum Space-Times

    NASA Astrophysics Data System (ADS)

    Ashtekar, Abhay

In general relativity, space-time ends at singularities. The big bang is considered the Beginning and the big crunch, the End. However, these conclusions are arrived at by using general relativity in regimes which lie well beyond its physical domain of validity. Examples where detailed analysis is possible show that these singularities are naturally resolved by quantum geometry effects. Quantum space-times can be vastly larger than what Einstein had us believe. These non-trivial space-time extensions enable us to answer some long-standing questions and resolve some puzzles in fundamental physics. Thus, a century after Minkowski's revolutionary ideas on the nature of space and time, yet another paradigm shift appears to await us in the wings.

  16. Branching space-times

    NASA Astrophysics Data System (ADS)

    Placek, Tomasz; Müller, Thomas

    The five papers presented below have been selected from among the fourteen read at the European Science Foundation workshop Branching Space-Times (BST), held at the Jagiellonian University in Kraków, Poland, in October 2005. This event gathered for the first time leading researchers working on this subject.

  17. Frequency-coded quantum key distribution.

    PubMed

    Bloch, Matthieu; McLaughlin, Steven W; Merolla, Jean-Marc; Patois, Frédéric

    2007-02-01

    We report an intrinsically stable quantum key distribution scheme based on genuine frequency-coded quantum states. The qubits are efficiently processed without fiber interferometers by fully exploiting the nonlinear interaction occurring in electro-optic phase modulators. The system requires only integrated off-the-shelf devices and could be used with a true single-photon source. Preliminary experiments have been performed with weak laser pulses and have demonstrated the feasibility of this new setup.

  18. Emergent Space-Time

    NASA Astrophysics Data System (ADS)

    Chapline, George

It has been shown that a nonlinear Schrödinger equation in 2+1 dimensions equipped with an SU(N) Chern-Simons gauge field can provide an exact description of certain self-dual Einstein spaces in the limit N → ∞. Ricci flat Einstein spaces can then be viewed as arising from a quantum pairing of the classical self-dual and anti-self-dual solutions. In this chapter, we will outline how this theory of empty space-time might be generalized to include matter and vacuum energy by transplanting the nonlinear Schrödinger equation used to construct Einstein spaces to the 25+1-dimensional Lorentzian Leech lattice. If the distinguished 2 spatial dimensions underlying the construction of Einstein spaces are identified with a hexagonal lattice section of the Leech lattice, the wave-function becomes an 11 × 11 matrix that can represent fermion and boson degrees of freedom (DOF) associated with 2-form and Yang-Mills gauge symmetries. The resulting theory of gravity and matter in 3+1 dimensions is not supersymmetric, which provides an entry for a vacuum energy. Indeed, in the case of a Lemaître cosmological model, the emergent space-time will naturally have a vacuum energy on the order of the observed cosmological constant.

  19. Time coded distribution via broadcasting stations

    NASA Technical Reports Server (NTRS)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

The distribution of standard time signals via AM and FM broadcasting stations presents the distinct advantages of offering wide area coverage and allowing the use of inexpensive receivers, but the signals are radiated a limited number of times per day, are not usually available during the night, and no full and automatic synchronization of a remote clock is possible. In an attempt to overcome some of these problems, a time coded signal with complete date information is diffused by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time ordered system with an accuracy of a couple of milliseconds is easily achieved.

  20. The weight distribution and randomness of linear codes

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1989-01-01

Finding the weight distributions of block codes is a problem of theoretical and practical interest. Yet the weight distributions of most block codes are still unknown except for a few classes of block codes. Here, by using the inclusion and exclusion principle, an explicit formula is derived which enumerates the complete weight distribution of an (n,k,d) linear code using a partially known weight distribution. This expression is analogous to the Pless power-moment identities - a system of equations relating the weight distribution of a linear code to the weight distribution of its dual code. Also, an approximate formula for the weight distribution of most linear (n,k,d) codes is derived. It is shown that for a given linear (n,k,d) code over GF(q), the ratio of the number of codewords of weight u to the number of words of weight u approaches the constant Q = q^-(n-k) as u becomes large. A relationship between the randomness of a linear block code and the minimum distance of its dual code is given, and it is shown that most linear block codes with rigid algebraic and combinatorial structure also display certain random properties which make them similar to random codes with no structure at all.
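
    The stated limiting ratio can be checked directly on a small code. The sketch below enumerates the weight distribution of the binary (7,4) Hamming code by brute force and compares A_u divided by the total number of length-n words of weight u against Q = q^-(n-k); the generator matrix and the choice of code are illustrative, not taken from the record.

```python
import numpy as np
from itertools import product
from math import comb

# Binary (7,4) Hamming code: enumerate all 2^4 codewords from a generator matrix.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n, k, q = 7, 4, 2

weights = np.zeros(n + 1, dtype=int)
for msg in product(range(q), repeat=k):
    cw = np.mod(np.array(msg) @ G, q)
    weights[int(cw.sum())] += 1          # A_u: number of codewords of weight u

Q = q ** (-(n - k))
for u, A_u in enumerate(weights):
    total_u = comb(n, u) * (q - 1) ** u  # all length-n words of weight u
    ratio = A_u / total_u
    print(f"u={u}: A_u={A_u}, ratio={ratio:.3f} (Q={Q:.3f})")
```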

  1. Distributed source coding using chaos-based cryptosystem

    NASA Astrophysics Data System (ADS)

    Zhou, Junwei; Wong, Kwok-Wo; Chen, Jianyong

    2012-12-01

    A distributed source coding scheme is proposed by incorporating a chaos-based cryptosystem in the Slepian-Wolf coding. The punctured codeword generated by the chaos-based cryptosystem results in ambiguity at the decoder side. This ambiguity can be removed by the maximum a posteriori decoding with the help of side information. In this way, encryption and source coding are performed simultaneously. This leads to a simple encoder structure with low implementation complexity. Simulation results show that the encoder complexity is lower than that of existing distributed source coding schemes. Moreover, at small block size, the proposed scheme has a performance comparable to existing distributed source coding schemes.

  2. RHOCUBE: 3D density distributions modeling code

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
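
    As a toy version of the kind of model component RHOCUBE provides (an illustrative NumPy sketch with assumed parameter names, not the actual RHOCUBE API), the code below samples a truncated Gaussian shell on a Cartesian grid and performs the line-of-sight integration to obtain a 2D map.

```python
import numpy as np

def truncated_gaussian_shell(n=101, r0=0.6, sigma=0.1, rmax=1.0):
    """Density rho(r) = exp(-(r - r0)^2 / (2 sigma^2)) truncated at r > rmax,
    sampled on an n^3 Cartesian grid spanning [-1, 1]^3."""
    ax = np.linspace(-1.0, 1.0, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    rho = np.exp(-((r - r0) ** 2) / (2.0 * sigma**2))
    rho[r > rmax] = 0.0
    return rho, ax

rho, ax = truncated_gaussian_shell()
dz = ax[1] - ax[0]
column_map = rho.sum(axis=2) * dz   # discrete integral along z: 2D projected map
print(column_map.shape, column_map.max())
```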

  3. Distributed joint source-channel coding in wireless sensor networks.

    PubMed

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

Considering the energy limitations of sensors and the wireless channel conditions in wireless sensor networks, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over the independent channels, the multiple access channels and the broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over the independent channels. The simulation results demonstrate the desired efficiency.

  4. Codon Distribution in Error-Detecting Circular Codes.

    PubMed

    Fimmel, Elena; Strüngmann, Lutz

    2016-03-15

In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick's hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C³ and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C³ codes to maximal self-complementary circular codes.

  5. Codon Distribution in Error-Detecting Circular Codes

    PubMed Central

    Fimmel, Elena; Strüngmann, Lutz

    2016-01-01

    In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C3 and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C3 codes to maximal self-complementary circular codes. PMID:26999215

  6. Error resiliency of distributed video coding in wireless video communication

    NASA Astrophysics Data System (ADS)

    Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj

    2008-08-01

    Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.

  7. Space-Time Network Codes Utilizing Transform-Based Coding

    DTIC Science & Technology

    2010-12-01

1 − p_rn if β_rn = 1, p_rn if β_rn = 0, (17) where p_rn is the symbol error rate (SER) for detecting x_n at U_r. For M-QAM modulation, it can be shown... time, time-division multiple access (TDMA) would be the most commonly-used technique in many applications. However, TDMA is extremely inefficient in... r ≠ n, where x_n is from an M-QAM constellation X. At the end of this phase, each client node U_r for r = 1, 2, ..., N possesses a set of N symbols

  8. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium [Ge(Li)] semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
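
    As a schematic of response-matrix unfolding in general (not the specific iterative treatment used by CUGEL, which the abstract describes only qualitatively), the sketch below builds a toy detector response matrix whose columns are the responses to monoenergetic lines and recovers the line intensities from a noisy pulse-height spectrum by least squares; all numbers are illustrative.

```python
import numpy as np

# Toy response matrix: each column is the detector response (photopeak plus a
# flat Compton-like tail) to one monoenergetic gamma line, binned in 8 channels.
def line_response(peak_channel, n_channels=8, peak_eff=0.6, tail=0.05):
    r = np.zeros(n_channels)
    r[:peak_channel] = tail          # crude continuum below the photopeak
    r[peak_channel] = peak_eff       # photopeak
    return r

R = np.column_stack([line_response(c) for c in (2, 5, 7)])   # three lines
true_intensities = np.array([100.0, 40.0, 10.0])

measured = R @ true_intensities                              # idealized spectrum
measured += np.random.default_rng(2).normal(0, 0.5, measured.shape)  # noise

# Unfold: least-squares solution of R x = measured for the line intensities.
est, *_ = np.linalg.lstsq(R, measured, rcond=None)
print(np.round(est, 1))
```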

  9. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
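
    A minimal brute-force illustration of the syndrome-coding option mentioned above, assuming a toy binary parity-check matrix and a decoder that holds correlated side information (this is not the protocol of the paper): the encoder transmits only the syndrome of the source block, and the decoder returns the sequence with that syndrome closest to its side information.

```python
import numpy as np
from itertools import product

# Parity-check matrix of the (7,4) Hamming code: 3-bit syndrome per 7-bit block.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(x):
    """Compress a 7-bit block to its 3-bit syndrome."""
    return H @ x % 2

def decode(syndrome, side_info):
    """Brute-force Slepian-Wolf decoding: among all 7-bit words with the
    received syndrome, return the one closest (Hamming distance) to the
    decoder's side information."""
    best, best_dist = None, 10**9
    for cand in product((0, 1), repeat=7):
        cand = np.array(cand)
        if np.array_equal(H @ cand % 2, syndrome):
            d = int(np.sum(cand != side_info))
            if d < best_dist:
                best, best_dist = cand, d
    return best

x = np.array([1, 0, 1, 1, 0, 0, 1])     # source block
y = x.copy(); y[3] ^= 1                 # side information: x with one bit flipped
x_hat = decode(encode(x), y)
print(np.array_equal(x_hat, x))         # True when the correlation is strong enough
```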

  10. Space, Time, Ether, and Kant

    NASA Astrophysics Data System (ADS)

    Wong, Wing-Chun Godwin

This dissertation focused on Kant's conception of physical matter in the Opus postumum. In this work, Kant postulates the existence of an ether which fills the whole of space and time with its moving forces. Kant's arguments for the existence of an ether in the so-called Übergang have been acutely criticized by commentators. Guyer, for instance, thinks that Kant pushes the technique of transcendental deduction too far in trying to deduce the empirical ether. In defense of Kant, I held that it is not the actual existence of the empirical ether, but the concept of the ether as a space-time filler that is subject to a transcendental deduction. I suggested that Kant is doing three things in the Übergang: First, he deduces the pure concept of a space-time filler as a conceptual hybrid of the transcendental object and permanent substance to replace the category of substance in the Critique. Then he tries to prove the existence of such a space-time filler as a reworking of the First Analogy. Finally, he takes into consideration the empirical determinations of the ether by adding the concept of moving forces to the space-time filler. In reconstructing Kant's proofs, I pointed out that Kant is absolutely committed to the impossibility of action-at-a-distance. If we add this new principle of no-action-at-a-distance to the Third Analogy, the existence of a space-time filler follows. I argued with textual evidence that Kant's conception of ether satisfies the basic structure of a field: (1) the ether is a material continuum; (2) a physical quantity is definable on each point in the continuum; and (3) the ether provides a medium to support the continuous transmission of action. The thrust of Kant's conception of ether is to provide a holistic ontology for the transition to physics, which can best be understood from a field-theoretical point of view. This is the main thesis I attempted to establish in this dissertation.

  11. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    PubMed Central

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Considering that sensors in wireless sensor networks are energy-limited and that the wireless channel conditions are harsh, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels, and broadcast channels are introduced, respectively. We also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560

  12. A MCTF video coding scheme based on distributed source coding principles

    NASA Astrophysics Data System (ADS)

    Tagliasacchi, Marco; Tubaro, Stefano

    2005-07-01

    Motion Compensated Temporal Filtering (MCTF) has proved to be an efficient coding tool in the design of open-loop scalable video codecs. In this paper we propose a lifting-based MCTF video coding scheme in which the prediction step is implemented using PRISM (Power-efficient, Robust, hIgh-compression Syndrome-based Multimedia coding), a video coding framework built on distributed source coding principles. We study the effect of integrating the update step at the encoder or at the decoder side. We show that the latter approach improves the quality of the side information exploited during decoding. We present analytical results obtained by modeling the video signal along the motion trajectories as a first-order auto-regressive process. We show that performing the update step at the decoder halves the contribution of the quantization noise. We also include experimental results with real video data that demonstrate the potential of this approach when the video sequences are coded at low bitrates.
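
    As a rough illustration of the lifting structure discussed above (a minimal one-level sketch in Python with motion compensation omitted and a Haar-like update; it is not the PRISM-based codec of the paper):

    import numpy as np

    def mctf_analyze(frames):
        """Split frames into high-pass (prediction residues) and low-pass bands."""
        even, odd = frames[0::2], frames[1::2]
        high = [o - e for o, e in zip(odd, even)]        # prediction step
        low = [e + 0.5 * h for e, h in zip(even, high)]  # update step
        return low, high

    def mctf_synthesize(low, high):
        """Invert the lifting steps to recover the original frames exactly."""
        even = [l - 0.5 * h for l, h in zip(low, high)]
        odd = [h + e for h, e in zip(high, even)]
        frames = []
        for e, o in zip(even, odd):
            frames += [e, o]
        return frames

    frames = [np.arange(4, dtype=float) + k for k in range(4)]
    low, high = mctf_analyze(frames)
    assert all(np.allclose(a, b) for a, b in zip(frames, mctf_synthesize(low, high)))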

  13. Space-Time Data Fusion

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel

    2011-01-01

    Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest and obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use them to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
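
    As a rough illustration of the kind of optimal (minimum mean squared error, unbiased) linear estimate with standard errors described above (a minimal Python sketch for a generic linear observation model y = Hx + e with known covariances; it is not the STRE-model implementation):

    import numpy as np

    def linear_mmse(y, H, mu_x, C_x, C_e):
        """Best linear estimate of the true field x from observations y."""
        C_y = H @ C_x @ H.T + C_e                    # observation covariance
        K = C_x @ H.T @ np.linalg.inv(C_y)           # gain matrix
        x_hat = mu_x + K @ (y - H @ mu_x)
        C_post = C_x - K @ H @ C_x                   # posterior covariance
        return x_hat, np.sqrt(np.diag(C_post))       # estimate and standard errors

    H = np.array([[1.0, 0.0], [0.5, 0.5]])           # two "footprints" over a 2-cell field
    x_hat, se = linear_mmse(np.array([1.2, 0.9]), H,
                            mu_x=np.zeros(2), C_x=np.eye(2), C_e=0.1 * np.eye(2))
    print(x_hat, se)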

  14. Space-Time and Architecture

    NASA Astrophysics Data System (ADS)

    Field, F.; Goodbun, J.; Watson, V.

    Architects have a role to play in interplanetary space that has barely yet been explored. The architectural community is largely unaware of this new territory, for which there is still no agreed method of practice. There is moreover a general confusion, in scientific and related fields, over what architects might actually do there today. Current extra-planetary designs generally fail to explore the dynamic and relational nature of space-time, and often reduce human habitation to a purely functional problem. This is compounded by a crisis over the representation (drawing) of space-time. The present work returns to first principles of architecture in order to realign them with current socio-economic and technological trends surrounding the space industry. What emerges is simultaneously the basis for an ecological space architecture, and the representational strategies necessary to draw it. We explore this approach through a work of design-based research that describes the construction of Ocean; a huge body of water formed by the collision of two asteroids at the Translunar Lagrange Point (L2), that would serve as a site for colonisation, and as a resource to fuel future missions. Ocean is an experimental model for extra-planetary space design and its representation, within the autonomous discipline of architecture.

  15. Achieving H.264-like compression efficiency with distributed video coding

    NASA Astrophysics Data System (ADS)

    Milani, Simone; Wang, Jiajun; Ramchandran, Kannan

    2007-01-01

    Recently, a new class of distributed source coding (DSC) based video coders has been proposed to enable low-complexity encoding. However, to date, these low-complexity DSC-based video encoders have been unable to compress as efficiently as motion-compensated predictive coding based video codecs, such as H.264/AVC, due to insufficiently accurate modeling of video data. In this work, we examine achieving H.264-like high compression efficiency with a DSC-based approach without the encoding complexity constraint. The success of H.264/AVC highlights the importance of accurately modeling the highly non-stationary video data through fine-granularity motion estimation. This motivates us to deviate from the popular approach of approaching the Wyner-Ziv bound with sophisticated capacity-achieving channel codes that require long block lengths and high decoding complexity, and instead focus on accurately modeling video data. Such a DSC-based, compression-centric encoder is an important first step towards building a robust DSC-based video coding framework.

  16. STAR - Space Time Asymmetry Research

    NASA Astrophysics Data System (ADS)

    van Zoest, Tim; Braxmaier, Claus; Schuldt, Thilo; Allab, Mohammed; Theil, Stephan; Pelivan, Ivanka; Herrmann, Sven; Lümmerzahl, Claus; Peters, Achim; Mühle, Katharina; Wicht, Andreas; Nagel, Moritz; Kovalchuk, Evgeny; Düringshoff, Klaus; Dittus, Hansjürg

    STAR is a proposed satellite mission that aims for significantly improved tests of fundamental space-time symmetry and the foundations of special and general relativity. In total STAR comprises a series of five subsequent missions. The STAR1 mission will measure the constancy of the speed of light to one part in 10^19 and derive the Kennedy-Thorndike (KT) coefficient of the Mansouri-Sexl test theory to 7x10^-10. The KT experiment will be performed by comparison of an iodine standard with a highly stable cavity made from ultra low expansion (ULE) ceramics. With an orbital velocity of 7 km/s, the sensitivity to a boost-dependent violation of Lorentz invariance, as modeled by the KT term in the Mansouri-Sexl test theory or a Lorentz-violating extension of the standard model (SME), will be significantly enhanced as compared to Earth-based experiments. The low noise space environment will additionally enhance the measurement precision such that an overall improvement by a factor of 400 over current Earth-based experiments is expected.

  17. Distributed magnetic field positioning system using code division multiple access

    NASA Technical Reports Server (NTRS)

    Prigge, Eric A. (Inventor)

    2003-01-01

    An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large `building-sized` coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
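
    As a rough illustration of the despreading idea described above (a minimal Python sketch with made-up codes and amplitudes; a real system would additionally solve for position and orientation from the recovered per-beacon fields):

    import numpy as np

    rng = np.random.default_rng(0)
    n_chips, n_beacons = 64, 3
    codes = rng.choice([-1.0, 1.0], size=(n_beacons, n_chips))  # spreading codes
    amplitudes = np.array([2.0, -0.5, 1.3])                     # per-beacon field at the sensor

    # The sensor measures the sum of all coded fields plus noise.
    measurement = amplitudes @ codes + 0.01 * rng.standard_normal(n_chips)

    # Correlating with each code separates the beacons (codes are nearly orthogonal).
    recovered = measurement @ codes.T / n_chips
    print(np.round(recovered, 2))   # approximately the original amplitudes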

  18. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time, T_par, of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
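
    The limit described here is Amdahl's law: with a sequential fraction f, the parallel execution time is T_par = f*T + (1 - f)*T/p on p processors, so the speedup can never exceed 1/f. A minimal Python sketch (the fraction 0.2 is an arbitrary example, not a measured value from CSTEM or METCAN):

    def speedup(sequential_fraction: float, processors: int) -> float:
        """Amdahl's-law speedup for a given sequential fraction."""
        f = sequential_fraction
        return 1.0 / (f + (1.0 - f) / processors)

    for p in (8, 64, 512):
        # Approaches, but never exceeds, 1/f = 5x for f = 0.2.
        print(p, round(speedup(0.2, p), 2))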

  19. FIBWR: a steady-state core flow distribution code for boiling water reactors code verification and qualification report. Final report

    SciTech Connect

    Ansari, A.F.; Gay, R.R.; Gitnick, B.J.

    1981-07-01

    A steady-state core flow distribution code (FIBWR) is described. The ability of the recommended models to predict various pressure drop components and void distribution is shown by comparison to the experimental data. Application of the FIBWR code to the Vermont Yankee Nuclear Power Station is shown by comparison to the plant measured data.

  20. Practical distributed video coding in packet lossy channels

    NASA Astrophysics Data System (ADS)

    Qing, Linbo; Masala, Enrico; He, Xiaohai

    2013-07-01

    Improving error resilience of video communications over packet lossy channels is an important and tough task. We present a framework to optimize the quality of video communications based on distributed video coding (DVC) in practical packet lossy network scenarios. The peculiar characteristics of DVC indeed require a number of adaptations to take full advantage of its intrinsic robustness when dealing with data losses of typical real packet networks. This work proposes a new packetization scheme, an investigation of the best error-correcting codes to use in a noisy environment, a practical rate-allocation mechanism, which minimizes decoder feedback, and an improved side-information generation and reconstruction function. Performance comparisons are presented with respect to a conventional packet video communication using H.264/advanced video coding (AVC). Although currently the H.264/AVC rate-distortion performance in case of no loss is better than state-of-the-art DVC schemes, under practical packet lossy conditions, the proposed techniques provide better performance with respect to an H.264/AVC-based system, especially at high packet loss rates. Thus the error resilience of the proposed DVC scheme is superior to the one provided by H.264/AVC, especially in the case of transmission over packet lossy networks.

  1. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  2. Weight distributions for turbo codes using random and nonrandom permutations

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Divsalar, D.

    1995-01-01

    This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
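
    As a rough illustration of a "semirandom" permutation of the kind described above (a minimal Python sketch of a spread, or S-random, interleaver: random candidates are rejected if they fall within S of any of the last S accepted values; the choice of S near sqrt(N/2) is a common heuristic, not a claim about the article's exact construction):

    import random

    def s_random_permutation(n: int, s: int, max_tries: int = 1000):
        for _ in range(max_tries):
            remaining = list(range(n))
            random.shuffle(remaining)
            perm, ok = [], True
            for _ in range(n):
                for idx, cand in enumerate(remaining):
                    if all(abs(cand - p) > s for p in perm[-s:]):
                        perm.append(remaining.pop(idx))
                        break
                else:
                    ok = False   # dead end; restart with a fresh shuffle
                    break
            if ok:
                return perm
        raise RuntimeError("no S-random permutation found; try a smaller s")

    print(s_random_permutation(64, s=5)[:10])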

  3. Affine conformal vectors in space-time

    NASA Astrophysics Data System (ADS)

    Coley, A. A.; Tupper, B. O. J.

    1992-05-01

    All space-times admitting a proper affine conformal vector (ACV) are found. By using a theorem of Hall and da Costa, it is shown that such space-times either (i) admit a covariantly constant vector (timelike, spacelike, or null) and the ACV is the sum of a proper affine vector and a conformal Killing vector or (ii) the space-time is 2+2 decomposable, in which case it is shown that no ACV can exist (unless the space-time decomposes further). Furthermore, it is proved that all space-times admitting an ACV and a null covariantly constant vector (which are necessarily generalized pp-wave space-times) must have Ricci tensor of Segré type {2,(1,1)}. It follows that, among space-times admitting proper ACV, the Einstein static universe is the only perfect fluid space-time, there are no non-null Einstein-Maxwell space-times, and only the pp-wave space-times are representative of null Einstein-Maxwell solutions. Otherwise, the space-times can represent anisotropic fluids and viscous heat-conducting fluids, but only with restricted equations of state in each case.

  4. Sparsey™: event recognition via deep hierarchical sparse distributed codes

    PubMed Central

    Rinkus, Gerard J.

    2014-01-01

    The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal

  5. Sparsey™: event recognition via deep hierarchical sparse distributed codes.

    PubMed

    Rinkus, Gerard J

    2014-01-01

    The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, "mac"), at each level. In localism, each represented feature/concept/event (hereinafter "item") is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge ("Big Data") problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
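
    As a rough illustration of overlap-based similarity between sparse distributed codes (a minimal Python sketch with arbitrary pool and code sizes; it does not reproduce Sparsey's learning algorithm):

    import random

    POOL = 1024      # units in one "mac"
    ACTIVE = 20      # units active per stored item

    def random_sdc():
        """A code is a small random subset of active units."""
        return frozenset(random.sample(range(POOL), ACTIVE))

    def similarity(code_a, code_b):
        """Fraction of overlapping active units."""
        return len(code_a & code_b) / ACTIVE

    stored = {name: random_sdc() for name in ("walk", "run", "jump")}
    probe = stored["run"]
    best = max(stored, key=lambda name: similarity(stored[name], probe))
    print(best)   # retrieval returns the best-matching stored code, here 'run'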

  6. Non-extensive trends in the size distribution of coding and non-coding DNA sequences in the human genome

    NASA Astrophysics Data System (ADS)

    Oikonomou, Th.; Provata, A.

    2006-03-01

    We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19 which is the most dense in coding), using non-extensive statistics. We show that the exponents governing the spatial decay of the coding size distributions vary between 5.2 ≤ r ≤ 5.7 for the short scales and 1.45 ≤ q ≤ 1.50 for the large scales. On the contrary, the exponents governing the spatial decay of the non-coding size distributions in these four chromosomes take the values 2.4 ≤ r ≤ 3.2 for the short scales and 1.50 ≤ q ≤ 1.72 for the large scales. These results, in particular the values of the tail exponent q, indicate the existence of correlations in the coding and non-coding size distributions with tendency for higher correlations in the non-coding DNA.

  7. A Model of Classical Space-Times.

    ERIC Educational Resources Information Center

    Maudlin, Tim

    1989-01-01

    Discusses some historically important reference systems including those by Newton, Leibniz, and Galileo. Provides models illustrating space-time relationship of the reference systems. Describes building models. (YP)

  8. Visualization of scattering angular distributions with the SAP code

    NASA Astrophysics Data System (ADS)

    Fernandez, J. E.; Scot, V.; Basile, S.

    2010-07-01

    SAP (Scattering Angular distribution Plot) is a graphical tool developed at the University of Bologna to compute and plot Rayleigh and Compton differential cross-sections (atomic and electronic), form factors (FFs) and incoherent scattering functions (SFs) for single elements, compounds and mixtures of compounds, for monochromatic excitation in the range of 1-1000 keV. The computation of FFs and SFs may be performed in two ways: (a) by interpolating Hubbell's data from the EPDL97 library and (b) by using semi-empirical formulas as described in the text. Two kinds of normalization make it possible to compare plots of different magnitudes by imposing a common scale. The characteristics of the SAP code are illustrated with one example.
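
    As a rough illustration of the Compton part of such a computation (a minimal Python sketch: the Klein-Nishina per-electron differential cross section, scaled by an incoherent scattering function S(q, Z) that is assumed to be supplied, e.g., interpolated from tabulated data; this is not the SAP code itself):

    import math

    R_E = 2.8179403262e-13    # classical electron radius, cm
    MEC2 = 510.998950         # electron rest energy, keV

    def klein_nishina(energy_keV: float, theta_rad: float) -> float:
        """Per-electron Compton differential cross section, cm^2/sr."""
        ratio = 1.0 / (1.0 + (energy_keV / MEC2) * (1.0 - math.cos(theta_rad)))
        sin2 = math.sin(theta_rad) ** 2
        return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - sin2)

    def atomic_compton(energy_keV, theta_rad, incoherent_sf):
        """Atomic cross section = Klein-Nishina x S(q, Z), with S from tables."""
        return klein_nishina(energy_keV, theta_rad) * incoherent_sf

    print(klein_nishina(100.0, math.radians(90.0)))   # of order 1e-26 cm^2/sr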

  9. Distributed reservation-based code division multiple access

    NASA Astrophysics Data System (ADS)

    Wieselthier, J. E.; Ephremides, A.

    1984-11-01

    The use of spread spectrum signaling, motivated primarily by its antijamming capabilities in military applications, leads naturally to the use of Code Division Multiple Access (CDMA) techniques that permit the successful simultaneous transmission by a number of users over a wideband channel. In this paper we address some of the major issues that are associated with the design of multiple access protocols for spread spectrum networks. We then propose, analyze, and evaluate a distributed reservation-based multiple access protocol that does in fact exploit CDMA properties. Especially significant is the fact that no acknowledgment or feedback information from the destination is required (thus facilitating communication with a radio-silent mode), nor is any form of coordination among the users necessary.

  10. Evaluating methods for estimating space-time paths of individuals in calculating long-term personal exposure to air pollution

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek

    2016-04-01

    of land, the 4 digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths for each individual in a cohort. Preliminary results show considerable differences in a person's exposure across these various approaches to space-time path aggregation, presumably because air pollution shows large variation over short distances.

  11. Probability Distribution Estimation for Autoregressive Pixel-Predictive Image Coding.

    PubMed

    Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André

    2016-03-01

    Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research is focused on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model. The analysis of image stationarity properties further allows deriving a novel formula for weight computation in weighted least squares, proving and generalizing ad hoc equations from the literature. For sparse intensity distributions in non-natural images, a modified image model is presented. Evaluations were done in the newly developed C++ framework Volumetric, Artificial, and Natural Image Lossless Coder (Vanilc), which can compress a wide range of images, including 16-bit medical 3D volumes or multichannel data. A comparison with several of the best available lossless image codecs proves that the method can achieve very competitive compression ratios. In terms of reproducible research, the source code of Vanilc has been made public.
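
    As a rough illustration of backward-adaptive least-squares pixel prediction (a minimal Python sketch: ordinary rather than weighted least squares, with an arbitrary causal neighbourhood and training window; the residual variance stands in for the spread of the coding distribution):

    import numpy as np

    def causal_neighbours(img, r, c):
        return np.array([img[r, c - 1], img[r - 1, c],
                         img[r - 1, c - 1], img[r - 1, c + 1]], dtype=float)

    def predict_pixel(img, r, c, win=6):
        """Least-squares prediction of img[r, c] plus a residual-variance estimate."""
        X, y = [], []
        for rr in range(max(1, r - win), r + 1):
            for cc in range(1, img.shape[1] - 1):
                if (rr, cc) >= (r, c):          # use strictly causal samples only
                    break
                X.append(causal_neighbours(img, rr, cc))
                y.append(float(img[rr, cc]))
        X, y = np.asarray(X), np.asarray(y)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        mean = float(causal_neighbours(img, r, c) @ coef)
        var = float((y - X @ coef).var()) + 1e-6
        return mean, var

    img = np.random.default_rng(1).integers(0, 256, size=(16, 16))
    print(predict_pixel(img, 8, 8))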

  12. Emission coordinates in Minkowski space-time

    SciTech Connect

    Coll, Bartolome; Ferrando, Joan J.; Morales, Juan A.

    2009-05-01

    The theory of relativistic positioning systems and their natural associated emission coordinates are essential ingredients in the analysis of navigation systems and astrometry. Here we study emission coordinates in Minkowski space-time. For any choice of the four emitters (arbitrary space-time trajectories), the relation between the corresponding emission coordinates and the inertial ones is explicitly given.

  13. Pseudo-Z symmetric space-times

    NASA Astrophysics Data System (ADS)

    Mantica, Carlo Alberto; Suh, Young Jin

    2014-04-01

    In this paper, we investigate Pseudo-Z symmetric space-time manifolds. First, we deal with elementary properties, showing that the associated form A_k is closed: in this case the Ricci tensor turns out to be Weyl compatible. This notion was recently introduced by one of the present authors. The consequences of Weyl compatibility for the magnetic part of the Weyl tensor are pointed out. This determines the Petrov types of such space-times. Finally, we investigate some interesting properties of (PZS)_4 space-times; in particular, we consider perfect fluid and scalar field space-times, and interesting properties are pointed out, including the Petrov classification. In the case of scalar field space-times, it is shown that the scalar field satisfies a generalized eikonal equation. Further, it is shown that the integral curves of the gradient field are geodesics. A classical method to find a general integral is presented.

  14. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor-graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods. PMID:23750314

  15. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor-graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  16. Suppressing feedback in a distributed video coding system by employing real field codes

    NASA Astrophysics Data System (ADS)

    Louw, Daniel J.; Kaneko, Haruhiko

    2013-12-01

    Single-view distributed video coding (DVC) is a video compression method that allows for the computational complexity of the system to be shifted from the encoder to the decoder. The reduced encoding complexity makes DVC attractive for use in systems where processing power or energy use at the encoder is constrained, for example, in wireless devices and surveillance systems. One of the biggest challenges in implementing DVC systems is that the required rate must be known at the encoder. The conventional approach is to use a feedback channel from the decoder to control the rate. Feedback channels introduce their own difficulties such as increased latency and buffering requirements, which makes the resultant system unsuitable for some applications. Alternative approaches, which do not employ feedback, suffer from either increased encoder complexity due to performing motion estimation at the encoder, or an inaccurate rate estimate. Inaccurate rate estimates can result in a reduced average rate-distortion performance, as well as unpleasant visual artifacts. In this paper, the authors propose a single-view DVC system that does not require a feedback channel. The consequences of inaccuracies in the rate estimate are addressed by using codes defined over the real field and a decoder employing successive refinement. The result is a codec with performance that is comparable to that of a feedback-based system at low rates without the use of motion estimation at the encoder or a feedback path. The disadvantage of the approach is a reduction in average rate-distortion performance in the high-rate regime for sequences with significant motion.

  17. Recursive Generation of Space-Times

    NASA Astrophysics Data System (ADS)

    Marks, Dennis

    2015-04-01

    Space-times can be generated recursively from a time-like unit basis vector T and a space-like one S. T is unique up to sign, corresponding to particles and antiparticles. S has the form of qubits. Qubits can make quantum transitions, suggesting spontaneous generation of space-time. Recursive generation leads from 2 dimensions to 4, with grades of the resulting algebra corresponding to space-time, spin-area, momentum-energy, and action. Dimensions can be open (like space-time) or closed. A closed time-like dimension has the symmetry of electromagnetism; 3 closed space-like dimensions have the symmetry of the weak force. The 4 open dimensions and the 4 closed dimensions produce an 8-dimensional space with a symmetry that is the product of the Yang regularization of the Heisenberg-Poincaré group and the GUT regularization of the Standard Model. After 8 dimensions, the pattern of real geometric algebras repeats itself, producing a recursive lattice of spontaneously expanding space-time with the physics of the Standard Model at each point of the lattice, implying conservation laws by Noether's theorem. The laws of nature are not preexistent; rather, they are consequences of the uniformity of space-time. The uniformity of space-time is a consequence of its recursive generation.

  18. MEST- avoid next extinction by a space-time effect

    NASA Astrophysics Data System (ADS)

    Cao, Dayong

    2013-03-01

    The Sun's companion dark hole seasonally took its dark comet belt and much dark matter to pass near our Earth, and some of them probably hit our Earth. This model thus maintained and triggered periodic mass extinctions on our Earth every 25 to 27 million years. After every impact, many dark comets with very special tilted orbits were captured and lurked in the solar system. When the dark hole Tyche comes near the solar system again, they will impact near the planets. The Tyche, the dark comets, and the Oort Cloud have their space-time centers. Because space-time is the frequency and squared amplitude of a wave, because a wave (space-time) can make a field, and because a gas has more wave and fluctuation, they are like a dense gas ball and a dark dense field. They can absorb space-time and waves, so they are "dark" like the dark matter, which can break the genetic codes of our lives by a dark space-time effect. The upcoming next impact will therefore contribute to the current "biodiversity loss." The dark matter can change dead plants and animals into coal, oil and natural gas, which are used as energy but harm our living environment. According to our experiments, in which consciousness uses thought waves remotely to change the systemic model between electron clouds and electron holes of a P-N junction and can change the output voltages of solar cells by a life information technology and a space-time effect, we hope to find a new method for the orbit of the Tyche to avoid the next extinction. (see Dayong Cao, BAPS.2011.APR.K1.17 and BAPS.2012.MAR.P33.14) Supported by AEEA

  19. FPGA based digital phase-coding quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu

    2015-12-01

    Quantum key distribution (QKD) is a technology with the potential capability to achieve information-theoretic security. Phase coding is an important approach to developing practical QKD systems in fiber channels. In order to improve the phase-coding modulation rate, we proposed a new digital modulation method in this paper and constructed a compact and robust prototype of a QKD system using currently available components in our lab to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and operated continuously for 87 h without manual interaction. The quantum bit error rate (QBER) of the system was stable with an average value of 3.22% and the secure key generation rate was 8.91 kbps. Although the modulation rate of the photon in the demo system was only 200 MHz, which was limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field programmable gate array (FPGA) based electronics scheme have a great potential for high-speed QKD systems with modulation rates of gigabits per second.

  20. Plasmonics at the Space-Time Limit

    NASA Astrophysics Data System (ADS)

    Aeschlimann, Martin

    The optical response of metallic nanostructures exhibits fascinating properties: local field interference effects that lead to strong variations of the near field distribution on a subwavelength scale, local field enhancement, and long lasting electronic coherences. Coherent control in general exploits the phase properties of light fields to manipulate coherent processes. Originally developed for molecular systems, these concepts have recently been adapted also to nano-optical phenomena. Consequently, the combination of ultrafast laser spectroscopy, i.e. illumination with broadband coherent light sources, and near-field optics opens a new realm for nonlinear optics on the nanoscale. To circumvent the experimental limitation of optical diffraction we use a photoemission electron microscope (PEEM) that has proved to be a versatile tool for the investigation of near field properties of nanostructures with a spatial resolution of only a few nanometers and that allows for new spectroscopy techniques with ultrafast time resolution. We introduce a new spectroscopic method that determines nonlinear quantum-mechanical response functions beyond the optical diffraction limit. While in established coherent two-dimensional (2D) spectroscopy a four-wave-mixing response is measured using three ingoing and one outgoing wave, in 2D nanoscopy we employ four ingoing and no outgoing waves. This allows studying a broad range of phenomena not accessible otherwise, such as space-time resolved coupling, transport, and Anderson-localized photon modes.

  1. Space-time topology and quantum gravity.

    NASA Astrophysics Data System (ADS)

    Friedman, J. L.

    Characteristic features are discussed of a theory of quantum gravity that allows space-time with a non-Euclidean topology. The review begins with a summary of the manifolds that can occur as classical vacuum space-times and as space-times with positive energy. Local structures with non-Euclidean topology - topological geons - collapse, and one may conjecture that in asymptotically flat space-times non-Euclidean topology is hidden from view. In the quantum theory, large diffeos can act nontrivially on the space of states, leading to state vectors that transform as representations of the corresponding symmetry group π_0(Diff). In particular, in a quantum theory that, at energies E < E_Planck, is a theory of the metric alone, there appear to be ground states with half-integral spin, and in higher-dimensional gravity, with the kinematical quantum numbers of fundamental fermions.

  2. Space-time crystals of trapped ions.

    PubMed

    Li, Tongcang; Gong, Zhe-Xuan; Yin, Zhang-Qi; Quan, H T; Yin, Xiaobo; Zhang, Peng; Duan, L-M; Zhang, Xiang

    2012-10-19

    Spontaneous symmetry breaking can lead to the formation of time crystals, as well as spatial crystals. Here we propose a space-time crystal of trapped ions and a method to realize it experimentally by confining ions in a ring-shaped trapping potential with a static magnetic field. The ions spontaneously form a spatial ring crystal due to Coulomb repulsion. This ion crystal can rotate persistently at the lowest quantum energy state in magnetic fields with fractional fluxes. The persistent rotation of trapped ions produces the temporal order, leading to the formation of a space-time crystal. We show that these space-time crystals are robust for direct experimental observation. We also study the effects of finite temperatures on the persistent rotation. The proposed space-time crystals of trapped ions provide a new dimension for exploring many-body physics and emerging properties of matter.

  3. Space-Time from Topos Quantum Theory

    NASA Astrophysics Data System (ADS)

    Flori, Cecilia

    One of the main challenges in theoretical physics in the past 50 years has been to define a theory of quantum gravity, i.e. a theory which consistently combines general relativity and quantum theory in order to define a theory of space-time itself seen as a fluctuating field. As such, a definition of space-time is of paramount importance, but it is precisely the attainment of such a definition which is one of the main stumbling blocks in quantum gravity. One of the striking features of quantum gravity is that although both general relativity and quantum theory treat space-time as a four-dimensional (4D) manifold equipped with a metric, quantum gravity would suggest that, at the microscopic scale, space-time is somewhat discrete. Therefore the continuum structure of space-time suggested by the two main ingredients of quantum gravity seems to be thrown into discussion by quantum gravity itself. This seems quite an odd predicament, but it might suggest that perhaps a different mathematical structure other than a smooth manifold should model space-time. These considerations seem to shed doubts on the use of the continuum in general in a possible theory of quantum gravity. An alternative would be to develop a mathematical formalism for quantum gravity in which no fundamental role is played by the continuum and where a new concept of space-time, not modeled on a differentiable manifold, will emerge. This is precisely one of the aims of the topos theory approach to quantum theory and quantum gravity put forward by Isham, Butterfield, and Doering and subsequently developed by other authors. The aim of this article is to precisely elucidate how such an approach gives rise to a new definition of space-time which might be more appropriate for quantum gravity.

  4. Statistical analysis of the distribution of amino acids in Borrelia burgdorferi genome under different genetic codes

    NASA Astrophysics Data System (ADS)

    García, José A.; Alvarez, Samantha; Flores, Alejandro; Govezensky, Tzipe; Bobadilla, Juan R.; José, Marco V.

    2004-10-01

    The genetic code is considered to be universal. In order to test if some statistical properties of the coding bacterial genome were due to inherent properties of the genetic code, we compared the autocorrelation function, the scaling properties and the maximum entropy of the distribution of distances of amino acids in sequences obtained by translating protein-coding regions from the genome of Borrelia burgdorferi, under different genetic codes. Overall our results indicate that these properties are very stable to perturbations made by altering the genetic code. We also discuss the evolutionary likely implications of the present results.

  5. On the binary weight distribution of some Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    Consider an (n,k) linear code with symbols from GF(2^m). If each code symbol is represented by an m-tuple over GF(2) using a certain basis for GF(2^m), a binary (nm,km) linear code is obtained. The weight distribution of a binary linear code obtained in this manner is investigated. Weight enumerators are presented for binary linear codes obtained from Reed-Solomon codes over GF(2^m) generated by the polynomials (X-alpha), (X-1)(X-alpha), (X-alpha)(X-alpha^2) and (X-1)(X-alpha)(X-alpha^2), and from their extended codes, where alpha is a primitive element of GF(2^m). Binary codes derived from Reed-Solomon codes are often used for correcting multiple bursts of errors.
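
    As a rough illustration of the symbol expansion involved (a minimal Python sketch: each GF(2^m) symbol is assumed to be given as an integer whose bits are its coordinates over the chosen basis, so a codeword's binary weight is just the total number of 1 bits; the codeword below is arbitrary, not an actual Reed-Solomon codeword):

    def to_binary_tuple(symbol: int, m: int):
        """Expand one GF(2^m) symbol into its m-tuple over GF(2)."""
        return [(symbol >> i) & 1 for i in range(m)]

    def binary_weight(codeword, m: int) -> int:
        """Hamming weight of the expanded binary codeword."""
        return sum(sum(to_binary_tuple(s, m)) for s in codeword)

    codeword = [0b0000, 0b1010, 0b0111, 0b0001]   # four GF(2^4) symbols
    print([to_binary_tuple(s, 4) for s in codeword])
    print(binary_weight(codeword, 4))             # 6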

  6. From Elastic Continua To Space-time

    NASA Astrophysics Data System (ADS)

    Tartaglia, Angelo; Radicella, Ninfa

    2010-06-01

    Since the early days of the theory of electromagnetism and of gravity, the idea of space, then space-time, as a sort of physical continuum has hovered over the scientific community. Actually, general relativity shows the strong similarity that exists between the geometrical properties of space-time and those of a strained elastic continuum. The bridge between geometry and the elastic potential, in three as well as in three-plus-one dimensions, is the strain tensor, read as the non-trivial part of the metric tensor. On the basis of this remark, and exploiting appropriate multidimensional embeddings, it is possible to build a full theory of space-time that accounts for the accelerated expansion of the universe. How this can be obtained is the content of the paper. The theory fits the cosmic accelerated expansion data from type Ia supernovae better than the ΛCDM model.

  7. Space-Time Approximation with Sparse Grids

    SciTech Connect

    Griebel, M; Oeltz, D; Vassilevski, P S

    2005-04-14

    In this article we introduce approximation spaces for parabolic problems which are based on the tensor product construction of a multiscale basis in space and a multiscale basis in time. Proper truncation then leads to so-called space-time sparse grid spaces. For a uniform discretization of the spatial domain of dimension d with O(N^d) degrees of freedom, these spaces involve, for d > 1, also only O(N^d) degrees of freedom for the discretization of the whole space-time problem. But they provide the same approximation rate as classical space-time Finite Element spaces, which need O(N^(d+1)) degrees of freedom. This makes these approximation spaces well suited for conventional parabolic and for time-dependent optimization problems. We analyze the approximation properties and the dimension of these sparse grid space-time spaces for general stable multiscale bases. We then restrict ourselves to an interpolatory multiscale basis, i.e. a hierarchical basis. Here, to be able to handle also complicated spatial domains Ω, we construct the hierarchical basis from a given spatial Finite Element basis as follows: First we determine coarse grid points recursively over the levels by the coarsening step of the algebraic multigrid method. Then, we derive interpolatory prolongation operators between the respective coarse and fine grid points by a least squares approach. This way we obtain an algebraic hierarchical basis for the spatial domain which we then use in our space-time sparse grid approach. We give numerical results on the convergence rate of the interpolation error of these spaces for various space-time problems with two spatial dimensions. Also implementational issues, data structures and questions of adaptivity are addressed to some extent.

  8. Space-time framework of internal measurement

    NASA Astrophysics Data System (ADS)

    Matsuno, Koichiro

    1998-07-01

    Measurement internal to material bodies is ubiquitous. The internal observer has its own local space-time framework that enables the observer to distinguish, even to the slightest degree, those material bodies falling into that framework. Internal measurement proceeding among the internal observers comes to negotiate the construction of a more encompassing local framework of space and time. The construction takes place through friction among the internal observers. Emergent phenomena are related to an enlargement of the local space-time framework through the frictional negotiation among the material participants serving as the internal observers. Unless such a negotiation is obtained, the internal observers would have to move around in local space-time frameworks of their own that are mutually incommensurable. The enhancement of material organization demonstrated in biological evolutionary processes manifests an inexhaustible negotiation for enlarging the local space-time framework available to the internal observers. In contrast, the Newtonian space-time framework, which remains absolute and all-encompassing, is an asymptote at which no further emergent phenomena could be expected. It is thus ironical to expect something to emerge within the framework of Newtonian absolute space and time. Instead of being a complex and organized configuration of interaction appearing within the global space-time framework, emergent phenomena are a consequence of negotiation among the local space-time frameworks available to internal measurement. Most indicative of the negotiation of local space-time frameworks is the emergence of a conscious self grounded upon the reflexive nature of perceptions, that is, a self-consciousness in short, which certainly goes beyond the Kantian transcendental subject. Accordingly, a synthetic discourse on securing consciousness upon the ground of self-consciousness can be developed, though linguistic exposition of consciousness upon self

  9. Pair creation in noncommutative space-time

    NASA Astrophysics Data System (ADS)

    Hamil, B.; Chetouani, L.

    2016-09-01

    By taking two interactions, the Volkov plane wave and a constant electromagnetic field, the probability related to the process of pair creation from the vacuum is exactly and analytically determined via the Schwinger method in noncommutative space-time. For the plane wave, it is shown that the probability is simply null and for the electromagnetic wave it is found that the expression of the probability has a similar form to that obtained by Schwinger in a commutative space-time. For a certain critical value of H, the probability is simply equal to 1.

  10. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    ERIC Educational Resources Information Center

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  11. Space, Time, Matter: 1918-2012

    NASA Astrophysics Data System (ADS)

    Veneziano, Gabriele

    2013-12-01

    Almost a century has elapsed since Hermann Weyl wrote his famous "Space, Time, Matter" book. After recalling some amazingly premonitory writings by him and Wolfgang Pauli in the fifties, I will try to assess the present status of the problematics they were so much concerned with.

  12. Relativistic positioning in Schwarzschild space-time

    NASA Astrophysics Data System (ADS)

    Puchades, Neus; Sáez, Diego

    2015-04-01

    In the Schwarzschild space-time created by an idealized static spherically symmetric Earth, two approaches, based on relativistic positioning, may be used to estimate the user position from the proper times broadcast by four satellites. In the first approach, satellites move in the Schwarzschild space-time and the photons emitted by the satellites follow null geodesics of the Minkowski space-time asymptotic to the Schwarzschild geometry. This assumption leads to positioning errors since the photon world lines are not geodesics of any Minkowski geometry. In the second approach, the most coherent one, satellites and photons move in the Schwarzschild space-time. This approach is a first-order one in the dimensionless parameter GM/R (with the speed of light c=1). The two approaches give different inertial coordinates for a given user. The differences are estimated and appropriately represented for users located inside a great region surrounding Earth. The resulting values (errors) are small enough to justify the use of the first approach, which is the simplest and the most manageable one. The satellite evolution mimics that of the GALILEO global navigation satellite system.

  13. SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters

    SciTech Connect

    Leal, L.C.

    1995-01-01

    The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
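
    As a rough illustration of one statistic of this kind (a minimal Python sketch: the nearest-neighbour level-spacing distribution compared with the Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4); the resonance energies are synthetic, and this is not the SAMDIST code):

    import numpy as np

    def spacing_distribution(energies, bins=20):
        """Histogram of nearest-neighbour spacings, normalised to unit mean."""
        spacings = np.diff(np.sort(np.asarray(energies, dtype=float)))
        s = spacings / spacings.mean()
        return np.histogram(s, bins=bins, range=(0.0, 4.0), density=True)

    def wigner_surmise(s):
        return (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

    rng = np.random.default_rng(2)
    energies = np.cumsum(rng.rayleigh(scale=1.0, size=500))   # toy level sequence
    hist, edges = spacing_distribution(energies)
    centers = 0.5 * (edges[:-1] + edges[1:])
    for c, h in list(zip(centers, hist))[:5]:
        print(f"s={c:.2f}  empirical={h:.2f}  Wigner={wigner_surmise(c):.2f}")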

  14. A New Solution of Distributed Disaster Recovery Based on Raptor Code

    NASA Astrophysics Data System (ADS)

    Deng, Kai; Wang, Kaiyun; Ma, Danyang

    To address the large cost, the low data availability under multi-node storage, and the poor intrusion tolerance of traditional disaster recovery based on simple replication, this paper puts forward a distributed disaster recovery scheme based on Raptor codes. The article introduces the principle of Raptor codes, analyses their coding advantages, and gives a comparative analysis between this solution and traditional solutions in terms of redundancy, data availability, and intrusion tolerance. The results show that the distributed disaster recovery solution based on Raptor codes can achieve higher data availability as well as better intrusion tolerance with lower redundancy.

  15. A Cooperative Downloading Method for VANET Using Distributed Fountain Code

    PubMed Central

    Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi

    2016-01-01

    Cooperative downloading is one of the effective methods to improve the amount of downloaded data in vehicular ad hoc networking (VANET). However, the poor channel quality and short encounter time bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the requirement of high quality of service (QoS) for some applications. Digital fountain codes (DFC) can be utilized in the field of wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding as well as the single feedback mechanism of DFC cannot adapt to the VANET environment. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using a hierarchical fountain code before they send it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method reduces the transmission delay effectively. Simulation results indicate the benefits of the proposed scheme in terms of the increased amount of downloaded data and the data receiving rate. PMID:27754339

  16. A Cooperative Downloading Method for VANET Using Distributed Fountain Code.

    PubMed

    Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi

    2016-10-12

    Cooperative downloading is one of the effective methods to improve the amount of downloaded data in vehicular ad hoc networking (VANET). However, the poor channel quality and short encounter time bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the requirement of high quality of service (QoS) for some applications. Digital fountain code (DFC) can be utilized in the field of wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding as well as the single feedback mechanism of DFC cannot adapt to the environment of VANET. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using a hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method reduces the transmission delay effectively. Simulation results indicate the benefits of the proposed scheme in terms of increasing the amount of downloaded data and the data receiving rate.

  17. SADDE (Scaled Absorbed Dose Distribution Evaluator): A code to generate input for VARSKIN

    SciTech Connect

    Reece, W.D.; Miller, S.D.; Durham, J.S.

    1989-01-01

    The VARSKIN computer code has been limited to the isotopes for which the scaled absorbed dose distributions were provided by the Medical Internal Radiation Dose (MIRD) Committee or to data that could be interpolated from isotopes that had similar spectra. This document describes the methodology to calculate the scaled absorbed dose distribution data for any isotope (including emissions by the daughter isotopes) and its implementation by a computer code called SADDE (Scaled Absorbed Dose Distribution Evaluator). The SADDE source code is provided along with input examples and verification calculations. 10 refs., 4 figs.

  18. Comparative Similarity in Branching Space-Times

    NASA Astrophysics Data System (ADS)

    Placek, Tomasz

    2010-12-01

    My aim in this paper is to investigate the notions of comparative similarity definable in the framework of branching space-times. A notion of this kind is required to give a rigorous Lewis-style semantics of space-time counterfactuals. In turn, the semantical analysis is needed to decide whether the recently proposed proofs of the non-locality of quantum mechanics are correct. From among the three notions of comparative similarity I select two which appear equally good as far as their intuitiveness and algebraic properties are concerned. However, the relations are not transitive, and thus cannot be used in the semantics proposed by Lewis (J. Philos. Log. 2:418-446, 1973), which requires transitivity. Yet they are adequate for the account of Lewis (J. Philos. Log. 10:217-234, 1981).

  19. Space Time Processing, Environmental-Acoustic Effects

    DTIC Science & Technology

    1987-08-15

    5) In the cases of a harmonic field which is steady or for a random field which is spatially homogeneous and temporally stationary, one can infer...relationships define the acoustic-space-time field for the class of harmonic and random functions which are spatially homogeneous and temporally stationary...When the field is homogeneous and stationary, then (in large average limits) spatial and temporal average values approach the statistically

  20. Hypermotion due to space-time deformation

    NASA Astrophysics Data System (ADS)

    Fil'Chenkov, Michael; Laptev, Yuri

    2016-03-01

    A superluminal motion (hypermotion) via M. Alcubierre’s warp drive is considered. Parameters of the warp drive have been estimated. The equations of starship geodesics have been solved. The starship velocity has been shown to exceed the speed of light, with the local velocity relative to the deformed space-time remaining below it. Hawking radiation turns out not to affect the ship interior considerably. Difficulties related to a practical realization of the hypermotion are indicated.

  1. Asymptotically flat space-times: an enigma

    NASA Astrophysics Data System (ADS)

    Newman, Ezra T.

    2016-07-01

    We begin by emphasizing that we are dealing with standard Einstein or Einstein-Maxwell theory—absolutely no new physics has been inserted. The fresh item is that the well-known asymptotically flat solutions of the Einstein-Maxwell theory are transformed to a new coordinate system with surprising and (seemingly) inexplicable results. We begin with the standard description of (Null) asymptotically flat space-times described in conventional Bondi-coordinates. After transforming the variables (mainly the asymptotic Weyl tensor components) to a very special set of Newman-Unti (NU) coordinates, we find a series of relations totally mimicking standard Newtonian classical mechanics and Maxwell theory. The surprising and troubling aspect of these relations is that the associated motion and radiation do not take place in physical space-time. Instead, these relations take place in an unusual inherited complex four-dimensional manifold referred to as H-space that has no immediate relationship with space-time. In fact these relations appear in two such spaces, H-space and its dual space \bar{H}.

  2. Utilities for master source code distribution: MAX and Friends

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program developing system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
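
    A toy preprocessor in the spirit described above (the "*IF key" / "*ENDIF" directive syntax is invented for illustration and is not MAX's actual directive language) shows how one master source can yield different compilable instances:

    def instantiate(master_lines, active_keys):
        """Keep only the lines whose enclosing *IF blocks match active_keys."""
        out, keep = [], [True]
        for line in master_lines:
            stripped = line.strip()
            if stripped.startswith("*IF "):
                keep.append(keep[-1] and stripped.split(maxsplit=1)[1] in active_keys)
            elif stripped == "*ENDIF":
                keep.pop()
            elif keep[-1]:
                out.append(line)
        return out

    master = [
        "      PROGRAM DEMO",
        "*IF DOUBLE",
        "      DOUBLE PRECISION X",
        "*ENDIF",
        "*IF SINGLE",
        "      REAL X",
        "*ENDIF",
        "      END",
    ]
    print("\n".join(instantiate(master, {"DOUBLE"})))        # emits the double-precision instance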

  3. Optical Properties of Quantum Vacuum. Space-Time Engineering

    SciTech Connect

    Gevorkyan, A. S.; Gevorkyan, A. A.

    2011-03-28

    The propagation of electromagnetic waves in the vacuum is considered taking into account quantum fluctuations within the framework of Maxwell-Langevin (ML) type stochastic differential equations. For a 'white noise' model of the fluctuations, a second-order partial differential equation is obtained from the ML equations which describes the quantum distribution of virtual particles in the vacuum. It is proved that, in order to satisfy observed facts such as the Lamb shift, the virtual particles should be quantized in the unperturbed vacuum. It is shown that the quantized virtual particles in toto (approximately 86 percent) are condensed on the 'ground state' energy level. It is proved that the extension of Maxwell electrodynamics with inclusion of quantum vacuum fluctuations may be constructed on a 6D space-time continuum, where 4D is Minkowski space-time and 2D is a compactified subspace. The vacuum's refraction indexes under the influence of external electromagnetic fields are studied in detail.

  4. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  5. Energy distribution property and energy coding of a structural neural network

    PubMed Central

    Wang, Ziyin; Wang, Rubin

    2014-01-01

    Studying neural coding through neural energy is a novel view. In this paper, based on a previously proposed single neuron model, the correlation between the energy consumption and the parameters of the cortex networks (number of neurons, coupling strength, and transform delay) under an oscillational condition was investigated. We found that the energy distribution varies in an orderly way as these parameters change, and that it is closely related to the synchronous oscillation of the neural network. Besides, we compared this method with the traditional method of relative coefficient, which shows that the energy method works equal to or better than the traditional one. It is novel that the synchronous activity and neural network parameters could be researched by assessing energy distribution and consumption. Therefore, the conclusion of this paper will refine the framework of neural coding theory and contribute to our understanding of the coding mechanism of the cerebral cortex. It provides a strong theoretical foundation of a novel neural coding theory: energy coding. PMID:24600382

  6. Energy distribution property and energy coding of a structural neural network.

    PubMed

    Wang, Ziyin; Wang, Rubin

    2014-01-01

    Studying neural coding through neural energy is a novel view. In this paper, based on a previously proposed single neuron model, the correlation between the energy consumption and the parameters of the cortex networks (number of neurons, coupling strength, and transform delay) under an oscillational condition was investigated. We found that the energy distribution varies in an orderly way as these parameters change, and that it is closely related to the synchronous oscillation of the neural network. Besides, we compared this method with the traditional method of relative coefficient, which shows that the energy method works equal to or better than the traditional one. It is novel that the synchronous activity and neural network parameters could be researched by assessing energy distribution and consumption. Therefore, the conclusion of this paper will refine the framework of neural coding theory and contribute to our understanding of the coding mechanism of the cerebral cortex. It provides a strong theoretical foundation of a novel neural coding theory: energy coding.

  7. A space-time neural network

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1991-01-01

    Introduced here is a novel technique which adds the dimension of time to the well known back propagation neural network algorithm. Cited here are several reasons why the inclusion of automated spatial and temporal associations are crucial to effective systems modeling. An overview of other works which also model spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.

  8. The Adventures of Space-Time

    NASA Astrophysics Data System (ADS)

    Bertolami, Orfeu

    Since the nineteenth century it has been known, through the work of Lobatchevski, Riemann, and Gauss, that spaces need not have vanishing curvature. This was certainly a revolution in its own right; however, from the point of view of these mathematicians, the space of our day-to-day experience, the physical space, was still essentially an a priori concept that preceded all experience and was independent of any physical phenomena. Actually, that was also the view of Newton and Kant with respect to time, even though, for these two space-time explorers, the world was Euclidean.

  9. Syndrome Surveillance Using Parametric Space-Time Clustering

    SciTech Connect

    KOCH, MARK W.; MCKENNA, SEAN A.; BILISOLY, ROGER L.

    2002-11-01

    As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect the patient's privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a ''health-index'' of the people living in this area. As a surrogate to bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic up to 21 to 28 days earlier than a conventional periodic regression technique. We have also tested using simulated anthrax attack data on top of a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.
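
    The sequential ingredient of this approach can be sketched as follows (a minimal Poisson SPRT on daily case counts, not the paper's aggregated space-time cluster statistic; the baseline and elevated rates, error levels, and counts are illustrative assumptions):

    import math

    def sprt_poisson(counts, lam0=2.0, lam1=6.0, alpha=0.01, beta=0.05):
        """Repeated SPRT: restart after each H0 decision so surveillance continues."""
        upper = math.log((1 - beta) / alpha)                 # crossing it raises an alarm (accept H1)
        lower = math.log(beta / (1 - alpha))                 # crossing it accepts H0 and restarts
        llr = 0.0
        for day, x in enumerate(counts, start=1):
            llr += x * math.log(lam1 / lam0) - (lam1 - lam0) # Poisson log-likelihood ratio increment
            if llr >= upper:
                return ("alarm", day)
            if llr <= lower:
                llr = 0.0                                    # baseline accepted; keep monitoring
        return ("no alarm", len(counts))

    print(sprt_poisson([2, 1, 3, 2, 8, 9, 12, 10]))          # -> ('alarm', 6)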

  10. Space-time super-resolution.

    PubMed

    Shechtman, Eli; Caspi, Yaron; Irani, Michal

    2005-04-01

    We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution," we mean recovering rapid dynamic events that occur faster than regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in "slow-motion." The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual trade-offs in time and space and to new video applications. These include: 1) treatment of spatial artifacts (e.g., motion-blur) by increasing the temporal resolution and 2) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high quality still images) to generate a high quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the temporal analogue to the spatial "ringing" effect?

  11. Casimir energy in Kerr space-time

    NASA Astrophysics Data System (ADS)

    Sorge, F.

    2014-10-01

    We investigate the vacuum energy of a scalar massless field confined in a Casimir cavity moving in a circular equatorial orbit in the exact Kerr space-time geometry. We find that both the orbital motion of the cavity and the underlying space-time geometry conspire in lowering the absolute value of the (renormalized) Casimir energy ⟨ɛvac⟩ren , as measured by a comoving observer, with respect to whom the cavity is at rest. This, in turn, causes a weakening in the attractive force between the Casimir plates. In particular, we show that the vacuum energy density ⟨ɛvac⟩ren→0 when the orbital path of the Casimir cavity comes close to the corotating or counter-rotating circular null orbits (possibly geodesic) allowed by the Kerr geometry. Such an effect could be of some astrophysical interest on relevant orbits, such as the Kerr innermost stable circular orbits, being potentially related to particle confinement (as in some interquark models). The present work generalizes previous results obtained by several authors in the weak field approximation.

  12. A hypocentral version of the space-time ETAS model

    NASA Astrophysics Data System (ADS)

    Guo, Yicun; Zhuang, Jiancang; Zhou, Shiyong

    2015-10-01

    The space-time Epidemic-Type Aftershock Sequence (ETAS) model is extended by incorporating the depth component of earthquake hypocentres. The depths of the direct offspring produced by an earthquake are assumed to be independent of the epicentre locations and to follow a beta distribution, whose shape parameter is determined by the depth of the parent event. This new model is verified by applying it to the Southern California earthquake catalogue. The results show that the new model fits data better than the original epicentre ETAS model and that it provides the potential for modelling and forecasting seismicity with higher resolutions.
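
    A sketch of the depth component described above, under an assumed parameterization (offspring depths drawn from a beta distribution rescaled to the seismogenic depth range and centred on the parent depth; the concentration value is a guess for illustration, not the fitted form from the paper):

    import numpy as np

    def sample_offspring_depths(parent_depth_km, n, max_depth_km=30.0, conc=8.0, seed=None):
        """Offspring depths ~ max_depth * Beta(a, b) with mean at the parent depth."""
        rng = np.random.default_rng(seed)
        frac = np.clip(parent_depth_km / max_depth_km, 0.01, 0.99)
        a, b = conc * frac, conc * (1.0 - frac)              # Beta(a, b) has mean frac
        return max_depth_km * rng.beta(a, b, size=n)

    print(np.round(sample_offspring_depths(10.0, n=5, seed=1), 2))   # depths (km) clustered near 10 km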

  13. Space-time formulation for finite element modeling of superconductors

    SciTech Connect

    Ashworth, Stephen P; Grilli, Francesco; Sirois, Frederic; Laforest, Marc

    2008-01-01

    In this paper we present a new model for computing the current density and field distributions in superconductors by means of a periodic space-time formulation for finite elements (FE). By considering a space dimension as time, we can use a static model to solve a time dependent problem. This allows overcoming one of the major problems of FE modeling of superconductors: the length of simulations, even for relatively simple cases. We present our first results and compare them to those obtained with a 'standard' time-dependent method and with analytical solutions.

  14. High-capacity quantum Fibonacci coding for key distribution

    NASA Astrophysics Data System (ADS)

    Simon, David S.; Lawrence, Nate; Trevino, Jacob; Dal Negro, Luca; Sergienko, Alexander V.

    2013-03-01

    Quantum cryptography and quantum key distribution (QKD) have been the most successful applications of quantum information processing, highlighting the unique capability of quantum mechanics, through the no-cloning theorem, to securely share encryption keys between two parties. Here, we present an approach to high-capacity, high-efficiency QKD by exploiting cross-disciplinary ideas from quantum information theory and the theory of light scattering of aperiodic photonic media. We propose a unique type of entangled-photon source, as well as a physical mechanism for efficiently sharing keys. The key-sharing protocol combines entanglement with the mathematical properties of a recursive sequence to allow a realization of the physical conditions necessary for implementation of the no-cloning principle for QKD, while the source produces entangled photons whose orbital angular momenta (OAM) are in a superposition of Fibonacci numbers. The source is used to implement a particular physical realization of the protocol by randomly encoding the Fibonacci sequence onto entangled OAM states, allowing secure generation of long keys from few photons. Unlike in polarization-based protocols, reference frame alignment is unnecessary, while the required experimental setup is simpler than other OAM-based protocols capable of achieving the same capacity and its complexity grows less rapidly with increasing range of OAM used.

  15. Double conformal space-time algebra

    NASA Astrophysics Data System (ADS)

    Easter, Robert Benjamin; Hitzer, Eckhard

    2017-01-01

    The Double Conformal Space-Time Algebra (DCSTA) is a high-dimensional 12D Geometric Algebra G 4,8that extends the concepts introduced with the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA) G 8,2 with entities for Darboux cyclides (incl. parabolic and Dupin cyclides, general quadrics, and ring torus) in spacetime with a new boost operator. The base algebra in which spacetime geometry is modeled is the Space-Time Algebra (STA) G 1,3. Two Conformal Space-Time subalgebras (CSTA) G 2,4 provide spacetime entities for points, flats (incl. worldlines), and hyperbolics, and a complete set of versors for their spacetime transformations that includes rotation, translation, isotropic dilation, hyperbolic rotation (boost), planar reflection, and (pseudo)spherical inversion in rounds or hyperbolics. The DCSTA G 4,8 is a doubling product of two G 2,4 CSTA subalgebras that inherits doubled CSTA entities and versors from CSTA and adds new bivector entities for (pseudo)quadrics and Darboux (pseudo)cyclides in spacetime that are also transformed by the doubled versors. The "pseudo" surface entities are spacetime hyperbolics or other surface entities using the time axis as a pseudospatial dimension. The (pseudo)cyclides are the inversions of (pseudo)quadrics in rounds or hyperbolics. An operation for the directed non-uniform scaling (anisotropic dilation) of the bivector general quadric entities is defined using the boost operator and a spatial projection. DCSTA allows general quadric surfaces to be transformed in spacetime by the same complete set of doubled CSTA versor (i.e., DCSTA versor) operations that are also valid on the doubled CSTA point entity (i.e., DCSTA point) and the other doubled CSTA entities. The new DCSTA bivector entities are formed by extracting values from the DCSTA point entity using specifically defined inner product extraction operators. Quadric surface entities can be boosted into moving surfaces with constant velocities that display the length

  16. Fractal Signals & Space-Time Cartoons

    NASA Astrophysics Data System (ADS)

    Oetama, -Hc, Jakob, , Dr; Maksoed, Wh-

    2016-03-01

    In ``Theory of Scale Relativity'', 1991, L. Nottale states that ``scale relativity is a geometrical & fractal space-time theory''. It took in comparisons to ``a unified, wavelet based framework for efficiently synthesizing, analyzing & processing several broad classes of fractal signals''-Gregory W. Wornell: ``Signal Processing with Fractals'', 1995. Further, in Fig 1.1. a simple waveform from statistically scale-invariant random process [ibid., h 3]. Accompanying RLE Technical Report 566 ``Synthesis, Analysis & Processing of Fractal Signals'' as well as from Wornell, Oct 1991 herewith intended to deducts =a Δt + (1 - β Δ t) ...in Petersen, et.al: ``Scale invariant properties of public debt growth'', 2010 h. 38006p2 to [1/{1- (2 α (λ) /3 π) ln (λ/r)} depicts in Laurent Nottale, 1991, h 24. Acknowledgment devotes to the late HE. Mr. Brigadier General-TNI [rtd]. Prof. Ir. HANDOJO.

  17. Circular motion in NUT space-time

    NASA Astrophysics Data System (ADS)

    Jefremov, Paul I.; Perlick, Volker

    2016-12-01

    We consider circular motion in the NUT (Newman-Unti-Tamburino) space-time. Among other things, we determine the location of circular time-like geodesic orbits, in particular of the innermost stable circular orbit (ISCO) and of the marginally bound circular orbit. Moreover, we discuss the von Zeipel cylinders with respect to the stationary observers and with respect to the zero angular momentum observers (ZAMOs). We also investigate the relation of von Zeipel cylinders to inertial forces, in particular in the ultra-relativistic limit. Finally, we generalise the construction of thick accretion tori (‘Polish doughnuts’) which are well known on the Schwarzschild or Kerr background to the case of the NUT metric. We argue that, in principle, an NUT source could be distinguished from a Schwarzschild or Kerr source by observing the features of circular matter flows in its neighbourhood.

  18. Spatial and Space-Time Correlations in Systems of Subpopulations with Genetic Drift and Migration

    PubMed Central

    Epperson, B. K.

    1993-01-01

    The geographic distribution of genetic variation is an important theoretical and experimental component of population genetics. Previous characterizations of genetic structure of populations have used measures of spatial variance and spatial correlations. Yet a full understanding of the causes and consequences of spatial structure requires complete characterization of the underlying space-time system. This paper examines important interactions between processes and spatial structure in systems of subpopulations with migration and drift, by analyzing correlations of gene frequencies over space and time. We develop methods for studying important features of the complete set of space-time correlations of gene frequencies for the first time in population genetics. These methods also provide a new alternative for studying the purely spatial correlations and the variance, for models with general spatial dimensionalities and migration patterns. These results are obtained by employing theorems, previously unused in population genetics, for space-time autoregressive (STAR) stochastic spatial time series. We include results on systems with subpopulation interactions that have time delay lags (temporal orders) greater than one. We use the space-time correlation structure to develop novel estimators for migration rates that are based on space-time data (samples collected over space and time) rather than on purely spatial data, for real systems. We examine the space-time and spatial correlations for some specific stepping stone migration models. One focus is on the effects of anisotropic migration rates. Partial space-time correlation coefficients can be used for identifying migration patterns. Using STAR models, the spatial, space-time, and partial space-time correlations together provide a framework with an unprecedented level of detail for characterizing, predicting and contrasting space-time theoretical distributions of gene frequencies, and for identifying features such as

  19. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

    In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distribution is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector is selected for simulation in the present study. The proposed algorithm for simulation includes four main steps. The first step is the modeling of the neutron/gamma particle transport and their interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and lightguide is simulated. Finally, the resolution corresponding to the experiment is considered in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed computer code is applicable to both neutron and gamma sources. Hence, the discrimination of neutron and gamma in the mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without needing any sort of post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated through comparison with experimental data (Gohil et al. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309.) and the results obtained from similar computer codes such as SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a 137Cs

  20. Space-Time, Relativity, and Cosmology

    NASA Astrophysics Data System (ADS)

    Wudka, Jose

    2006-07-01

    Space-Time, Relativity and Cosmology provides a historical introduction to modern relativistic cosmology and traces its historical roots and evolution from antiquity to Einstein. The topics are presented in a non-mathematical manner, with the emphasis on the ideas that underlie each theory rather than their detailed quantitative consequences. A significant part of the book focuses on the Special and General theories of relativity. The tests and experimental evidence supporting the theories are explained together with their predictions and their confirmation. Other topics include a discussion of modern relativistic cosmology, the consequences of Hubble's observations leading to the Big Bang hypothesis, and an overview of the most exciting research topics in relativistic cosmology. This textbook is intended for introductory undergraduate courses on the foundations of modern physics. It is also accessible to advanced high school students, as well as non-science majors who are concerned with science issues.
    • Uses a historical perspective to describe the evolution of modern ideas about space and time
    • The main arguments are described using a completely non-mathematical approach
    • Ideal for physics undergraduates and high-school students, non-science majors and general readers

  1. Beyond Archimedean Space-Time Structure

    NASA Astrophysics Data System (ADS)

    Rosinger, Elemér E.; Khrennikov, Andrei

    2011-03-01

    It took two millennia after Euclid and until in the early 1880s, when we went beyond the ancient axiom of parallels, and inaugurated geometries of curved spaces. In less than one more century, General Relativity followed. At present, physical thinking is still beheld by the yet deeper and equally ancient Archimedean assumption. In view of that, it is argued with some rather easily accessible mathematical support that Theoretical Physics may at last venture into the non-Archimedean realms. In this introductory paper we stress two fundamental consequences of the non-Archimedean approach to Theoretical Physics: one of them for quantum theory and another for relativity theory. From the non-Archimedean viewpoint, the assumption of the existence of minimal quanta of light (of the fixed frequency) is an artifact of the present Archimedean mathematical basis of quantum mechanics. In the same way the assumption of the existence of the maximal velocity, the velocity of light, is a feature of the real space-time structure which is fundamentally Archimedean. Both these assumptions are not justified in corresponding non-Archimedean models.

  2. Beyond Archimedean Space-Time Structure

    SciTech Connect

    Rosinger, Elemer E.; Khrennikov, Andrei

    2011-03-28

    It took two millennia after Euclid and until in the early 1880s, when we went beyond the ancient axiom of parallels, and inaugurated geometries of curved spaces. In less than one more century, General Relativity followed. At present, physical thinking is still beheld by the yet deeper and equally ancient Archimedean assumption. In view of that, it is argued with some rather easily accessible mathematical support that Theoretical Physics may at last venture into the non-Archimedean realms. In this introductory paper we stress two fundamental consequences of the non-Archimedean approach to Theoretical Physics: one of them for quantum theory and another for relativity theory. From the non-Archimedean viewpoint, the assumption of the existence of minimal quanta of light (of the fixed frequency) is an artifact of the present Archimedean mathematical basis of quantum mechanics. In the same way the assumption of the existence of the maximal velocity, the velocity of light, is a feature of the real space-time structure which is fundamentally Archimedean. Both these assumptions are not justified in corresponding non-Archimedean models.

  3. Controls on space-time distribution of soft-sediment deformation structures: Applying palaeomagnetic dating to approach the apparent recurrence period of paleoseisms at the Concud Fault (eastern Spain)

    NASA Astrophysics Data System (ADS)

    Ezquerro, L.; Moretti, M.; Liesa, C. L.; Luzón, A.; Pueyo, E. L.; Simón, J. L.

    2016-10-01

    This work describes soft-sediment deformation structures (clastic dykes, load structures, diapirs, slumps, nodulizations or mudcracks) identified in three sections (Concud, Ramblillas and Masada Cociero) in the Iberian Range, Spain. These sections were logged from boreholes and outcrops in Upper Pliocene-Lower Pleistocene deposits of the Teruel-Concud Residual Basin, close to the Concud normal fault. Timing of the succession and hence of seismic and non-seismic SSDSs, covering a time span between 3.6 and 1.9 Ma, has been constrained from previous biostratigraphic and magnetostratigraphic information, then substantially refined from a new magnetostratigraphic study at the Masada Cociero profile. Non-seismic SSDSs are relatively well-correlated between sections, while seismic ones are poorly correlated except for several clusters of structures. Between 29 and 35 seismically deformed levels have been computed for the overall stratigraphic succession. Factors controlling the lateral and vertical distribution of SSDSs are their seismic or non-seismic origin, the distance to the seismogenic source (Concud Fault), the sedimentary facies involved in deformation and the observation conditions (borehole core vs. natural outcrop). In the overall stratigraphic section, seismites show an apparent recurrence period of 56 to 108 ka. Clustering of seismic SSDS levels within a 91-ka-long interval records a period of high paleoseismic activity with an apparent recurrence time of 4.8 to 6.1 ka, associated with increasing sedimentation rate and fault activity. Such an activity pattern of the Concud Fault for the Late Pliocene-Early Pleistocene, with alternating periods of faster and slower slip, is similar to that for the most recent Quaternary (last ca. 74 ka BP). Concerning the research methods, time occurrence patterns recognized for peaks of paleoseismic activity from SSDSs in boreholes are similar to those inferred from primary evidence in trenches. Consequently, apparent recurrence periods

  4. Hybrid decode-amplify-forward (HDAF) scheme in distributed Alamouti-coded cooperative network

    NASA Astrophysics Data System (ADS)

    Gurrala, Kiran Kumar; Das, Susmita

    2015-05-01

    In this article, a signal-to-noise ratio (SNR)-based hybrid decode-amplify-forward scheme in a distributed Alamouti-coded cooperative network is proposed. Considering a flat Rayleigh fading channel environment, MATLAB simulation and analysis are carried out. In the cooperative scheme, two relays are employed, each transmitting one row of the Alamouti code. The selection of the SNR threshold depends on the target rate information. Closed-form expressions for the symbol error rate (SER), the outage probability and the average channel capacity, with tight upper bounds, are derived and compared with simulations performed in the MATLAB environment. Furthermore, the impact of relay location on the SER performance is analysed. It is observed that the proposed hybrid relaying technique outperforms the individual amplify-and-forward and decode-and-forward schemes in the distributed Alamouti-coded cooperative network.
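
    The relay-mode decision described above can be sketched as follows (a relay decodes-and-forwards only when its source-relay SNR supports the target rate, otherwise it amplifies-and-forwards; the threshold form and all names and numbers are illustrative assumptions, not the paper's exact criterion):

    def relay_mode(snr_sr, target_rate_bps_hz):
        """Return 'DF' if the source-relay link supports the target rate, else 'AF'."""
        snr_threshold = 2.0 ** target_rate_bps_hz - 1.0      # from C = log2(1 + SNR)
        return "DF" if snr_sr >= snr_threshold else "AF"

    # Two relays, each carrying one row of the distributed Alamouti code word.
    for relay, snr in {"relay_1": 9.5, "relay_2": 1.2}.items():
        print(relay, relay_mode(snr, target_rate_bps_hz=2.0))   # threshold = 3.0 -> DF, AF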

  5. Space-Time Transfinite Interpolation of Volumetric Material Properties.

    PubMed

    Sanchez, Mathieu; Fryazinov, Oleg; Adzhiev, Valery; Comninos, Peter; Pasko, Alexander

    2015-02-01

    The paper presents a novel technique based on extension of a general mathematical method of transfinite interpolation to solve an actual problem in the context of a heterogeneous volume modelling area. It deals with time-dependent changes to the volumetric material properties (material density, colour, and others) as a transformation of the volumetric material distributions in space-time accompanying geometric shape transformations such as metamorphosis. The main idea is to represent the geometry of both objects by scalar fields with distance properties, to establish in a higher-dimensional space a time gap during which the geometric transformation takes place, and to use these scalar fields to apply the new space-time transfinite interpolation to volumetric material attributes within this time gap. The proposed solution is analytical in its nature, does not require heavy numerical computations and can be used in real-time applications. Applications of this technique also include texturing and displacement mapping of time-variant surfaces, and parametric design of volumetric microstructures.

  6. Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    1999-01-01

    The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is more complicated than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones into a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which will run in parallel. The total volume of grid points in each

  7. Multiple description distributed image coding with side information for mobile wireless transmission

    NASA Astrophysics Data System (ADS)

    Wu, Min; Song, Daewon; Chen, Chang Wen

    2005-03-01

    Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions, and then transmitting them over different channels in a packet network or error-prone wireless environment to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error resilient capability. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as the potentially error-corrupted description, as side information in the decoding, formulating the MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Secondly, in each description, single bitstream wavelet zero tree coding is very vulnerable to channel errors. The first bit error may cause the decoder to discard all subsequent bits whether or not the subsequent bits are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately by the SPIHT algorithm to form multiple bitstreams. Such decomposition is able to reduce error propagation and therefore improve the error correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet

  8. Temperature and entropy of Schwarzschild de Sitter space-time

    NASA Astrophysics Data System (ADS)

    Shankaranarayanan, S.

    2003-04-01

    In the light of recent interest in quantum gravity in de Sitter space, we investigate semiclassical aspects of four-dimensional Schwarzschild de Sitter space-time using the method of complex paths. The standard semiclassical techniques (such as Bogoliubov coefficients and Euclidean field theory) have been useful to study quantum effects in space-times with single horizons; however, none of these approaches seem to work for Schwarzschild de Sitter space-time or, in general, for space-times with multiple horizons. We extend the method of complex paths to space-times with multiple horizons and obtain the spectrum of particles produced in these space-times. We show that the temperature of radiation in these space-times is proportional to the effective surface gravity—the inverse harmonic sum of surface gravity of each horizon. For the Schwarzschild de Sitter space-time, we apply the method of complex paths to three different coordinate systems—spherically symmetric, Painlevé, and Lemaître. We show that the equilibrium temperature in Schwarzschild de Sitter space-time is the harmonic mean of cosmological and event horizon temperatures. We obtain Bogoliubov coefficients for space-times with multiple horizons by analyzing the mode functions of the quantum fields near the horizons. We propose a new definition of entropy for space-times with multiple horizons, analogous to the entropic definition for space-times with a single horizon. We define entropy for these space-times to be inversely proportional to the square of the effective surface gravity. We show that this definition of entropy for Schwarzschild de Sitter space-time satisfies the D-bound conjecture.

  9. Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.

    PubMed

    Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene

    2016-11-01

    Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases.
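
    A minimal sequential version of the space-time kernel density follows (separable Epanechnikov-type kernels in space and time, normalizing constants omitted since only relative hotspot intensity matters here; the adaptive parallel domain decomposition of the paper is not reproduced, and bandwidths and coordinates are illustrative):

    import numpy as np

    def stkde(eval_xyt, events_xyt, hs=500.0, ht=7.0):
        """Relative space-time intensity at evaluation points (x, y in metres, t in days)."""
        ex, ey, et = events_xyt[:, 0], events_xyt[:, 1], events_xyt[:, 2]
        out = np.zeros(len(eval_xyt))
        for i, (x, y, t) in enumerate(eval_xyt):
            us = np.hypot(ex - x, ey - y) / hs               # scaled spatial distance
            ut = np.abs(et - t) / ht                         # scaled temporal distance
            ks = np.where(us < 1, 1.0 - us ** 2, 0.0)        # Epanechnikov-type spatial kernel
            kt = np.where(ut < 1, 1.0 - ut ** 2, 0.0)        # Epanechnikov-type temporal kernel
            out[i] = np.sum(ks * kt)
        return out

    events = np.array([[0.0, 0.0, 0.0], [100.0, 50.0, 2.0], [3000.0, 0.0, 30.0]])
    grid = np.array([[0.0, 0.0, 1.0], [2000.0, 2000.0, 15.0]])
    print(stkde(grid, events))                               # higher intensity near the first two events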

  10. Non-contact assessment of melanin distribution via multispectral temporal illumination coding

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.

    2015-03-01

    Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone, and protects against harmful UV effects. Abnormal melanin distribution is often an indicator for melanoma. We propose a novel approach for non-contact assessment of melanin distribution via multispectral temporal illumination coding to estimate the two-dimensional melanin distribution based on its absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally-coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements, and also to determine the melanin distribution in an observed region based on the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of skin type of individuals. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
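
    A toy version of the Beer-Lambert step mentioned above (converting an ambient-corrected reflectance image at a melanin-sensitive wavelength into a relative absorbance map taken as proportional to melanin density; the numbers and the single-wavelength simplification are assumptions):

    import numpy as np

    def relative_melanin_map(reflectance_lit, reflectance_ambient, eps=1e-6):
        """Absorbance A = -log10(R) after removing the ambient-light contribution."""
        corrected = np.clip(reflectance_lit - reflectance_ambient, eps, 1.0)
        return -np.log10(corrected)                          # higher absorbance -> more melanin

    lit = np.array([[0.55, 0.50], [0.20, 0.18]])             # frame with coded illumination on
    ambient = np.array([[0.05, 0.05], [0.05, 0.05]])         # ambient-only frame
    print(relative_melanin_map(lit, ambient))                # larger values in the bottom row (e.g. a nevus)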

  11. Examination of nanoparticle dispersion using a novel GPU based radial distribution function code

    NASA Astrophysics Data System (ADS)

    Rosch, Thomas; Wade, Matthew; Phelan, Frederick

    We have developed a novel GPU-based code that rapidly calculates the radial distribution function (RDF) for an entire system, with no cutoff, ensuring accuracy. Built on top of this code, we have developed tools to calculate the second virial coefficient (B2) and the structure factor from the RDF, two properties that are directly related to the dispersion of nanoparticles in nanocomposite systems. We validate the RDF calculations by comparison with previously published results, and also show how our code, which takes into account bonding in polymeric systems, enables more accurate predictions of g(r) than state-of-the-art GPU-based RDF codes currently available for these systems. In addition, our code reduces the computational time by approximately an order of magnitude compared to CPU-based calculations. We demonstrate the application of our toolset by the examination of a coarse-grained nanocomposite system and show how different surface energies between particle and polymer lead to different dispersion states, and affect properties such as viscosity, yield strength, elasticity, and thermal conductivity.
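
    For reference, a minimal O(N^2) CPU implementation of the quantity the GPU code computes, g(r) for particles in a cubic periodic box with no cutoff (box size, particle count and bin count are arbitrary illustration values):

    import numpy as np

    def rdf(positions, box_length, n_bins=50):
        """Radial distribution function g(r) up to half the box length."""
        n = len(positions)
        r_max = box_length / 2.0
        edges = np.linspace(0.0, r_max, n_bins + 1)
        counts = np.zeros(n_bins)
        for i in range(n - 1):
            d = positions[i + 1:] - positions[i]
            d -= box_length * np.round(d / box_length)       # minimum-image convention
            r = np.linalg.norm(d, axis=1)
            counts += np.histogram(r[r < r_max], bins=edges)[0]
        rho = n / box_length ** 3
        shell_vol = (4.0 / 3.0) * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
        ideal = rho * shell_vol * n / 2.0                    # expected pair counts for an ideal gas
        return 0.5 * (edges[1:] + edges[:-1]), counts / ideal

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 10.0, size=(200, 3))              # uncorrelated (ideal-gas-like) particles
    r, g = rdf(pos, box_length=10.0)
    print(np.round(g[5:10], 2))                              # values scatter around 1.0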

  12. Side information and noise learning for distributed video coding using optical flow and clustering.

    PubMed

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin; Forchhammer, Søren

    2012-12-01

    Distributed video coding (DVC) is a coding paradigm that exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and accuracy of noise modeling. This paper considers transform domain Wyner-Ziv (TDWZ) coding and proposes using optical flow to improve side information generation and clustering to improve the noise modeling. The optical flow technique is exploited at the decoder side to compensate for weaknesses of block-based methods, when using motion-compensation to generate side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded WZ frames. Different techniques are combined by calculating a number of candidate soft side information for low density parity check accumulate decoding. The proposed decoder side techniques for side information and noise learning (SING) are integrated in a TDWZ scheme. On test sequences, the proposed SING codec robustly improves the coding efficiency of TDWZ DVC. For WZ frames using a GOP size of 2, up to 4-dB improvement or an average (Bjøntegaard) bit-rate savings of 37% is achieved compared with DISCOVER.

  13. Space-time transformation sky brightness at a horizontal position of the sun

    NASA Astrophysics Data System (ADS)

    Galileiskii, Viktor P.; Elizarov, Alexey I.; Kokarev, Dmitrii V.; Morozov, Aleksandr M.

    2015-11-01

    This report discusses simulation results for the angular distribution of sky brightness under molecular scattering in the atmosphere, aimed at studying the space-time changes of this distribution during civil twilight.

  14. Domain structure of black hole space-times

    SciTech Connect

    Harmark, Troels

    2009-07-15

    We introduce the domain structure for stationary black hole space-times. The domain structure lives on the submanifold of fixed points of the Killing vector fields. Depending on which Killing vector field has fixed points the submanifold is naturally divided into domains. The domain structure provides invariants of the space-time, both topological and continuous. It is defined for any space-time dimension and any number of Killing vector fields. We examine the domain structure for asymptotically flat space-times and find a canonical form for the metric of such space-times. The domain structure generalizes the rod structure introduced for space-times with D-2 commuting Killing vector fields. We analyze in detail the domain structure for Minkowski space, the Schwarzschild-Tangherlini black hole and the Myers-Perry black hole in six and seven dimensions. Finally, we consider the possible domain structures for asymptotically flat black holes in six and seven dimensions.

  15. A distributed coding approach for stereo sequences in the tree structured Haar transform domain

    NASA Astrophysics Data System (ADS)

    Cancellaro, M.; Carli, M.; Neri, A.

    2009-02-01

    In this contribution, a novel method for distributed video coding for stereo sequences is proposed. The system encodes independently the left and right frames of the stereoscopic sequence. The decoder exploits the side information to achieve the best reconstruction of the correlated video streams. In particular, a syndrome coder approach based on a lifted Tree Structured Haar wavelet scheme has been adopted. The experimental results show the effectiveness of the proposed scheme.

  16. Scenario-Based Projections of Wounded-in-Action Patient Condition Code Distributions

    DTIC Science & Technology

    2005-09-13

    quantitative process has been developed to estimate these patient streams. Objective The objective of this research was to develop a methodology that...developed that allows the user to select one of these methods to easily calculate the patient distributions. Approach Two approaches to estimate PC code...describes a methodology that uses International Classification of Diseases, 9th Revision (ICD-9) diagnostic data to estimate the composition of the

  17. DISTRIBUTED CONTAINER FAILURE MODELS FOR THE DUST-MS COMPUTER CODE.

    SciTech Connect

    SULLIVAN,T.; DE LEMOS,F.

    2001-02-24

    Improvements to the DUST-MS computer code have been made that permit simulation of distributed container failure rates. The new models permit instant failure of all containers within a computational volume, uniform failure of these containers over time, or a normal distribution in container failures. Incorporation of a distributed failure model requires wasteform releases to be calculated using a convolution integral. In addition, the models permit a unique time of emplacement for each modeled container and allow a fraction of the containers to fail at emplacement. Implementation of these models, verification testing, and an example problem comparing releases from a wasteform with a two-species decay chain as a function of failure distribution are presented in the paper.
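
    The convolution mentioned above can be written discretely as: total release rate at time t = sum over failure times tau of (fraction of containers failing at tau) x (per-container release rate at t - tau). A sketch with stand-in models (a normal failure-time distribution and an exponential wasteform release, neither of which is the actual DUST-MS model):

    import numpy as np

    t = np.arange(0.0, 500.0, 1.0)                           # years, dt = 1
    mu, sigma = 200.0, 30.0                                  # container failure-time distribution
    failure_frac = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    lam = 0.02                                               # fractional release rate after failure (1/yr)
    release_after_failure = lam * np.exp(-lam * t)           # per-container release vs time since failure

    total_release = np.convolve(failure_frac, release_after_failure)[: len(t)]
    print("peak release rate %.2e at year %d" % (total_release.max(), t[total_release.argmax()]))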

  18. Space-Time Filtering, Sampling and Motion Uncertainty

    DTIC Science & Technology

    1988-06-01

    Basically, this paper consists of two parts. In the first one, we present the cascade of space-time DOG as an energy filter, discuss its general properties and show how to...intensity based approaches versus space-time filtering, and present the space-time DOG cascade as an energy filter. In section 3 we analyse sampling issues

  19. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
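
    A minimal sketch of the kind of model recommended above, using Python's statsmodels with a negative binomial family, session length as exposure, and per-session weights (variable names, the weights, and the dispersion value are illustrative assumptions; the paper's own SPSS and R syntax is in its supplemental materials):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_obs = 200
    fidelity = rng.normal(size=n_obs)                        # coded therapist behavior (predictor)
    session_min = rng.uniform(20, 60, size=n_obs)            # session length (exposure)
    mu = np.exp(0.5 + 0.4 * fidelity) * session_min / 40
    y = rng.negative_binomial(n=2, p=2 / (2 + mu))           # overdispersed count outcome
    w = rng.uniform(0.5, 1.5, size=n_obs)                    # e.g. coder-reliability weights

    X = sm.add_constant(fidelity)
    model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5),
                   exposure=session_min, var_weights=w)
    print(model.fit().params)                                # slope on fidelity recovered near 0.4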

  20. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    PubMed Central

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126

  1. Spatial correlation-based side information refinement for distributed video coding

    NASA Astrophysics Data System (ADS)

    Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin

    2013-12-01

    Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefited from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the temporal correlation properties of the video sequence during the generation of side information (SI). In fact, decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as these groups are decoded, thus providing more accurate SI for the next groups. It is shown that the proposed progressive SIR method leads to significant improvements over the Discover DVC codec as well as other SIR schemes recently introduced in the literature.
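    The decode loop implied by this description can be sketched as follows; this is not the authors' codec, and generate_mcti_si, wz_decode_group and refine_si are hypothetical stand-ins for the MCTI, Wyner-Ziv decoding and refinement steps.

      # High-level sketch of progressive side-information refinement (hypothetical helpers).
      def decode_wz_frame(groups, key_frames, generate_mcti_si, wz_decode_group, refine_si):
          side_info = generate_mcti_si(key_frames)  # initial motion-compensated interpolation
          decoded = {}
          for group_id, parity_bits in groups:      # spatially correlated groups, sent progressively
              decoded[group_id] = wz_decode_group(parity_bits, side_info, group_id)
              # Exploit the spatial correlation with already-decoded pixels to refine
              # the side information used for the groups still to be decoded.
              side_info = refine_si(side_info, decoded)
          return decoded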

  2. ETRANS: an energy transport system optimization code for distributed networks of solar collectors

    SciTech Connect

    Barnhart, J.S.

    1980-09-01

    The optimization code ETRANS was developed at the Pacific Northwest Laboratory to design and estimate the costs associated with energy transport systems for distributed fields of solar collectors. The code uses frequently cited layouts for dish and trough collectors and optimizes them on a section-by-section basis. The optimal section design is that combination of pipe diameter and insulation thickness that yields the minimum annualized system-resultant cost. Among the quantities included in the costing algorithm are (1) labor and materials costs associated with initial plant construction, (2) operating expenses due to daytime and nighttime heat losses, and (3) operating expenses due to pumping power requirements. Two preliminary series of simulations were conducted to exercise the code. The results indicate that transport system costs for both dish and trough collector fields increase with field size and receiver exit temperature. Furthermore, dish collector transport systems were found to be much more expensive to build and operate than trough transport systems. ETRANS itself is stable and fast-running and shows promise of being a highly effective tool for the analysis of distributed solar thermal systems.
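    The section-by-section optimization described above amounts to picking, for each pipe section, the diameter and insulation thickness with the lowest annualized cost. The sketch below illustrates the idea with made-up toy cost functions, not the actual ETRANS cost correlations.

      # Hedged illustration: choose the (diameter, insulation) pair minimizing
      # annualized cost = capital + heat-loss + pumping cost for one section.
      from itertools import product

      def annualized_cost(d, t, capital_cost, heat_loss_cost, pumping_cost):
          return capital_cost(d, t) + heat_loss_cost(d, t) + pumping_cost(d, t)

      def optimize_section(diameters, insulations, capital_cost, heat_loss_cost, pumping_cost):
          return min(product(diameters, insulations),
                     key=lambda dt: annualized_cost(dt[0], dt[1], capital_cost,
                                                    heat_loss_cost, pumping_cost))

      # Toy cost models: wider pipes cost more but pump cheaper; thicker
      # insulation costs more but loses less heat.
      best = optimize_section(
          diameters=[0.05, 0.10, 0.15, 0.20],      # m
          insulations=[0.02, 0.05, 0.10],          # m
          capital_cost=lambda d, t: 400 * d + 900 * t,
          heat_loss_cost=lambda d, t: 35 * d / t,
          pumping_cost=lambda d, t: 1e-4 / d**5,
      )
      print("best (diameter, insulation):", best)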

  3. Space-time correlations in urban sprawl.

    PubMed

    Hernando, A; Hernando, R; Plastino, A

    2014-02-06

    Understanding demographic and migrational patterns constitutes a great challenge. Millions of individual decisions, motivated by economic, political, demographic, rational and/or emotional reasons, underlie the high complexity of demographic dynamics. Significant advances in quantitatively understanding such complexity have been registered in recent years, such as those involving the growth of cities, but many fundamental issues still defy comprehension. We present here compelling empirical evidence of a high level of regularity regarding time and spatial correlations in urban sprawl, unravelling patterns about the inertia in the growth of cities and their interaction with each other. By using one of the world's most exhaustive extant demographic databases--that of the Spanish Government's Institute INE, with records covering 111 years and (in 2011) 45 million people, distributed among more than 8000 population nuclei--we show that the inertia of city growth has a characteristic time of 15 years, and its interaction with the growth of other cities has a characteristic distance of 80 km. Distance is shown to be the main factor that entangles two cities (60% of total correlations). The power of our current social theories is thereby enhanced.

  4. Entropy of Movement Outcome in Space-Time.

    PubMed

    Lai, Shih-Chiung; Hsieh, Tsung-Yu; Newell, Karl M

    2015-07-01

    Information entropy of the joint spatial and temporal (space-time) probability of discrete movement outcome was investigated in two experiments as a function of different movement strategies (space-time, space, and time instructional emphases), task goals (point-aiming and target-aiming) and movement speed-accuracy constraints. The variance of the movement spatial and temporal errors was reduced by instructional emphasis on the respective spatial or temporal dimension, but increased on the other dimension. The space-time entropy was lower in the target-aiming task than in the point-aiming task but did not differ between instructional emphases. However, the joint probabilistic measure of spatial and temporal entropy showed that spatial error is traded for timing error in tasks with space-time criteria and that the pattern of movement error depends on the dimension of the measurement process. The unified entropy measure of movement outcome in space-time reveals a new speed-accuracy relation.
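    As a concrete reading of the joint space-time entropy measure, the sketch below computes the Shannon entropy of a discretized joint distribution of spatial and temporal movement errors; the data are simulated for illustration, not the experiments' data.

      # Entropy (in bits) of the joint histogram of spatial and temporal errors.
      import numpy as np

      def space_time_entropy(spatial_error, temporal_error, bins=10):
          counts, _, _ = np.histogram2d(spatial_error, temporal_error, bins=bins)
          p = counts / counts.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(1)
      # Toy illustration: emphasizing spatial accuracy narrows spatial error but widens timing error.
      print(space_time_entropy(rng.normal(0, 2, 500), rng.normal(0, 12, 500)))
      print(space_time_entropy(rng.normal(0, 6, 500), rng.normal(0, 4, 500)))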

  5. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  6. High-capacity quantum key distribution using Chebyshev-map values corresponding to Lucas numbers coding

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Orgun, Mehmet A.; Pieprzyk, Josef; Li, Jing; Luo, Mingxing; Xiao, Jinghua; Xiao, Fuyuan

    2016-11-01

    We propose an approach that achieves high-capacity quantum key distribution using Chebyshev-map values corresponding to Lucas numbers coding. In particular, we encode a key with the Chebyshev-map values corresponding to Lucas numbers and then use k-Chebyshev maps to achieve consecutive and flexible key expansion; we also apply the classical information pre-shared between Alice and Bob, together with fountain codes, for privacy amplification, which secures the exchange of classical information via the classical channel. Consequently, our high-capacity protocol does not have the limitations imposed by orbital angular momentum and down-conversion bandwidths, and it meets the requirements for longer distances and lower error rates simultaneously.
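    One loose way to read the encoding step is to evaluate the Chebyshev map at orders drawn from the Lucas sequence; the snippet below is only an illustrative guess at that reading, with a hypothetical seed and a naive binarization, and it omits the key-expansion and privacy-amplification parts of the protocol.

      # Not the paper's scheme: an illustrative pairing of Lucas numbers with
      # Chebyshev-map values T_n(x) = cos(n * arccos x), valid for -1 <= x <= 1.
      import math

      def lucas(count):
          seq, a, b = [], 2, 1
          for _ in range(count):
              seq.append(a)
              a, b = b, a + b
          return seq

      def chebyshev_map(order, x):
          return math.cos(order * math.acos(x))

      seed = 0.37                                    # hypothetical shared seed
      values = [chebyshev_map(n, seed) for n in lucas(16)]
      bits = [1 if v >= 0 else 0 for v in values]    # naive binarization for the sketch
      print(bits)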

  7. Inferential multi-spectral image compression based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang

    2008-08-01

    Based on analyses of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences; the relative shift between two images is detected by a block-matching algorithm at the encoder. The algorithm estimates the rate of each bitplane with the estimated side-information frame and then adopts an ROI coding algorithm in which a rate-distortion lifting procedure is carried out at the rate-allocation stage. Using this algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in the paper can obtain up to a 3 dB gain compared with JPEG2000 and significantly reduces the complexity and storage consumption compared with 3D-SPIHT, at the cost of a slight degradation in PSNR.

  8. LineCast: line-based distributed coding and transmission for broadcasting satellite images.

    PubMed

    Wu, Feng; Peng, Xiulian; Xu, Jizheng

    2014-03-01

    In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly with the line scanning cameras that are widely adopted in orbit satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by the linear least square estimator from the received data. The image line is then reconstructed by the scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast can make a large number of receivers reach the qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction.

  9. Satellites, space, time and the African trypanosomiases.

    PubMed

    Rogers, D J

    2000-01-01

    The human and animal trypanosomiases of Africa provide unique challenges to epidemiologists because of the spatial and temporal scales over which variation in transmission takes place. This chapter describes how our descriptions of the different components of transmission, from the parasites to the affected hosts, eventually developed to include geographical dimensions. It then briefly mentions two key analytical techniques used in the application of multi-temporal remotely sensed imagery to the interpretation of field data; temporal Fourier analysis for data reduction, and a variety of discriminant analytical techniques to describe the distribution and abundance of vectors and diseases. Satellite data may be used both for biological, process-based models and for statistical descriptions of vector populations and disease transmission. Examples are given of models for the tsetse Glossina morsitans in the Yankari Game Reserve, Nigeria, and in The Gambia. In both sites the satellite derived index of Land Surface Temperature (LST) is the best correlate of monthly mortality rates and is used to drive tsetse population models. The Gambia model is then supplemented with a disease transmission component; the mean infection rates of the vectors and of local cattle are satisfactorily described by the model, as are the seasonal variations of infection in the cattle. High and low spatial resolution satellite data have been used in a number of statistical studies of land cover types and tsetse habitats. In addition multi-temporal data may be related to both the incidence and prevalence of trypanosomiasis. Analysis of past and recent animal and human trypanosomiasis data from south-east Uganda supports the suggestion of the importance of cattle as a reservoir of the human disease in this area; mean infection prevalences in both human and animal hosts rise and fall in a similar fashion over the same range of increasing vegetation index values. Monthly sleeping sickness case data

  10. Flexible space-time process for seismic data

    NASA Astrophysics Data System (ADS)

    Adelfio, G.; Chiodi, M.

    2009-04-01

    Introduction: Point processes are well-studied objects in probability theory and a powerful tool in statistics for modeling and analyzing the distribution of real phenomena, such as seismicity. Point processes can be specified mathematically in several ways, for instance, by considering the joint distributions of the counts of points in arbitrary sets or by defining a complete intensity function. The conditional intensity function is a function of the point history and is itself a stochastic process depending on the past up to time t. In this paper some techniques to estimate the intensity function of space-time point processes are developed, following semi-parametric approaches, together with diagnostic methods to assess their goodness of fit. In particular, because of its adaptive properties in the presence of anomalous behavior in data, a nonparametric estimation approach is used to interpret dependence features of the seismic activity of a given area of observation; to justify the estimation approach, a diagnostic method for space-time point processes is also revised. Flexible modeling and diagnostics for point processes: The definition of effective stochastic models to adequately describe the seismic activity of a fixed area is of great interest in seismology, since a reliable description of earthquake occurrence might suggest useful ideas on the mechanism of such a complex phenomenon. A number of statistical models have been proposed for representing the intensity function of earthquakes. The simpler models assume that earthquakes occur in space and time according to a stationary point process, such that the conditional rate becomes a constant. In seismology, however, the stationarity hypothesis might be acceptable only with respect to time, because epicenters usually display a substantial degree of spatial heterogeneity and clustering. Description of seismic events often requires the definition of more complex models than the stationary Poisson process and the

  11. Gravitation theory in a fractal space-time

    SciTech Connect

    Agop, M.; Gottlieb, I.

    2006-05-15

    Assimilating the physical space-time with a fractal, a general theory is built. For a fractal dimension D=2, the virtual geodesics of this space-time imply a generalized Schroedinger-type equation. Subsequently, a geometric formulation of the gravitation theory on a fractal space-time is given. Then, a connection is introduced on a tangent bundle, and the connection coefficients, the Riemann curvature tensor and the Einstein field equation are calculated. By means of a dilation operator, the equivalence of this model with quantum Einstein gravity follows.

  12. A novel method involving Matlab coding to determine the distribution of a collimated ionizing radiation beam

    NASA Astrophysics Data System (ADS)

    Ioan, M.-R.

    2016-08-01

    In experiments involving ionizing radiation, precise knowledge of the parameters involved is a very important task. Some of these experiments use electromagnetic ionizing radiation such as gamma rays and X rays; others use energetic charged or uncharged small particles such as protons, electrons and neutrons, and in other cases larger accelerated particles such as helium or deuterium nuclei are used. In all these cases, the beam used to hit an exposed target must first be collimated and precisely characterized. In this paper, a novel method based on Matlab coding is proposed to determine the distribution of the collimated beam. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions are to be determined, taking high-quality pictures of them, and then digitally processing the resulting images. The method also yields information on the dose absorbed in the volume of the exposed samples.
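    The image-processing step can be illustrated as follows; the original work used Matlab, whereas this is an independent Python sketch run on a synthetic image rather than the paper's code or data.

      # Threshold an image of an exposed sample and report the size, centroid and
      # bounding box of the region hit by the beam.
      import numpy as np

      def beam_region_stats(image, threshold):
          mask = image > threshold
          ys, xs = np.nonzero(mask)
          if xs.size == 0:
              return None
          return {
              "area_px": int(mask.sum()),
              "centroid": (float(xs.mean()), float(ys.mean())),
              "bbox": (int(xs.min()), int(xs.max()), int(ys.min()), int(ys.max())),
          }

      # Synthetic stand-in for a photograph of an irradiated Pyrex sample.
      yy, xx = np.mgrid[0:200, 0:200]
      image = np.exp(-((xx - 90) ** 2 + (yy - 110) ** 2) / (2 * 15 ** 2))
      print(beam_region_stats(image, threshold=0.5))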

  13. Prioritized Degree Distribution in Wireless Sensor Networks with a Network Coded Data Collection Method

    PubMed Central

    Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng

    2012-01-01

    The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of harsh external environmental conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without massive redundancy. However, due to the poor intermediate performance of LTCDS, a serious 'cliff effect' may appear during the decoding period, and source data are hard to recover from sink nodes before sufficient encoded packets are collected. In this paper, the influence of the coding degree distribution strategy on the 'cliff effect' is examined, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the data stored in sensor network nodes exhibit degree distributions that increase with the predefined degree level, and persistent data packets can be submitted to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve data collection performance and decoding efficiency, while data persistence is not notably affected. PMID:23235451

  14. Distribution of SR protein exonic splicing enhancer motifs in human protein-coding genes.

    PubMed

    Wang, Jinhua; Smith, Philip J; Krainer, Adrian R; Zhang, Michael Q

    2005-01-01

    Exonic splicing enhancers (ESEs) are pre-mRNA cis-acting elements required for splice-site recognition. We previously developed a web-based program called ESEfinder that scores any sequence for the presence of ESE motifs recognized by the human SR proteins SF2/ASF, SRp40, SRp55 and SC35 (http://rulai.cshl.edu/tools/ESE/). Using ESEfinder, we have undertaken a large-scale analysis of ESE motif distribution in human protein-coding genes. Significantly higher frequencies of ESE motifs were observed in constitutive internal protein-coding exons, compared with both their flanking intronic regions and with pseudo exons. Statistical analysis of ESE motif frequency distributions revealed a complex relationship between splice-site strength and increased or decreased frequencies of particular SR protein motifs. Comparison of constitutively and alternatively spliced exons demonstrated slightly weaker splice-site scores, as well as significantly fewer ESE motifs, in the alternatively spliced group. Our results underline the importance of ESE-mediated SR protein function in the process of exon definition, in the context of both constitutive splicing and regulated alternative splicing.
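    ESEfinder-style scanning amounts to sliding a score matrix along a sequence and reporting windows that exceed a motif-specific threshold; the sketch below uses a made-up toy matrix and threshold, not the published SF2/ASF, SRp40, SRp55 or SC35 matrices.

      # Toy position-weight-matrix scan over a DNA sequence.
      import numpy as np

      BASES = "ACGT"

      def scan(sequence, pwm, threshold):
          width = pwm.shape[1]
          hits = []
          for i in range(len(sequence) - width + 1):
              window = sequence[i:i + width]
              score = sum(pwm[BASES.index(b), j] for j, b in enumerate(window))
              if score >= threshold:
                  hits.append((i, window, round(score, 2)))
          return hits

      toy_pwm = np.array([   # rows A, C, G, T; columns are motif positions (toy values)
          [ 1.2, -0.5,  0.8,  0.1],
          [-0.7,  1.0, -0.3, -0.2],
          [ 0.3,  0.2,  1.1,  0.9],
          [-1.0, -0.8, -0.9, -0.6],
      ])
      print(scan("CAGGAAGAAC", toy_pwm, threshold=2.0))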

  15. Projectile Two-dimensional Coordinate Measurement Method Based on Optical Fiber Coding Fire and its Coordinate Distribution Probability

    NASA Astrophysics Data System (ADS)

    Li, Hanshan; Lei, Zhiyong

    2013-01-01

    To improve projectile coordinate measurement precision in fire measurement systems, this paper introduces the optical fiber coding fire measurement method and its principle, sets up the corresponding measurement model, and analyzes coordinate errors using the differential method. To study the distribution of projectile coordinate positions, statistical hypothesis testing is used to analyze their distribution law, and the firing dispersion and the probability of a projectile hitting the object center are examined. The results show that, at the given significance level, an exponential distribution is a relatively reasonable description of the projectile position distribution. Experiments and calculations show that the optical fiber coding fire measurement method is sound and feasible and can yield accurate projectile coordinate positions.

  16. Judging the Space/Time Case in Parliamentary Debate.

    ERIC Educational Resources Information Center

    Williams, David E.; And Others

    1996-01-01

    Discusses criteria for judging space/time cases in parliamentary debate and comments on the controversy with regard to issues of appropriateness and adjudication. Presents four short responses to the points raised in this article. (PA)

  17. Biscalar and Bivector Green's Functions in de Sitter Space Time

    PubMed Central

    Narlikar, J. V.

    1970-01-01

    Biscalar and bivector Green's functions of wave equations are calculated explicitly in de Sitter space time. The calculation is performed by considering the electromagnetic field generated by the spontaneous creation of an electric charge. PMID:16591816

  18. Quadratic bulk viscosity and the topology of space time.

    NASA Astrophysics Data System (ADS)

    Wolf, C.

    1997-12-01

    By considering a homogeneous isotropic universe admitting quadratic bulk viscosity, the author shows that if the bulk viscosity coefficient is large, the effective topology of space-time attains an anti-intuitive interpretation in the sense that a positive-curvature space-time is ever-expanding. This is true for all cosmologies studied except in the case of small quadratic bulk viscosity (3γ+1-kβ ≥ 0, 3γ+1 > 0).

  19. Ricci collineation vectors in fluid space-times

    SciTech Connect

    Tsamparlis, M.; Mason, D.P.

    1990-07-01

    The properties of fluid space-times that admit a Ricci collineation vector (RCV) parallel to the fluid unit four-velocity vector u^a are briefly reviewed. These properties are expressed in terms of the kinematic quantities of the timelike congruence generated by u^a. The cubic equation derived by Oliver and Davis (Ann. Inst. Henri Poincare 30, 339 (1979)) for the equation of state p = p(μ) of a perfect fluid space-time that admits an RCV, which does not degenerate to a Killing vector, is solved for physically realistic fluids. Necessary and sufficient conditions for a fluid space-time to admit a spacelike RCV parallel to a unit vector n^a orthogonal to u^a are derived in terms of the expansion, shear, and rotation of the spacelike congruence generated by n^a. Perfect fluid space-times are studied in detail and analogues of the results for timelike RCVs parallel to u^a are obtained. Properties of imperfect fluid space-times for which the energy flux vector q^a vanishes and n^a is a spacelike eigenvector of the anisotropic stress tensor π_ab are derived. Fluid space-times with anisotropic pressure are discussed as a special case of imperfect fluid space-times for which n^a is an eigenvector of π_ab.

  20. 25 CFR 18.104 - May a tribe include provisions in its tribal probate code regarding the distribution and descent...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... trust personalty? No. All trust personalty will be distributed in accordance with the American Indian... 25 Indians 1 2010-04-01 2010-04-01 false May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? 18.104 Section 18.104 Indians BUREAU OF...

  1. 25 CFR 18.104 - May a tribe include provisions in its tribal probate code regarding the distribution and descent...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... trust personalty? No. All trust personalty will be distributed in accordance with the American Indian... 25 Indians 1 2011-04-01 2011-04-01 false May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? 18.104 Section 18.104 Indians BUREAU OF...

  2. Unraveling the distributed neural code of facial identity through spatiotemporal pattern analysis.

    PubMed

    Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2011-06-14

    Face individuation is one of the most impressive achievements of our visual system, and yet uncovering the neural mechanisms subserving this feat appears to elude traditional approaches to functional brain data analysis. The present study investigates the neural code of facial identity perception with the aim of ascertaining its distributed nature and informational basis. To this end, we use a sequence of multivariate pattern analyses applied to functional magnetic resonance imaging (fMRI) data. First, we combine information-based brain mapping and dynamic discrimination analysis to locate spatiotemporal patterns that support face classification at the individual level. This analysis reveals a network of fusiform and anterior temporal areas that carry information about facial identity and provides evidence that the fusiform face area responds with distinct patterns of activation to different face identities. Second, we assess the information structure of the network using recursive feature elimination. We find that diagnostic information is distributed evenly among anterior regions of the mapped network and that a right anterior region of the fusiform gyrus plays a central role within the information network mediating face individuation. These findings serve to map out and characterize a cortical system responsible for individuation. More generally, in the context of functionally defined networks, they provide an account of distributed processing grounded in information-based architectures.

  3. Effect of error distribution in channel coding failure on MPEG wireless transmission

    NASA Astrophysics Data System (ADS)

    Robert, P. M.; Darwish, Ahmed M.; Reed, Jeffrey H.

    1998-12-01

    This paper examines the interaction between digital video and channel coding in a wireless communication system. Digital video is a high-bandwidth, computationally intensive application. The recent allocation of large tracts of spectrum by the FCC has made possible the design and implementation of personal wireless digital video devices for several applications, from personal communications to surveillance. A simulation tool was developed to explore the video/channel coding relationship. This tool simulates a packet-based digital wireless transmission in various noise and interference environments. The basic communications system models the DAVIC (Digital Audio-Visual Council) layout for the LMDS (Local Multipoint Distribution Service) system and includes several error control algorithms and a packetizing algorithm that is MPEG-compliant. The Bit-Error-Rate (BER) is a basic metric used in digital communications system design. This work presents simulation results that prove that BER is not a sufficient metric to predict video quality based on channel parameters. Evidence will be presented to show that the relative positioning of bit errors (regardless of absolute positioning) and the relative occurrence of these bit error bursts are the main factors that must be observed in a physical layer to design a digital video wireless system.

  4. Proof-of-principle experiment of reference-frame-independent quantum key distribution with phase coding

    PubMed Central

    Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550

  5. Proof-of-principle experiment of reference-frame-independent quantum key distribution with phase coding.

    PubMed

    Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-09

    We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust.

  6. Proof-of-principle experiment of reference-frame-independent quantum key distribution with phase coding

    NASA Astrophysics Data System (ADS)

    Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust.

  7. Space-Time Correlations and Dynamic Coupling in Turbulent Flows

    NASA Astrophysics Data System (ADS)

    He, Guowei; Jin, Guodong; Yang, Yue

    2017-01-01

    Space-time correlation is a staple method for investigating the dynamic coupling of spatial and temporal scales of motion in turbulent flows. In this article, we review the space-time correlation models in both the Eulerian and Lagrangian frames of reference, which include the random sweeping and local straining models for isotropic and homogeneous turbulence, Taylor's frozen-flow model and the elliptic approximation model for turbulent shear flows, and the linear-wave propagation model and swept-wave model for compressible turbulence. We then focus on how space-time correlations are used to develop time-accurate turbulence models for the large-eddy simulation of turbulence-generated noise and particle-laden turbulence. We briefly discuss their applications to two-point closures for Kolmogorov's universal scaling of energy spectra and to the reconstruction of space-time energy spectra from a subset of spatial and temporal signals in experimental measurements. Finally, we summarize the current understanding of space-time correlations and conclude with future issues for the field.

  8. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. The neutron fluence rate distribution versus energy is also calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with our experimental work, provides a first experience for Iranian researchers in establishing confidence in the code for further research. In our theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, as measured by the NAA method and as computed with the MCNP code, are compared.

  9. A potential foundation for emergent space-time

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.; Bahreyni, Newshaw

    2014-11-01

    We present a novel derivation of both the Minkowski metric and Lorentz transformations from the consistent quantification of a causally ordered set of events with respect to an embedded observer. Unlike past derivations, which have relied on assumptions such as the existence of a 4-dimensional manifold, symmetries of space-time, or the constant speed of light, we demonstrate that these now familiar mathematics can be derived as the unique means to consistently quantify a network of events. This suggests that space-time need not be physical, but instead the mathematics of space and time emerges as the unique way in which an observer can consistently quantify events and their relationships to one another. The result is a potential foundation for emergent space-time.

  10. FLRW cosmology in Weyl-integrable space-time

    SciTech Connect

    Gannouji, Radouane; Nandan, Hemwati; Dadhich, Naresh

    2011-11-01

    We investigate the Weyl space-time extension of general relativity (GR) for studying the FLRW cosmology through focusing and defocusing of the geodesic congruences. We have derived the equations of evolution for expansion, shear and rotation in the Weyl space-time. In particular, we consider the Starobinsky modification, f(R) = R + βR² − 2Λ, of gravity in the Einstein-Palatini formalism, which turns out to reduce to the Weyl integrable space-time (WIST) with the Weyl vector being a gradient. The modified Raychaudhuri equation takes the form of the Hill-type equation which is then analysed to study the formation of the caustics. In this model, it is possible to have a Big Bang singularity free cyclic Universe but unfortunately the periodicity turns out to be extremely short.

  11. Space-Time Diffeomorphisms in Noncommutative Gauge Theories

    NASA Astrophysics Data System (ADS)

    Rosenbaum, Marcos; Vergara, J. David; Juarez, L. Román

    2008-07-01

    In previous work [Rosenbaum M. et al., J. Phys. A: Math. Theor. 40 (2007), 10367-10382] we have shown how for canonical parametrized field theories, where space-time is placed on the same footing as the other fields in the theory, the representation of space-time diffeomorphisms provides a very convenient scheme for analyzing the induced twisted deformation of these diffeomorphisms, as a result of the space-time noncommutativity. However, for gauge field theories (and of course also for canonical geometrodynamics) where the Poisson brackets of the constraints explicitly depend on the embedding variables, this Poisson algebra cannot be connected directly with a representation of the complete Lie algebra of space-time diffeomorphisms, because not all the field variables turn out to have a dynamical character [Isham C.J., Kuchar K.V., Ann. Physics 164 (1985), 288-315, 316-333]. Nonetheless, such an homomorphic mapping can be recuperated by first modifying the original action and then adding additional constraints in the formalism in order to retrieve the original theory, as shown by Kuchar and Stone for the case of the parametrized Maxwell field in [Kuchar K.V., Stone S.L., Classical Quantum Gravity 4 (1987), 319-328]. Making use of a combination of all of these ideas, we are therefore able to apply our canonical reparametrization approach in order to derive the deformed Lie algebra of the noncommutative space-time diffeomorphisms as well as to consider how gauge transformations act on the twisted algebras of gauge and particle fields. Thus, hopefully, adding clarification on some outstanding issues in the literature concerning the symmetries for gauge theories in noncommutative space-times.

  12. Modal and temporal logics for abstract space-time structures

    NASA Astrophysics Data System (ADS)

    Uckelman, Sara L.; Uckelman, Joel

    In the fourth century BC, the Greek philosopher Diodoros Chronos gave a temporal definition of necessity. Because it connects modality and temporality, this definition is of interest to philosophers working within branching time or branching space-time models. This definition of necessity can be formalized and treated within a logical framework. We give a survey of the several known modal and temporal logics of abstract space-time structures based on the real numbers and the integers, considering three different accessibility relations between spatio-temporal points.

  13. Probing dense granular materials by space-time dependent perturbations.

    PubMed

    Kondic, L; Dybenko, O M; Behringer, R P

    2009-04-01

    The manner in which signals propagate through dense granular systems in both space and time is not well understood. In order to probe this process, we carry out discrete element simulations of the system response to excitations where we control the driving frequency and wavelength independently. Fourier analysis shows that properties of the signal depend strongly on the space-time scales of the perturbation. The features of the response provide a test bed for models that predict statistical and continuum space-time properties. We illustrate this connection between microscale physics and macroscale behavior by comparing the system response to a simple elastic model with damping.

  14. Constructing infrared finite propagators in inflating space-time

    SciTech Connect

    Rajaraman, Arvind; Kumar, Jason; Leblond, Louis

    2010-07-15

    The usual (Bunch-Davies) Feynman propagator of a massless field is not well defined in an expanding universe due to the presence of infrared divergences. We propose a new propagator which yields IR finite answers to any correlation function. The key point is that in a de Sitter space-time there is an ambiguity in the zero mode of the propagator. This ambiguity can be used to cancel the apparent divergences which arise in some loop calculations in eternally (or semieternally) inflating space-time. We refer to this process as zero-mode modification. The residual ambiguity is fixed by observational measurement.

  15. Confinement from gluodynamics in curved space-time

    SciTech Connect

    Gaete, Patricio; Spallucci, Euro

    2008-01-15

    We determine the static potential for a heavy quark-antiquark pair from gluodynamics in curved space-time. Our calculation is done within the framework of the gauge-invariant, path-dependent, variables formalism. The potential energy is the sum of a Yukawa and a linear potential, leading to the confinement of static charges.

  16. Hermitian realizations of κ-Minkowski space-time

    NASA Astrophysics Data System (ADS)

    Kovačević, Domagoj; Meljanac, Stjepan; Samsarov, Andjelo; Škoda, Zoran

    2015-01-01

    General realizations, star products and plane waves for κ-Minkowski space-time are considered. A systematic construction of general Hermitian realizations is presented, with special emphasis on noncommutative plane waves and the Hermitian star product. A few examples are elaborated, and possible physical applications are mentioned.

  17. Fermions in a Kerr-Newman space-time

    SciTech Connect

    Dariescu, M.A.; Dariescu, C.; Gottlieb, I.

    1995-10-01

    The aim of this paper is to put the U(1)-gauge theory of fermions in the space-time described by a Kerr-Newman metric. The field equations have rather complicated expressions, essentially different from those in Minkowski space-time.

  18. Joint space-time geostatistical model for air quality surveillance

    NASA Astrophysics Data System (ADS)

    Russo, A.; Soares, A.; Pereira, M. J.

    2009-04-01

    Air pollution and people's generalized concern about air quality are, nowadays, considered to be a global problem. Although the introduction of rigid air pollution regulations has reduced pollution from industry and power stations, the growing number of cars on the road poses a new pollution problem. Considering the characteristics of the atmospheric circulation and also the residence times of certain pollutants in the atmosphere, a generalized and growing interest in air quality issues led to research intensification and publication of several articles with quite different levels of scientific depth. Like most natural phenomena, air quality can be seen as a space-time process, where space-time relationships usually have quite different characteristics and levels of uncertainty. As a result, the simultaneous integration of space and time is not an easy task to perform. This problem is overcome by a variety of methodologies. The use of stochastic models and neural networks to characterize space-time dispersion of air quality is becoming a common practice. The main objective of this work is to produce an air quality model which allows forecasting critical concentration episodes of a certain pollutant by means of a hybrid approach, based on the combined use of neural network models and stochastic simulations. A stochastic simulation of the spatial component with a space-time trend model is proposed to characterize critical situations, taking into account data from the past and a space-time trend from the recent past. To identify near-future critical episodes, predicted values from neural networks are used at each monitoring station. In this paper, we describe the design of a hybrid forecasting tool for ambient NO2 concentrations in Lisbon, Portugal.

  19. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement based performance of these parallelized benchmarks from four perspectives: efficacy of parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized version of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.

  20. On distributed memory MPI-based parallelization of SPH codes in massive HPC context

    NASA Astrophysics Data System (ADS)

    Oger, G.; Le Touzé, D.; Guibert, D.; de Leffe, M.; Biddiscombe, J.; Soumagne, J.; Piccinali, J.-G.

    2016-03-01

    Most particle methods share the problem of high computational cost, and in order to satisfy the demands of solvers, currently available hardware technologies must be fully exploited. Two complementary technologies are now accessible. On the one hand, CPUs can be structured into a multi-node framework, allowing massive data exchanges through a high-speed network; in this case, each node usually comprises several cores available to perform multithreaded computations. On the other hand, GPUs, derived from graphics computing technologies, are able to perform highly multi-threaded calculations with hundreds of independent threads connected through a common shared memory. This paper is primarily dedicated to the distributed-memory parallelization of particle methods, targeting several thousands of CPU cores. The experience gained clearly shows that parallelizing a particle-based code on moderate numbers of cores can easily lead to acceptable scalability, whilst a scalable speedup on thousands of cores is much more difficult to obtain. The discussion revolves around speeding up particle methods as a whole, in a massive HPC context, by making use of the MPI library. We focus on one particular particle method, Smoothed Particle Hydrodynamics (SPH), one of the most widespread today in the literature as well as in engineering.
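    A minimal illustration of the distributed-memory pattern involved (not the authors' SPH code) is a one-dimensional domain decomposition with exchange of halo particles between neighbouring MPI ranks, sketched here with mpi4py; run with, e.g., mpiexec -n 4 python sph_halo_sketch.py.

      # Each rank owns a slab of the unit interval and exchanges boundary
      # ("halo") particles with its neighbours; periodic in this toy example.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      h = 0.02                                  # smoothing length -> halo width
      lo, hi = rank / size, (rank + 1) / size   # this rank's slab
      x = np.random.uniform(lo, hi, 1000)       # local particle positions

      left, right = (rank - 1) % size, (rank + 1) % size
      # Send my left-edge particles to the left neighbour and receive the right
      # neighbour's left-edge particles (and symmetrically for the right edge).
      from_right = comm.sendrecv(x[x < lo + h], dest=left, source=right)
      from_left = comm.sendrecv(x[x > hi - h], dest=right, source=left)

      halo = np.concatenate([from_left, from_right])
      print(f"rank {rank}: {x.size} local particles, {halo.size} halo particles")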

  1. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was a minuscule 10^-44. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
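    The kind of comparison the abstract reports can be sketched as below; the counts are simulated for illustration (not the protein data), and the geometric density, the limiting form of the negative binomial highlighted in the abstract, is fitted by simple moment matching.

      # Compare observed per-site substitution counts against Poisson and
      # geometric (negative binomial with n=1) expectations.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      counts = rng.negative_binomial(n=1, p=0.4, size=500)   # over-dispersed toy data

      mean = counts.mean()
      ks = np.arange(counts.max() + 1)
      observed = np.bincount(counts, minlength=ks.size)

      poisson_expected = stats.poisson.pmf(ks, mean) * counts.size
      p_geom = 1.0 / (1.0 + mean)                            # moment-matched geometric
      geometric_expected = stats.nbinom.pmf(ks, 1, p_geom) * counts.size

      for k in ks[:6]:
          print(f"k={k}: observed={int(observed[k]):4d}  Poisson={poisson_expected[k]:7.1f}  "
                f"geometric={geometric_expected[k]:7.1f}")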

  2. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  3. Trajectory Data Analyses for Pedestrian Space-time Activity Study

    PubMed Central

    Qi, Feng; Du, Fei

    2013-01-01

    It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty and thus the reason for paucity of studies of infectious disease transmission at the micro scale arise from the lack of detailed individual mobility data. Previously in transportation and tourism research detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants and collaboration from the participants greatly affects the quality of data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, is not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an

  4. Trajectory data analyses for pedestrian space-time activity study.

    PubMed

    Qi, Feng; Du, Fei

    2013-02-25

    It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission(1-3). An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty and thus the reason for paucity of studies of infectious disease transmission at the micro scale arise from the lack of detailed individual mobility data. Previously in transportation and tourism research detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants and collaboration from the participants greatly affects the quality of data(4). Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, is not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an

  5. Quantum gravity effects in Myers-Perry space-times

    NASA Astrophysics Data System (ADS)

    Litim, Daniel F.; Nikolakopoulos, Konstantinos

    2014-04-01

    We study quantum gravity effects for Myers-Perry black holes assuming that the leading contributions arise from the renormalization group evolution of Newton's coupling. Provided that gravity weakens following the asymptotic safety conjecture, we find that quantum effects lift a degeneracy of higher-dimensional black holes, and dominate over kinematical ones induced by rotation, particularly for small black hole mass, large angular momentum, and higher space-time dimensionality. Quantum-corrected space-times display inner and outer horizons, and show the existence of a black hole of smallest mass in any dimension. Ultra-spinning solutions no longer persist. Thermodynamic properties including temperature, specific heat, the Komar integrals, and aspects of black hole mechanics are studied as well. Observing a softening of the ring singularity, we also discuss the validity of classical energy conditions.

  6. Space-time evolution and CMB anisotropies from quantum gravity

    SciTech Connect

    Hamada, Ken-ji; Horata, Shinichi; Yukawa, Tetsuyuki

    2006-12-15

    We propose an evolutional scenario of the universe which starts from quantum states with conformal invariance, passing through the inflationary era, and then makes a transition to the conventional Einstein space-time. The space-time dynamics is derived from the renormalizable higher-derivative quantum gravity on the basis of a conformal gravity in four dimensions. Based on the linear perturbation theory in the inflationary background, we simulate evolutions of gravitational scalar, vector, and tensor modes, and evaluate the spectra at the transition point located at the beginning of the big bang. The obtained spectra cover the range of the primordial spectra for explaining the anisotropies in the homogeneous cosmic microwave background.

  7. k-Inflation in noncommutative space-time

    NASA Astrophysics Data System (ADS)

    Feng, Chao-Jun; Li, Xin-Zhou; Liu, Dao-Jun

    2015-02-01

    The power spectra of the scalar and tensor perturbations in the noncommutative k-inflation model are calculated in this paper. In this model, all the modes are created when the stringy space-time uncertainty relation is satisfied, and they are generated inside the sound/Hubble horizon during inflation for the scalar/tensor perturbations. It turns out that a linear term describing the noncommutative space-time effect contributes to the power spectra of the scalar and tensor perturbations. Confronting the general noncommutative k-inflation model with the latest results from Planck and BICEP2, and taking and as free parameters, we find that it is well consistent with observations. However, for the two specific models, i.e. the tachyon and DBI inflation models, it is found that the DBI model is not favored, while the tachyon model lies inside the contour, when the e-folding number is assumed to be around.

  8. Effect of Heat on Space-Time Correlations in Jets

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2006-01-01

    Measurements of space-time correlations of velocity, acquired in jets from acoustic Mach number 0.5 to 1.5 and static temperature ratios up to 2.7 are presented and analyzed. Previous reports of these experiments concentrated on the experimental technique and on validating the data. In the present paper the dataset is analyzed to address the question of how space-time correlations of velocity are different in cold and hot jets. The analysis shows that turbulent kinetic energy intensities, lengthscales, and timescales are impacted by the addition of heat, but by relatively small amounts. This contradicts the models and assumptions of recent aeroacoustic theory trying to predict the noise of hot jets. Once the change in jet potential core length has been factored out, most one- and two-point statistics collapse for all hot and cold jets.

  9. Micro-Macro Duality and Space-Time Emergence

    SciTech Connect

    Ojima, Izumi

    2011-03-28

    The microscopic origin of space-time geometry is explained on the basis of an emergence process associated with the condensation of an infinite number of microscopic quanta responsible for symmetry breakdown, which implements the basic essence of 'Quantum-Classical Correspondence' and of the forcing method in physical and mathematical contexts, respectively. From this viewpoint, the space-time dependence of physical quantities arises from the 'logical extension' that changes 'constant objects' into 'variable objects' by tagging the order parameters associated with the condensation onto the 'constant objects'; the logical direction here, from a value y to a domain variable x (to materialize the basic mechanism behind the Gel'fand isomorphism), is just opposite to that common in the usual definition of a function f : x -> f(x) from its domain variable x to a value y = f(x).

  10. Measuring Space-Time Geometry over the Ages

    SciTech Connect

    Stebbins, Albert; /Fermilab

    2012-05-01

    Theorists are often told to express things in the 'observational plane'. One can do this for space-time geometry, considering 'visual' observations of matter in our universe by a single observer over time, with no assumptions about isometries, initial conditions, nor any particular relation between matter and geometry, such as Einstein's equations. Using observables as coordinates naturally leads to a parametrization of space-time geometry in terms of other observables, which in turn prescribes an observational program to measure the geometry. Under the assumption of vorticity-free matter flow we describe this observational program, which includes measurements of gravitational lensing, proper motion, and redshift drift. Only 15% of the curvature information can be extracted without long time baseline observations, and this increases to 35% with observations that will take decades. The rest would likely require centuries of observations. The formalism developed is exact, non-perturbative, and more general than the usual cosmological analysis.

  11. Space time ETAS models and an improved extension

    NASA Astrophysics Data System (ADS)

    Ogata, Yosihiko; Zhuang, Jiancang

    2006-02-01

    For sensitive detection of anomalous seismicity such as quiescence and activation in a given region, we need a suitable statistical reference model that represents a normal seismic activity in the region. The regional occurrence rate of the earthquakes is modeled as a function of previous activity, the specific form of which is based on empirical laws in time and space such as the modified Omori formula and the Utsu-Seki scaling law of aftershock area against magnitude, respectively. This manuscript summarizes the development of the epidemic type aftershock sequence (ETAS) model and proposes an extended version of the best fitted space-time model that was suggested in Ogata [Ogata, Y., 1998. Space-time point-process models for earthquake occurrences, Ann. Inst. Statist. Math., 50: 379-402.]. This model indicates significantly better fit to seismicity in various regions in and around Japan.
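
    For reference, a standard form of the space-time ETAS conditional intensity is sketched below; the extended model proposed in the paper modifies the spatial response term, which is not reproduced here:

      \lambda(t, x, y \mid H_t) \;=\; \mu(x, y) \;+\; \sum_{i:\, t_i < t} \frac{K\, e^{\alpha (M_i - M_c)}}{(t - t_i + c)^{p}}\; f(x - x_i,\, y - y_i;\, M_i),

    where \mu is the background rate, the temporal factor is the modified Omori law, the spatial response f has a scale that grows with the magnitude M_i in line with the Utsu-Seki scaling of aftershock area, and M_c is the magnitude threshold of the catalogue.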

  12. Convexity and the Euclidean Metric of Space-Time

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Nikolaos

    2017-02-01

    We address the question about the reasons why the "Wick-rotated", positive-definite, space-time metric obeys the Pythagorean theorem. An answer is proposed based on the convexity and smoothness properties of the functional spaces purporting to provide the kinematic framework of approaches to quantum gravity. We employ moduli of convexity and smoothness which are eventually extremized by Hilbert spaces. We point out the potential physical significance that functional analytical dualities play in this framework. Following the spirit of the variational principles employed in classical and quantum Physics, such Hilbert spaces dominate in a generalized functional integral approach. The metric of space-time is induced by the inner product of such Hilbert spaces.
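
    For context, the moduli referred to above have the standard definitions (quoted here for reference, not taken from the paper):

      \delta_X(\varepsilon) \;=\; \inf\Big\{\, 1 - \tfrac{1}{2}\,\|x + y\| \;:\; \|x\| = \|y\| = 1,\ \|x - y\| \ge \varepsilon \Big\},
      \qquad
      \rho_X(t) \;=\; \sup\Big\{\, \tfrac{1}{2}\big(\|x + t y\| + \|x - t y\|\big) - 1 \;:\; \|x\| = \|y\| = 1 \Big\},

    and Hilbert spaces are extremal for both: \delta_H(\varepsilon) = 1 - \sqrt{1 - \varepsilon^2/4} is the largest possible modulus of convexity and \rho_H(t) = \sqrt{1 + t^2} - 1 the smallest possible modulus of smoothness, which is the sense in which Hilbert spaces dominate in the argument above.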

  13. Corrected Hawking Temperature in Snyder's Quantized Space-time

    NASA Astrophysics Data System (ADS)

    Ma, Meng-Sen; Liu, Fang; Zhao, Ren

    2015-06-01

    In Snyder's quantized space-time, both a generalized uncertainty relation and noncommutativity are included. In this paper we analyze the possible form of the corrected Hawking temperature and derive it from both effects. It is shown that the corrected Hawking temperature has a form similar to that of the noncommutative-geometry-inspired Schwarzschild black hole, however with a requirement on the noncommutative parameter 𝜃 and the minimal length a.

  14. Causality in noncommutative two-sheeted space-times

    NASA Astrophysics Data System (ADS)

    Franco, Nicolas; Eckstein, Michał

    2015-10-01

    We investigate the causal structure of two-sheeted space-times using the tools of Lorentzian spectral triples. We show that the noncommutative geometry of these spaces allows for causal relations between the two sheets. The computation is given in detail when the sheet is a 2- or 4-dimensional globally hyperbolic spin manifold. The conclusions are then generalised to a point-dependent distance between the two sheets resulting from the fluctuations of the Dirac operator.

  15. Uniqueness of Kerr space-time near null infinity

    SciTech Connect

    Wu Xiaoning; Bai Shan

    2008-12-15

    We reexpress the Kerr metric in standard Bondi-Sachs coordinates near null infinity I^+. Using the uniqueness result of the characteristic initial value problem, we prove the Kerr metric is the only asymptotically flat, stationary, axially symmetric, type-D solution of the vacuum Einstein equation. The Taylor series of Kerr space-time is expressed in terms of Bondi-Sachs coordinates, and the Newman-Penrose constants have been calculated.

  16. Detecting space-time cancer clusters using residential histories

    NASA Astrophysics Data System (ADS)

    Jacquez, Geoffrey M.; Meliker, Jaymie R.

    2007-04-01

    Methods for analyzing geographic clusters of disease typically ignore the space-time variability inherent in epidemiologic datasets, do not adequately account for known risk factors (e.g., smoking and education) or covariates (e.g., age, gender, and race), and do not permit investigation of the latency window between exposure and disease. Our research group recently developed Q-statistics for evaluating space-time clustering in cancer case-control studies with residential histories. This technique relies on time-dependent nearest neighbor relationships to examine clustering at any moment in the life-course of the residential histories of cases relative to that of controls. In addition, in place of the widely used null hypothesis of spatial randomness, each individual's probability of being a case is instead based on his/her risk factors and covariates. Case-control clusters will be presented using residential histories of 220 bladder cancer cases and 440 controls in Michigan. In preliminary analyses of this dataset, smoking, age, gender, race and education were sufficient to explain the majority of the clustering of residential histories of the cases. Clusters of unexplained risk, however, were identified surrounding the business address histories of 10 industries that emit known or suspected bladder cancer carcinogens. The clustering of 5 of these industries began in the 1970's and persisted through the 1990's. This systematic approach for evaluating space-time clustering has the potential to generate novel hypotheses about environmental risk factors. These methods may be extended to detect differences in space-time patterns of any two groups of people, making them valuable for security intelligence and surveillance operations.
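
    A minimal sketch of the nearest-neighbour counting idea behind Q-statistics, for a single time slice of the residential histories (illustrative only: the function and variable names are not the authors', and the published method additionally uses covariate-based case probabilities and time-dependent neighbour sets):

      import numpy as np

      def local_q_statistic(coords_t, is_case, k=5):
          # Illustrative sketch, not the authors' implementation.
          # coords_t: (n, 2) residential locations of all subjects at time t
          # is_case:  length-n 0/1 array (1 = case, 0 = control)
          coords_t = np.asarray(coords_t, dtype=float)
          is_case = np.asarray(is_case)
          n = len(is_case)
          q = np.zeros(n)
          for i in range(n):
              d = np.linalg.norm(coords_t - coords_t[i], axis=1)
              d[i] = np.inf                  # exclude the subject itself
              nearest = np.argsort(d)[:k]    # k nearest neighbours at time t
              q[i] = is_case[i] * is_case[nearest].sum()
          return q                           # large values flag local case clustering at time t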

  17. Class of Einstein-Maxwell-dilaton-axion space-times

    SciTech Connect

    Matos, Tonatiuh; Miranda, Galaxia; Sanchez-Sanchez, Ruben; Wiederhold, Petra

    2009-06-15

    We use the harmonic maps ansatz to find exact solutions of the Einstein-Maxwell-dilaton-axion (EMDA) equations. The solutions are harmonic maps invariant to the symplectic real group in four dimensions Sp(4,R) ≈ O(5). We find solutions of the EMDA field equations for the one- and two-dimensional subspaces of the symplectic group. Specially, for illustration of the method, we find space-times that generalize the Schwarzschild solution with dilaton, axion, and electromagnetic fields.

  18. Experimental constraints on the exotic shearing of space-time

    NASA Astrophysics Data System (ADS)

    Richardson, Jonathan William

    The Holometer program is a search for first experimental evidence that space-time has quantum structure. The detector consists of a pair of co-located 40-m power-recycled interferometers whose outputs are read out synchronously at 50 MHz, achieving sensitivity to spatially-correlated fluctuations in differential position on time scales shorter than the light-crossing time of the instruments. Unlike gravitational wave interferometers, which time-resolve transient geometrical disturbances in the spatial background, the Holometer is searching for a universal, stationary quantization noise of the background itself. This dissertation presents the final results of the Holometer Phase I search, an experiment configured for sensitivity to exotic coherent shearing fluctuations of space-time. Measurements of high-frequency cross-spectra of the interferometer signals obtain sensitivity to spatially-correlated effects far exceeding any previous measurement, in a broad frequency band extending to 7.6 MHz, twice the inverse light-crossing time of the apparatus. This measurement is the statistical aggregation of 2.1 petabytes of 2-byte differential position measurements obtained over a month-long exposure time. At 3-sigma significance, it places an upper limit on the coherence scale of spatial shear two orders of magnitude below the Planck length. The result demonstrates the viability of this novel spatially-correlated interferometric detection technique to reach unprecedented sensitivity to coherent deviations of space-time from classicality, opening the door for direct experimental tests of theories of relational quantum gravity.
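
    The cross-spectral aggregation can be illustrated with a short sketch (a Welch-style segment average with a Hann window is assumed; this is not the Holometer's actual acquisition pipeline):

      import numpy as np

      def averaged_cross_spectrum(x1, x2, fs, nfft=4096):
          # Illustrative sketch, not the experiment's real-time code.
          # Cross-spectral density of two synchronously sampled interferometer
          # streams; averaging many segments suppresses uncorrelated noise,
          # leaving any spatially correlated component.
          window = np.hanning(nfft)
          norm = fs * (window ** 2).sum()
          nseg = len(x1) // nfft
          csd = np.zeros(nfft // 2 + 1, dtype=complex)
          for k in range(nseg):
              seg1 = np.fft.rfft(window * x1[k * nfft:(k + 1) * nfft])
              seg2 = np.fft.rfft(window * x2[k * nfft:(k + 1) * nfft])
              csd += np.conj(seg1) * seg2 / norm
          return csd / nseg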

  19. Experimental Constraints of the Exotic Shearing of Space-Time

    SciTech Connect

    Richardson, Jonathan William

    2016-08-01

    The Holometer program is a search for first experimental evidence that space-time has quantum structure. The detector consists of a pair of co-located 40-m power-recycled interferometers whose outputs are read out synchronously at 50 MHz, achieving sensitivity to spatially-correlated fluctuations in differential position on time scales shorter than the light-crossing time of the instruments. Unlike gravitational wave interferometers, which time-resolve transient geometrical disturbances in the spatial background, the Holometer is searching for a universal, stationary quantization noise of the background itself. This dissertation presents the final results of the Holometer Phase I search, an experiment configured for sensitivity to exotic coherent shearing fluctuations of space-time. Measurements of high-frequency cross-spectra of the interferometer signals obtain sensitivity to spatially-correlated effects far exceeding any previous measurement, in a broad frequency band extending to 7.6 MHz, twice the inverse light-crossing time of the apparatus. This measurement is the statistical aggregation of 2.1 petabytes of 2-byte differential position measurements obtained over a month-long exposure time. At 3-sigma significance, it places an upper limit on the coherence scale of spatial shear two orders of magnitude below the Planck length. The result demonstrates the viability of this novel spatially-correlated interferometric detection technique to reach unprecedented sensitivity to coherent deviations of space-time from classicality, opening the door for direct experimental tests of theories of relational quantum gravity.

  20. Relativistic helicity and link in Minkowski space-time

    SciTech Connect

    Yoshida, Z.; Kawazura, Y.; Yokoyama, T.

    2014-04-15

    A relativistic helicity has been formulated in four-dimensional Minkowski space-time. Whereas the relativistic distortion of space-time violates the conservation of the conventional helicity, the newly defined relativistic helicity is conserved in a barotropic fluid or plasma, dictating a fundamental topological constraint. The relation between the helicity and the vortex-line topology has been delineated by analyzing the linking number of vortex filaments, which are singular differential forms representing the pure states of a Banach algebra. Although the dimension of space-time is four, vortex filaments link, because vorticities are primarily 2-forms and the corresponding 2-chains link in four dimensions; the relativistic helicity measures the linking number of vortex filaments that are proper-time cross-sections of the vorticity 2-chains. A thermodynamic force yields an additional term in the vorticity, by which the vortex filaments on a reference-time plane are no longer pure states. However, the vortex filaments on a proper-time plane remain pure states if the thermodynamic force is exact (barotropic); thus, the linking number of vortex filaments is conserved.

  1. A MAPLE Package for Energy-Momentum Tensor Assessment in Curved Space-Time

    SciTech Connect

    Murariu, Gabriel; Praisler, Mirela

    2010-01-21

    One of the most interesting problems that has remained unsolved since the birth of the General Theory of Relativity (GR) is energy-momentum localization. All our considerations are within the Lagrangian formalism of field theory. The concept of the energy-momentum tensor for gravitational interactions has a long history, and there have been various attempts to find a generally accepted expression. This paper is dedicated to the investigation of the energy-momentum problem in the theory of General Relativity. We use the Einstein [1], Landau-Lifshitz [2], Bergmann-Thomson [3] and Møller [4] prescriptions to evaluate the energy-momentum distribution. In order to cover the huge volume of computation, and with a general approach to different space-time configurations in mind, a MAPLE application for studying the energy-momentum tensor was built. In the second part of the paper, comparative results are presented for two space-time configurations.

  2. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2011-09-30

    channel interference mitigation for underwater acoustic MIMO-OFDM. 3) Turbo equalization for OFDM-modulated physical-layer network coding. 4) Blind CFO ... Localization and tracking of underwater physical systems. 7) NAMS: A networked acoustic modem system for underwater applications. 8) OFDM receiver design in ... 3) Turbo Equalization for OFDM Modulated Physical Layer Network Coding: We have investigated a practical orthogonal frequency division multiplexing

  3. MØLLER Energy-Momentum Prescription for a Locally Rotationally Symmetric Space-Time

    NASA Astrophysics Data System (ADS)

    Aydogdu, Oktay

    The energy distribution in the Locally Rotationally Symmetric (LRS) Bianchi type II space-time is obtained by considering the Møller energy-momentum definition in both Einstein's theory of general relativity and teleparallel theory of relativity. The energy distribution which includes both the matter and gravitational field is found to be zero in both of these different gravitation theories. This result agrees with previous works of Cooperstock and Israelit, Rosen, Johri et al., Banerjee and Sen, Vargas, and Aydogdu and Salti. Our result — the total energy of the universe is zero — supports the view points of Albrow and Tryon.

  4. The Joint Space-Time Statistics Of Macroweather Precipitation, Space-Time Statistical Factorization And Macroweather Models

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; de Lima, I. P.

    2015-12-01

    Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents: implying that fluctuations tend to cancel each other out, that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF), first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations. We test factorization and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space-time.
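
    Schematically, the space-time statistical factorization being tested can be written as follows (our paraphrase of the property, not a formula quoted from the abstract):

      \langle |\Delta P(\Delta x, \Delta t)|^{q} \rangle \;\approx\; F_{q}(\Delta x)\; G_{q}(\Delta t),

    i.e. the joint space-time statistical moments of precipitation fluctuations separate into a purely spatial factor and a purely temporal factor; normalizing by the local anomaly standard deviation is what makes the temporal statistics of different climate zones collapse onto one another.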

  5. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    SciTech Connect

    Beutler, D.E.; Halbleib, J.A. ); Knott, D.P. )

    1989-12-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude.

  6. BOOK REVIEW Exact Space-Times in Einstein's General Relativity Exact Space-Times in Einstein's General Relativity

    NASA Astrophysics Data System (ADS)

    Lake, Kayll

    2010-12-01

    The title immediately brings to mind a standard reference of almost the same title [1]. The authors are quick to point out the relationship between these two works: they are complementary. The purpose of this work is to explain what is known about a selection of exact solutions. As the authors state, it is often much easier to find a new solution of Einstein's equations than it is to understand it. Even at first glance it is very clear that great effort went into the production of this reference. The book is replete with beautifully detailed diagrams that reflect deep geometric intuition. In many parts of the text there are detailed calculations that are not readily available elsewhere. The book begins with a review of basic tools that allows the authors to set the notation. Then follows a discussion of Minkowski space with an emphasis on the conformal structure and applications such as simple cosmic strings. The next two chapters give an in-depth review of de Sitter space and then anti-de Sitter space. Both chapters contain a remarkable collection of useful diagrams. The standard model in cosmology these days is the ΛCDM model and whereas the chapter on the Friedmann-Lemaître-Robertson-Walker space-times contains much useful information, I found the discussion of the currently popular a representation rather too brief. After a brief but interesting excursion into electrovacuum, the authors consider the Schwarzschild space-time. This chapter does mention the Swiss cheese model but the discussion is too brief and certainly dated. Space-times related to Schwarzschild are covered in some detail and include not only the addition of charge and the cosmological constant but also the addition of radiation (the Vaidya solution). Just prior to a discussion of the Kerr space-time, static axially symmetric space-times are reviewed. Here one can find a very interesting discussion of the Curzon-Chazy space-time. The chapter on rotating black holes is rather brief and, for

  7. Is there a space-time continuum in olfaction?

    PubMed

    Leon, Michael; Johnson, Brett A

    2009-07-01

    The coding of olfactory stimuli across a wide range of organisms may rely on fundamentally similar mechanisms in which a complement of specific odorant receptors on olfactory sensory neurons respond differentially to airborne chemicals to initiate the process by which specific odors are perceived. The question that we address in this review is the role of specific neurons in mediating this sensory system--an identity code--relative to the role that temporally specific responses across many neurons play in producing an olfactory perception--a temporal code. While information coded in specific neurons may be converted into a temporal code, it is also possible that temporal codes exist in the absence of response specificity for any particular neuron or subset of neurons. We review the data supporting these ideas, and we discuss the research perspectives that could help to reveal the mechanisms by which odorants become perceptions.

  8. Re-Examination of Globally Flat Space-Time

    PubMed Central

    Feldman, Michael R.

    2013-01-01

    In the following, we offer a novel approach to modeling the observed effects currently attributed to the theoretical concepts of “dark energy,” “dark matter,” and “dark flow.” Instead of assuming the existence of these theoretical concepts, we take an alternative route and choose to redefine what we consider to be inertial motion as well as what constitutes an inertial frame of reference in flat space-time. We adopt none of the features of our current cosmological models except for the requirement that special and general relativity be local approximations within our revised definition of inertial systems. Implicit in our ideas is the assumption that at “large enough” scales one can treat objects within these inertial systems as point-particles having an insignificant effect on the curvature of space-time. We then proceed under the assumption that time and space are fundamentally intertwined such that time- and spatial-translational invariance are not inherent symmetries of flat space-time (i.e., observable clock rates depend upon both relative velocity and spatial position within these inertial systems) and take the geodesics of this theory in the radial Rindler chart as the proper characterization of inertial motion. With this commitment, we are able to model solely with inertial motion the observed effects expected to be the result of “dark energy,” “dark matter,” and “dark flow.” In addition, we examine the potential observable implications of our theory in a gravitational system located within a confined region of an inertial reference frame, subsequently interpreting the Pioneer anomaly as support for our redefinition of inertial motion. As well, we extend our analysis into quantum mechanics by quantizing for a real scalar field and find a possible explanation for the asymmetry between matter and antimatter within the framework of these redefined inertial systems. PMID:24250790

  9. New Efficient Sparse Space Time Algorithms for Superparameterization on Mesoscales

    SciTech Connect

    Xing, Yulong; Majda, Andrew J.; Grabowski, Wojciech W.

    2009-12-01

    Superparameterization (SP) is a large-scale modeling system with explicit representation of small-scale and mesoscale processes provided by a cloud-resolving model (CRM) embedded in each column of a large-scale model. New efficient sparse space-time algorithms based on the original idea of SP are presented. The large-scale dynamics are unchanged, but the small-scale model is solved in a reduced spatially periodic domain to save the computation cost following a similar idea applied by one of the authors for aquaplanet simulations. In addition, the time interval of integration of the small-scale model is reduced systematically for the same purpose, which results in a different coupling mechanism between the small- and large-scale models. The new algorithms have been applied to a stringent two-dimensional test suite involving moist convection interacting with shear with regimes ranging from strong free and forced squall lines to dying scattered convection as the shear strength varies. The numerical results are compared with the CRM and original SP. It is shown here that for all of the regimes of propagation and dying scattered convection, the large-scale variables such as horizontal velocity and specific humidity are captured in a statistically accurate way (pattern correlations above 0.75) based on space-time reduction of the small-scale models by a factor of 1/3; thus, the new efficient algorithms for SP result in a gain of roughly a factor of 10 in efficiency while retaining a statistical accuracy on the large-scale variables. Even the models with 1/6 reduction in space-time with a gain of 36 in efficiency are able to distinguish between propagating squall lines and dying scattered convection with a pattern correlation above 0.6 for horizontal velocity and specific humidity. These encouraging results suggest the possibility of using these efficient new algorithms for limited-area mesoscale ensemble forecasting.
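
    The comparison statistic quoted above is a spatial pattern correlation; a minimal version is sketched below (the centered variant is assumed, which the abstract does not specify):

      import numpy as np

      def pattern_correlation(field_a, field_b):
          # Illustrative sketch; centered spatial pattern correlation between
          # two 2-D fields, e.g. horizontal velocity from the sparse-SP run
          # versus the CRM benchmark.
          a = field_a - field_a.mean()
          b = field_b - field_b.mean()
          return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))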

  10. Re-Examination of Globally Flat Space-Time

    NASA Astrophysics Data System (ADS)

    Feldman, Michael R.

    2013-11-01

    In the following, we offer a novel approach to modeling the observed effects currently attributed to the theoretical concepts of "dark energy," "dark matter," and "dark flow." Instead of assuming the existence of these theoretical concepts, we take an alternative route and choose to redefine what we consider to be inertial motion as well as what constitutes an inertial frame of reference in flat space-time. We adopt none of the features of our current cosmological models except for the requirement that special and general relativity be local approximations within our revised definition of inertial systems. Implicit in our ideas is the assumption that at "large enough" scales one can treat objects within these inertial systems as point-particles having an insignificant effect on the curvature of space-time. We then proceed under the assumption that time and space are fundamentally intertwined such that time- and spatial-translational invariance are not inherent symmetries of flat space-time (i.e., observable clock rates depend upon both relative velocity and spatial position within these inertial systems) and take the geodesics of this theory in the radial Rindler chart as the proper characterization of inertial motion. With this commitment, we are able to model solely with inertial motion the observed effects expected to be the result of "dark energy," "dark matter," and "dark flow." In addition, we examine the potential observable implications of our theory in a gravitational system located within a confined region of an inertial reference frame, subsequently interpreting the Pioneer anomaly as support for our redefinition of inertial motion. As well, we extend our analysis into quantum mechanics by quantizing for a real scalar field and find a possible explanation for the asymmetry between matter and antimatter within the framework of these redefined inertial systems.

  11. Re-examination of globally flat space-time.

    PubMed

    Feldman, Michael R

    2013-01-01

    In the following, we offer a novel approach to modeling the observed effects currently attributed to the theoretical concepts of "dark energy," "dark matter," and "dark flow." Instead of assuming the existence of these theoretical concepts, we take an alternative route and choose to redefine what we consider to be inertial motion as well as what constitutes an inertial frame of reference in flat space-time. We adopt none of the features of our current cosmological models except for the requirement that special and general relativity be local approximations within our revised definition of inertial systems. Implicit in our ideas is the assumption that at "large enough" scales one can treat objects within these inertial systems as point-particles having an insignificant effect on the curvature of space-time. We then proceed under the assumption that time and space are fundamentally intertwined such that time- and spatial-translational invariance are not inherent symmetries of flat space-time (i.e., observable clock rates depend upon both relative velocity and spatial position within these inertial systems) and take the geodesics of this theory in the radial Rindler chart as the proper characterization of inertial motion. With this commitment, we are able to model solely with inertial motion the observed effects expected to be the result of "dark energy," "dark matter," and "dark flow." In addition, we examine the potential observable implications of our theory in a gravitational system located within a confined region of an inertial reference frame, subsequently interpreting the Pioneer anomaly as support for our redefinition of inertial motion. As well, we extend our analysis into quantum mechanics by quantizing for a real scalar field and find a possible explanation for the asymmetry between matter and antimatter within the framework of these redefined inertial systems.

  12. Modeling of space-time focusing of localized nondiffracting pulses

    NASA Astrophysics Data System (ADS)

    Zamboni-Rached, Michel; Besieris, Ioannis M.

    2016-10-01

    In this paper we develop a method capable of modeling the space-time focusing of nondiffracting pulses. These pulses can possess arbitrary peak velocities and, in addition to being resistant to diffraction, can have their peak intensities and focusing positions chosen a priori. More specifically, we can choose multiple locations (spatial ranges) of space and time focalization; also, the pulse intensities can be chosen in advance. The pulsed wave solutions presented here can have very interesting applications in many different fields, such as free-space optical communications, remote sensing, medical apparatus, etc.

  13. Video stabilization using space-time video completion

    NASA Astrophysics Data System (ADS)

    Voronin, V.; Frantc, V.; Marchuk, V.; Shrayfel, I.; Gapon, N.; Agaian, S.

    2016-05-01

    This paper proposes a video stabilization method using space-time video completion for effective reconstruction of static and dynamic textures instead of frame cropping. The proposed method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. We propose to use a set of descriptors that encapsulate the information about the periodic motion of objects necessary to reconstruct missing/corrupted frames. The background is filled in by extending spatial texture synthesis techniques using a set of 3D patches. Experimental results demonstrate the effectiveness of the proposed method in the task of full-frame video stabilization.

  14. Gauge invariant perturbations of Petrov type D space-times

    NASA Astrophysics Data System (ADS)

    Whiting, Bernard; Shah, Abhay

    2016-03-01

    The Regge-Wheeler and Zerilli equations are satisfied by gauge invariant perturbations of the Schwarzschild black hole geometry. Both the perturbation of the imaginary part of Ψ2 (a component of the Weyl curvature), and its time derivative, are gauge invariant and solve the Regge-Wheeler equation with different sources. The Ψ0 and Ψ4 perturbations of the Weyl curvature are not only gauge, but also tetrad, invariant. We explore the framework in which these results hold, and consider what generalizations may extend to the Kerr geometry, and presumably to Petrov type D space-times in general. NSF Grants PHY 1205906 and 1314529, ERC (EU) FP7 Grant 304978.
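
    For context, the (sourced) Regge-Wheeler equation for axial perturbations of the Schwarzschild geometry has the standard form

      \frac{\partial^{2}\psi}{\partial t^{2}} \;-\; \frac{\partial^{2}\psi}{\partial r_{*}^{2}} \;+\; V_{\ell}(r)\,\psi \;=\; S,
      \qquad
      V_{\ell}(r) \;=\; \Big(1 - \frac{2M}{r}\Big)\Big(\frac{\ell(\ell+1)}{r^{2}} - \frac{6M}{r^{3}}\Big),

    where r_* is the tortoise coordinate; the gauge-invariant quantities built from the perturbation of Im Ψ₂ and its time derivative satisfy this equation with different sources S.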

  15. MAPLE Procedures For Boson Fields System On Curved Space - Time

    SciTech Connect

    Murariu, Gabriel

    2007-04-23

    Systems of interacting boson fields have been an important subject in recent years. From the problem of dark matter to the study of boson stars, boson fields are involved. In the general configuration, we consider a Klein-Gordon-Maxwell-Einstein field system for a complex scalar field minimally coupled to a gravitational one. The necessity of studying a larger number of space-time configurations and the huge volume of computations for each particular situation are some of the reasons for building a set of MAPLE procedures for this kind of system.

  16. Dirac equation on coordinate dependent noncommutative space-time

    NASA Astrophysics Data System (ADS)

    Kupriyanov, V. G.

    2014-05-01

    In this paper we discuss classical aspects of spinor field theory on the coordinate dependent noncommutative space-time. The noncommutative Dirac equation describing spinning particle in an external vector field and the corresponding action principle are proposed. The specific choice of a star product allows us to derive a conserved noncommutative probability current and to obtain the energy-momentum tensor for free noncommutative spinor field. Finally, we consider a free noncommutative Dirac fermion and show that if the Poisson structure is Lorentz-covariant, the standard energy-momentum dispersion relation remains valid.

  17. Particle propagation and effective space-time in gravity's rainbow

    NASA Astrophysics Data System (ADS)

    Garattini, Remo; Mandanici, Gianluca

    2012-01-01

    Based on the results obtained in our previous study on gravity’s rainbow, we determine the quantum corrections to the space-time metric for the Schwarzschild and the de Sitter background, respectively. We analyze how quantum fluctuations alter these metrics, inducing modifications on the propagation of test particles. Significantly enough, we find that quantum corrections can become relevant not only for particles approaching the Planck energy but, due to the one-loop contribution, even for low-energy particles as far as Planckian length scales are considered. We briefly compare our results with others obtained in similar studies and with the recent experimental OPERA announcement of superluminal neutrino propagation.

  18. Founding Gravitation in 4D Euclidean Space-Time Geometry

    SciTech Connect

    Winkler, Franz-Guenter

    2010-11-24

    The Euclidean interpretation of special relativity which has been suggested by the author is a formulation of special relativity in ordinary 4D Euclidean space-time geometry. The natural and geometrically intuitive generalization of this view involves variations of the speed of light (depending on location and direction) and a Euclidean principle of general covariance. In this article, a gravitation model by Jan Broekaert, which implements a view of relativity theory in the spirit of Lorentz and Poincare, is reconstructed and shown to fulfill the principles of the Euclidean approach after an appropriate reinterpretation.

  19. Space-Time, Phenomenology, and the Picture Theory of Language

    NASA Astrophysics Data System (ADS)

    Grelland, Hans Herlof

    To estimate Minkowski's introduction of space-time in relativity, the case is made for the view that abstract language and mathematics carries meaning not only by its connections with observation but as pictures of facts. This view is contrasted to the more traditional intuitionism of Hume, Mach, and Husserl. Einstein's attempt at a conceptual reconstruction of space and time as well as Husserl's analysis of the loss of meaning in science through increasing abstraction is analysed. Wittgenstein's picture theory of language is used to explain how meaning is conveyed by abstract expressions, with the Minkowski space as a case.

  20. Naked singularities in higher dimensional Vaidya space-times

    SciTech Connect

    Ghosh, S. G.; Dadhich, Naresh

    2001-08-15

    We investigate the end state of the gravitational collapse of a null fluid in higher-dimensional space-times. Both naked singularities and black holes are shown to develop as the final outcome of the collapse. The naked singularity spectrum in a collapsing Vaidya region (4D) gets covered with the increase in dimensions, and hence higher dimensions favor a black hole in comparison to a naked singularity. The cosmic censorship conjecture will be fully respected for a space of infinite dimension.

  1. Harmonic Analysis on the Space-Time Gauge Continuum

    NASA Astrophysics Data System (ADS)

    Bleecker, David D.

    1983-06-01

    The classical Kaluza-Klein unified field theory has previously been extended to unify and geometrize gravitational and gauge fields, through a study of the geometry of a bundle space P over space-time. Here, we examine the physical relevance of the Laplace operator on the complex-valued functions on P. The spectrum and eigenspaces are shown (via the Peter-Weyl theorem) to determine the possible masses of any type of particle field. In the Euclidean case, we prove that zero-mass particles necessarily come in infinite families. Also, lower bounds on masses of particles of a given type are obtained in terms of the curvature of P.

  2. Canonical quantization of general relativity in discrete space-times.

    PubMed

    Gambini, Rodolfo; Pullin, Jorge

    2003-01-17

    It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang.

  3. Energy Density Associated with the Bianchi Type-II Space-Time

    NASA Astrophysics Data System (ADS)

    Aydogdu, O.; Salti, M.

    2006-01-01

    We calculate the total energy distribution (due to both matter and fields, including gravitation) associated with locally rotationally symmetric (LRS) Bianchi type-II space-times, using the Bergmann-Thomson energy-momentum complex in both general relativity and teleparallel gravity. We find that the energy density in these different gravitation theories vanishes at all times. This result is the same as that obtained by one of the present authors, who solved the problem of finding the energy-momentum in LRS Bianchi type-II space-times by using the energy-momentum complexes of Einstein and of Landau and Lifshitz. The results of this paper are also consistent with those given in the previous works of Cooperstock and Israelit, Rosen, Johri et al., Banerjee and Sen, Vargas, and Salti et al. In this paper, we perform the calculations for a non-diagonal expanding space-time to determine whether the Bergmann-Thomson energy-momentum prescription is consistent with the other formulations. (We previously considered diagonal and expanding space-time models.) Our result supports the viewpoints of Albrow and Tryon.

  4. Secret information reconciliation based on punctured low-density parity-check codes for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua

    2017-02-01

    Achieving information theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, there is no information leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
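
    A minimal sketch of the puncturing idea (illustrative only: the function names and the erasure handling shown are assumptions, not the authors' implementation):

      import numpy as np

      def punctured_rate(n, k, n_punct):
          # Puncturing removes transmitted symbols while keeping the same number
          # of information bits, raising the effective rate to k / (n - n_punct).
          return k / (n - n_punct)

      def puncture(codeword, punct_idx):
          # Illustrative sketch: drop the punctured positions before transmission;
          # the decoder treats them as erasures (zero log-likelihood ratio)
          # during belief propagation.
          mask = np.ones(len(codeword), dtype=bool)
          mask[np.asarray(punct_idx)] = False
          return codeword[mask]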

  5. Hybrid Raman/Brillouin-optical-time-domain-analysis-distributed optical fiber sensors based on cyclic pulse coding.

    PubMed

    Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F

    2013-10-15

    We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The highly integrated proposed solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single mode fiber using a single narrowband laser source only.
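
    A sketch of the cyclic pulse-coding idea (assuming a unipolar cyclic code whose circulant matrix is invertible, e.g. one derived from an m-sequence; the specific code used in the paper is not given in the abstract):

      import numpy as np

      def cyclic_code_matrix(base_sequence):
          # Illustrative sketch: each coded acquisition uses a cyclic shift of
          # the same 0/1 pulse sequence, so the coding matrix is circulant.
          L = len(base_sequence)
          return np.array([np.roll(base_sequence, k) for k in range(L)])

      def decode_traces(coded_traces, code_matrix):
          # Recover the single-pulse traces by inverting the circulant coding
          # matrix; the averaging implicit in the inversion provides the
          # coding gain over single-pulse acquisition.
          return np.linalg.solve(code_matrix, coded_traces)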

  6. Visceral leishmaniasis in the state of Sao Paulo, Brazil: spatial and space-time analysis

    PubMed Central

    Cardim, Marisa Furtado Mozini; Guirado, Marluci Monteiro; Dibo, Margareth Regina; Chiaravalloti, Francisco

    2016-01-01

    ABSTRACT OBJECTIVE To perform both space and space-time evaluations of visceral leishmaniasis in humans in the state of Sao Paulo, Brazil. METHODS The population considered in the study comprised autochthonous cases of visceral leishmaniasis and deaths resulting from it in Sao Paulo, between 1999 and 2013. The analysis considered the western region of the state as its studied area. Thematic maps were created to show visceral leishmaniasis dissemination in humans in the municipality. Spatial analysis tools Kernel and Kernel ratio were used to respectively obtain the distribution of cases and deaths and the distribution of incidence and mortality. Scan statistics were used in order to identify spatial and space-time clusters of cases and deaths. RESULTS The visceral leishmaniasis cases in humans, during the studied period, were observed to occur in the western portion of Sao Paulo, and their territorial extension mainly followed the eastbound course of the Marechal Rondon highway. The incidences were characterized as two sequences of concentric ellipses of decreasing intensities. The first and more intense one was found to have its epicenter in the municipality of Castilho (where the Marechal Rondon highway crosses the border of the state of Mato Grosso do Sul) and the second one in Bauru. Mortality was found to have a similar behavior to incidence. The spatial and space-time clusters of cases were observed to coincide with the two areas of highest incidence. Both the space-time clusters identified, even without coinciding in time, started three years after the human cases were detected and had the same duration, that is, six years. CONCLUSIONS The expansion of visceral leishmaniasis in Sao Paulo has been taking place in an eastbound direction, focusing on the role of highways, especially Marechal Rondon, in this process. The space-time analysis detected that the disease occurred in cycles, in different spaces and time periods. These findings, if considered, may
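
    A minimal sketch of the kernel-ratio idea behind the incidence and mortality surfaces (a Gaussian kernel is assumed; the actual kernel and bandwidths are not stated in the abstract):

      import numpy as np

      def gaussian_kde(points, grid, h):
          # Illustrative sketch: kernel-smoothed density of event locations
          # evaluated on a grid of map coordinates.
          d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
          return np.exp(-d2 / (2 * h ** 2)).sum(axis=1) / (2 * np.pi * h ** 2 * len(points))

      def kernel_ratio(cases_xy, population_xy, grid_xy, h):
          # Ratio of case density to population density: a smooth incidence map.
          return gaussian_kde(cases_xy, grid_xy, h) / gaussian_kde(population_xy, grid_xy, h)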

  7. Space time neural networks for tether operations in space

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles

    1993-01-01

    A space shuttle flight scheduled for 1992 will attempt to prove the feasibility of operating tethered payloads in Earth orbit. Due to the interaction between the Earth's magnetic field and current pulsing through the tether, the tethered system may exhibit a circular transverse oscillation referred to as the 'skiprope' phenomenon. Effective damping of skiprope motion depends on rapid and accurate detection of skiprope magnitude and phase. Because of non-linear dynamic coupling, the satellite attitude behavior has characteristic oscillations during the skiprope motion. Since the satellite attitude motion has many other perturbations, the relationship between the skiprope parameters and attitude time history is very involved and non-linear. We propose a Space-Time Neural Network implementation for filtering satellite rate gyro data to rapidly detect and predict skiprope magnitude and phase. Training and testing of the skiprope detection system will be performed using a validated Orbital Operations Simulator and Space-Time Neural Network software developed in the Software Technology Branch at NASA's Lyndon B. Johnson Space Center.

  8. Canonical quantum gravity on noncommutative space-time

    NASA Astrophysics Data System (ADS)

    Kober, Martin

    2015-06-01

    In this paper canonical quantum gravity on noncommutative space-time is considered. The corresponding generalized classical theory is formulated by using the Moyal star product, which enables the representation of field quantities depending on noncommuting coordinates by generalized quantities depending on the usual coordinates. Not only does the classical theory have to be generalized in analogy to other field theories; the commutator between the gravitational field operator and its canonically conjugate quantity must also be replaced by a corresponding generalized expression on noncommutative space-time. Accordingly, the transition to the quantum theory also has to be performed in a generalized way and leads to extended representations of the quantum-theoretical operators. If the generalized representations of the operators are inserted into the generalized constraints, one obtains the corresponding generalized quantum constraints, including the Hamiltonian constraint as the dynamical constraint. After considering quantum geometrodynamics with the incorporation of a coupling to matter fields, the theory is transferred to the Ashtekar formalism. The holonomy representation of the gravitational field, as used in loop quantum gravity, opens the possibility of calculating the corresponding generalized area operator.
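
    For reference, the Moyal star product used here has the standard form (for a constant noncommutativity matrix \theta^{\mu\nu}):

      (f \star g)(x) \;=\; f(x)\,\exp\!\Big(\frac{i}{2}\,\theta^{\mu\nu}\,\overleftarrow{\partial}_{\mu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x)
      \;=\; f(x)\,g(x) \;+\; \frac{i}{2}\,\theta^{\mu\nu}\,\partial_{\mu}f(x)\,\partial_{\nu}g(x) \;+\; O(\theta^{2}),

    which reproduces the coordinate commutation relation [x^{\mu} \stackrel{\star}{,} x^{\nu}] = i\,\theta^{\mu\nu}.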

  9. Brain system for mental orientation in space, time, and person

    PubMed Central

    Peer, Michael; Salomon, Roy; Goldberg, Ilan; Blanke, Olaf; Arzy, Shahar

    2015-01-01

    Orientation is a fundamental mental function that processes the relations between the behaving self to space (places), time (events), and person (people). Behavioral and neuroimaging studies have hinted at interrelations between processing of these three domains. To unravel the neurocognitive basis of orientation, we used high-resolution 7T functional MRI as 16 subjects compared their subjective distance to different places, events, or people. Analysis at the individual-subject level revealed cortical activation related to orientation in space, time, and person in a precisely localized set of structures in the precuneus, inferior parietal, and medial frontal cortex. Comparison of orientation domains revealed a consistent order of cortical activity inside the precuneus and inferior parietal lobes, with space orientation activating posterior regions, followed anteriorly by person and then time. Core regions at the precuneus and inferior parietal lobe were activated for multiple orientation domains, suggesting also common processing for orientation across domains. The medial prefrontal cortex showed a posterior activation for time and anterior for person. Finally, the default-mode network, identified in a separate resting-state scan, was active for all orientation domains and overlapped mostly with person-orientation regions. These findings suggest that mental orientation in space, time, and person is managed by a specific brain system with a highly ordered internal organization, closely related to the default-mode network. PMID:26283353

  10. Space-time reference with an optical link

    NASA Astrophysics Data System (ADS)

    Berceau, P.; Taylor, M.; Kahn, J.; Hollberg, L.

    2016-07-01

    We describe a concept for realizing a high performance space-time reference using a stable atomic clock in a precisely defined orbit and synchronizing the orbiting clock to high-accuracy atomic clocks on the ground. The synchronization would be accomplished using a two-way lasercom link between ground and space. The basic approach is to take advantage of the highest-performance cold-atom atomic clocks at national standards laboratories on the ground and to transfer that performance to an orbiting clock that has good stability and that serves as a ‘frequency-flywheel’ over time-scales of a few hours. The two-way lasercom link would also provide precise range information and thus precise orbit determination. With a well-defined orbit and a synchronized clock, the satellite could serve as a high-accuracy space-time reference, providing precise time worldwide, a valuable reference frame for geodesy, and independent high-accuracy measurements of GNSS clocks. Under reasonable assumptions, a practical system would be able to deliver picosecond timing worldwide and millimeter orbit determination, and could serve as an enabling subsystem for other proposed space-gravity missions, which are briefly reviewed.
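
    The synchronization over the two-way link can be summarized by the generic two-way time-transfer relations (standard relations assuming a reciprocal optical path; relativistic and atmospheric corrections are omitted in this sketch):

      \Delta T \;=\; \frac{(t_{2} - t_{1}) - (t_{4} - t_{3})}{2},
      \qquad
      \tau \;=\; \frac{(t_{2} - t_{1}) + (t_{4} - t_{3})}{2},

    where t_1 and t_4 are the ground transmit and receive times, t_2 and t_3 the space receive and retransmit times, \Delta T is the ground-space clock offset, and \tau the one-way light time used for ranging and orbit determination.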

  11. Modeling velocity space-time correlations in wind farms

    NASA Astrophysics Data System (ADS)

    Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael

    2016-11-01

    Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer. In this context, it is important to derive simple physics-based models. The so-called Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis is used with an additional mean wind velocity to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. In this presentation, first results on the comparison to large eddy simulations will be presented and the potential of the approach to characterize power output fluctuations of wind farms will be discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
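
    Schematically, the convolution structure described above can be written as follows (our notation, not quoted from the abstract):

      R(\mathbf{r}, \tau) \;=\; \int R(\mathbf{r} - \mathbf{U}\tau - \mathbf{u},\, 0)\; P(\mathbf{u};\, \tau)\, \mathrm{d}\mathbf{u},

    where \mathbf{U} is the mean advection velocity, R(\cdot, 0) is the spatial correlation function, and P(\mathbf{u};\tau) is the probability density of the random large-scale sweeping displacement accumulated over the time lag \tau (Gaussian under the Tennekes-Kraichnan hypothesis).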

  12. Relativistic space-time positioning: principles and strategies

    NASA Astrophysics Data System (ADS)

    Tartaglia, Angelo

    2013-11-01

    Starting from the description of space-time as a curved four-dimensional manifold, null Gaussian coordinate systems appropriate for relativistic positioning will be discussed. Different approaches and strategies will be reviewed, implementing the null coordinates with both continuous and pulsating electromagnetic signals. In particular, methods based on purely local measurements of proper time intervals between pulses will be expounded and the various possible sources of uncertainty will be analyzed. As sources of pulses, both artificial and natural emitters will be considered. The latter will concentrate on either radio- or X-ray-emitting pulsars, discussing advantages and drawbacks. As for artificial emitters, various solutions will be presented, from satellites orbiting the Earth to broadcasting devices carried both by spacecraft and by celestial bodies of the solar system. In general the accuracy of the positioning is expected to be limited, besides the instabilities and drift of the sources, by the precision of the local clock, but in any case on long journeys systematic accumulated errors will tend to become dominant. The problem can be kept under control by properly using a high level of redundancy in the procedure for the calculation of the coordinates of the receiver and by mixing a number of different and complementary strategies. Finally, various possibilities for doing fundamental physics experiments by means of space-time topography techniques will briefly be presented and discussed.

  13. Multipoint-to-Point and Point-to-Multipoint Space-Time Network Coding

    DTIC Science & Technology

    2010-03-01


  14. Many-to-Many Communications via Space-Time Network Coding (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    ... are derived. In addition, an asymptotic SER approximation is also provided, which is shown to be tight at high signal-to-noise ratio (SNR). Finally, the analytical ... A matched-filtering operation is applied to each of the received signals $y_{j,i}$, in the form of $\left(\sqrt{P_{s_j}}\, h_{j,i}^{*}/N_0\right) y_{j,i}$. Therefore, the SNR at the output ...

  15. A Systematic Approach to Design of Space-Time Block Coded MIMO Systems

    DTIC Science & Technology

    2006-06-01

    ... "MonteCarlo," which determines how many Monte Carlo iterations will be used for the simulation. Typically, twelve Monte Carlo iterations are chosen for ... Fragments of the accompanying MATLAB simulation script:

      ... = 100;                       % total number of bits to be transmitted for the given SNR value
      MonteCarlo = 1;                  % number of runs to be simulated
      EbNo_db = 0:5:30;                % SNR ...
      total_bit_errors_monte_4 = [];
      for run = 1:MonteCarlo           % Monte Carlo iteration
          % resetting 'total bit errors array' for all SNR values

  16. An Insight into Space-Time Block Codes using Hurwitz-Radon Families of Matrices

    DTIC Science & Technology

    2008-01-01

    $M_1 M_1^T = (k_2^2 + t_2^2)\, I_8$ and $M_2 M_2^T = (k_2^2 + t_2^2)^{-1} (1 + k_1^2 + t_1^2)^{2}\, I_8$. Furthermore, $M_1^T M_2 = (k_2^2 + t_2^2)^{-1} (A_2^T k_2 + A_3^T t_2)(A_4 A_6^T k_2 + A_4 A_7^T \ldots\; A_4 A_6^T k_2 + A_4 A_7^T t_2 + A_6 A_7^T (k_1 t_2 + k_2 t_1) \ldots$ Because of $A_1^T A_6 A_7^T A_0 = A_5^T A_2 A_4^T A_3 = A_2^T A_4 A_5^T A_3$, we can write $[245673] = [245367] = [1670][67] = [01]$. Therefore, we can also express $b_1 A^T \ldots$

  17. Space-Time Coding Using Algebraic Number Theory for Broadband Wireless Communications

    DTIC Science & Technology

    2008-05-31

    order 2. (III) Robust Phase Unwrapping, the Chinese Remainder Theorem, and Their Applications in SAR Imaging of Moving Targets: (i) A Sharpened Dynamic ... signal processing. Motivated by the phase unwrapping algorithm, we have then obtained a robust CRT. (iii) New SAR Techniques for Fast and Slowly ... Moving Target Imaging and Location: We have obtained non-uniform antenna array synthetic aperture radar (NUAA-SAR), where an antenna array is arranged

  18. Extreme wave analysis in the space-time domain: from observations to applications

    NASA Astrophysics Data System (ADS)

    Barbariol, Francesco; Alves, Jose-Henrique; Benetazzo, Alvise; Bergamasco, Filippo; Carniel, Sandro; Chao, Yung Y.; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro

    2016-04-01

    this end, analytical directional spectra that explicitly depend upon the wind forcing (e.g. Pierson-Moskowitz or JONSWAP frequency spectra, combined with a cos² directional distribution) have been integrated to provide kinematic and geometric parameters of the sea state as a function of the wind speed and fetch length. Then, the SWAN numerical wave model has been modified in order to compute kinematic and geometric properties of the sea state, and run under different wave-current conditions and bathymetric gradients. In doing so, it has been possible to estimate the contribution to the space-time extremes variation due to the wind inputs, to current speed and to depth gradients. Weather forecasting applications consist of using spectra simulated by wave forecasting models to compute space-time extremes. In this context, we have recently implemented the space-time extremes computation (according to the second order Fedele model) within the WAVEWATCH III numerical wave model. New output products (i.e. the maximum expected crest and wave heights) have been validated using space-time stereo-photogrammetric measurements, proving the concept that powerful tools that provide space-time extremes forecasts over extended domains may be developed for applications beneficial to the marine community.

  19. Direct space-time observation of pulse tunneling in an electromagnetic band gap

    SciTech Connect

    Doiron, Serge; Hache, Alain; Winful, Herbert G.

    2007-08-15

    We present space-time-resolved measurements of electromagnetic pulses tunneling through a coaxial electromagnetic band gap structure. The results show that during the tunneling process the field distribution inside the barrier is an exponentially decaying standing wave whose amplitude increases and decreases as it slowly follows the temporal evolution of the input pulse. At no time is a pulse maximum found inside the barrier, and hence the transmitted peak is not the incident peak that has propagated to the exit. The results support the quasistatic interpretation of tunneling dynamics and confirm that the group delay is not the traversal time of the input pulse peak.

  20. Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks

    PubMed Central

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-01

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, “real-time” coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories. PMID:25633597
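
    A minimal sketch of the "fix the encoders to linear functions, then derive the corresponding optimal decoder" viewpoint described above, under simple assumptions (a common Gaussian source, orthogonal Gaussian channels, an LMMSE sink); all parameters are illustrative and none of it is taken from the survey.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical setup: N sensors observe a common Gaussian source with
        # independent measurement noise and forward amplified (linear) versions
        # over orthogonal noisy channels to a sink.
        N, T = 4, 100_000
        sigma_x2, sigma_v2, sigma_w2, P = 1.0, 0.2, 0.1, 1.0

        x = rng.normal(0.0, np.sqrt(sigma_x2), T)                   # source
        obs = x + rng.normal(0.0, np.sqrt(sigma_v2), (N, T))        # sensor observations
        a = np.sqrt(P / (sigma_x2 + sigma_v2))                      # linear encoder gain meeting power P
        y = a * obs + rng.normal(0.0, np.sqrt(sigma_w2), (N, T))    # channel outputs at the sink

        # LMMSE (linear) decoder: x_hat = c^T y with c = C_yy^{-1} C_yx
        C_yy = a**2 * (sigma_x2 * np.ones((N, N)) + sigma_v2 * np.eye(N)) + sigma_w2 * np.eye(N)
        C_yx = a * sigma_x2 * np.ones(N)
        c = np.linalg.solve(C_yy, C_yx)
        x_hat = c @ y

        print("empirical MSE: ", np.mean((x - x_hat) ** 2))
        print("analytical MSE:", sigma_x2 - C_yx @ c)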

  1. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.

  2. On the relative energy associated with space-times of diagonal metrics

    NASA Astrophysics Data System (ADS)

    Korunur, Murat; Salti, Mustafa; Havare, Ali

    2007-05-01

    In order to evaluate the energy distribution (due to matter and fields including gravitation) associated with a space-time model of generalized diagonal metric, we consider the Einstein, Bergmann-Thomson and Landau-Lifshitz energy and/or momentum definitions both in Einstein's theory of general relativity and in the teleparallel gravity (the tetrad theory of gravitation). We find the same energy distribution using the Einstein and Bergmann-Thomson formulations, but we also find that the energy-momentum prescription of Landau-Lifshitz disagrees in general with these definitions. We also give eight different well-known space-time models as examples, and considering these models and using our results, we calculate the energy distributions associated with them. Furthermore, we show that for the Bianchi Type-I models all the formulations give the same result. This result agrees with the previous works of Cooperstock-Israelit, Rosen, Johri et al, Banerjee-Sen, Xulu, Vargas and Salti et al and supports the viewpoints of Albrow and Tryon.

  3. The twistor approach to space-time structures

    NASA Astrophysics Data System (ADS)

    Penrose, Roger

    2005-11-01

    An outline of twistor theory is presented. Initial motivations (from 1963) are given for this type of non-local geometry, as an intended scheme for unifying quantum theory and space-time structure. Basic twistor geometry and algebra is exhibited, and it is shown that this provides a complex-manifold description of classical (spinning) massless particles. Simple quantum commutation rules lead to a concise representation of massless particle wavefunctions, in terms of contour integrals or (more profoundly) holomorphic 1st cohomology. Non-linear versions give elegant representations of anti-self-dual Einstein (or Yang-Mills) fields, describing left-handed non-linear gravitons (or Yang-Mills particles). A brief outline of the current status of the "googly problem" is provided, whereby the right-handed particles would also be incorporated.

  4. Euclidean space-time diffeomorphisms and their Fueter subgroups

    SciTech Connect

    Guersey, F.; Jiang, W.

    1992-02-01

    Holomorphic Fueter functions of the position quaternion form a subgroup of Euclidean space-time diffeomorphisms. An O(4) covariant treatment of such mappings is presented, with the quaternionic argument x being replaced by either p̄x or xp̄, involving self-dual and anti-self-dual structures, and with p denoting an arbitrary Euclidean time direction. An infinite group (the quasiconformal group) is exhibited that admits the conformal group SO(5,1) as a subgroup, in analogy to the two-dimensional case in which the Moebius group SO(3,1) is a subgroup of the infinite Virasoro group. The ensuing (3+1) covariant decomposition of diffeomorphisms suggests covariant gauges that throw the metric and the stress tensors into standard forms suitable for canonical quantization, leading to "improved" energy-momentum tensors. Other possible applications to current algebra and gravity will be mentioned.

  5. Multipole structure of current vectors in curved space-time

    NASA Astrophysics Data System (ADS)

    Harte, Abraham I.

    2007-01-01

    A method is presented which allows the exact construction of conserved (i.e., divergence-free) current vectors from appropriate sets of multipole moments. Physically, such objects may be taken to represent the flux of particles or electric charge inside some classical extended body. Several applications are discussed. In particular, it is shown how to easily write down the class of all smooth and spatially bounded currents with a given total charge. This implicitly provides restrictions on the moments arising from the smoothness of physically reasonable vector fields. We also show that requiring all of the moments to be constant in an appropriate sense is often impossible. This likely limits the applicability of the Ehlers-Rudolph-Dixon notion of quasirigid motion. A simple condition is also derived that allows currents to exist in two different space-times with identical sets of multipole moments (in a natural sense).

  6. Curved Space-Times by Crystallization of Liquid Fiber Bundles

    NASA Astrophysics Data System (ADS)

    Hélein, Frédéric; Vey, Dimitri

    2017-01-01

    Motivated by the search for a Hamiltonian formulation of the Einstein equations of gravity which depends in a minimal way on choices of coordinates or of gauge, we develop a multisymplectic formulation on the total space of the principal bundle of orthonormal frames on the 4-dimensional space-time. This leads quite naturally to a new theory which takes place on 10-dimensional manifolds. The fields are pairs ((α, ω), π), where (α, ω) is a 1-form with coefficients in the Lie algebra of the Poincaré group and π is an 8-form with coefficients in the dual of this Lie algebra. The dynamical equations derive from a simple variational principle and imply that the 10-dimensional manifold looks locally like the total space of a fiber bundle over a 4-dimensional base manifold. Moreover, this base manifold inherits a metric and a connection which are solutions of a system of Einstein-Cartan equations.

  7. Wormhole in higher-dimensional space-time

    NASA Astrophysics Data System (ADS)

    Shinkai, Hisa-aki; Torii, Takashi

    2015-04-01

    We introduce our recent studies on wormholes, especially their stability, in higher-dimensional space-time both in general relativity and in Gauss-Bonnet gravity. We derived the Ellis-type wormhole solution in n-dimensional general relativity and found the existence of an unstable mode in its linear perturbation analysis. We also evolved it numerically in a dual-null coordinate system and confirmed its instability. The wormhole throat will change into black hole horizons for an input of (relatively) positive energy, while it will change into inflationary expansion for a (relatively) negative energy input. If we add Gauss-Bonnet terms (higher curvature correction terms in gravity), then the wormhole tends to expand (or change to a black hole) if the coupling constant α is positive (negative), and such bifurcation of the throat horizon is observed earlier in higher dimensions.

  8. Fermion wave-mechanical operators in curved space-time

    SciTech Connect

    Cocke, W.J.; Lloyd-Hart, M.

    1990-09-15

    In the context of a general wave-mechanical formalism, we derive explicit forms for the Hamiltonian, kinetic energy, and momentum operators for a massive fermion in curved space-time. In the two-spinor representation, the scalar products of state vectors are conserved under the Dirac equation, but the time-development Hamiltonian is in general not Hermitian for a nonstatic metric. A geodesic normal coordinate system provides an economical framework in which to interpret the results. We apply the formalism to a closed Robertson-Walker metric, for which we find the eigenvalues and eigenfunctions of the kinetic energy density. The angular momentum parts turn out to be simpler than in the usual four-spinor representation, and the radial parts involve Jacobi polynomials.

  9. Space-time statistics for decision support to smart farming.

    PubMed

    Stein, A; Hoosbeek, M R; Sterk, G

    1997-01-01

    This paper summarizes statistical procedures which are useful for precision farming at different scales. Three topics are addressed: spatial comparison of scenarios for land use, analysis of data in the space-time domain, and sampling in space and time. The first study compares six scenarios for nitrate leaching to ground water. Disjunctive cokriging reduces the computing time by 80% without loss of accuracy. The second study analyses wind erosion during four storms in a field in Niger measured with 21 devices. We investigated the use of temporal replicates to overcome the lack of spatial data. The third study analyses the effects of sampling in space and time for soil nutrient data in a Southwest African field. We concluded that statistical procedures are indispensable for decision support to smart farming.

  10. Space-Time Event Sparse Penalization for Magneto-/Electroencephalography

    PubMed Central

    Bolstad, Andrew; Van Veen, Barry; Nowak, Robert

    2009-01-01

    This article presents a new spatio-temporal method for M/EEG source reconstruction based on the assumption that only a small number of events, localized in space and/or time, are responsible for the measured signal. Each space-time event is represented using a basis function expansion which reflects the most relevant (or measurable) features of the signal. This model of neural activity leads naturally to a Bayesian likelihood function which balances the model fit to the data with the complexity of the model, where the complexity is related to the number of included events. A novel Expectation-Maximization algorithm which maximizes the likelihood function is presented. The new method is shown to be effective on several MEG simulations of neurological activity as well as data from a self-paced finger tapping experiment. PMID:19457366

  11. Spherically Symmetric Space Time with Regular de Sitter Center

    NASA Astrophysics Data System (ADS)

    Dymnikova, Irina

    We formulate the requirements which lead to the existence of a class of globally regular solutions of the minimally coupled GR equations that are asymptotically de Sitter at the center. The source term for this class, invariant under boosts in the radial direction, is classified as a spherically symmetric vacuum with variable density and pressure T^vac_μν, associated with an r-dependent cosmological term Λ_μν = 8πG T^vac_μν, whose asymptotic at the origin, dictated by the weak energy condition, is the Einstein cosmological term Λg_μν, while the asymptotic at infinity is de Sitter vacuum with λ < Λ or Minkowski vacuum. For this class of metrics the mass m defined by the standard ADM formula is related to both the de Sitter vacuum trapped at the origin and the breaking of space-time symmetry. In the case of the flat asymptotic, space-time symmetry changes smoothly from the de Sitter group at the center to the Lorentz group at infinity through radial boosts in between. Geometry is asymptotically de Sitter as r → 0 and asymptotically Schwarzschild at large r. In the range of masses m ≥ m_crit, the de Sitter-Schwarzschild geometry describes a vacuum nonsingular black hole (ΛBH), and for m < m_crit it describes G-lump, a vacuum self-gravitating particle-like structure without horizons. In the case of de Sitter asymptotic at infinity, geometry is asymptotically de Sitter as r → 0 and asymptotically Schwarzschild-de Sitter at large r. The Λ_μν geometry describes, depending on the parameters m and q = √(Λ/λ) and the choice of coordinates, a vacuum nonsingular cosmological black hole, a self-gravitating particle-like structure on the de Sitter background λg_μν, or regular cosmological models with a cosmological constant evolving smoothly from Λ to λ.

  12. Emergent space-time via a geometric renormalization method

    NASA Astrophysics Data System (ADS)

    Rastgoo, Saeed; Requardt, Manfred

    2016-12-01

    We present a purely geometric renormalization scheme for metric spaces (including uncolored graphs), which consists of a coarse graining and a rescaling operation on such spaces. The coarse graining is based on the concept of quasi-isometry, which yields a sequence of discrete coarse grained spaces each having a continuum limit under the rescaling operation. We provide criteria under which such sequences do converge within a superspace of metric spaces, or may constitute the basin of attraction of a common continuum limit, which hopefully may represent our space-time continuum. We discuss some of the properties of these coarse grained spaces as well as their continuum limits, such as scale invariance and metric similarity, and show that different layers of space-time can carry different distance functions while being homeomorphic. Important tools in this analysis are the Gromov-Hausdorff distance functional for general metric spaces and the growth degree of graphs or networks. The whole construction is in the spirit of the Wilsonian renormalization group (RG). Furthermore, we introduce a physically relevant notion of dimension on the spaces of interest in our analysis, which, e.g., for regular lattices reduces to the ordinary lattice dimension. We show that this dimension is stable under the proposed coarse graining procedure as long as the latter is sufficiently local, i.e., quasi-isometric, and discuss the conditions under which this dimension is an integer. We comment on the possibility that the limit space may turn out to be fractal in case the dimension is noninteger. At the end of the paper we briefly mention the possibility that our network carries a translocal far order that leads to the concept of wormhole spaces and a scale dependent dimension if the coarse graining procedure is no longer local.

  13. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    SciTech Connect

    Lovejoy, S.; Lima, M. I. P. de

    2015-07-15

    Over the range of time scales from about 10 days to 30–100 years, in addition to the familiar weather and climate regimes, there is an intermediate “macroweather” regime characterized by negative temporal fluctuation exponents: implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists that climate statistics can be “homogenized” by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space and in time.

  14. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models.

    PubMed

    Lovejoy, S; de Lima, M I P

    2015-07-01

    Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents: implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space and in time.

  15. Void fraction distribution in a boiling water reactor fuel assembly and the evaluation of subchannel analysis codes

    SciTech Connect

    Inoue, Akira; Futakuchi, Masanobu; Yagi, Makoto; Mitsutake, Toru; Morooka, Shinichi

    1995-12-01

    Void fraction measurement tests for boiling water reactor (BWR) simulated nuclear fuel assemblies have been conducted using an X-ray computed tomography scanner. There are two types of fuel assemblies concerning water rods: one fuel assembly has two water rods; the other has one large water rod. The effects of the water rods on radial void fraction distributions are measured within the fuel assemblies. The results show that the water rod effect does not make a large difference in void fraction distribution. The subchannel analysis codes COBRA/BWR and THERMIT-2 were compared with subchannel-averaged void fractions. The prediction accuracy of COBRA/BWR and THERMIT-2 for the subchannel-averaged void fraction was Δα = -3.6%, σ = 4.8% and Δα = -4.1%, σ = 4.5%, respectively, where Δα is the average of the difference between measured and calculated values. The subchannel analysis codes are highly applicable for the prediction of a two-phase flow distribution within BWR fuel assemblies.

  16. Observation of quantum particles on a large space-time scale

    NASA Astrophysics Data System (ADS)

    Landau, L. J.

    1994-10-01

    A quantum particle observed on a sufficiently large space-time scale can be described by means of classical particle trajectories. The joint distribution for large-scale multiple-time position and momentum measurements on a nonrelativistic quantum particle moving freely in R^v is given by straight-line trajectories with probabilities determined by the initial momentum-space wavefunction. For large-scale toroidal and rectangular regions the trajectories are geodesics. In a uniform gravitational field the trajectories are parabolas. A quantum counting process on free particles is also considered and shown to converge in the large-space-time limit to a classical counting process for particles with straight-line trajectories. If the quantum particle interacts weakly with its environment, the classical particle trajectories may undergo random jumps. In the random potential model considered here, the quantum particle evolves according to a reversible unitary one-parameter group describing elastic scattering off static randomly distributed impurities (a quantum Lorentz gas). In the large-space-time weak-coupling limit a classical stochastic process is obtained with probability one and describes a classical particle moving with constant speed in straight lines between random jumps in direction. The process depends only on the ensemble value of the covariance of the random field and not on the sample field. The probability density in phase space associated with the classical stochastic process satisfies the linear Boltzmann equation for the classical Lorentz gas, which, in the limit h→0, goes over to the linear Landau equation. Our study of the quantum Lorentz gas is based on a perturbative expansion and, as in other studies of this system, the series can be controlled only for small values of the rescaled time and for Gaussian random fields. The discussion of classical particle trajectories for nonrelativistic particles on a macroscopic spacetime scale applies also to

  17. Runaway electron distributions obtained with the CQL3D Fokker-Planck code under tokamak disruption conditions

    SciTech Connect

    Harvey, R.W.; Chan, V.S.

    1996-12-31

    Runaway of electrons to high energy during plasma disruptions occurs due to large induced toroidal electric fields which tend to maintain the toroidal plasma current, in accord with Lenz's law. This has been observed in many tokamaks. Within the closed flux surfaces, the bounce-averaged CQL3D Fokker-Planck code is well suited to obtain the resulting electron distributions, nonthermal contributions to electrical conductivity, and runaway rates. The time-dependent 2D momentum-space (p∥ and p⊥) distributions are calculated on a radial array of noncircular flux surfaces, including bounce-averaging of the Fokker-Planck equation to account for toroidal trapping effects. In the steady state, the resulting distributions represent a balance between the applied toroidal electric field, relativistic Coulomb collisions, and synchrotron radiation. The code can be run in a mode where the electrons are sourced at low velocity and run off the high velocity edge of the computational mesh, giving runaway rates at steady state. At small minor radius, the results closely match previous results reported by Kulsrud et al. It is found that the runaway rate has a strong dependence on the inverse aspect ratio e, decreasing by a factor of about 5 as e increases from 0.0 to 0.3. The code can also be run with a radial diffusion and pinching term, simulating radial transport with plasma pinching to maintain a given density profile. Results show a transport reduction of runaways in the plasma center, and an enhancement towards the edge due to the electrons from the plasma center. Avalanching of runaways due to a knock-on electron source is being included.

  18. Multi-Code Ab Initio Calculation of Ionization Distributions and Radiation Losses for Tungsten in Tokamak Plasmas

    SciTech Connect

    Ralchenko, Yu.; Abdallah, J. Jr.; Colgan, J.; Fontes, C. J.; Foster, M.; Zhang, H. L.; Bar-Shalom, A.; Oreg, J.; Bauche, J.; Bauche-Arnoult, C.; Bowen, C.; Faussurier, G.; Chung, H.-K.; Hansen, S. B.; Lee, R. W.; Scott, H.; Gaufridy de Dortan, F. de; Poirier, M.; Golovkin, I.; Novikov, V.

    2009-09-10

    We present calculations of ionization balance and radiative power losses for tungsten in magnetic fusion plasmas. The simulations were performed within the framework of the Non-Local Thermodynamic Equilibrium (NLTE) Code Comparison Workshops utilizing several independent collisional-radiative models. The calculations generally agree with each other; however, a clear disagreement with experimental ionization distributions is found at low temperatures (below about 2 keV).

  19. Space-Time Processing for Tactical Mobile Ad Hoc Networks

    DTIC Science & Technology

    2010-05-01

    Sagnik Ghosh, Bhaskar D. Rao, and James R. Zeidler; "Outage-Efficient Strategies for Multiuser MIMO Networks with Channel Distribution Information... Distributed cooperative routing and hybrid ARQ in MIMO-BLAST ad hoc networks", submitted to IEEE Transactions on Communications, 2009. Davide... mobile ad hoc networks using polling techniques for MIMO nodes. Centralized and distributed topology control algorithms have been developed to

  20. Implicit Space-Time Conservation Element and Solution Element Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Wang, Xiao-Yen

    1999-01-01

    Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case, and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
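
    For reference, the textbook von Neumann amplification factor of the Crank-Nicolson scheme for the one-dimensional diffusion equation u_t = ν u_xx (a standard result, not a formula quoted from the paper) is

        G(θ) = (1 − 2r sin²(θ/2)) / (1 + 2r sin²(θ/2)),   with r = ν Δt / Δx²,

    so |G| ≤ 1 for every r > 0; this is the factor referred to above for the pure diffusion case.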

  1. Space-Time Analysis of Crime Patterns in Central London

    NASA Astrophysics Data System (ADS)

    Cheng, T.; Williams, D.

    2012-07-01

    Crime continues to cast a shadow over citizen well-being in big cities today, while also imposing huge economic and social costs. Timely understanding of how criminality emerges and how crime patterns evolve is crucial to anticipating crime, dealing with it when it occurs and developing public confidence in the police service. Every day, about 10,000 crime incidents are reported by citizens, recorded and geo-referenced in the London Metropolitan Police Service Computer Aided Dispatch (CAD) database. The unique nature of this dataset allows the patterns to be explored at particularly fine temporal granularity and at multiple spatial resolutions. This study provides a framework for the exploratory spatio-temporal analysis of crime patterns that combines visual inquiry tools (interactive animations, space-time cubes and map matrices) with cluster analysis (spatial-temporal scan statistics and the self-organizing map). This framework is tested on the CAD dataset for the London Borough of Camden in March 2010. Patterns of crime through space and time are discovered and the clustering methods were evaluated on their ability to facilitate the discovery and interpretation of these patterns.

  2. Earthquake networks based on space-time influence domain

    NASA Astrophysics Data System (ADS)

    He, Xuan; Zhao, Hai; Cai, Wei; Liu, Zheng; Si, Shuai-Zong

    2014-08-01

    A new construction method of earthquake networks based on the theory of complex networks is presented in this paper. We propose a space-time influence domain for each earthquake to quantify the subsequent earthquakes that are directly influenced by it. The size of the domain depends on the magnitude of the earthquake. In this way, the seismic data in the region of California are mapped to the topology of an earthquake network. It is discovered that the earthquake networks over different time spans behave as scale-free networks. This result can be interpreted in terms of the Gutenberg-Richter law. Discovery of a small-world characteristic is also reported for the earthquake network constructed by our method. The Omori law emerges as a feature of seismicity for the out-going links of the network. These characteristics highlight a novel aspect of seismicity as a complex phenomenon and will help us to reveal the internal mechanism of the seismic system.
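
    A toy construction in the spirit of the method described above (not the authors' code): each event gets a space-time influence domain whose spatial radius and duration grow with magnitude, and a directed link is drawn to every later event falling inside that domain. The magnitude scaling used here is an arbitrary placeholder.

        import itertools
        import numpy as np

        def build_network(events, r0=5.0, t0=1.0):
            """Toy earthquake-network construction (illustrative only).

            events : list of (t, x, y, magnitude), assumed sorted by time
            The influence radius (km) and duration (days) grow exponentially
            with magnitude; the scaling below is an assumption, not the
            paper's calibration.
            """
            edges = []
            for (i, a), (j, b) in itertools.combinations(enumerate(events), 2):
                ti, xi, yi, mi = a
                tj, xj, yj, _ = b
                r_inf = r0 * 10 ** (0.5 * mi)            # spatial extent of the domain
                t_inf = t0 * 10 ** (0.5 * mi)            # temporal extent of the domain
                if 0 < tj - ti <= t_inf and np.hypot(xj - xi, yj - yi) <= r_inf:
                    edges.append((i, j))                 # event i "influences" event j
            return edges

        events = [(0.0, 0.0, 0.0, 5.5), (0.3, 3.0, 4.0, 3.2), (2.0, 40.0, 5.0, 4.1)]
        print(build_network(events))   # a node's out-degree counts the events it influenced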

  3. Beyond Peaceful Coexistence: The Emergence of Space, Time and Quantum

    NASA Astrophysics Data System (ADS)

    Licata, Ignazio

    A physical theory consists of a formal structure and one or more interpretations. The latter can come out from cultural and cognitive tension going far beyond any sound operational pact between theoretical constructs and empirical data. We have no reason to doubt the consistency and efficacy of syntaxes if properly used in the right range. The formal side of Physics has grown in a strongly connected and stratified way through an almost autopoietic, self-dual procedure (let's think of the extraordinary success of the gauge theories), whereas the foundational debate is still blustering about the two pillars of this monumental construction: general relativity (GR) and quantum mechanics (QM), which still appear to be largely incompatible, stalled in a limited peaceful coexistence between local causality in space-time and quantum non-locality [1]. The formidable challenges waiting for us beyond the Standard Model seem to require a new semantic consistency [2] within the two theories, so as to build a new way to look at them, to work with them and to relate them...

  4. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

    In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptive such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost.

  5. On the space-time evolution of a cholera epidemic

    NASA Astrophysics Data System (ADS)

    Bertuzzo, E.; Azaele, S.; Maritan, A.; Gatto, M.; Rodriguez-Iturbe, I.; Rinaldo, A.

    2008-01-01

    We study how river networks, acting as environmental corridors for pathogens, affect the spreading of cholera epidemics. Specifically, we compare epidemiological data from the real world with the space-time evolution of infected individuals predicted by a theoretical scheme based on reactive transport of infective agents through a biased network portraying actual river pathways. The data pertain to a cholera outbreak in South Africa which started in 2000 and affected in particular the KwaZulu-Natal province. The epidemic lasted for 2 years and involved about 140,000 confirmed cholera cases. Hydrological and demographic data have also been carefully considered. The theoretical tools relate to recent advances in hydrochory, migration fronts, and infection spreading and are novel in that nodal reactions describe the dynamics of cholera. Transport through network links provides the coupling of the nodal dynamics of infected people, who are assumed to reside at the nodes. This proves to be a realistic scheme. We argue that the theoretical scheme is remarkably capable of predicting actual outbreaks and, indeed, that network structures play a controlling role in the actual, rather anisotropic propagation of infections, in analogy to spreading of species or to migration processes that also use rivers as ecological corridors.
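
    A highly simplified sketch of the "nodal reactions plus biased transport along network links" idea described above (illustrative only; the four-node network, the parameters and the SI-type nodal dynamics are made up and far cruder than the paper's model).

        import numpy as np

        # Nodes 0 and 1 drain into node 2, which drains into node 3.
        edges = [(0, 2), (1, 2), (2, 3)]          # i -> j : node i drains toward node j
        n, b, ell, beta, dt = 4, 0.8, 0.3, 0.5, 0.1

        W = np.zeros((n, n))
        for i, j in edges:
            W[i, j] += b * ell                     # downstream exchange rate
            W[j, i] += (1.0 - b) * ell             # weaker upstream exchange rate

        I = np.array([0.01, 0.0, 0.0, 0.0])        # outbreak seeded at node 0
        for _ in range(300):
            reaction = beta * I * (1.0 - I)                  # local SI-type nodal growth
            transport = W.T @ I - W.sum(axis=1) * I          # net inflow minus outflow along links
            I = np.clip(I + dt * (reaction + transport), 0.0, 1.0)

        print(I)   # downstream nodes are reached earlier and more strongly than upstream ones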

  6. Langevin Dynamics with Space-Time Periodic Nonequilibrium Forcing

    NASA Astrophysics Data System (ADS)

    Joubaud, R.; Pavliotis, G. A.; Stoltz, G.

    2015-01-01

    We present results on the ballistic and diffusive behavior of the Langevin dynamics in a periodic potential that is driven away from equilibrium by a space-time periodic driving force, extending some of the results obtained by Collet and Martinez in (J Math Biol, 56(6):765-792 2008). In the hyperbolic scaling, a nontrivial average velocity can be observed even if the external forcing vanishes in average. More surprisingly, an average velocity in the direction opposite to the forcing may develop at the linear response level—a phenomenon called negative mobility. The diffusive limit of the non-equilibrium Langevin dynamics is also studied using the general methodology of central limit theorems for additive functionals of Markov processes. To apply this methodology, which is based on the study of appropriate Poisson equations, we extend recent results on pointwise estimates of the resolvent of the generator associated with the Langevin dynamics. Our theoretical results are illustrated by numerical simulations of a two-dimensional system.

  7. Video painting with space-time-varying style parameters.

    PubMed

    Kagaya, Mizuki; Brendel, William; Deng, Qingqing; Kesterson, Todd; Todorovic, Sinisa; Neill, Patrick J; Zhang, Eugene

    2011-01-01

    Artists use different means of stylization to control the focus on different objects in the scene. This allows them to portray complex meaning and achieve certain artistic effects. Most prior work on painterly rendering of videos, however, uses only a single painting style, with fixed global parameters, irrespective of objects and their layout in the images. This often leads to inadequate artistic control. Moreover, brush stroke orientation is typically assumed to follow an everywhere continuous directional field. In this paper, we propose a video painting system that accounts for the spatial support of objects in the images or videos, and uses this information to specify style parameters and stroke orientation for painterly rendering. Since objects occupy distinct image locations and move relatively smoothly from one video frame to another, our object-based painterly rendering approach is characterized by style parameters that coherently vary in space and time. Space-time-varying style parameters enable more artistic freedom, such as emphasis/de-emphasis, increase or decrease of contrast, exaggeration or abstraction of different objects in the scene in a temporally coherent fashion.

  8. Horizons versus singularities in spherically symmetric space-times

    SciTech Connect

    Bronnikov, K. A.; Elizalde, E.; Odintsov, S. D.; Zaslavskii, O. B.

    2008-09-15

    We discuss different kinds of Killing horizons possible in static, spherically symmetric configurations and recently classified as 'usual', 'naked', and 'truly naked' ones depending on the near-horizon behavior of transverse tidal forces acting on an extended body. We obtain the necessary conditions for the metric to be extensible beyond a horizon in terms of an arbitrary radial coordinate and show that all truly naked horizons, as well as many of those previously characterized as naked and even usual ones, do not admit an extension and therefore must be considered as singularities. Some examples are given, showing which kinds of matter are able to create specific space-times with different kinds of horizons, including truly naked ones. Among them are fluids with negative pressure and scalar fields with a particular behavior of the potential. We also discuss horizons and singularities in Kantowski-Sachs spherically symmetric cosmologies and present horizon regularity conditions in terms of an arbitrary time coordinate and proper (synchronous) time. It turns out that horizons of orders 2 and higher occur in infinite proper times in the past or future, but one-way communication with regions beyond such horizons is still possible.

  9. Computationally efficient ASIC implementation of space-time block decoding

    NASA Astrophysics Data System (ADS)

    Cavus, Enver; Daneshrad, Babak

    2002-12-01

    In this paper, we describe a computationally efficient ASIC design that leads to a highly efficient power and area implementation of a space-time block decoder compared to a direct implementation of the original algorithm. Our study analyzes alternative methods of evaluating as well as implementing the previously reported maximum likelihood algorithms (Tarokh et al. 1998) for a more favorable hardware design. In our previous study (Cavus et al. 2001), after defining some intermediate variables at the algorithm level, highly computationally efficient decoding approaches, namely the sign and double-sign methods, were developed and their effectiveness was illustrated for 2x2, 8x3 and 8x4 systems using BPSK, QPSK, 8-PSK, or 16-QAM modulation. In this work, alternative architectures for the decoder implementation are investigated and an implementation based on a low-computation approach is proposed. The techniques applied at the higher algorithm and architectural levels lead to a substantial simplification of the hardware architecture and significantly reduced power consumption. The proposed architecture is being fabricated in a TSMC 0.18 μm process.
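
    For orientation, the sketch below shows the standard maximum-likelihood decoding of the 2x1 Alamouti space-time block code (generic textbook combining, not the sign or double-sign ASIC methods of the paper); the channel gains and symbols are arbitrary, and noise is omitted from the usage example for clarity.

        import numpy as np

        def alamouti_decode(r1, r2, h1, h2, constellation):
            """ML decoding of the 2x1 Alamouti code via linear combining and slicing.

            r1, r2 : received samples in the two symbol periods
            h1, h2 : channel gains from the two transmit antennas (assumed static)
            """
            gain = abs(h1) ** 2 + abs(h2) ** 2
            s1_tilde = np.conj(h1) * r1 + h2 * np.conj(r2)   # combining (Alamouti 1998)
            s2_tilde = np.conj(h2) * r1 - h1 * np.conj(r2)
            s1_hat = min(constellation, key=lambda s: abs(s1_tilde - gain * s))
            s2_hat = min(constellation, key=lambda s: abs(s2_tilde - gain * s))
            return s1_hat, s2_hat

        qpsk = [(1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2),
                (-1 + 1j) / np.sqrt(2), (-1 - 1j) / np.sqrt(2)]
        h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j
        s1, s2 = qpsk[0], qpsk[3]
        r1 = h1 * s1 + h2 * s2                       # first symbol period
        r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)    # second symbol period
        print(alamouti_decode(r1, r2, h1, h2, qpsk) == (s1, s2))   # True without noise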

  10. A global conformal extension theorem for perfect fluid Bianchi space-times

    SciTech Connect

    Luebbe, Christian Tod, Paul

    2008-12-15

    A global extension theorem is established for isotropic singularities in polytropic perfect fluid Bianchi space-times. When an extension is possible, the limiting behaviour of the physical space-time near the singularity is analysed.

  11. Interim storage of spent and disused sealed sources: optimisation of external dose distribution in waste grids using the MCNPX code.

    PubMed

    Paiva, I; Oliveira, C; Trindade, R; Portugal, L

    2005-01-01

    Radioactive sealed sources are in use worldwide in different fields of application. When no further use is foreseen for these sources, they become spent or disused sealed sources and are subject to a specific waste management scheme. Portugal does have a Radioactive Waste Interim Storage Facility where spent or disused sealed sources are conditioned in a cement matrix inside concrete drums and following the geometrical disposition of a grid. The gamma dose values around each grid depend on the drum's enclosed activity and radionuclides considered, as well as on the drums distribution in the various layers of the grid. This work proposes a method based on the Monte Carlo simulation using the MCNPX code to estimate the best drum arrangement through the optimisation of dose distribution in a grid. Measured dose rate values at 1 m distance from the surface of the chosen optimised grid were used to validate the corresponding computational grid model.

  12. Development of a δ f code for studying the effect of non-Maxwellian velocity distributions on SRS.

    NASA Astrophysics Data System (ADS)

    Brunner, S.; Valeo, E.; Krommes, J. A.

    2000-10-01

    It has been shown that non-Maxwellian velocity distributions resulting from non-classical heating and transport in laser fusion-type plasmas can significantly affect the linear kinetic response of such systems (B. B. Afeyan et al., Phys. Rev. Lett. 80, 2322 (1998)). In particular, for the electron plasma waves (EPW), the reduction in their Landau damping may have a strong effect on the gain of stimulated Raman scattering (SRS). We are presently developing a δf code that should enable the simulation of the fully non-linear evolution of SRS, while accurately taking account of the critical non-Maxwellian tails of the background distributions. Different techniques developed for carrying out nonlocal transport simulations (S. Brunner, E. Valeo, and J. A. Krommes, Phys. Plasmas 7, 2810 (2000)) will be used to provide the backgrounds to these microinstability simulations.

  13. A divide-and-conquer method for space-time series prediction

    NASA Astrophysics Data System (ADS)

    Deng, Min; Yang, Wentao; Liu, Qiliang; Zhang, Yunfei

    2017-01-01

    Space-time series can be partitioned into space-time smooth and space-time rough, which represent different scale characteristics. However, most existing methods for space-time series prediction directly address space-time series as a whole and do not consider the interaction between space-time smooth and space-time rough in the process of prediction. This will possibly affect the accuracy of space-time series prediction, because the interaction between these two components (i.e., space-time smooth and space-time rough) may cause one of them as dominant component, thus weakening the behavior of the other. Therefore, a divide-and-conquer method for space-time prediction is proposed in this paper. First, the observational fine-grained data are decomposed into two components: coarse-grained data and the residual terms of fine-grained data. These two components are then modeled, respectively. Finally, the predicted values of the fine-grained data are obtained by integrating the predicted values of the coarse-grained data with the residual terms. The experimental results of two groups of different space-time series demonstrated the effectiveness of the divide-and-conquer method.
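
    A minimal one-dimensional illustration of the divide-and-conquer idea described above (not the authors' method): the series is split into a smooth (coarse-grained) part and a rough residual, each part is forecast with a deliberately simple stand-in model, and the two forecasts are recombined.

        import numpy as np

        def divide_and_conquer_forecast(series, window=4):
            """Illustrative one-step forecast for a single location."""
            # "Divide": coarse-grained component (moving average) + rough residual
            kernel = np.ones(window) / window
            smooth = np.convolve(series, kernel, mode="same")
            rough = series - smooth

            # "Conquer": model each component separately (simple stand-ins).
            t = np.arange(len(series))
            a, b = np.polyfit(t, smooth, 1)                   # linear trend for the smooth part
            smooth_next = a * (t[-1] + 1) + b
            phi = np.dot(rough[1:], rough[:-1]) / max(np.dot(rough[:-1], rough[:-1]), 1e-12)
            rough_next = phi * rough[-1]                      # AR(1)-like persistence for the rough part

            # "Integrate": recombine the component forecasts.
            return smooth_next + rough_next

        x = np.sin(np.linspace(0, 6, 60)) + 0.1 * np.random.default_rng(1).standard_normal(60)
        print(divide_and_conquer_forecast(x))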

  14. A Multimodal Approach to Coding Discourse: Collaboration, Distributed Cognition, and Geometric Reasoning

    ERIC Educational Resources Information Center

    Evans, Michael A.; Feenstra, Eliot; Ryon, Emily; McNeill, David

    2011-01-01

    Our research aims to identify children's communicative strategies when faced with the task of solving a geometric puzzle in CSCL contexts. We investigated how to identify and trace "distributed cognition" in problem-solving interactions based on discursive cohesion to objects, participants, and prior discursive content, and geometric and…

  15. Code Optimization for the Choi-Williams Distribution for ELINT Applications

    DTIC Science & Technology

    2009-12-01

    Applied Mathematics Series-55, Issued June 1964, Seventh Printing, May 1968, with corrections. [13] Oppenheim & Schafer, Digital Signal Processing ... Phillip E. Pace i REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is...PAGES 98 14. SUBJECT TERMS Choi-Williams Distribution, Signal Processing , Algorithm Optimization, C programming, Low Probability of Intercept (LPI

  16. High-Order Space-Time Methods for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2013-01-01

    Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p^2.) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.
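
    As a small illustration of the time discretization mentioned above, the sketch below applies the two-stage Radau IIA collocation method (order 3) to the scalar test problem y' = λy; the Butcher coefficients are the standard Radau IIA ones, while the test problem and step size are arbitrary and unrelated to the paper's PDE solver.

        import numpy as np

        # Two-stage Radau IIA (order 3): c = [1/3, 1]
        A = np.array([[5 / 12, -1 / 12],
                      [3 / 4,   1 / 4]])
        b = np.array([3 / 4, 1 / 4])

        def radau_iia_step(y, lam, h):
            # For y' = lam*y the stage equations k = lam*(y + h*A@k) are linear in k:
            k = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
            return y + h * b @ k

        lam, h, y = -2.0, 0.1, 1.0
        for _ in range(10):
            y = radau_iia_step(y, lam, h)
        print(y, np.exp(lam * 1.0))   # numerical vs exact solution at t = 1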

  17. Space-time ambiguity functions for electronically scanned ISR applications

    NASA Astrophysics Data System (ADS)

    Swoboda, John; Semeter, Joshua; Erickson, Philip

    2015-05-01

    Electronically steerable array (ESA) technology has recently been applied to incoherent scatter radar (ISR) systems. These arrays allow for pulse-to-pulse steering of the antenna beam to collect data in a three-dimensional region. This is in direct contrast to dish-based antennas, where ISR acquisition is limited at any one time to observations in a two-dimensional slice. This new paradigm allows for more flexibility in the measurement of ionospheric plasma parameters. Multiple ESA-based ISR systems operate currently in the high-latitude region where the ionosphere is highly variable in both space and time. Because of the highly dynamic nature of the ionosphere in this region, it is important to differentiate between measurement-induced artifacts and the true behavior of the plasma. Often, three-dimensional ISR data produced by ESA systems are fitted in a spherical coordinate space and then the parameters are interpolated to a Cartesian grid, potentially introducing error and impacting the reconstructions of the plasma parameters. To take advantage of the new flexibility inherent in ESA systems, we present a new way of analyzing ISR observations through use of the space-time ambiguity function. The use of this new measurement ambiguity function allows us to pose the ISR observational problem in terms of a linear inverse problem whose goal is the estimate of the time domain lags of the intrinsic plasma autocorrelation function used for parameter fitting. The framework allows us to explore the impact of nonuniformity in plasma parameters in both time and space. We discuss examples of possible artifacts in high-latitude situations and discuss possible ways of reducing them and improving the quality of data products from electronically steerable ISRs.

  18. Space-time structure of long ocean swell fields

    NASA Astrophysics Data System (ADS)

    Delpey, Matthias T.; Ardhuin, Fabrice; Collard, Fabrice; Chapron, Bertrand

    2010-12-01

    The space-time structure of long-period ocean swell fields is investigated, with particular attention given to features in the direction orthogonal to the propagation direction. This study combines space-borne synthetic aperture radar (SAR) data with numerical model hindcasts and time series recorded by in situ instruments. In each data set the swell field is defined by a common storm source. The correlation of swell height time series is very high along a single great circle path with a time shift given by the deep water dispersion relation of the dominant swells. This correlation is also high for locations situated on different great circles in entire ocean basins. Given the Earth radius R, we define the distance from the source Rα and the transversal angle β so that α and β would equal the colatitude and longitude for a storm centered on the North Pole. Outside of land influence, the swell height field at time t, Hss(α, β, t), is well approximated by a function Hss,0(t - Rα/Cg)/? times another function r2(β), where Cg is a representative group speed. Here r2(β) derived from SAR data is very broad, with a width at half the maximum that is larger than 70°, and varies significantly from storm to storm. Land shadows introduce further modifications so that in general r2 is a function of β and α. This separation of variables and the smoothness of the Hss field allow the estimation of the full field of Hss from sparse measurements, such as wave mode SAR data, combined with one time series, such as that provided by a single buoy. A first crude estimation of a synthetic Hss field based on this principle already shows that swell hindcasts and forecasts can be improved by assimilating such synthetic observations.
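
    A minimal numerical illustration of the separation-of-variables structure described above (synthetic data only, not the SAR analysis of the paper): a field built as an along-propagation profile times a transversal function r2(β) is recovered exactly by a rank-1 factorization, which is what makes reconstruction from sparse measurements plausible.

        import numpy as np

        alpha = np.linspace(0.1, 1.0, 50)          # angular distance from the storm (rad)
        beta = np.linspace(-np.pi / 2, np.pi / 2, 60)   # transversal angle (rad)
        f = np.exp(-3 * alpha)                     # made-up along-propagation decay
        r2 = np.cos(beta) ** 2                     # made-up broad transversal profile
        H = np.outer(f, r2)                        # separable synthetic swell-height field

        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        f_est, r2_est = U[:, 0] * s[0], Vt[0]      # rank-1 factors recovered up to scale
        print(np.allclose(np.outer(f_est, r2_est), H))   # True: the field is separable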

  19. Space-time dynamics estimation from space mission tracking data

    NASA Astrophysics Data System (ADS)

    Dirkx, D.; Noomen, R.; Visser, P. N. A. M.; Gurvits, L. I.; Vermeersen, L. L. A.

    2016-03-01

    Aims: Many physical parameters that can be estimated from space mission tracking data influence both the translational dynamics and proper time rates of observers. These different proper time rates cause a variability of the time transfer observable beyond that caused by their translational (and rotational) dynamics. With the near-future implementation of transponder laser ranging, these effects will become increasingly important, and will require a re-evaluation of the common data analysis practice of using a priori time ephemerides, which is the goal of this paper. Methods: We develop a framework for the simultaneous estimation of the initial translational state and the initial proper time of an observer, with the goal of facilitating robust tracking data analysis from next-generation space missions carrying highly accurate clocks and tracking equipment. Using our approach, the influence of physical parameters on both translational and time dynamics are considered at the same level in the analysis, and mutual correlations between the signatures of the two are automatically identified. We perform a covariance analysis using our proposed method with simulated laser data from Earth-based stations to both a Mars and Mercury lander. Results: Using four years of tracking data for the Mars lander simulations, we find a difference between our results using the simultaneous space-time dynamics estimation and the classical analysis technique (with an a priori time ephemeris) of around 0.1% in formal errors and correlation coefficients. For a Mercury lander this rises to around 1% for a one-month mission and 10% for a four-year mission. By means of Monte Carlo simulations, we find that using an a priori time ephemeris of representative accuracy will result in estimation errors that are orders of magnitude above the formal error when processing highly accurate laser time transfer data.

  20. Holism and life manifestations: molecular and space-time biology.

    PubMed

    Krecek, J

    2010-01-01

    Appeals of philosophers to look for new concepts in sciences are being met with a weak response. Limited attention is paid to the relation between synthetic and analytic approach in solving problems of biology. An attempt is presented to open a discussion on a possible role of holism. The term "life manifestations" is used in accordance with phenomenology. Multicellular creatures maintain milieu intérieur to keep an aqueous milieu intracellulair in order to transform the energy of nutrients into the form utilizable for driving cellular life manifestations. Milieu intérieur enables to integrate this kind of manifestations into life manifestations of the whole multicellular creatures. The integration depends on a uniqueness and uniformity of the genome of cells, on their mutual recognition and adherence. The processes of ontogenetic development represent the natural mode of integration of cellular life manifestations. Functional systems of multicellular creatures are being established by organization of integrable cells using a wide range of developmental processes. Starting from the zygote division the new being displays all properties of a whole creature, although its life manifestations vary. Therefore, the whole organism is not only more than its parts, as supposed by holism, but also more than developmental stages of its life manifestations. Implicitly, the units of whole multicellular creature are rather molecular and developmental events than the cells per se. Holism, taking in mind the existence of molecular and space-time biology, could become a guide in looking for a new mode of the combination of analytical and synthetic reasoning in biology.

  1. Understanding human activity patterns based on space-time-semantics

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Li, Songnian

    2016-11-01

    Understanding human activity patterns plays a key role in various applications in an urban environment, such as transportation planning and traffic forecasting, urban planning, public health and safety, and emergency response. Most existing studies of human activity patterns focus mainly on the spatiotemporal dimensions and lack consideration of the underlying semantic context. In fact, what people do and discuss at a place, which indicates what is happening there, cannot simply be neglected, because it is at the root of human mobility patterns. We believe that the geo-tagged semantic context, representing what individuals do and discuss at a place and a specific time, drives the formation of specific human activity patterns. In this paper, we aim to model human activity patterns based not only on space and time but also on the associated semantics, and we attempt to test the hypothesis that similar mobility patterns may have different motivations. We develop a spatiotemporal-semantic model that quantitatively expresses human activity patterns using topic models, leading to a joint analysis of space, time and semantics. A case study is conducted with Twitter data from Toronto based on our model. By computing the similarities between users in terms of their spatiotemporal, semantic and spatiotemporal-semantic patterns, we find that only a small number of users (2.72%) have very similar activity patterns, while the majority (87.14%) show different activity patterns (i.e., similar spatiotemporal patterns with different semantic patterns, similar semantic patterns with different spatiotemporal patterns, or differences in both). The proportion of users with very similar activity patterns decreases by 56.41% after incorporating semantic information into the corresponding spatiotemporal patterns, which quantitatively supports the hypothesis.
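
    The following minimal sketch (with a toy three-user corpus and made-up space-time histograms) illustrates the kind of spatiotemporal-semantic comparison described above: per-user topic mixtures from a small topic model are compared alongside per-user space-time histograms, and the two similarities are combined. The combination rule and all data are illustrative assumptions, not the paper's actual model.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.metrics.pairwise import cosine_similarity

        # Toy corpora: all geo-tagged posts of each user concatenated into one document.
        user_docs = ["coffee work meeting downtown",
                     "coffee office project deadline",
                     "park running lake trail"]

        # Semantic pattern: per-user topic mixture from a small LDA model.
        X = CountVectorizer().fit_transform(user_docs)
        topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

        # Spatiotemporal pattern: per-user histogram over (zone, hour-of-day) bins (synthetic).
        st = np.array([[5, 1, 0, 2], [4, 2, 0, 1], [0, 0, 6, 3]], dtype=float)
        st /= st.sum(axis=1, keepdims=True)

        sem_sim = cosine_similarity(topics)
        st_sim = cosine_similarity(st)
        joint_sim = sem_sim * st_sim      # one simple way of combining the two patterns
        print(joint_sim.round(2))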

  2. Space-time LAI variability in Northern Puglia (Italy) from SPOT VGT data.

    PubMed

    Balacco, Gabriella; Figorito, Benedetto; Tarantino, Eufemia; Gioia, Andrea; Iacobellis, Vito

    2015-07-01

    The vegetation space-time variability during 1999-2010 in the north of the Apulia region (Southern Italy) was analysed using SPOT VEGETATION (VGT) sensor data. Three bands of VEGETATION (RED, NIR and SWIR) were used to implement the vegetation index named reduced simple ratio (RSR) to derive leaf area index (LAI). The monthly average LAI is an indicator of biomass and canopy cover, while the difference between the annual maximum and minimum LAI is an indicator of annual leaf turnover. The space-time distribution of LAI at the catchment scale was analysed over the examined period to detect the consistency of vegetation dynamics in the study area. A widespread increase in LAI was observed over the examined years that cannot be explained solely in terms of increasing water availability. Thus, in order to explain such a general behaviour in terms of climatic factors, the analysis was repeated after stratifying by land cover class, focusing on the most widespread classes: forest and wheat. An interesting ascending-descending behaviour was observed in the relationship between inter-annual increments of maximum LAI and rainfall, and in particular, a strong negative correlation was found when the rainfall amount in January and February exceeded a critical threshold of about 100 mm.
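
    As a hedged illustration, the snippet below computes a reduced simple ratio of the commonly used form RSR = (NIR/RED) x (SWIRmax - SWIR)/(SWIRmax - SWIRmin) and converts it to LAI with placeholder linear coefficients; the reflectance values, SWIR bounds and LAI coefficients are assumptions for illustration only, not values from the study.

        import numpy as np

        def reduced_simple_ratio(red, nir, swir, swir_min, swir_max):
            """Reduced simple ratio, in its commonly quoted form."""
            return (nir / red) * (swir_max - swir) / (swir_max - swir_min)

        # Synthetic reflectances for one pixel (illustrative values only).
        red, nir, swir = 0.05, 0.35, 0.18
        rsr = reduced_simple_ratio(red, nir, swir, swir_min=0.10, swir_max=0.45)

        # Empirical RSR-to-LAI conversion; a and b are placeholder coefficients that
        # would be calibrated per land-cover class (forest, wheat, ...).
        a, b = 1.2, 0.1
        lai = a * rsr + b
        print(round(rsr, 2), round(lai, 2))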

  3. Nonextensive statistics in stringy space-time foam models and entangled meson states

    SciTech Connect

    Mavromatos, Nick E.; Sarkar, Sarben

    2009-05-15

    The possibility of generation of nonextensive statistics, in the sense of Tsallis, due to space-time foam is discussed within the context of a particular kind of foam in string/brane theory, the D-particle foam model. The latter involves pointlike brane defects (D-particles), which provide the topologically nontrivial foamy structures of space-time. A stochastic Langevin equation for the velocity recoil of D-particles can be derived from the pinched approximation for a sum over genera in the calculation of the partition function of a bosonic string in the presence of heavy D-particles. The string coupling in standard perturbation theory is related to the exponential of the expectation of the dilaton. Inclusion of fluctuations of the dilaton itself and uncertainties in the string background will then necessitate fluctuations in g_s. The fluctuation in the string coupling in the sum over genera typically leads to a generic structure of the Langevin equation where the coefficient of the noise term fluctuates owing to dependence on the string coupling g_s. The positivity of g_s leads naturally to a stochastic modeling of its distribution with a χ distribution. This then rigorously implies a Tsallis-type nonextensive or, more generally, a superstatistics distribution for the recoil velocity of D-particles. As a concrete and physically interesting application, we provide a rigorous estimate of an ω-like effect, pertinent to CPT violating modifications of the Einstein-Podolsky-Rosen correlators in entangled states of neutral kaons. In the case of D-particle foam fluctuations, which respect the Lorentz symmetry of the vacuum on average, we find that the ω effect may be within the range of sensitivity of future meson factories.
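
    The superstatistics mechanism invoked here can be illustrated numerically: if the width of a Gaussian velocity distribution itself fluctuates (below, through a chi-distributed coupling strength standing in for g_s), the marginal velocity distribution acquires heavier, non-Gaussian tails. The sketch only demonstrates this generic effect; the degrees of freedom and scales are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200_000
        k = 3                                    # chi distribution with k degrees of freedom

        # Fluctuating noise strength: g is chi-distributed (square root of a chi-square sample).
        g = np.sqrt(rng.chisquare(df=k, size=n))

        # Conditional on g, the recoil velocity is Gaussian with width proportional to g;
        # marginalizing over g produces a heavier-tailed (superstatistics-type) distribution.
        v_super = rng.normal(0.0, 1.0, size=n) * g
        v_gauss = rng.normal(0.0, 1.0, size=n) * np.sqrt(np.mean(g**2))  # same variance, fixed width

        def excess_kurtosis(x):
            x = x - x.mean()
            return np.mean(x**4) / np.mean(x**2)**2 - 3.0

        print(excess_kurtosis(v_gauss))   # close to 0 for the Gaussian reference
        print(excess_kurtosis(v_super))   # positive: heavier tails from the fluctuating coupling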

  4. Diverse and pervasive subcellular distributions for both coding and long noncoding RNAs

    PubMed Central

    Wilk, Ronit; Hu, Jack; Blotsky, Dmitry; Krause, Henry M.

    2016-01-01

    In a previous analysis of 2300 mRNAs via whole-mount fluorescent in situ hybridization in cellularizing Drosophila embryos, we found that 70% of the transcripts exhibited some form of subcellular localization. To see whether this prevalence is unique to early Drosophila embryos, we examined ∼8000 transcripts over the full course of embryogenesis and ∼800 transcripts in late third instar larval tissues. The numbers and varieties of new subcellular localization patterns are both striking and revealing. In the much larger cells of the third instar larva, virtually all transcripts observed showed subcellular localization in at least one tissue. We also examined the prevalence and variety of localization mechanisms for >100 long noncoding RNAs. All of these were also found to be expressed and subcellularly localized. Thus, subcellular RNA localization appears to be the norm rather than the exception for both coding and noncoding RNAs. These results, which have been annotated and made available on a recompiled database, provide a rich and unique resource for functional gene analyses, some examples of which are provided. PMID:26944682

  5. Numeral series hidden in the distribution of atomic mass of amino acids to codon domains in the genetic code.

    PubMed

    Wohlin, Åsa

    2015-03-21

    The distribution of codons in the nearly universal genetic code is a long discussed issue. At the atomic level, the numeral series 2x^2 (x = 5-0) lies behind electron shells and orbitals. Numeral series appear in formulas for spectral lines of hydrogen. The question here was if some similar scheme could be found in the genetic code. A table of 24 codons was constructed (synonyms counted as one) for 20 amino acids, four of which have two different codons. An atomic mass analysis was performed, built on common isotopes. It was found that a numeral series 5 to 0 with exponent 2/3 times 10^2 revealed detailed congruency with codon-grouped amino acid side-chains, simultaneously with the division on atom kinds, further with main 3rd base groups, backbone chains and with codon-grouped amino acids in relation to their origin from glycolysis or the citrate cycle. Hence, it is proposed that this series in a dynamic way may have guided the selection of amino acids into codon domains. Series with simpler exponents also showed noteworthy correlations with the atomic mass distribution on main codon domains; especially the 2x^2-series times a factor 16 appeared as a conceivable underlying level, both for the atomic mass and charge distribution. Furthermore, it was found that atomic mass transformations between numeral systems, possibly interpretable as dimension degree steps, connected the atomic mass of codon bases with codon-grouped amino acids and with the exponent 2/3-series in several astonishing ways. Thus, it is suggested that they may be part of a deeper reference system.

  6. Spinor waves in a space-time lattice (II)

    NASA Astrophysics Data System (ADS)

    Wouthuysen, S. A.

    1994-02-01

    In a previous note, an exceptional space-time lattice was found by a roundabout heuristic process. This process was far from convincing; here a more translucent characterization of the lattice is presented. A cornerstone is the consideration of pairs of reciprocal lattices, together with the basic symmetry (S_4) of the metric tensor. The basic requirement is that one member of a pair of reciprocal lattices contains the other as a sublattice. One preferred lattice is discussed in some detail; it contains three copies of its reciprocal lattice, and it is the simplest example satisfying the requirements. In the expression of the metric tensor in terms of the lattice generators a possible topology on the lattice is suggested. By means of this topology, propagation of spinor waves can be formulated. This proposed—the simplest—propagation mechanism is inhibited, though, by the fact that the three sublattices are required to carry the two types of spinors alternately. This inhibition can be lifted by introducing a second type of elementary propagation, to next nearest neighbors. If this inhibition is only feebly lifted, this would result in particles with mass small as compared to the inverse of the lattice constant, presumably the Planck mass. Including the propagation to next nearest neighbors leads to spinor waves with six components, two components for each sublattice. In the long-wavelength limit four of them obey a massive Dirac equation, while the remaining two obey a Weyl equation. These considerations conceivably provide a root for the lack of parity invariance in nature, and for the joint occurrence of pairs of massive and massless spinor waves. The construction, furthermore, allows one to accommodate just three different families of spinor waves of this type. Extension of the above arguments outside the realm of the long-wavelength limit forcibly makes the lattice concept independent of the original continuous Minkowski spacetime: the latter is no longer

  7. MCNP(TM) Release 6.1.1 beta: Creating and Testing the Code Distribution

    SciTech Connect

    Cox, Lawrence J.; Casswell, Laura

    2014-06-12

    This report documents the preparations for and testing of the production release of MCNP6™1.1 beta through RSICC at ORNL. It addresses tests on supported operating systems (Linux, MacOSX, Windows) with the supported compilers (Intel, Portland Group and gfortran). Verification and Validation test results are documented elsewhere. This report does not address in detail the overall packaging of the distribution. Specifically, it does not address the nuclear and atomic data collection, the other included software packages (MCNP5, MCNPX and MCNP6) and the collection of reference documents.

  8. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.

    2007-07-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g., in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation

  9. Renormalized stress tensor in Kerr space-time: Numerical results for the Hartle-Hawking vacuum

    SciTech Connect

    Duffy, Gavin; Ottewill, Adrian C.

    2008-01-15

    We show that the pathology which afflicts the Hartle-Hawking vacuum on the Kerr black hole space-time can be regarded as due to rigid rotation of the state with the horizon in the sense that, when the region outside the speed-of-light surface is removed by introducing a mirror, there is a state with the defining features of the Hartle-Hawking vacuum. In addition, we show that, when the field is in this state, the expectation value of the energy-momentum stress tensor measured by an observer close to the horizon and rigidly rotating with it corresponds to that of a thermal distribution at the Hawking temperature rigidly rotating with the horizon.

  10. Universal space-time scaling symmetry in the dynamics of bosons across a quantum phase transition

    NASA Astrophysics Data System (ADS)

    Clark, Logan W.; Feng, Lei; Chin, Cheng

    2016-11-01

    The dynamics of many-body systems spanning condensed matter, cosmology, and beyond are hypothesized to be universal when the systems cross continuous phase transitions. The universal dynamics are expected to satisfy a scaling symmetry of space and time with the crossing rate, inspired by the Kibble-Zurek mechanism. We test this symmetry based on Bose condensates in a shaken optical lattice. Shaking the lattice drives condensates across an effectively ferromagnetic quantum phase transition. After crossing the critical point, the condensates manifest delayed growth of spin fluctuations and develop antiferromagnetic spatial correlations resulting from the sub-Poisson distribution of the spacing between topological defects. The fluctuations and correlations are invariant in scaled space-time coordinates, in support of the scaling symmetry of quantum critical dynamics.

  11. Space-time properties of Gram-Schmidt vectors in classical Hamiltonian evolution.

    PubMed

    Green, Jason R; Jellinek, Julius; Berry, R Stephen

    2009-12-01

    Not all tangent space directions play equivalent roles in the local chaotic motions of classical Hamiltonian many-body systems. These directions are numerically represented by basis sets of mutually orthogonal Gram-Schmidt vectors, whose statistical properties may depend on the chosen phase space-time domain of a trajectory. We examine the degree of stability and localization of Gram-Schmidt vector sets simulated with trajectories of a model three-atom Lennard-Jones cluster. Distributions of finite-time Lyapunov exponent and inverse participation ratio spectra formed from short-time histories reveal that ergodicity begins to emerge on different time scales for trajectories spanning different phase-space regions, in a narrow range of total energy and history length. Over a range of history lengths, the most localized directions were typically the most unstable and corresponded to atomic configurations near potential landscape saddles.
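
    A minimal sketch of the Gram-Schmidt (QR-based) computation of finite-time Lyapunov exponents is given below; for brevity it uses the two-dimensional Hénon map with its analytic Jacobian as a stand-in for the Lennard-Jones cluster trajectories studied in the paper.

        import numpy as np

        # Finite-time Lyapunov exponents via repeated Gram-Schmidt (QR) reorthonormalization
        # of tangent-space vectors, illustrated on the Henon map.
        def henon(x, a=1.4, b=0.3):
            return np.array([1 - a * x[0]**2 + x[1], b * x[0]])

        def jacobian(x, a=1.4, b=0.3):
            return np.array([[-2 * a * x[0], 1.0], [b, 0.0]])

        x = np.array([0.1, 0.1])
        Q = np.eye(2)                      # current Gram-Schmidt vector set
        log_r = np.zeros(2)
        n_steps = 20_000

        for _ in range(n_steps):
            Q = jacobian(x) @ Q            # evolve the tangent vectors
            Q, R = np.linalg.qr(Q)         # reorthonormalize (Gram-Schmidt step)
            log_r += np.log(np.abs(np.diag(R)))
            x = henon(x)

        print(log_r / n_steps)             # Lyapunov spectrum estimate (per iteration)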

  12. Inference of cowpox virus transmission rates between wild rodent host classes using space-time interaction.

    PubMed

    Carslake, David; Bennett, Malcolm; Hazel, Sarah; Telfer, Sandra; Begon, Michael

    2006-04-07

    There have been virtually no studies of 'who acquires infection from whom' in wildlife populations, but patterns of transmission within and between different classes of host are likely to be reflected in the spatiotemporal distribution of infection among those host classes. Here, we use a modified form of K-function analysis to test for space-time interaction among bank voles and wood mice infectious with cowpox virus. There was no evidence for transmission between the two host species, supporting previous evidence that they act as separate reservoirs for cowpox. Among wood mice, results suggested that transmission took place primarily between individuals of the opposite sex, raising the possibility that cowpox is sexually transmitted in this species. Results for bank voles indicated that infected females might be a more important source of infection to either sex than are males. The suggestion of different modes of transmission in the two species is itself consistent with the apparent absence of transmission between species.
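
    The flavour of a space-time interaction test can be conveyed with a simple Knox-style permutation test: count case pairs that are close in both space and time and compare with the counts obtained after randomly permuting the case times. The distance and time windows and the synthetic data below are arbitrary assumptions; the paper itself uses a modified K-function rather than this exact statistic.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic case data: (x, y) location and time of each infectious individual.
        n = 60
        xy = rng.uniform(0, 10, size=(n, 2))
        t = rng.uniform(0, 365, size=n)

        def close_pairs(xy, t, d_max=1.0, t_max=30.0):
            """Number of case pairs that are close in both space and time."""
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            dt = np.abs(t[:, None] - t[None, :])
            mask = np.triu(np.ones((len(t), len(t)), dtype=bool), k=1)
            return int(np.sum((d < d_max) & (dt < t_max) & mask))

        observed = close_pairs(xy, t)

        # Permutation (Monte Carlo) reference: shuffling times breaks any space-time link.
        null = [close_pairs(xy, rng.permutation(t)) for _ in range(999)]
        p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
        print(observed, round(p_value, 3))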

  13. Universal space-time scaling symmetry in the dynamics of bosons across a quantum phase transition.

    PubMed

    Clark, Logan W; Feng, Lei; Chin, Cheng

    2016-11-04

    The dynamics of many-body systems spanning condensed matter, cosmology, and beyond are hypothesized to be universal when the systems cross continuous phase transitions. The universal dynamics are expected to satisfy a scaling symmetry of space and time with the crossing rate, inspired by the Kibble-Zurek mechanism. We test this symmetry based on Bose condensates in a shaken optical lattice. Shaking the lattice drives condensates across an effectively ferromagnetic quantum phase transition. After crossing the critical point, the condensates manifest delayed growth of spin fluctuations and develop antiferromagnetic spatial correlations resulting from the sub-Poisson distribution of the spacing between topological defects. The fluctuations and correlations are invariant in scaled space-time coordinates, in support of the scaling symmetry of quantum critical dynamics.

  14. Implementation of polarization-coded free-space BB84 quantum key distribution

    NASA Astrophysics Data System (ADS)

    Kim, Y.-S.; Jeong, Y.-C.; Kim, Y.-H.

    2008-06-01

    We report on the implementation of a Bennett-Brassard 1984 quantum key distribution protocol over a free-space optical path on an optical table. Attenuated laser pulses and Pockels cells driven by a pseudorandom number generator are employed to prepare polarization-encoded photons. The sifted key generation rate of 23.6 kbits per second and the quantum bit error rate (QBER) of 3% have been demonstrated at the average photon number per pulse μ = 0.16. This QBER is sufficiently low to extract final secret keys from shared sifted keys via error correction and privacy amplification. We also tested the long-distance capability of our system by adding optical losses to the quantum channel and found that the QBER remains the same regardless of the loss.
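
    A toy Monte Carlo of BB84 sifting and QBER estimation is sketched below; the detection probability and per-photon error probability are illustrative assumptions (loosely inspired by the reported μ and QBER), and no error correction or privacy amplification is modelled.

        import numpy as np

        rng = np.random.default_rng(3)
        n_pulses = 100_000
        error_prob = 0.03          # assumed error rate per detected photon
        detect_prob = 0.16         # crude stand-in for mean photon number / channel transmission

        # Alice: random bit and random basis (0 = rectilinear, 1 = diagonal) per pulse.
        alice_bits = rng.integers(0, 2, n_pulses)
        alice_basis = rng.integers(0, 2, n_pulses)
        # Bob: random measurement basis; detection is probabilistic.
        bob_basis = rng.integers(0, 2, n_pulses)
        detected = rng.random(n_pulses) < detect_prob

        # Bob's result equals Alice's bit when bases match, up to channel errors;
        # mismatched bases give a random result and are discarded during sifting.
        flips = rng.random(n_pulses) < error_prob
        bob_bits = np.where(alice_basis == bob_basis,
                            alice_bits ^ flips,
                            rng.integers(0, 2, n_pulses))

        sifted = detected & (alice_basis == bob_basis)
        qber = np.mean(alice_bits[sifted] != bob_bits[sifted])
        print(sifted.sum(), round(float(qber), 3))   # sifted-key length and QBER estimate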

  15. Unusual space-time patterning of the Fallon, Nevada leukemia cluster: Evidence of an infectious etiology.

    PubMed

    Francis, Stephen S; Selvin, Steve; Yang, Wei; Buffler, Patricia A; Wiemels, Joseph L

    2012-04-05

    The town of Fallon within Churchill County, Nevada exhibited an unusually high incidence of childhood leukemia during the years 1997-2003. We examined the temporal and spatial patterning of the leukemia case homes in comparison to the distribution of the general population at risk, other cancer incidence, and features of land use. Leukemia cases were predominantly diagnosed during the early to mid summer, exhibiting a seasonal bias. Leukemia cases lived outside of the "developed/urban" area of Fallon, predominantly in the "agriculture/pasture" region of Churchill County, circumscribing downtown Fallon. This pattern was different from the distribution of the underlying population (p-value<0.01) and different from the distribution of other cancers, which were evenly distributed when compared to the population (p-value=0.74). The unusual space-time patterning of childhood leukemia is consistent with the involvement of an infectious disease. A possible mode of transmission for such an infectious disease is by means of a vector, and mosquitoes are abundant in Churchill County outside of the urban area of Fallon. This region harbors a US Navy base, and a temporally concordant increase in military-wide childhood leukemia rates suggests the base as a possible source of the virus. Taken together, our current understanding of the etiology of childhood leukemia, the rural structure combined with temporal and geospatial patterning of these leukemia cases, and the high degree of population mixing in Fallon, suggest a possible infectious cause.

  16. Variable continental distribution of polymorphisms in the coding regions of DNA-repair genes.

    PubMed

    Mathonnet, Géraldine; Labuda, Damian; Meloche, Caroline; Wambach, Tina; Krajinovic, Maja; Sinnett, Daniel

    2003-01-01

    DNA-repair pathways are critical for maintaining the integrity of the genetic material by protecting against mutations due to exposure-induced damages or replication errors. Polymorphisms in the corresponding genes may be relevant in genetic epidemiology by modifying individual cancer susceptibility or therapeutic response. We report data on the population distribution of potentially functional variants in XRCC1, APEX1, ERCC2, ERCC4, hMLH1, and hMSH3 genes among groups representing individuals of European, Middle Eastern, African, Southeast Asian and North American descent. The data indicate little interpopulation differentiation in some of these polymorphisms and typical FST values ranging from 10 to 17% at others. Low FST was observed in APEX1 and hMSH3 exon 23 in spite of their relatively high minor allele frequencies, which could suggest the effect of balancing selection. In XRCC1, hMSH3 exon 21 and hMLH1, Africa clusters either with the Middle East and Europe or with Southeast Asia, which could be related to the demographic history of human populations, whereby human migrations and genetic drift rather than selection would account for the observed differences.

  17. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Johnson, Sarah J.; Lance, Andrew M.; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Ralph, T. C.; Symul, Thomas

    2017-02-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates.

  18. Black holes in loop quantum gravity: the complete space-time.

    PubMed

    Gambini, Rodolfo; Pullin, Jorge

    2008-10-17

    We consider the quantization of the complete extension of the Schwarzschild space-time using spherically symmetric loop quantum gravity. We find an exact solution corresponding to the semiclassical theory. The singularity is eliminated but the space-time still contains a horizon. Although the solution is known partially numerically and therefore a proper global analysis is not possible, a global structure akin to a singularity-free Reissner-Nordström space-time including a Cauchy horizon is suggested.

  19. Space-time analysis of visceral leishmaniasis in the State of Maranhão, Brazil.

    PubMed

    Furtado, Aline Santos; Nunes, Flavia Baluz Bezerra de Farias; dos Santos, Alcione Miranda; Caldas, Arlene de Jesus Mendes

    2015-12-01

    This study analyzed the spatial and temporal distribution of cases of visceral leishmaniasis in the State of Maranhão in the period from 2000 to 2009. Based on the number of reported cases, thematic maps were prepared to show the evolution of the geographical distribution of the disease in the state. The MCMC method was used for estimating the parameters of the Bayesian model for space-time identification of risk areas. From 2000 to 2009 there were 5389 reported cases of visceral leishmaniasis, distributed in all 18 Regional Health Units in the state, with the highest indices in the cities of Caxias, Imperatriz, Presidente Dutra and Chapadinha. The Regional Health Units with the highest relative risks per biennium were: Caxias and Barra do Corda (2000-2001), Imperatriz and President Dutra (2002-2003), Imperatriz and Caxias (2004-2005), Presidente Dutra and Codó (2006-2007) and Imperatriz and Caxias (2008-2009). There was considerable geographic expansion of visceral leishmaniasis in Maranhão, thus highlighting the need to adopt more effective measures for prevention and control of the disease in the state.

  20. Gust Acoustic Response of a Single Airfoil Using the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Scott, James (Technical Monitor); Wang, X. Y.; Chang, S. C.; Himansu, A.; Jorgenson, P. C. E.

    2003-01-01

    A 2D parallel Euler code based on the space-time conservation element and solution element (CE/SE) method is validated by solving the benchmark problem I in Category 3 of the Third CAA Workshop. This problem concerns the acoustic field generated by the interaction of a convected harmonic vortical gust with a single airfoil. Three gust frequencies, two gust configurations, and three airfoil geometries are considered. Numerical results in both the near and far fields are presented and compared with the analytical solutions, with solutions from the frequency-domain solver GUST3D, and with solutions from a time-domain high-order Discontinuous Spectral Element Method (DSEM). It is shown that the CE/SE solutions agree well with the GUST3D solution for the lowest frequency, while there are discrepancies between the CE/SE and GUST3D solutions for higher frequencies. However, the CE/SE solution is in good agreement with the DSEM solution for these higher frequencies. This demonstrates that the CE/SE method can produce accurate results for CAA problems involving complex geometries by using unstructured meshes.

  1. Mapping atomic motions with ultrabright electrons: towards fundamental limits in space-time resolution.

    PubMed

    Manz, Stephanie; Casandruc, Albert; Zhang, Dongfang; Zhong, Yinpeng; Loch, Rolf A; Marx, Alexander; Hasegawa, Taisuke; Liu, Lai Chung; Bayesteh, Shima; Delsim-Hashemi, Hossein; Hoffmann, Matthias; Felber, Matthias; Hachmann, Max; Mayet, Frank; Hirscht, Julian; Keskin, Sercan; Hada, Masaki; Epp, Sascha W; Flöttmann, Klaus; Miller, R J Dwayne

    2015-01-01

    The long-held objective of directly observing atomic motions during the defining moments of chemistry has been achieved based on ultrabright electron sources that have given rise to a new field of atomically resolved structural dynamics. This class of experiments requires not only sub-atomic spatial resolution combined with temporal resolution on the 100 femtosecond time scale, but also brightness approaching single-shot atomic-resolution conditions. The brightness condition reflects the fact that chemistry generally leads to irreversible changes in structure under the experimental conditions and that the nanoscale thin samples needed for electron structural probes pose upper limits to the available sample or "film" for atomic movies. Even in the case of reversible systems, the degree of excitation and thermal effects require the brightest sources possible for a given space-time resolution to observe the structural changes above background. Further progress in the field, particularly towards the study of biological systems and solution reaction chemistry, requires increased brightness and spatial coherence, as well as an ability to tune the electron scattering cross-section to meet sample constraints. The electron bunch density or intensity depends directly on the magnitude of the extraction field for photoemitted electron sources and on the electron energy distribution in the transverse and longitudinal planes of electron propagation. This work examines the fundamental limits to optimizing these parameters based on relativistic electron sources using re-bunching cavity concepts that are now capable of achieving 10 femtosecond time scale resolution to capture the fastest nuclear motions. This analysis is given for both diffraction and real space imaging of structural dynamics, in which there is several orders of magnitude higher space-time resolution with diffraction methods. The first experimental results from the Relativistic Electron Gun for Atomic

  2. Scaling properties and fractality in the distribution of coding segments in eukaryotic genomes revealed through a block entropy approach

    NASA Astrophysics Data System (ADS)

    Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis

    2010-11-01

    Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine or pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order, while this linearity has been shown in the case of the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of the Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long rangeness in the coding or noncoding alternation in
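
    A minimal block entropy calculation is sketched below: H(n) is the Shannon entropy of the empirical distribution of length-n blocks. The "coding/noncoding-like" surrogate with power-law-distributed spacer lengths is only a crude stand-in for the Cantor-like and genomic sequences analysed in the paper.

        import numpy as np
        from collections import Counter

        def block_entropy(seq, n):
            """Shannon entropy (in bits) of the empirical distribution of length-n blocks."""
            blocks = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
            counts = np.array(list(blocks.values()), dtype=float)
            p = counts / counts.sum()
            return float(-np.sum(p * np.log2(p)))

        rng = np.random.default_rng(4)
        random_seq = "".join(rng.choice(list("01"), size=20_000))

        # Crude surrogate for a coding/noncoding-like sequence: short "1" segments
        # separated by "0" spacers whose lengths follow a heavy-tailed distribution.
        spacers = np.clip(rng.pareto(1.5, size=8_000).astype(int) + 1, 1, 500)
        structured_seq = "".join("1" + "0" * int(s) for s in spacers)[:20_000]

        for n in (1, 2, 4, 8):
            print(n, round(block_entropy(random_seq, n), 3),
                  round(block_entropy(structured_seq, n), 3))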

  3. Threshold exceedance risk assessment in complex space-time systems

    NASA Astrophysics Data System (ADS)

    Angulo, José M.; Madrid, Ana E.; Romero, José L.

    2015-04-01

    Environmental and health impact risk assessment studies most often involve analysis and characterization of complex spatio-temporal dynamics. Recent developments in this context are addressed, among other objectives, to proper representation of structural heterogeneities, heavy-tailed processes, long-range dependence, intermittency, scaling behavior, etc. Extremal behaviour related to spatial threshold exceedances can be described in terms of geometrical characteristics and distribution patterns of excursion sets, which are the basis for construction of risk-related quantities, such as in the case of evolutionary study of 'hotspots' and long-term indicators of occurrence of extremal episodes. Derivation of flexible techniques, suitable for both the application under general conditions and the interpretation on singularities, is important for practice. Modern risk theory, a developing discipline motivated by the need to establish solid general mathematical-probabilistic foundations for rigorous definition and characterization of risk measures, has led to the introduction of a variety of classes and families, ranging from some conceptually inspired by specific fields of applications, to some intended to provide generality and flexibility to risk analysts under parametric specifications, etc. Quantile-based risk measures, such as Value-at-Risk (VaR), Average Value-at-Risk (AVaR), and generalization to spectral measures, are of particular interest for assessment under very general conditions. In this work, we study the application of quantile-based risk measures in the spatio-temporal context in relation to certain geometrical characteristics of spatial threshold exceedance sets. In particular, we establish a closed-form relationship between VaR, AVaR, and the expected value of threshold exceedance areas and excess volumes. Conditional simulation allows us, by means of empirical global and local spatial cumulative distributions, the derivation of various statistics of
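
    The quantile-based measures mentioned above can be computed directly from simulated exceedance characteristics; the sketch below estimates empirical VaR and AVaR at the 95% level from synthetic (log-normally distributed) exceedance-area samples, which stand in for the output of conditional simulation.

        import numpy as np

        def var_avar(losses, alpha=0.95):
            """Empirical Value-at-Risk and Average Value-at-Risk at level alpha."""
            losses = np.sort(np.asarray(losses))
            var = np.quantile(losses, alpha)
            avar = losses[losses >= var].mean()     # mean loss beyond the alpha-quantile
            return var, avar

        # Synthetic "excess area" samples, e.g. areas of threshold exceedance sets
        # obtained from conditional simulations of a space-time random field.
        rng = np.random.default_rng(5)
        excess_areas = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)

        var95, avar95 = var_avar(excess_areas, alpha=0.95)
        print(round(var95, 2), round(avar95, 2))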

  4. EPOCH code simulation of a non-thermal distribution driven by neutral beam injection in a high-beta plasma

    NASA Astrophysics Data System (ADS)

    Necas, A.; Tajima, T.; Nicks, S.; Magee, R.; Clary, R.; Roche, T.; Tri Alpha Energy Team

    2016-10-01

    In Tri Alpha Energy's C-2U experiment, advanced beam-driven field-reversed configuration (FRC) plasmas were sustained via tangential neutral beam injection. The dominant fast ion population made a dramatic impact on the overall plasma performance. To explain an experimentally observed anomalous neutron signal (100x thermonuclear), we use EPOCH PIC code to simulate possible beam driven non-destructive instabilities that transfer energy from fast ions to the plasma, causing phase space bunching. We propose that the hydrogen beam ion population drives collective modes in the deuterium target plasma, giving rise to the instability and increased fusion rate. The instability changes character from electrostatic in the low beta edge to fully electromagnetic in the core, with an associated reduction in growth rates. The DD reactivity enhancement is calculated using a two-body correlation function and compared to the experimentally observed neutron yield. The high-energy tails in the distributions of the plasma deuterons and beam protons are observed via a mass-resolving Neutral Particle Analyzer (NPA) diagnostic. This observation is qualitatively consistent with EPOCH simulation of the beam-plasma instability.

  5. Application of CORSIKA Simulation Code to Study Lateral and Longitudinal Distribution of Fluorescence Light in Cosmic Ray Extensive Air Showers

    NASA Astrophysics Data System (ADS)

    Bagheri, Zahra; Davoudifar, Pantea; Rastegarzadeh, Gohar; Shayan, Milad

    2017-03-01

    In this paper, we used the CORSIKA code to characterize cosmic-ray-induced showers at extremely high energy as a function of primary energy, detector distance from the shower axis, the number and density of secondary charged particles, and the nature of the particle producing the shower. Based on the standard properties of the atmosphere, the lateral and longitudinal development of the shower has been investigated for photons and electrons. Fluorescence light collected by the detector has been simulated for proton, helium, oxygen, silicon, calcium and iron primary cosmic rays at different energies. We have thus obtained the number of electrons per unit area, the distance to the shower axis, the particle-density shape function, the percentage of fluorescence light, the lateral distribution of energy dissipated in the atmosphere, and the detector's field-of-view angle as well as the size of the shower image. We have also shown that the location of the highest percentage of fluorescence light is directly proportional to the atomic number of the primary element, and that the particle-density shape function falls off steeply as the distance from the shower axis increases. At the first stages of development, the shower axis is far from the detector and the field-of-view angle is small; as the shower moves toward the Earth, the angle increases. Overall, at higher energies the fluorescence-light method is more efficient. The paper provides standard calibration lines for high-energy showers which can be used to determine the nature of the primary particles.

  6. On the Weyl and Ricci tensors of Generalized Robertson-Walker space-times

    NASA Astrophysics Data System (ADS)

    Mantica, Carlo Alberto; Molinari, Luca Guido

    2016-10-01

    We prove theorems about the Ricci and the Weyl tensors on Generalized Robertson-Walker space-times of dimension n ≥ 3. In particular, we show that the concircular vector introduced by Chen decomposes the Ricci tensor as a perfect fluid term plus a term linear in the contracted Weyl tensor. The Weyl tensor is harmonic if and only if it is annihilated by Chen's vector, and any of the two conditions is necessary and sufficient for the Generalized Robertson-Walker (GRW) space-time to be a quasi-Einstein (perfect fluid) manifold. Finally, the general structure of the Riemann tensor for Robertson-Walker space-times is given, in terms of Chen's vector. In n = 4, a GRW space-time with harmonic Weyl tensor is a Robertson-Walker space-time.

  7. A space-time discontinuous Galerkin method for the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Rhebergen, Sander; Cockburn, Bernardo; van der Vegt, Jaap J. W.

    2013-01-01

    We introduce a space-time discontinuous Galerkin (DG) finite element method for the incompressible Navier-Stokes equations. Our formulation can be made arbitrarily high-order accurate in both space and time and can be directly applied to deforming domains. Different stabilizing approaches are discussed which ensure stability of the method. A numerical study is performed to compare the effect of the stabilizing approaches, to show the method's robustness on deforming domains and to investigate the behavior of the convergence rates of the solution. Recently we introduced a space-time hybridizable DG (HDG) method for incompressible flows [S. Rhebergen, B. Cockburn, A space-time hybridizable discontinuous Galerkin method for incompressible flows on deforming domains, J. Comput. Phys. 231 (2012) 4185-4204]. We will compare numerical results of the space-time DG and space-time HDG methods. This constitutes the first comparison between DG and HDG methods.

  8. Experimental model of topological defects in Minkowski space-time based on disordered ferrofluid: magnetic monopoles, cosmic strings and the space-time cloak.

    PubMed

    Smolyaninov, Igor I; Smolyaninova, Vera N; Smolyaninov, Alexei I

    2015-08-28

    In the presence of an external magnetic field, cobalt nanoparticle-based ferrofluid forms a self-assembled hyperbolic metamaterial. The wave equation, which describes propagation of extraordinary light inside the ferrofluid, exhibits 2+1 dimensional Lorentz symmetry. The role of time in the corresponding effective three-dimensional Minkowski space-time is played by the spatial coordinate directed along the periodic nanoparticle chains aligned by the magnetic field. Here, we present a microscopic study of point, linear, planar and volume defects of the nanoparticle chain structure and demonstrate that they may exhibit strong similarities with such Minkowski space-time defects as magnetic monopoles, cosmic strings and the recently proposed space-time cloaks. Experimental observations of such defects are described.

  9. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  10. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
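
    To make the activity-transition-graph (ATG) idea concrete, here is a deliberately tiny sketch of an agent whose behavior is a set of activities with data-dependent transitions; the activity names, data fields and threshold rule are invented for illustration and do not reflect the cited platform's code format or instruction set.

        # Minimal sketch of an activity-transition-graph (ATG) agent: each activity is a
        # function that updates the agent's data state and names the next activity.
        def a_sense(state):
            state["reading"] = state["inputs"].pop(0)
            return "a_decide"

        def a_decide(state):
            return "a_report" if state["reading"] > state["threshold"] else "a_sense"

        def a_report(state):
            state["events"].append(state["reading"])
            return "a_sense"

        ATG = {"a_sense": a_sense, "a_decide": a_decide, "a_report": a_report}

        state = {"inputs": [1, 5, 2, 9, 3], "threshold": 4, "events": []}
        activity = "a_sense"
        while state["inputs"]:
            activity = ATG[activity](state)

        print(state["events"])        # readings the agent chose to report: [5, 9]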

  11. Determination of neutron flux distribution by using ANISN, a one-dimensional discrete ordinates (Sn) transport code with anisotropic scattering

    NASA Technical Reports Server (NTRS)

    Ghorai, S. K.

    1983-01-01

    The purpose of this project was to use a one-dimensional discrete ordinates transport code called ANISN in order to determine the energy-angle-spatial distribution of neutrons in a 6-foot cubic rock box which houses a D-T neutron generator at its center. The project was two-fold. The first phase of the project involved adaptation of the ANISN code, written for an IBM 360/75/91 computer, to the UNIVAC system at JSC. The second phase of the project was to use the code with the proper geometry, source function and rock material composition in order to determine the neutron flux distribution around the rock box when the 14.1 MeV neutron generator placed at its center is activated.

  12. Retrospective space-time cluster analysis of whooping cough, re-emergence in Barcelona, Spain, 2000-2011.

    PubMed

    Solano, Rubén; Gómez-Barroso, Diana; Simón, Fernando; Lafuente, Sarah; Simón, Pere; Rius, Cristina; Gorrindo, Pilar; Toledo, Diana; Caylà, Joan A

    2014-05-01

    A retrospective, space-time study of whooping cough cases reported to the Public Health Agency of Barcelona, Spain between the years 2000 and 2011 is presented. It is based on 633 individual whooping cough cases and the 2006 population census from the Spanish National Statistics Institute, stratified by age and sex at the census tract level. Cluster identification was attempted using the space-time scan statistic, assuming a Poisson distribution and restricting the temporal extent to 7 days and the spatial distance to 500 m. Statistical calculations were performed with Stata 11 and SaTScan, and mapping was performed with ArcGIS 10.0. Only clusters showing statistical significance (P <0.05) were mapped. The most likely cluster identified included five census tracts located in three neighbourhoods in central Barcelona during the week from 17 to 23 August 2011. This cluster included five cases compared with an expected value of 0.0021 (relative risk = 2436, P <0.001). In addition, 11 secondary significant space-time clusters were detected, occurring at different times and locations. Spatial statistics are considered useful for complementing epidemiological surveillance systems by visualizing excesses in the number of cases in space and time, thus increasing the possibility of identifying outbreaks not reported by the surveillance system.
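
    The Poisson log-likelihood ratio underlying this kind of space-time scan statistic can be written down compactly; the sketch below evaluates it for a single candidate cluster (in practice the statistic is maximized over many space-time cylinders and its significance assessed by Monte Carlo replication). The totals used are only loosely modelled on the figures quoted above.

        import numpy as np

        def poisson_llr(c, e, c_total, e_total):
            """Kulldorff-style log-likelihood ratio of a candidate space-time cluster,
            with c observed and e expected cases inside, out of c_total / e_total overall."""
            if c <= e:
                return 0.0
            c_out, e_out = c_total - c, e_total - e
            inside = c * np.log(c / e)
            return inside + c_out * np.log(c_out / e_out) if c_out > 0 else inside

        # Example with numbers of the same flavour as the reported most likely cluster:
        # a handful of cases where only a tiny number was expected.
        print(round(poisson_llr(c=5, e=0.0021, c_total=633, e_total=633.0), 1))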

  13. Natural world physical, brain operational, and mind phenomenal space-time.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H

    2010-06-01

    Concepts of space and time are widely developed in physics. However, there is a considerable lack of biologically plausible theoretical frameworks that can demonstrate how space and time dimensions are implemented in the activity of the most complex life-system - the brain with a mind. Brain activity is organized both temporally and spatially, thus representing space-time in the brain. Critical analysis of recent research on the space-time organization of the brain's activity pointed to the existence of so-called operational space-time in the brain. This space-time is limited to the execution of brain operations of differing complexity. During each such brain operation a particular short-term spatio-temporal pattern of integrated activity of different brain areas emerges within related operational space-time. At the same time, to have a fully functional human brain one needs to have a subjective mental experience. Current research on the subjective mental experience offers detailed analysis of space-time organization of the mind. According to this research, subjective mental experience (subjective virtual world) has definitive spatial and temporal properties similar to many physical phenomena. Based on systematic review of the propositions and tenets of brain and mind space-time descriptions, our aim in this review essay is to explore the relations between the two. To be precise, we would like to discuss the hypothesis that via the brain operational space-time the mind subjective space-time is connected to otherwise distant physical space-time reality.

  14. Transient Analyses for a Molten Salt Transmutation Reactor Using the Extended SIMMER-III Code

    SciTech Connect

    Wang, Shisheng; Rineiski, Andrei; Maschek, Werner; Ignatiev, Victor

    2006-07-01

    Recent developments extending the capabilities of the SIMMER-III code for dealing with transients and accidents in Molten Salt Reactors (MSRs) are presented. These extensions refer to the movable precursor modeling within the space-time dependent neutronics framework of SIMMER-III, to the molten salt flow modeling, and to new equations of state for various salts. An important new SIMMER-III feature is that the space-time distribution of the various precursor families with different decay constants can be computed, taken into account in the neutron/reactivity balance calculations and, if necessary, visualized. The system is coded and tested for a molten salt transmuter. This new feature is also of interest for core disruptive accidents of fast reactors, when the core melts and the molten fuel is redistributed. (authors)

  15. Optimal design of hydraulic head monitoring networks using space-time geostatistics

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Júnez-Ferreira, H. E.

    2013-05-01

    This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, at each step, selects the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that, of the 418 space-time monitoring points in the existing monitoring program, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
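
    The greedy, variance-based selection described above can be sketched with a scalar Kalman update: at each step the candidate space-time point whose measurement most reduces the total estimation variance is added to the network. The covariance model, noise variance and number of selected points below are arbitrary assumptions, not the Valle de Querétaro values.

        import numpy as np

        rng = np.random.default_rng(6)

        # Prior covariance over n candidate space-time monitoring points, built from a
        # simple exponential covariance on synthetic (x, y, t) coordinates.
        n = 40
        pts = np.column_stack([rng.uniform(0, 10, n), rng.uniform(0, 10, n), rng.uniform(0, 5, n)])
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        P = 1.0 * np.exp(-d / 3.0)
        r = 0.05                                    # assumed measurement-error variance

        selected = []
        for _ in range(10):                         # greedily pick 10 non-redundant points
            # Scalar Kalman update: measuring point j reduces the total variance by
            # sum_i P[i, j]^2 / (P[j, j] + r); choose the point with the largest reduction.
            gains = (P**2).sum(axis=0) / (np.diag(P) + r)
            gains[selected] = -np.inf               # do not re-select a point
            j = int(np.argmax(gains))
            selected.append(j)
            P = P - np.outer(P[:, j], P[:, j]) / (P[j, j] + r)

        print(selected)
        print(round(float(np.trace(P)), 2))         # remaining total estimation variance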

  16. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    PubMed

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences for the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Spatial overlay analysis was then used to obtain absolute and relative concentration increments over adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with multiple linear regression was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of the soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic; in particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural sources. In summary, spatiotemporal Kriging and spatial overlay analysis were used to obtain the spatial distribution of heavy metal concentration increments in soils, and PCA-MLR was used to quantify their source apportionment.
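
    A simplified PCA-plus-multiple-linear-regression source apportionment is sketched below on synthetic two-source data; the factor-to-source interpretation and the way contributions are normalized are illustrative choices, not the exact procedure of the study.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)

        # Synthetic concentration-increment table: rows = sampling sites, columns = metals.
        n_sites = 120
        industry = rng.gamma(2.0, 1.0, n_sites)
        traffic = rng.gamma(2.0, 0.5, n_sites)
        X = np.column_stack([
            2.0 * industry + 0.3 * traffic,   # Cd
            1.5 * industry + 0.2 * traffic,   # Cu
            0.2 * industry + 2.0 * traffic,   # Pb
            1.0 * industry + 1.0 * traffic,   # Zn
        ]) + rng.normal(0, 0.1, (n_sites, 4))

        # Step 1: PCA on standardized concentrations to extract candidate source factors.
        scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

        # Step 2: regress each metal's increment on the factor scores; the relative size
        # of the fitted coefficients gives an (illustrative) source contribution split.
        for k, metal in enumerate(["Cd", "Cu", "Pb", "Zn"]):
            reg = LinearRegression().fit(scores, X[:, k])
            share = np.abs(reg.coef_) / np.abs(reg.coef_).sum()
            print(metal, share.round(2))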

  17. Space-time distribution of ignimbrite volcanism in the southern SMO: From Eocene to Pliocene

    NASA Astrophysics Data System (ADS)

    Nieto-Obregon, J.; Aguirre-Diaz, G. J.

    2004-12-01

    A distinct variation in the age of the ignimbrites of the Sierra Madre Occidental (SMO) is observed in the southern portion, which includes the area between Tepic, Nayarit (-105° W) and Aguascalientes, Ags (-102° W). Older, high-grade ignimbrites are Eocene and occur as scattered outcrops. These are in turn covered by a widespread and voluminous sequence of high-grade ignimbrites and silicic to intermediate lavas that ranges in age from Middle Oligocene to Middle Miocene. The peak of this ignimbrite volcanism was at about 21 Ma to 22 Ma, but there is evidence showing that it initiated since about 30 Ma and ended at about 17.5 Ma. This ignimbrite and lava sequence is in turn covered by another series of lavas, predominantly mafic to intermediate, in the southern part of the area. This latest volcanism represents the initiation of the Mexican Volcanic Belt. Ignimbrite volcanism apparently initiated at the NE part of the study area, and migrated to the SW with time, that is from the area Presa Calles to the valley of Bolaños. Isotopic ages reported on these rocks, cluster in various groups reflecting the time evolution of volcanism. Rocks older than 30 Ma tend to occur on the raised blocks of Sierra de El Laurel and Northern Sierra de Morones, in the eastern part of the area. The interval from 30 to 20 Ma comprises a discontinuous set of ages that are concentrated in the blocks of Southern Sierra de Morones, Tlaltenango, Bolaños and the area around Cinco Minas-San Pedro Analco-Hostotipaquillo. An apparent gap of ages occurs between 12 to 18 Ma, followed by a predominantly mafic volcanism scattered mainly to the south of the area, that represents the transition of SMO to MVB. Finally mafic volcanism of the MVB of 3 to 4 Ma is present in the south, in the area excavated on the vicinity of Rio Grande de Santiago. A similar migration pattern has been reported in general for the whole SMO by Aguirre-Diaz and Labarthe-Hernandez (2003), from NE Chihuahua to SW Nayarit between ca. 50 Ma to 18 Ma. Thus, in this study we confirm this pattern in a more local scale. These authors also mention that such large ignimbrite units may have been produced by the extrusion of pyroclastic material through linear conduits. In the Aguascalientes area, we have found linear fissure vents for the local ignimbrites, confirming this fact in this area.

  18. Separability of Gravitational Perturbation in Generalized Kerr-NUT-de Sitter Space-Time

    NASA Astrophysics Data System (ADS)

    Oota, Takeshi; Yasui, Yukinori

    Generalized Kerr-NUT-de Sitter space-time is the most general space-time which admits a rank-2 closed conformal Killing-Yano tensor. It contains the higher-dimensional Kerr-de Sitter black holes with partially equal angular momenta. We study the separability of gravitational perturbations in the generalized Kerr-NUT-de Sitter space-time. We show that a certain type of tensor perturbations admits the separation of variables. The linearized perturbation equations for the Einstein condition are transformed into the ordinary differential equations of Fuchs type.

  19. Exponential rational function method for space-time fractional differential equations

    NASA Astrophysics Data System (ADS)

    Aksoy, Esin; Kaplan, Melike; Bekir, Ahmet

    2016-04-01

    In this paper, the exponential rational function method is applied to obtain analytical solutions of the space-time fractional Fokas equation, the space-time fractional Zakharov-Kuznetsov-Benjamin-Bona-Mahony equation, and the space-time fractional coupled Burgers' equations. As a result, some exact solutions are successfully established. These solutions are constructed by using the fractional complex transform to convert the fractional differential equations into ordinary differential equations. The fractional derivatives are described in Jumarie's modified Riemann-Liouville sense. The exact solutions obtained by the proposed method indicate that the approach is easy to implement and effective.
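
    For orientation, a commonly used form of the fractional complex transform (valid with Jumarie's modified Riemann-Liouville derivative) reduces a space-time fractional PDE in u(x, t) to an ordinary differential equation in a single wave variable; the constants k and c are arbitrary, and the exact transform used by the authors may differ:

```latex
\xi = \frac{k\,x^{\alpha}}{\Gamma(1+\alpha)} + \frac{c\,t^{\beta}}{\Gamma(1+\beta)},
\qquad u(x,t) = U(\xi),
\qquad D_{t}^{\beta}u = c\,U'(\xi), \qquad D_{x}^{\alpha}u = k\,U'(\xi).
```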

  20. Simple linear technique for the measurement of space-time coupling in ultrashort optical pulses.

    PubMed

    Dorrer, Christophe; Walmsley, Ian A

    2002-11-01

    We demonstrate a simple sensitive linear technique that quantifies the spatiotemporal coupling in the electric field of an ultrashort optical pulse. The space-time uniformity of the field can be determined with only time-stationary filters and square-law integrating detectors, even if it is impossible to measure the temporal electric field in this way. A degree of spatiotemporal uniformity is defined and can be used with the demonstrated diagnostic to quantify space-time coupling. Experimental measurements of space-time coupling due to linear and nonlinear focusing, refraction, and diffraction are presented.

  1. Mobile phone usage in complex urban systems: a space-time, aggregated human activity study

    NASA Astrophysics Data System (ADS)

    Tranos, Emmanouil; Nijkamp, Peter

    2015-04-01

    The present study aims to demonstrate the importance of digital data for investigating space-time dynamics of aggregated human activity in urban systems. Such dynamics can be monitored and modelled using data from mobile phone operators regarding mobile telephone usage. Using such an extensive dataset from the city of Amsterdam, this paper introduces space-time explanatory models of aggregated human activity patterns. Various modelling experiments and results are presented, which demonstrate that mobile telephone data are a good proxy of the space-time dynamics of aggregated human activity in the city.

  2. The Oppenheimer-Snyder space-time with a cosmological constant

    NASA Astrophysics Data System (ADS)

    Nakao, Ken-Ichi

    1992-10-01

    We investigate the Oppenheimer-Snyder space-time with a positive cosmological constant Λ. The interior of the dust sphere is described by the closed Friedmann-Robertson-Walker space-time while the exterior is the Schwarzschild-de Sitter space-time. Due to the cosmological constant Λ, when the gravitational mass M_0 of the dust sphere is very large, there is no collapsing solution with the de Sitter-like asymptotic region which expands exponentially in the expanding universe frame. This fact suggests that a very large initial inhomogeneity does not necessarily lead to the failure of the cosmic no hair conjecture.

  3. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    SciTech Connect

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing datasets. We are in the process of porting the GIS module into an HPC environment, in which the GIS module handles large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussion on the integration of GIS on high performance computing platforms.

  4. Specific features of space-time variations of ozone during the development of intensive tropical disturbances

    NASA Technical Reports Server (NTRS)

    Nerushev, Alexander F.; Vasiliev, Victor I.

    1994-01-01

    An analysis of specific features of space-time variations of ozone in tropical areas has been performed on the basis of the results of special expedition studies in the Atlantic and Pacific in 1987-1990 and the data of observations at the stations of the world ozonometric network over a 25-year period. The existence of a cause-and-effect relation has been revealed between the processes determining tropical cyclone (TC) development and specific features of variations of the total content of ozone (TCO) and the vertical distribution of ozone (VDO) in the regions of TC action. Characteristic features of day-to-day and daily variations of TCO during TC development have been found. On the periphery of a developing TC, 1-4 days before it reaches the storm stage, TCO increases, on average, by 5-8 percent, and a substantial increase in the concentration of ozone occurs in the middle and upper troposphere. The most probable physical mechanisms relating the observed features of ozone variations to TC evolution are suggested. A hypothesis on the possibility of using ozone as an indicator for early prediction of TC development has been substantiated.

  5. Space-time resolved measurements of spontaneous magnetic fields in laser-produced plasma

    SciTech Connect

    Pisarczyk, T.; Chodukowski, T.; Kalinowska, Z.; Borodziuk, S.; Gus'kov, S. Yu.; Dudzak, R.; Dostal, J.; Krousky, E.; Ullschmied, J.; Hrebicek, J.; Medrik, T.; Golasowski, J.; Pfeifer, M.; Skala, J.; Demchenko, N. N.; Korneev, Ph.; Kalal, M.; Renner, O.; Smid, M.; Pisarczyk, P.

    2015-10-15

    The first space-time resolved spontaneous magnetic field (SMF) measurements realized on the Prague Asterix Laser System are presented. The SMF was generated as a result of single laser beam (1.315 μm) interaction with massive planar targets made of materials with various atomic numbers (plastic and Cu). The measured SMFs confirmed an azimuthal geometry, and their maximum amplitude reached 10 MG at a laser energy of 250 J for both target materials. It was demonstrated that the spatial distributions of these fields are associated with the character of the ablative plasma expansion, which clearly depends on the target material. To measure the SMF, the Faraday effect was employed, which causes rotation of the polarization vector of the linearly polarized diagnostic beam. The rotation angle was determined together with the phase shift using a novel design of a two-channel polaro-interferometer. To obtain sufficiently high temporal resolution, the polaro-interferometer was irradiated by a Ti:Sa laser pulse with a wavelength of 808 nm and a pulse duration of 40 fs. The results of the measurements were compared with theoretical analysis.

  6. Characterizing the space-time structure of rainfall in the Sahel with a view to estimating IDAF curves

    NASA Astrophysics Data System (ADS)

    Panthou, G.; Vischel, T.; Lebel, T.; Quantin, G.; Molinié, G.

    2014-07-01

    Intensity-duration-area-frequency (IDAF) curves are increasingly in demand for characterizing the severity of storms and for designing hydraulic structures. Their computation requires inferring areal rainfall distributions over the range of space-time scales that are most relevant for hydrological studies at the catchment scale. In this study, IDAF curves are computed for the first time in West Africa, based on the data provided by the AMMA-CATCH Niger network, composed of 30 recording rain gauges operating since 1990 over a 16,000 km2 area in southwest Niger. The IDAF curves are obtained by separately considering the time (IDF) and space (Areal Reduction Factor - ARF) components of the extreme rainfall distribution. Annual maximum intensities are extracted for resolutions between 1 and 24 h in time and from point (rain-gauge) to 2500 km2 in space. The IDF model used is based on the concept of scale invariance (simple scaling), which allows the maxima series at different temporal resolutions to be normalized so that a global GEV can be fitted to them. This parsimonious framework allows the concept of dynamic scaling to be used to describe the ARF. The results show that coupling simple scaling in space and time with a dynamical scaling relating space and time satisfactorily models the effect of space-time aggregation on the distribution of extreme rainfall.
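
    A minimal sketch of the simple-scaling step this framework relies on: annual maxima at several durations are rescaled to a reference duration and a single (global) GEV is fitted to the pooled sample. The scaling exponent, durations, and synthetic data below are illustrative, not the AMMA-CATCH values.

```python
# Sketch: normalize annual-maximum intensities across durations via simple scaling,
# then fit one global GEV; the exponent and synthetic data are illustrative.
import numpy as np
from scipy.stats import genextreme

durations_h = np.array([1, 3, 6, 12, 24])       # temporal resolutions (hours)
eta = -0.6                                      # assumed simple-scaling exponent
rng = np.random.default_rng(0)

# Hypothetical annual maxima: rows = years, columns = durations (mm/h).
annual_max = rng.gumbel(loc=20, scale=5, size=(25, durations_h.size)) * durations_h**eta

# Simple scaling: i(d) ~ i(d0) * (d/d0)**eta, so divide by (d/d0)**eta to pool at d0 = 1 h.
normalized = annual_max / (durations_h / durations_h[0]) ** eta

# Fit a single ("global") GEV to the pooled, normalized maxima.
shape, loc, scale = genextreme.fit(normalized.ravel())

# A duration-specific quantile follows by rescaling, e.g. the 10-year, 6-hour intensity.
i10_1h = genextreme.ppf(1 - 1 / 10, shape, loc=loc, scale=scale)
print(f"10-year, 6-hour intensity ~ {i10_1h * (6 / 1) ** eta:.1f} mm/h")
```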

  7. Space-Time Controls on Carbon Sequestration Over Large-Scale Amazon Basin

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Cooper, Harry J.; Gu, Jiujing; Grose, Andrew; Norman, John; daRocha, Humberto R.; Starr, David O. (Technical Monitor)

    2002-01-01

    A major research focus of the LBA Ecology Program is an assessment of the carbon budget and the carbon sequestering capacity of the large-scale forest-pasture system that dominates the Amazonia landscape, and its time-space heterogeneity manifest in carbon fluxes across the large-scale Amazon basin ecosystem. Quantification of these processes requires a combination of in situ measurements, remotely sensed measurements from space, and a realistically forced hydrometeorological model coupled to a carbon assimilation model, capable of simulating details within the surface energy and water budgets along with the principal modes of photosynthesis and respiration. Here we describe the results of an investigation concerning the space-time controls of carbon sources and sinks distributed over the large-scale Amazon basin. The results are derived from a carbon-water-energy budget retrieval system for the large-scale Amazon basin, which uses a coupled carbon assimilation-hydrometeorological model as an integrating system, forced by both in situ meteorological measurements and remotely sensed radiation fluxes and precipitation retrieved from a combination of GOES, SSM/I, TOMS, and TRMM satellite measurements. A brief discussion is given concerning validation of (a) retrieved surface radiation fluxes and precipitation based on 30-min averaged surface measurements taken at Ji-Parana in Rondonia and Manaus in Amazonas, and (b) modeled carbon fluxes based on tower CO2 flux measurements taken at Reserva Jaru, Manaus and Fazenda Nossa Senhora. The space-time controls on carbon sequestration are partitioned into sets of factors classified by: (1) above-canopy meteorology, (2) incoming surface radiation, (3) precipitation interception, and (4) indigenous stomatal processes varied over the different land covers of pristine rainforest, partially and fully logged rainforests, and pasture lands. These are the principal meteorological, thermodynamical, hydrological, and biophysical

  8. Influence of the input database in detecting fire space-time clusters

    NASA Astrophysics Data System (ADS)

    Pereira, Mário; Costa, Ricardo; Tonini, Marj; Vega Orozco, Carmen; Parente, Joana

    2015-04-01

    Fire incidence variability is influenced by local environmental variables such as topography, land use, vegetation and weather conditions. These induce a clustered pattern in the distribution of fire events. The space-time permutation scan statistics (STPSS) method developed by Kulldorff et al. (2005) and implemented in the SaTScanTM software (http://www.satscan.org/) has proven able to detect space-time clusters in many different fields, even when using incomplete and/or inaccurate input data. Nevertheless, the dependence of the STPSS method on the characteristics of different datasets describing the same environmental phenomenon has not yet been studied. In this sense, the objective of this study is to assess the robustness of the STPSS for detecting real clusters using different input datasets and to explain the obtained results. This study takes advantage of the existence of two very different official fire datasets currently available for Portugal, both provided by the Institute for the Conservation of Nature and Forests. The first one is the aggregated Portuguese Rural Fire Database (PRFD) (Pereira et al., 2011), which is based on ground measurements and provides detailed information about the ignition and extinction date/time and the area burnt by each fire in forest, scrub and agricultural areas. However, in the PRFD, the location of each fire is indicated by the name of the smallest administrative unit (the parish) where the ignition occurred. Consequently, since the application of the STPSS requires the geographic coordinates of the events, the centroid of the parishes was used. The second fire dataset is the national mapping of burnt areas (NMBA), which is based on satellite measurements and delivered in shapefile format. The NMBA provides detailed spatial information (shape and size of each fire) but the temporal information is restricted to the year of occurrence. Besides these differences, the two datasets cover different periods, they

  9. Einstein--Weyl space-times with geodesic and shear-free neutrino rays: asymptotic behaviour

    SciTech Connect

    Kolassis, C.A.; Santos, N.O.

    1987-02-15

    We consider a neutrino field with geodesic and shear-free rays, in interaction with a gravitational field according to the Einstein-Weyl field equations. Furthermore, we suppose that there exists a Killing vector r^μ whose magnitude is almost everywhere bounded at the future and past endpoints of the neutrino rays. The implications of the asymptotic behavior of r^μ for the structure of space-time are investigated and a useful set of reduced equations is obtained. It is found that under these hypotheses the space-time cannot be asymptotically flat if the neutrino field is nonvanishing. All the Demianski-Kerr-NUT-like space-times as well as the space-times which admit a covariantly constant null vector are explicitly obtained.

  10. Fermions in Gödel-type background space-times with torsion and the Landau quantization

    NASA Astrophysics Data System (ADS)

    Garcia, G. Q.; de S. Oliveira, J. R.; Bakke, K.; Furtado, C.

    2017-03-01

    In this paper, we analyze Dirac fermions in Gödel-type background space-times with torsion. We also consider the Gödel-type space-times embedded in a topological defect background. We show that relativistic bound-state solutions to the Dirac equation can be obtained for three classes of Gödel-type solutions with torsion, in each of which a cosmic string passes through the space-time. We obtain the relativistic energy levels for all these cases and show that there exists an analogy with the Landau levels for Dirac particles. We also show that the presence of torsion in the space-time yields new contributions to the relativistic energy spectrum and that the presence of the topological defect modifies the degeneracy of the relativistic energy levels.

  11. Space-time transport schemes and homogenization: II. Extension of the theory and applications

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-03-01

    The theory of space-time and non-Markovian spin-driven diffusion processes introduced in Giona (2016 J. Stat. Mech. submitted) is extended to random space-time displacements and to a continuum of spins. The comparison of space-time diffusion models with the continuous time random walk of Montroll and Weiss is developed in order to highlight the analogies and the differences between these two processes. A similar comparison is performed for other classes of processes such as the multistate random walk and the correlated continuous time random walk. Moreover, the article develops the correspondence between space-time and non-Markovian spin-driven diffusion models, analyzes their relativistic properties, and introduces the concept of hyperbolic homogenization. Some preliminary observations on the use of these models in order to frame relativistic quantum mechanics within a stochastic transport paradigm without enforcing the analytic continuation of the time variable to the imaginary axis are outlined.

  12. Generalized space-time fractional diffusion equation with composite fractional time derivative

    NASA Astrophysics Data System (ADS)

    Tomovski, Živorad; Sandev, Trifce; Metzler, Ralf; Dubbeldam, Johan

    2012-04-01

    We investigate the solution of space-time fractional diffusion equations with a generalized Riemann-Liouville time fractional derivative and Riesz-Feller space fractional derivative. The Laplace and Fourier transform methods are applied to solve the proposed fractional diffusion equation. The results are represented by using the Mittag-Leffler functions and the Fox H-function. Special cases of the initial and boundary conditions are considered. A numerical scheme and the Grünwald-Letnikov approximation are also used to solve the space-time fractional diffusion equation. The fractional moments of the fundamental solution of the considered space-time fractional diffusion equation are obtained. Many known results are special cases of those obtained in this paper. We also investigate the solution of a space-time fractional diffusion equation with a singular term of the form δ(x)·t^{-β}/Γ(1-β) (β>0).
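
    The solutions above are expressed through the two-parameter Mittag-Leffler function E_{α,β}(z); a naive truncated-series evaluator, adequate only for moderate |z|, is sketched below for orientation.

```python
# Naive series evaluation of the two-parameter Mittag-Leffler function
#   E_{alpha,beta}(z) = sum_{k>=0} z**k / Gamma(alpha*k + beta),
# adequate only for moderate |z|; dedicated algorithms are needed otherwise.
import math

def mittag_leffler(z, alpha, beta=1.0, n_terms=80, tol=1e-16):
    total = 0.0
    for k in range(n_terms):
        term = z**k / math.gamma(alpha * k + beta)
        total += term
        if abs(term) < tol:           # series has converged to machine precision
            break
    return total

# Sanity checks: E_{1,1}(z) = exp(z) and E_{2,1}(-z**2) = cos(z).
print(mittag_leffler(1.0, 1.0), math.exp(1.0))
print(mittag_leffler(-1.0, 2.0), math.cos(1.0))
```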

  13. Differentiating induced and natural seismicity using space-time-magnitude statistics applied to the Coso Geothermal field

    USGS Publications Warehouse

    Schoenball, Martin; Davatzes, Nicholas C.; Glen, Jonathan M. G.

    2015-01-01

    A remarkable characteristic of earthquakes is their clustering in time and space, displaying their self-similarity. It remains to be tested whether natural and induced earthquakes share the same behavior. We study natural and induced earthquakes comparatively in the same tectonic setting at the Coso Geothermal Field. Covering the preproduction and coproduction periods from 1981 to 2013, we analyze interevent times, spatial dimension, and frequency-size distributions for natural and induced earthquakes. Individually, these distributions are statistically indistinguishable. Determining the distribution of nearest-neighbor distances in a combined space-time-magnitude metric lets us identify clear differences between both kinds of seismicity. Compared to natural earthquakes, induced earthquakes feature a larger population of background seismicity and nearest neighbors at large magnitude-rescaled times and small magnitude-rescaled distances. Local stress perturbations induced by field operations appear to be strong enough to drive local faults through several seismic cycles and reactivate them after time periods on the order of a year.
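
    The combined space-time-magnitude metric is commonly formulated as a Zaliapin-style nearest-neighbor distance; the sketch below follows that formulation under assumed parameter values (b-value, fractal dimension, time-distance split), which are illustrative and not necessarily those used for Coso.

```python
# Sketch of a Zaliapin-style nearest-neighbor distance in a combined
# space-time-magnitude metric (the exact formulation/parameters for Coso may differ).
import numpy as np

b = 1.0      # assumed Gutenberg-Richter b-value
d_f = 1.6    # assumed fractal dimension of epicenters
q = 0.5      # split between rescaled time and rescaled distance

def nearest_neighbor_distances(t, x, y, mag):
    """t in years, x/y in km, mag = magnitudes; arrays must be sorted by time."""
    n = len(t)
    eta = np.full(n, np.inf)
    T = np.full(n, np.inf)   # magnitude-rescaled time to the parent event
    R = np.full(n, np.inf)   # magnitude-rescaled distance to the parent event
    for j in range(1, n):
        dt = t[j] - t[:j]                              # times to all earlier events
        r = np.hypot(x[j] - x[:j], y[j] - y[:j])       # epicentral distances
        eta_ij = dt * r**d_f * 10.0**(-b * mag[:j])    # combined metric
        i = np.argmin(eta_ij)                          # nearest neighbor ("parent")
        eta[j] = eta_ij[i]
        T[j] = dt[i] * 10.0**(-q * b * mag[i])
        R[j] = r[i]**d_f * 10.0**(-(1 - q) * b * mag[i])
    return eta, T, R

# Toy usage with a synthetic catalog (illustrative only).
rng = np.random.default_rng(0)
n = 200
t = np.sort(rng.uniform(0, 30, n))                       # years
x, y = rng.uniform(0, 20, n), rng.uniform(0, 20, n)      # km
mag = 1.0 + rng.exponential(1 / (b * np.log(10)), n)     # GR-like magnitudes
eta, T, R = nearest_neighbor_distances(t, x, y, mag)
```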

  14. Stationary axisymmetric four dimensional space-time endowed with Einstein metric

    SciTech Connect

    Hasanuddin; Azwar, A.; Gunara, B. E.

    2015-04-16

    In this paper, we construct the Ernst equation from the vacuum Einstein field equations for both zero and non-zero cosmological constant. In particular, we consider the case where the space-time is axisymmetric, using Boyer-Lindquist coordinates. This is called the Kerr-Einstein solution, describing a spinning black hole. Finally, we give a short discussion of the dynamics of photons in Kerr-Einstein space-time.

  15. Federated Space-Time Query for Earth Science Data Using OpenSearch Conventions

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Beaumont, Bruce; Duerr, Ruth; Hua, Hook

    2009-01-01

    This slide presentation reviews a space-time query system that has been developed to assist the user in finding Earth science data that fulfills the researcher's needs. It reviews the reasons why finding Earth science data can be so difficult, explains the workings of the Space-Time Query with OpenSearch and how this system can assist researchers in finding the required data, and also reviews developments with client-server systems.
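
    As a rough illustration of how such a space-time query can be expressed, the snippet below fills a query URL with bounding-box and time-range parameters in the spirit of OpenSearch conventions. The endpoint and parameter names are hypothetical, not the actual interface described in the presentation.

```python
# Hypothetical space-time search request; the endpoint and parameter names are
# illustrative stand-ins for an OpenSearch description with geo/time parameters.
from urllib.parse import urlencode

params = {
    "keyword": "sea surface temperature",          # free-text search term
    "boundingBox": "-180,-30,180,30",              # west,south,east,north (degrees)
    "startTime": "2009-01-01T00:00:00Z",           # ISO 8601 start of interval
    "endTime": "2009-01-31T23:59:59Z",             # ISO 8601 end of interval
    "format": "atom",
}
print("https://example.gov/opensearch/granules?" + urlencode(params))
```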

  16. Spontaneous symmetry breaking in static Robertson-Walker space-time with background charge

    NASA Astrophysics Data System (ADS)

    Majumdar, Bimal Kumar; Roychoudhury, Rajkumar

    1992-01-01

    The finite-temperature λφ^4 theory of static Robertson-Walker (RW) space-time is extended to a case with background charge. In contrast to earlier work on static RW space-time, the curvature term is retained and its effect on the effective potential and phase transition is explicitly calculated. The spontaneous symmetry breaking aspects and their dependence on various factors are discussed.

  17. A note on the topology of space-time in special relativity

    NASA Astrophysics Data System (ADS)

    Wickramasekara, S.

    2001-12-01

    We show that a topology can be defined in the four-dimensional space-time of special relativity so as to obtain a topological semigroup for time. The Minkowski 4-vector character of space-time elements as well as the key properties of special relativity are still the same as in the standard theory. However, the new topological structure allows the possibility of an intrinsic asymmetry in the time evolution of physical systems.

  18. Tectonic implications of space-time patterns of Cenozoic magmatism in the western United States

    USGS Publications Warehouse

    Snyder, W.S.; Dickinson, W.R.; Silberman, M.L.

    1976-01-01

    Locations of 2,100 radiometrically dated igneous rocks were plotted on a series of 20 maps, each representing an interval within the period 80 m.y. B.P. to present. Derivative maps showing the distributions in space and time of dated granitic intrusive rocks, silicic lavas and domes, ash-flow tuffs, andesitic-dacitic rocks, and basalts depict well the two main petrogenetic assemblages noted previously by others: (1) mainly intermediate andesitic-dacitic suites, including associated granitic intrusive rocks, silicic extrusive rocks, and minor basaltic lavas, are interpreted as reflecting plate interactions related to subduction along the continental margin; and (2) bimodal suites, dominantly basaltic but with minor silicic extrusive rocks, are interpreted as reflecting extensional tectonics. The space-time distribution of the two assemblages suggests that magmatic arcs extended continuously parallel to the continental margin from Canada to Mexico in latest Mesozoic and in Oligocene times. An early Cenozoic null in magmatism in the Great Basin may delineate the region where subduction was arrested temporarily by development of the proto-San Andreas fault as a transform in coastal California or, alternatively, may reflect complex subsurface configurations of subducted plates. The late Cenozoic transition from subduction-related magmatism to extension-related basaltic volcanism in the southern Cordillera occurred at different times in different areas, in harmony with current concepts about the migration of the Mendocino triple junction as the modern San Andreas transform fault was formed. The plots also reveal the existence of several discrete magmatic loci where igneous activity of various kinds was characteristically more intense and long-lived than elsewhere.

  19. Space-time scan statistics of 2007-2013 dengue incidence in Cimahi City, Indonesia.

    PubMed

    Dhewantara, Pandji Wibawa; Ruliansyah, Andri; Fuadiyah, M Ezza Azmi; Astuti, Endang Puji; Widawati, Mutiara

    2015-11-27

    Four dengue serotypes threaten more than 200 million people and have spread to over 400 districts in Indonesia. Furthermore, 26 districts in the most densely populated province, West Java, have been declared hyperendemic areas. Cimahi is an endemic city with the highest population density (14,969 people per square kilometer). Evidence on the distribution pattern of dengue cases is required to understand the spread of dengue in Cimahi. A study was conducted to detect clusters of dengue incidence during 2007-2013. A space-time analysis was performed using SaTScan™ software, incorporating confirmed monthly dengue data from the Municipality Health Office and population data from the local Bureau of Statistics. A retrospective space-time analysis with a Poisson distribution model and monthly precision was performed. Our results revealed a significant most likely cluster (p<0.001) throughout the period of study. The most likely cluster was detected in the centre of the city and moved to the northern region of Cimahi. The Cimahi, Karangmekar, and Cibabat villages formed the most likely cluster in 2007-2010 (p<0.001; RR = 2.16-2.98; population at risk 12% of the total population), and Citeureup was detected as the most likely cluster in 2011-2013 (p<0.001; RR = 5.77). Temporally, clusters were detected in the first quarter of each year. In conclusion, dengue spread dynamically from the centre to its surrounding areas during the period 2007-2013. Our study suggests the use of GIS to strengthen case detection and surveillance. An in-depth investigation of relevant risk factors in high-risk areas in Cimahi city is encouraged.
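
    For orientation, the Poisson-model scan statistic scores each candidate space-time cylinder with a log-likelihood ratio comparing observed and expected case counts; a minimal sketch of that score is given below with illustrative numbers (SaTScan additionally searches over cylinders and assesses significance by Monte Carlo replication).

```python
# Log-likelihood ratio used by the Poisson-model space-time scan statistic for one
# candidate cylinder (illustrative; cluster search and Monte Carlo testing not shown).
import math

def poisson_llr(c_in, e_in, c_total):
    """c_in/e_in: observed/expected cases inside the cylinder; c_total: all cases."""
    if c_in <= e_in:                       # score only elevated-risk cylinders
        return 0.0
    c_out, e_out = c_total - c_in, c_total - e_in
    return c_in * math.log(c_in / e_in) + c_out * math.log(c_out / e_out)

print(poisson_llr(c_in=40, e_in=18.5, c_total=300))   # larger = more anomalous
```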

  20. Quantification of space/time explicit fossil fuel CO2 emissions in urban domes

    NASA Astrophysics Data System (ADS)

    Gurney, K. R.; Razlivanov, I.; Zhou, Y.; Song, Y.; Turnbull, J. C.; Sweeney, C.; Karion, A.; Davis, K. J.; Miles, N. L.; Richardson, S.; Lauvaux, T.; Shepson, P. B.; Cambaliza, M. L.; Lehman, S. J.; Tans, P. P.

    2011-12-01

    Quantification of fossil fuel CO2 emissions from the bottom-up perspective is a critical element in emerging plans for a carbon monitoring system (CMS). A space/time explicit emissions data product can act as both a verification and a planning system: it can verify atmospheric CO2 measurements (in situ and remote) and offer detailed mitigation information to local management authorities in order to optimize the mix of mitigation efforts. Here, we present the Hestia Project, an effort aimed at building high-resolution (e.g., building- and road-link-specific, hourly) fossil fuel CO2 emissions data products for the urban domain. A complete data product has been built for the city of Indianapolis and work is ongoing for the city of Los Angeles. The effort in Indianapolis is now part of a larger effort aimed at a convergent top-down/bottom-up assessment of greenhouse gas emissions, called INFLUX. Our urban-level quantification relies on a mixture of data and modeling structures. We start with the sector-specific Vulcan Project estimate at the mix of geocoded and county-wide levels. The Hestia aim is to distribute the Vulcan result in space and time. Two components take the majority of the effort: buildings and onroad emissions. For the buildings, we utilize a building energy model, which we constrain with lidar data, county assessor parcel data and GIS layers. For onroad emissions, we use a combination of traffic data and GIS road layers, maintaining vehicle class information. Finally, all pointwise data in the Vulcan Project are transferred to our urban landscape and additional time distribution is performed. In collaboration with our INFLUX colleagues, we are transporting these high resolution emissions through an atmospheric transport model for a forward comparison of the Hestia data product with atmospheric measurements collected on aircraft and cell towers. In preparation for a formal urban-scale inversion, these forward comparisons offer insights into both improving

  1. Entropy of space-time outcome in a movement speed-accuracy task.

    PubMed

    Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M

    2015-12-01

    The reported experiment was set up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. The variability and information entropy of the movement spatial and temporal errors, considered separately, respectively increased and decreased along their dimension as movement velocity increased. However, the joint space-time entropy was lowest when the relative contribution of spatial and temporal task criteria was comparable (i.e., mid-range of space-time constraints), and it increased with a greater trade-off between spatial or temporal task demands, revealing a U-shaped function across space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped onto this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than between movement speed and accuracy.
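
    A minimal sketch of how a joint space-time outcome entropy can be estimated from trial-by-trial errors: bin the (spatial error, temporal error) pairs into a 2-D histogram and compute the Shannon entropy of the resulting distribution. The data and bin counts are illustrative, not the study's exact procedure.

```python
# Shannon entropy of the joint (spatial error, temporal error) distribution,
# estimated from a 2-D histogram of trial outcomes (illustrative data and bins).
import numpy as np

rng = np.random.default_rng(1)
spatial_err = rng.normal(0.0, 5.0, size=200)     # mm
temporal_err = rng.normal(0.0, 30.0, size=200)   # ms

counts, _, _ = np.histogram2d(spatial_err, temporal_err, bins=10)
p = counts.ravel() / counts.sum()
p = p[p > 0]                                     # drop empty bins before taking logs
print(f"joint space-time entropy ~ {-(p * np.log2(p)).sum():.2f} bits")
```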

  2. Real-time Space-time Integration in GIScience and Geography.

    PubMed

    Richardson, Douglas B

    2013-01-01

    Space-time integration has long been the topic of study and speculation in geography. However, in recent years an entirely new form of space-time integration has become possible in GIS and GIScience: real-time space-time integration and interaction. While real-time spatiotemporal data is now being generated almost ubiquitously, and its applications in research and commerce are widespread and rapidly accelerating, the ability to continuously create and interact with fused space-time data in geography and GIScience is a recent phenomenon, made possible by the invention and development of real-time interactive (RTI) GPS/GIS technology and functionality in the late 1980s and early 1990s. This innovation has since functioned as a core change agent in geography, cartography, GIScience and many related fields, profoundly realigning traditional relationships and structures, expanding research horizons, and transforming the ways geographic data is now collected, mapped, modeled, and used, both in geography and in science and society more broadly. Real-time space-time interactive functionality remains today the underlying process generating the current explosion of fused spatiotemporal data, new geographic research initiatives, and myriad geospatial applications in governments, businesses, and society. This essay addresses briefly the development of these real-time space-time functions and capabilities; their impact on geography, cartography, and GIScience; and some implications for how discovery and change can occur in geography and GIScience, and how we might foster continued innovation in these fields.

  3. Space-time correlation analysis of traffic flow on road network

    NASA Astrophysics Data System (ADS)

    Su, Fei; Dong, Honghui; Jia, Limin; Tian, Zhao; Sun, Xuan

    2017-02-01

    Space-time correlation analysis has become a basic and critical task in research on road traffic congestion and plays an important role in improving traffic management quality. The aim of this research is to examine the space-time correlation of road networks to determine likely requirements for building a suitable space-time traffic model. In this paper, such an analysis is carried out using traffic flow data collected on Beijing's road network. In this framework, the space-time autocorrelation function (ST-ACF) is introduced as a global measure, and the cross-correlation function (CCF) as a local measure, to reveal how the space-time correlation changes. Through the use of both measures, the correlation is found to be dynamic and heterogeneous in space and time. The finding of a seasonal pattern in the space-time correlation provides a theoretical basis for traffic forecasting. In addition, combined with Simpson's rule, the CCF is applied to identify critical sections in the road network, and the experiments show that this approach is feasible in terms of computability, rationality and practicality.
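
    As a sketch of the local measure, the cross-correlation between the flow series of two road sections at a given time lag can be computed as below; the data are synthetic, and the paper's ST-ACF additionally aggregates such correlations over the network.

```python
# Cross-correlation between two detectors' traffic-flow series at a given lag
# (illustrative data; the ST-ACF in the paper further averages over a road network).
import numpy as np

def ccf(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(2)
upstream = rng.normal(size=500)
downstream = np.roll(upstream, 3) + 0.3 * rng.normal(size=500)   # lagged copy + noise
print([round(ccf(upstream, downstream, k), 2) for k in range(6)])  # peaks near lag 3
```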

  4. Real-time Space-time Integration in GIScience and Geography

    PubMed Central

    Richardson, Douglas B.

    2013-01-01

    Space-time integration has long been the topic of study and speculation in geography. However, in recent years an entirely new form of space-time integration has become possible in GIS and GIScience: real-time space-time integration and interaction. While real-time spatiotemporal data is now being generated almost ubiquitously, and its applications in research and commerce are widespread and rapidly accelerating, the ability to continuously create and interact with fused space-time data in geography and GIScience is a recent phenomenon, made possible by the invention and development of real-time interactive (RTI) GPS/GIS technology and functionality in the late 1980s and early 1990s. This innovation has since functioned as a core change agent in geography, cartography, GIScience and many related fields, profoundly realigning traditional relationships and structures, expanding research horizons, and transforming the ways geographic data is now collected, mapped, modeled, and used, both in geography and in science and society more broadly. Real-time space-time interactive functionality remains today the underlying process generating the current explosion of fused spatiotemporal data, new geographic research initiatives, and myriad geospatial applications in governments, businesses, and society. This essay addresses briefly the development of these real-time space-time functions and capabilities; their impact on geography, cartography, and GIScience; and some implications for how discovery and change can occur in geography and GIScience, and how we might foster continued innovation in these fields. PMID:24587490

  5. A Method for Counting Multidirection Passer-by by Using Circular Space-Time Image

    NASA Astrophysics Data System (ADS)

    Terada, Kenji; Matsubara, Kazutaka

    Recently, the importance of understanding the number of people and the flow of persons at public facilities or department stores has increased more and more. This information is useful for reducing congestion, managing facilities efficiently, improving sales, etc. Conventional counting is carried out by human observation or by rotary stick-type counters. We have therefore previously proposed an automatic system for counting people by image processing, using a straight measurement line and a space-time image. However, such methods are not suitable for counting in wide areas. In this paper, we propose a method for counting multidirection passers-by using a circular space-time image. In this method, a circular measurement line is set on a sequence of background-subtraction images, and all pixels on this line are transformed into a space-time image. The number of passers-by can be counted from this space-time image, but the direction of each passer-by cannot be obtained from a single image. Therefore, two circular measurement lines are set on the sequence of background-subtraction images, and two space-time images are generated, one from the outside line and one from the inside line. The direction of each passer-by can then be obtained by detecting which line the passer-by crossed first.
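
    A minimal sketch of constructing the circular space-time image: for each frame of the background-subtraction sequence, the pixels lying on a circular measurement line are sampled and stacked as one row, so that objects crossing the line appear as blobs in the resulting image. The geometry and frame source are illustrative.

```python
# Build a circular space-time image: one row per frame, one column per angular
# sample on the circular measurement line (frames and geometry are illustrative).
import numpy as np

def circle_coords(cx, cy, radius, n_samples=360):
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.round(cx + radius * np.cos(theta)).astype(int)
    ys = np.round(cy + radius * np.sin(theta)).astype(int)
    return xs, ys

def circular_space_time_image(frames, cx, cy, radius):
    """frames: iterable of equally sized 2-D background-subtraction images."""
    xs, ys = circle_coords(cx, cy, radius)
    return np.stack([frame[ys, xs] for frame in frames])   # (n_frames, n_samples)

# Toy usage with blank frames; real input would be a background-subtracted video.
frames = [np.zeros((240, 320), dtype=np.uint8) for _ in range(5)]
print(circular_space_time_image(frames, cx=160, cy=120, radius=50).shape)   # (5, 360)

# Counting reduces to detecting blobs crossing this image over time; a second,
# concentric measurement line gives the crossing order and hence the direction.
```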

  6. Asymptotically flat radiative space-times with boost-rotation symmetry: The general structure

    SciTech Connect

Bičák, J.; Schmidt, B.

    1989-09-15

    This paper deals for the first time with boost-rotation-symmetric space-times from a unified point of view. Boost-rotation-symmetric space-times are the only explicitly known exact solutions of the Einstein vacuum field equations which describe moving singularities or black holes, are radiative and asymptotically flat in the sense that they admit global, though not complete, smooth null infinity, as well as spacelike and timelike infinities. They very likely represent the exterior fields of uniformly accelerated sources in general relativity and may serve as tests of various approximation methods, as nontrivial illustrations of the theory of the asymptotic structure of radiative space-times, and as test beds in numerical relativity. Examples are the C-metric or the solutions of Bonnor and Swaminarayan. The space-times are defined in a geometrical manner and their global properties are studied in detail, in particular their asymptotic structure. It is demonstrated how one can construct any asymptotically flat boost-rotation-symmetric space-time starting from the boost-rotation-symmetric solution of the flat-space wave equation. The problem of uniformly accelerated sources in special relativity is also discussed. The radiative properties and specific examples of the boost-rotation-symmetric space-times will be analyzed in a following paper.

  7. The Riemann tensor and the Bianchi identity in 5D space-time

    NASA Astrophysics Data System (ADS)

    Taki, Mehran; Mirjalili, Abolfazl

    2017-01-01

    The initial assumption of theories with extra dimensions is based on efforts to give a geometrical interpretation of the gravitational field. In this paper, using infinitesimal parallel transport of a vector, we generalize results obtained in four dimensions to five-dimensional space-time. For this purpose, we first consider the effect of the geometrical structure of 4D space-time on a vector transported around a closed path, which is basically quoted from chapter three of Ref. [5]. If the vector field is a gravitational field, then the required round trip leads to an equation which is dynamically governed by the Riemann tensor. We extend this idea to five-dimensional space-time and derive an improved version of Bianchi's identity. By contracting this identity, we obtain field equations in 5D space-time that are compatible with Einstein's field equations in 4D space-time. As an interesting result, we find that when one generalizes the results to 5D space-time, the new field equations imply a constraint involving the Ricci scalar, which might contain new physical insight.
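
    For reference, the four-dimensional second Bianchi identity and its contracted form, which the paper generalizes to five dimensions, read (in standard notation):

```latex
\nabla_{[a} R_{bc]de} = 0
\qquad\Longrightarrow\qquad
\nabla^{a} G_{ab} = \nabla^{a}\!\left(R_{ab} - \tfrac{1}{2}\, g_{ab} R\right) = 0 .
```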

  8. Numerical modeling of space-time wave extremes using WAVEWATCH III

    NASA Astrophysics Data System (ADS)

    Barbariol, Francesco; Alves, Jose-Henrique G. M.; Benetazzo, Alvise; Bergamasco, Filippo; Bertotti, Luciana; Carniel, Sandro; Cavaleri, Luigi; Chao, Yung Y.; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro; Tolman, Hendrik

    2017-01-01

    A novel implementation of parameters estimating the space-time wave extremes within the spectral wave model WAVEWATCH III (WW3) is presented. The new output parameters, available in WW3 version 5.16, rely on the theoretical model of Fedele (J Phys Oceanogr 42(9):1601-1615, 2012), extended by Benetazzo et al. (J Phys Oceanogr 45(9):2261-2275, 2015), to estimate the maximum second-order nonlinear crest height over a given space-time region. In order to assess the wave height associated with the maximum crest height and the maximum wave height (generally different in a broad-band stormy sea state), the linear quasi-determinism theory of Boccotti (2000) is considered. The new WW3 implementation is tested by simulating sea states and space-time extremes over the Mediterranean Sea (forced by the wind fields produced by the COSMO-ME atmospheric model). Model simulations are compared to space-time wave maxima observed on March 10th, 2014, in the northern Adriatic Sea (Italy), by a stereo camera system installed on board the "Acqua Alta" oceanographic tower. Results show that modeled space-time extremes are in general agreement with observations. Differences are mostly ascribed to the accuracy of the wind forcing and, to a lesser extent, to the approximations introduced in the space-time extremes parameterizations. Model estimates are expected to be even more accurate over areas larger than the mean wavelength (for instance, the model grid size).

  9. Development of a computer code to calculate the distribution of radionuclides within the human body by the biokinetic models of the ICRP.

    PubMed

    Matsumoto, Masaki; Yamanaka, Tsuneyasu; Hayakawa, Nobuhiro; Iwai, Satoshi; Sugiura, Nobuyuki

    2015-03-01

    This paper describes the Basic Radionuclide vAlue for Internal Dosimetry (BRAID) code, which was developed to calculate the time-dependent activity distribution in each organ and tissue characterised by the biokinetic compartmental models provided by the International Commission on Radiological Protection (ICRP). Translocation from one compartment to the next is taken to be governed by first-order kinetics, formulated as a system of first-order differential equations. In the source program of this code, the conservation equations are solved for the mass balance that describes the transfer of a radionuclide between compartments. This code is applicable to the evaluation of the radioactivity of nuclides in an organ or tissue without modification of the source program. It is also easy to handle revisions of the biokinetic model or the application of a user-defined model, because the code is designed so that all information on the biokinetic model structure is imported from an input file. Sample calculations are performed with the ICRP models, and the results are compared with analytic solutions for simple models. The results suggest that this code provides sufficient accuracy for dose estimation and the interpretation of monitoring data.
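
    A minimal sketch of the first-order compartmental kinetics such a code solves: the activity vector evolves as dA/dt = K·A, where K collects the transfer rates between compartments plus radioactive decay. The three-compartment model and rate constants below are purely illustrative, not an ICRP model.

```python
# First-order (linear) compartmental kinetics, dA/dt = K @ A, solved numerically.
# The compartments and transfer rates are purely illustrative, not an ICRP model.
import numpy as np
from scipy.integrate import solve_ivp

lam = np.log(2) / 8.02          # radioactive decay constant (roughly I-131, per day)
k_blood_thyroid = 0.5           # illustrative transfer rates (per day)
k_blood_excretion = 1.5
k_thyroid_blood = 0.01

# Rows/columns: [blood, thyroid, excreted]; off-diagonals = inflow, diagonals = outflow + decay.
K = np.array([
    [-(k_blood_thyroid + k_blood_excretion + lam), k_thyroid_blood, 0.0],
    [k_blood_thyroid, -(k_thyroid_blood + lam), 0.0],
    [k_blood_excretion, 0.0, -lam],
])

A0 = np.array([1.0, 0.0, 0.0])  # unit activity intake into blood at t = 0
sol = solve_ivp(lambda t, a: K @ a, t_span=(0.0, 30.0), y0=A0, dense_output=True)
print("activities at 30 d:", sol.sol(30.0))
```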

  10. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed mechanism is devoted to improving coding performance under various application conditions. PMID:26999741

  11. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding.

    PubMed

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed mechanism is devoted to improving coding performance under various application conditions.

  12. The Impact of the DoD Mobile Code Policy on Advanced Distributed Learning, Web-Based Distance Learning and Other Educational Missions

    DTIC Science & Technology

    2001-08-30

    To document the frequency with which mobile code is used in web-enabled courseware programming, a questionnaire about web-enabled courseware was sent to learning management system and courseware points of contact at 51 DoD academic agencies and 13 Learning Management System vendors. Eighteen surveys were returned, and only one-third of the respondents indicated that they used...

  13. Morphometric Analysis of Recognized Genes for Autism Spectrum Disorders and Obesity in Relationship to the Distribution of Protein-Coding Genes on Human Chromosomes

    PubMed Central

    McGuire, Austen B.; Rafi, Syed K.; Manzardo, Ann M.; Butler, Merlin G.

    2016-01-01

    Mammalian chromosomes are comprised of complex chromatin architecture with the specific assembly and configuration of each chromosome influencing gene expression and function in yet undefined ways by varying degrees of heterochromatinization that result in Giemsa (G) negative euchromatic (light) bands and G-positive heterochromatic (dark) bands. We carried out morphometric measurements of high-resolution chromosome ideograms for the first time to characterize the total euchromatic and heterochromatic chromosome band length, distribution and localization of 20,145 known protein-coding genes, 790 recognized autism spectrum disorder (ASD) genes and 365 obesity genes. The individual lengths of G-negative euchromatin and G-positive heterochromatin chromosome bands were measured in millimeters and recorded from scaled and stacked digital images of 850-band high-resolution ideograms supplied by the International Society of Chromosome Nomenclature (ISCN) 2013. Our overall measurements followed established banding patterns based on chromosome size. G-negative euchromatic band regions contained 60% of protein-coding genes while the remaining 40% were distributed across the four heterochromatic dark band sub-types. ASD genes were disproportionately overrepresented in the darker heterochromatic sub-bands, while the obesity gene distribution pattern did not significantly differ from protein-coding genes. Our study supports recent trends implicating genes located in heterochromatin regions playing a role in biological processes including neurodevelopment and function, specifically genes associated with ASD. PMID:27164088

  14. Morphometric Analysis of Recognized Genes for Autism Spectrum Disorders and Obesity in Relationship to the Distribution of Protein-Coding Genes on Human Chromosomes.

    PubMed

    McGuire, Austen B; Rafi, Syed K; Manzardo, Ann M; Butler, Merlin G

    2016-05-05

    Mammalian chromosomes are comprised of complex chromatin architecture with the specific assembly and configuration of each chromosome influencing gene expression and function in yet undefined ways by varying degrees of heterochromatinization that result in Giemsa (G) negative euchromatic (light) bands and G-positive heterochromatic (dark) bands. We carried out morphometric measurements of high-resolution chromosome ideograms for the first time to characterize the total euchromatic and heterochromatic chromosome band length, distribution and localization of 20,145 known protein-coding genes, 790 recognized autism spectrum disorder (ASD) genes and 365 obesity genes. The individual lengths of G-negative euchromatin and G-positive heterochromatin chromosome bands were measured in millimeters and recorded from scaled and stacked digital images of 850-band high-resolution ideograms supplied by the International Society of Chromosome Nomenclature (ISCN) 2013. Our overall measurements followed established banding patterns based on chromosome size. G-negative euchromatic band regions contained 60% of protein-coding genes while the remaining 40% were distributed across the four heterochromatic dark band sub-types. ASD genes were disproportionately overrepresented in the darker heterochromatic sub-bands, while the obesity gene distribution pattern did not significantly differ from protein-coding genes. Our study supports recent trends implicating genes located in heterochromatin regions playing a role in biological processes including neurodevelopment and function, specifically genes associated with ASD.

  15. Brillouin distributed sensor over a 200km fiber-loop using a dual-pump configuration and colour coding

    NASA Astrophysics Data System (ADS)

    Le Floch, S.; Sauser, F.; Llera, M.; Rochat, E.

    2014-05-01

    In this paper, we propose a new Brillouin Optical Time Domain Analysis (BOTDA) set-up that combines simultaneous Brillouin gain/loss measurements with colour coding. This technique has the advantage that the pump power can be greatly increased compared to other coding schemes, thus increasing the sensing range. A measurement over a 200 km fiber loop is performed, with a 3 m spatial resolution and an accuracy of +/- 3 MHz (2σ) at the end of the sensing fiber. To the best of our knowledge, this is the best result obtained with a Brillouin sensor without Raman amplification.

  16. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems or coding software, taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard processing tools that helps to develop interoperable applications.

  17. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery.

    PubMed

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2016-01-01

    Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; counts and positions of symptomatic plants were used to test the hypotheses of Complete Spatial Randomness and isotropy in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from dispersed (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemic modeling should include the environmental setting (e.g., vineyard geometry and topography) to capture anisotropy as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality.

  18. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery

    PubMed Central

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2017-01-01

    Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; counts and positions of symptomatic plants were used to test the hypotheses of Complete Spatial Randomness and isotropy in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from dispersed (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemic modeling should include the environmental setting (e.g., vineyard geometry and topography) to capture anisotropy as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581

  19. Space-time evolution of ejected plasma for the triggering of gas switch

    NASA Astrophysics Data System (ADS)

    Liu, Shanhong; Liu, Xuandong; Shen, Xi; Feng, Lei; Tie, Weihao; Zhang, Qiaogen

    2016-06-01

    Ejected plasma has been widely applied to the discharge process of gas spark switches as a triggering technique, and the development of the ejected plasma has a direct and important effect on the discharge characteristics of gas switches. In this paper, both the injection characteristics and the space-time evolution of ejected plasma for the triggering of a gas spark switch with different stored energies, pulse polarities, and pressures are studied. The discharge characteristics and breakdown process of a gas switch ignited by ejected plasma under different working coefficients are also discussed briefly. The results show that the stored energy has a significant influence on the characteristics of the ejected plasma. With increasing stored energy, the propulsion mode of the ejected plasma in the axial direction transforms from "plasmoid" to "plasma flow," and the distribution of the ejected plasma goes through "cloud," "core-cloud," and "branch" stages in sequence. The velocity of the ejected plasma under negative pulse polarity is obviously higher than that under positive pulse polarity, especially at the very beginning. The radial dimensions of the ejected plasma under the two pulse polarities follow a similar pattern over time, increasing first and then decreasing, giving an inverted "U"-shaped curve. With increasing pressure, the velocity of the ejected plasma significantly decreases and the "branch" channels droop earlier. Applying the ejected plasma to the triggering of a gas switch, the switch can be triggered reliably over a wide working coefficient range of 10%-90%. With increasing working coefficient, the breakdown process of the switch transitions from the slow working mode to the fast working mode, and the delay time decreases from tens of μs to hundreds of ns.

  20. Optimal search strategies of space-time coupled random walkers with finite lifetimes.

    PubMed

    Campos, D; Abad, E; Méndez, V; Yuste, S B; Lindenberg, K

    2015-05-01

    We present a simple paradigm for detection of an immobile target by a space-time coupled random walker with a finite lifetime. The motion of the walker is characterized by linear displacements at a fixed speed and exponentially distributed duration, interrupted by random changes in the direction of motion and resumption of motion in the new direction with the same speed. We call these walkers "mortal creepers." A mortal creeper may die at any time during its motion according to an exponential decay law characterized by a finite mean death rate ω(m). While still alive, the creeper has a finite mean frequency ω of change of the direction of motion. In particular, we consider the efficiency of the target search process, characterized by the probability that the creeper will eventually detect the target. Analytic results confirmed by numerical results show that there is an ω(m)-dependent optimal frequency ω=ω(opt) that maximizes the probability of eventual target detection. We work primarily in one-dimensional (d=1) domains and examine the role of initial conditions and of finite domain sizes. Numerical results in d=2 domains confirm the existence of an optimal frequency of change of direction, thereby suggesting that the observed effects are robust to changes in dimensionality. In the d=1 case, explicit expressions for the probability of target detection in the long time limit are given. In the case of an infinite domain, we compute the detection probability for arbitrary times and study its early- and late-time behavior. We further consider the survival probability of the target in the presence of many independent creepers beginning their motion at the same location and at the same time. We also consider a version of the standard "target problem" in which many creepers start at random locations at the same time.
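
    A Monte Carlo sketch of the one-dimensional mortal-creeper search described above: direction changes occur at rate ω and death at rate ω_m, and scanning ω for a fixed ω_m exposes the optimal turning frequency. All parameter values are illustrative.

```python
# Monte Carlo estimate of the probability that a 1-D "mortal creeper" detects a fixed
# target before dying; turns occur at rate omega, death at rate omega_m (illustrative).
import numpy as np

def detection_probability(omega, omega_m, speed=1.0, x0=0.0, target=5.0,
                          n_walkers=5000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_walkers):
        x = x0
        direction = rng.choice([-1.0, 1.0])
        lifetime = rng.exponential(1.0 / omega_m)
        t = 0.0
        while t < lifetime:
            dt = min(rng.exponential(1.0 / omega), lifetime - t)   # next turn or death
            x_new = x + direction * speed * dt
            if min(x, x_new) <= target <= max(x, x_new):           # segment crosses target
                hits += 1
                break
            x, t = x_new, t + dt
            direction = rng.choice([-1.0, 1.0])    # new direction drawn at random (assumed)
    return hits / n_walkers

# Scanning the turning frequency reveals an optimum for a given death rate.
for omega in (0.05, 0.2, 1.0, 5.0):
    print(f"omega = {omega:>4}: P(detect) ~ {detection_probability(omega, omega_m=0.1):.3f}")
```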

  1. Space-time models based on random fields with local interactions

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios T.; Tsantili, Ivi C.

    2016-08-01

    The analysis of space-time data from complex, real-life phenomena requires the use of flexible and physically motivated covariance functions. In most cases, it is not possible to explicitly solve the equations of motion for the fields or the respective covariance functions. In the statistical literature, covariance functions are often based on mathematical constructions. In this paper, we propose deriving space-time covariance functions by solving “effective equations of motion”, which can be used as statistical representations of systems with diffusive behavior. In particular, we propose to formulate space-time covariance functions based on an equilibrium effective Hamiltonian using the linear response theory. The effective space-time dynamics is then generated by a stochastic perturbation around the equilibrium point of the classical field Hamiltonian leading to an associated Langevin equation. We employ a Hamiltonian which extends the classical Gaussian field theory by including a curvature term and leads to a diffusive Langevin equation. Finally, we derive new forms of space-time covariance functions.

  2. Massless scalar fields at null and spatial infinity in the Schwarzschild space-time

    SciTech Connect

    Habisohn, C.X.

    1989-05-01

    It is known that massless scalar, Maxwell, and linearized metric fields (in an appropriate gauge) having data of compact support will evolve to be asymptotically flat on any asymptotically flat background space-time. However, little is known about the evolution of data that is reasonably well behaved but has nontrivial falloff at spatial infinity. Is the set of such data that evolves to be asymptotically flat at null infinity in a curved asymptotically flat space-time of the same size as, and does it consist of elements with falloff rates similar to, the set of such data in Minkowski space-time? Stewart and Schmidt analyzed massless scalar fields on both the Minkowski and Schwarzschild space-times. Their calculations indicated that the set of Schwarzschild data in question was much smaller than the Minkowski set. In this paper, this problem is reexamined and it is determined, contrary to the indications of Stewart and Schmidt, that the Schwarzschild set is of the same size, and its elements have falloff rates similar to the corresponding Minkowski set. This result supports the ability of the definition of asymptotic flatness to admit a large class of space-times.

  3. Gravitational deflection of light in the Schwarzschild-de Sitter space-time

    SciTech Connect

    Bhadra, Arunava; Biswas, Swarnadeep; Sarkar, Kabita

    2010-09-15

    Recent studies suggest that the cosmological constant affects the gravitational bending of photons, although the orbital equation for light in Schwarzschild-de Sitter space-time is free from a cosmological constant. Here we argue that the very notion of a cosmological constant independent of the photon orbit in the Schwarzschild-de Sitter space-time is not proper. Consequently, the cosmological constant has some clear contributions to the deflection angle of light rays. We stress the importance of the study of photon trajectories from the reference objects in bending calculations, particularly for asymptotically nonflat space-time. When such an aspect is taken into consideration, the contribution of a cosmological constant to the effective bending is found to depend on the distances of the source and the reference objects.

  4. Black-and-white hole as a space-time with integrable singularity

    NASA Astrophysics Data System (ADS)

    Strokov, Vladimir N.; Lukash, Vladimir N.; Mikheeva, Elena V.

    2016-01-01

    We discuss the problem of singularities in general relativity and emphasize the distinction that should be made between what are understood to be mathematical and physical singularities. We review examples of space-times that conventionally contain a singularity which, in a sense, does not manifest itself physically. Special attention is paid to the case of integrable singularities, for which we propose a well-defined mathematical procedure used to extend the space-time beyond the singularity. We argue that this type of singularity may connect the interior of a black hole with a newly born universe (a space-time referred to as a black-and-white hole), giving a resolution to the problem of the initial high density and symmetry of the universe. We illustrate this by presenting toy models of eternal and astrophysical black-and-white holes.

  5. Space-time fractional diffusion equation using a derivative with nonsingular and regular kernel

    NASA Astrophysics Data System (ADS)

    Gómez-Aguilar, J. F.

    2017-01-01

    In this paper, the space-time fractional diffusion equation is modified using fractional operators with a Mittag-Leffler kernel in the Caputo and Riemann-Liouville senses; the fractional equation is examined separately with a fractional spatial derivative and with a fractional temporal derivative. For the cases studied, the orders considered are 0 < β, γ ≤ 1, respectively. In this alternative representation we introduce appropriate fractional dimensional parameters that consistently characterize the existence of the fractional space-time derivatives in the fractional diffusion equation; these parameters relate the equation to a fractal space-time geometry and provide a new family of solutions for diffusive processes. The proposed mathematical representation can be useful for understanding electrochemical phenomena, propagation of energy in dissipative systems, viscoelastic materials, material heterogeneities, and media with different scales.
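
    As an illustration of the kind of relaxation such temporal fractional derivatives produce, the sketch below evaluates the one-parameter Mittag-Leffler function E_γ(z) = Σ_k z^k / Γ(γk + 1) by truncated series; for γ = 1 it reduces to the ordinary exponential, while for γ < 1 the decay E_γ(-(t/τ)^γ) is slower and heavy-tailed. This is a generic numerical sketch, not code from the paper, and the parameter values are illustrative.

```python
import math

def mittag_leffler(z, gamma, terms=100):
    """One-parameter Mittag-Leffler function E_gamma(z) = sum_k z**k / Gamma(gamma*k + 1).
    Truncated-series evaluation; adequate for moderate |z| only."""
    total = 0.0
    for k in range(terms):
        term = z**k / math.gamma(gamma * k + 1.0)
        total += term
        if abs(term) < 1e-16:   # series has converged to double precision
            break
    return total

# For gamma = 1 the series reduces to the ordinary exponential (sanity check);
# for gamma < 1 the fractional relaxation E_gamma(-(t/tau)**gamma) decays more slowly.
t, tau = 2.0, 1.0
print(mittag_leffler(-(t / tau), 1.0), math.exp(-t / tau))
print(mittag_leffler(-((t / tau) ** 0.8), 0.8))
```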

  6. Analytical solution of the geodesic equation in Kerr-(anti-) de Sitter space-times

    SciTech Connect

    Hackmann, Eva; Laemmerzahl, Claus; Kagramanova, Valeria; Kunz, Jutta

    2010-02-15

    The complete analytical solutions of the geodesic equations in Kerr-de Sitter and Kerr-anti-de Sitter space-times are presented. They are expressed in terms of the Weierstrass elliptic ℘, ζ, and σ functions as well as hyperelliptic Kleinian σ functions restricted to the one-dimensional θ divisor. We analyze the dependence of timelike geodesics on the parameters of the space-time metric and of the test particle and compare the results with the situation in Kerr space-time with vanishing cosmological constant. Furthermore, we can systematically find all last stable spherical and circular orbits and derive the expressions for the deflection angle of flyby orbits, the orbital frequencies of bound orbits, the periastron shift, and the Lense-Thirring effect.

  7. Even perturbations of the self-similar Vaidya space-time

    SciTech Connect

    Nolan, Brien C.; Waters, Thomas J.

    2005-05-15

    We study even parity metric and matter perturbations of all angular modes in self-similar Vaidya space-time. We focus on the case where the background contains a naked singularity. Initial conditions are imposed, describing a finite perturbation emerging from the portion of flat space-time preceding the matter-filled region of space-time. The most general perturbation satisfying the initial conditions is allowed to impinge upon the Cauchy horizon (CH), where the perturbation remains finite: There is no 'blue-sheet' instability. However, when the perturbation evolves through the CH and onto the second future similarity horizon of the naked singularity, divergence necessarily occurs: This surface is found to be unstable. The analysis is based on the study of individual modes following a Mellin transform of the perturbation. We present an argument that the full perturbation remains finite after resummation of the (possibly infinite number of) modes.

  8. Space-Time Foam in 2D and the Sum Over Topologies

    NASA Astrophysics Data System (ADS)

    Loll, R.; Westra, W.

    2003-10-01

    It is well-known that the sum over topologies in quantum gravity is ill-defined, due to a super-exponential growth of the number of geometries as a function of the space-time volume, leading to a badly divergent gravitational path integral. Not even in dimension 2, where a non-perturbative quantum gravity theory can be constructed explicitly from a (regularized) path integral, has this problem found a satisfactory solution. In the present work, we extend a previous 2d Lorentzian path integral, regulated in terms of Lorentzian random triangulations, to include space-times with an arbitrary number of handles. We show that after the imposition of physically motivated causality constraints, the combined sum over geometries and topologies is well-defined and possesses a continuum limit which yields a concrete model of space-time foam in two dimensions.

  9. Mathematical Formalism for an Experimental Test of Space-Time Anisotropy

    SciTech Connect

    Voicu-Brinzei, Nicoleta; Siparov, Sergey

    2010-01-01

    Some specific astrophysical data collected during the last decade suggest the need for a modification of the expression for the Einstein-Hilbert action, and several attempts are known in this respect. The modification suggested in this paper stems from a possible anisotropy of space-time, which leads to a dependence of the simplest scalar in the least action principle on directional variables. In order to provide testable support for this idea, we consider optic-metrical parametric resonance, an experiment on a galactic scale based on the interaction between the electromagnetic radiation of cosmic masers and periodic gravitational waves emitted by close binary systems or pulsars. Since the effect depends on the space-time metric, a possible anisotropy could be revealed through observations. We prove that if space-time is anisotropic, then the orientation of the astrophysical systems suitable for observations would reveal it.

  10. Local Effect of Space-Time Expansion ---- How Galaxies Form and Evolve

    NASA Astrophysics Data System (ADS)

    Yang, Jian Liang; Hua, He Yu

    2016-09-01

    We generalize the gravitational theory of a central field to expanding space-time, realizing a unification of the large-scale structure of space-time with small-scale physical phenomena, and systematically explain gravitational anomalies of the solar system that cannot be treated with current knowledge, such as the extra recession rate of the lunar orbit, the increase of the astronomical unit, the secular change of the length of day, the Earth's expansion, and the anomalous acceleration of artificial spacecraft. We further argue that galaxies form from continued growth rather than from the assemblage of matter existing after the big bang: new matter is continuously created in the interior of celestial bodies; celestial bodies, galaxies, and space enlarge simultaneously in the same proportion; and it is the local effect of space-time expansion that determines the formation and evolution of galaxies.

  11. Space-Time for MIMO Multicasting and Full-Rate, Full-Diversity Codes with Partial CSI

    DTIC Science & Technology

    2009-05-31

    beamforming, this scheme achieves universally better performance in terms of the worst-case SNR. Computational complexity reduction was achieved via a...often used to optimize the SNR averaged over all receivers. However, the drawback of this scheme is the unfair performance among the receivers, i.e...users with poor channel conditions may be allocated with unacceptably low SNRs. Practical systems, such as future digital video/audio/data applications

  12. Time-Dependent Distribution Functions in C-Mod Calculated with the CQL3D-Hybrid-FOW, AORSA Full-Wave, and DC Lorentz Codes

    NASA Astrophysics Data System (ADS)

    Harvey, R. W. (Bob); Petrov, Yu. V.; Jaeger, E. F.; Berry, L. A.; Bonoli, P. T.; Bader, A.

    2015-11-01

    A time-dependent simulation of C-Mod pulsed ICRF power is made calculating minority hydrogen ion distribution functions with the CQL3D-Hybrid-FOW finite-orbit-width Fokker-Planck code. ICRF fields are calculated with the AORSA full wave code, and RF diffusion coefficients are obtained from these fields using the DC Lorentz gyro-orbit code. Prior results with a zero-banana-width simulation using the CQL3D/AORSA/DC time-cycles showed a pronounced enhancement of the H distribution in the perpendicular velocity direction compared to results obtained from Stix's quasilinear theory, in general agreement with experiment. The present study compares the new FOW results, including relevant gyro-radius effects, to determine the importance of these effects on the NPA synthetic diagnostic time-dependence. The new NPA results give increased agreement with experiment, particularly in the ramp-down time after the ICRF pulse. Funded, through subcontract with Massachusetts Institute of Technology, by USDOE sponsored SciDAC Center for Simulation of Wave-Plasma Interactions.

  13. Application of space-time scan statistics to describe geographic and temporal clustering of visible drug activity.

    PubMed

    Linton, Sabriya L; Jennings, Jacky M; Latkin, Carl A; Gomez, Marisela B; Mehta, Shruti H

    2014-10-01

    Knowledge of the geographic and temporal clustering of drug activity can inform where health and social services are needed and can provide insight on the potential impact of local policies on drug activity. This ecologic study assessed the spatial and temporal distribution of drug activity in Baltimore, Maryland, prior to and following the implementation of a large urban redevelopment project in East Baltimore, which began in 2003. Drug activity was measured by narcotic calls for service at the neighborhood level. A space-time scan statistic approach was used to identify statistically significant clusters of narcotic calls for service across space and time, using a discrete Poisson model. After adjusting for economic deprivation and housing vacancy, clusters of narcotic calls for service were identified among neighborhoods located in Southeast, Northeast, Northwest, and West Baltimore from 2001 to 2010. Clusters of narcotic calls for service were identified among neighborhoods located in East Baltimore from 2001 to 2003, indicating a decrease in narcotic calls thereafter. A large proportion of clusters occurred among neighborhoods located in North and Northeast Baltimore after 2003, which indicated a potential spike during this time frame. These findings suggest potential displacement of drug activity coinciding with the initiation of urban redevelopment in East Baltimore. Space-time scan statistics should be used in future research to describe the potential implications of local policies on drug activity.
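
    The core of the discrete Poisson space-time scan statistic used here is a log-likelihood ratio comparing the observed and expected counts inside a candidate space-time cylinder with those outside; the cylinder maximizing this ratio is the most likely cluster, and its significance is assessed by Monte Carlo replication. The sketch below evaluates that ratio for a single cylinder; the counts and variable names are illustrative, not data from the study.

```python
import math

def poisson_llr(c, e, C, E):
    """Log-likelihood ratio of the discrete-Poisson scan statistic for one
    space-time cylinder: c observed / e expected inside, C / E overall.
    Returns 0 unless the cylinder shows an excess (c/e > (C-c)/(E-e))."""
    if e <= 0 or E - e <= 0:
        return 0.0
    if c / e <= (C - c) / (E - e):
        return 0.0
    llr = -C * math.log(C / E)
    if c > 0:
        llr += c * math.log(c / e)
    if C - c > 0:
        llr += (C - c) * math.log((C - c) / (E - e))
    return llr

# toy cylinder: 30 narcotic calls observed where 18 were expected,
# out of 500 calls and 500 expected city-wide
print(poisson_llr(30, 18.0, 500, 500.0))
```

    In the full procedure this quantity is maximized over all candidate cylinders (centers, radii, and time windows) and the maximum is compared against the maxima obtained from Monte Carlo replications under the null model.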

  14. Impact of curvature divergences on physical observers in a wormhole space-time with horizons

    NASA Astrophysics Data System (ADS)

    Olmo, Gonzalo J.; Rubiera-Garcia, D.; Sanchez-Puente, A.

    2016-06-01

    The impact of curvature divergences on physical observers in a black hole space-time that is nonetheless geodesically complete is investigated. This space-time is an exact solution of certain extensions of general relativity coupled to Maxwell's electrodynamics and, roughly speaking, consists of two Reissner-Nordström (or Schwarzschild or Minkowski) geometries connected by a spherical wormhole near the center. We find that, despite the existence of infinite tidal forces, causal contact is never lost among the elements making up the observer. This suggests that curvature divergences may not be as pathological as traditionally thought.

  15. A multi-element cosmological model with a complex space-time topology

    NASA Astrophysics Data System (ADS)

    Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.

    2015-02-01

    Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.

  16. Analytic treatment of complete and incomplete geodesics in Taub-NUT space-times

    SciTech Connect

    Kagramanova, Valeria; Kunz, Jutta; Hackmann, Eva; Laemmerzahl, Claus

    2010-06-15

    We present the complete set of analytical solutions of the geodesic equation in Taub-NUT space-times in terms of the Weierstrass elliptic functions. We systematically study the underlying polynomials and characterize the motion of test particles by their zeros. Since the presence of the 'Misner string' in the Taub-NUT metric has led to different interpretations, we consider these in terms of the geodesics of the space-time. In particular, we address the geodesic incompleteness at the horizons discussed by Misner and Taub [C. W. Misner and A. H. Taub, Sov. Phys. JETP 28, 122 (1969) [Zh. Eksp. Teor. Fiz. 55, 233 (1968)

  17. Quantum entanglement of fermions-antifermions pair creation modes in noncommutative Bianchi I space-time

    NASA Astrophysics Data System (ADS)

    Ghiti, M. F.; Mebarki, N.; Aissaoui, H.

    2015-08-01

    The noncommutative Bianchi I curved space-time vierbeins and spin connections are derived. Moreover, the corresponding noncommutative Dirac equation as well as its solutions are presented. As an application within the quantum field theory approach using Bogoliubov transformations, the von Neumann fermion-antifermion pair creation quantum entanglement entropy is studied. It is shown that its behavior is strongly dependent on the value of the noncommutativity parameter θ, the k⊥-mode frequencies, and the structure of the curved space-time. Various discussions of the obtained features are presented.

  18. Space-Time Variable Superstring Vacua (Calabi-Yau Cosmic Yarn)

    NASA Astrophysics Data System (ADS)

    Green, Paul S.; Hübsch, Tristan

    In a general superstring vacuum configuration, the “internal” space (sector) varies in space-time. When this variation is nontrivial only in two spacelike dimensions, the vacuum contains static cosmic strings with finite energy per unit length, which is, up to interactions with matter, an easily computed topological invariant. The total space-time is smooth although the “internal” space is singular at the center of each cosmic string. In a similar analysis of the Wick-rotated Euclidean model, these cosmic strings acquire the expected self-interactions. Also, a possibility emerges to define a global time in order to rotate back to the Lorentzian case.

  19. Open quantum system approach to the Gibbons-Hawking effect of de Sitter space-time.

    PubMed

    Yu, Hongwei

    2011-02-11

    We analyze, in the paradigm of open quantum systems, the reduced dynamics of a freely falling two-level detector in de Sitter space-time in weak interaction with a reservoir of fluctuating quantized conformal scalar fields in the de Sitter-invariant vacuum. We find that the detector is asymptotically driven to a thermal state at the Gibbons-Hawking temperature, regardless of its initial state. Our discussion, therefore, shows that the Gibbons-Hawking effect of de Sitter space-time can be understood as a manifestation of thermalization phenomena that involves decoherence and dissipation in open quantum systems.

  20. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  1. Relativistic spectrum of hydrogen atom in the space-time non-commutativity

    SciTech Connect

    Moumni, Mustafa; BenSlama, Achour; Zaim, Slimane

    2012-06-27

    We study space-time non-commutativity applied to the hydrogen atom and its phenomenological effects. We find that it modifies the Coulomb potential in the Hamiltonian and adds an r^{-3} part. By calculating the energies from the Dirac equation using perturbation theory, we study the modifications to the hydrogen spectrum. We find that it removes the degeneracy with respect to the total angular momentum quantum number and acts like a Lamb shift. Comparing the results with experimental values from spectroscopy, we obtain a new bound for the space-time non-commutativity parameter.

  2. Polar Codes

    DTIC Science & Technology

    2014-12-01

    density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes. ...the most common. Many civilian systems use low density parity check (LDPC) FEC codes, and the Navy is planning to use LDPC for some future systems...other forward error correction methods: a turbo code, a low density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes

  3. Einstein-Weyl field equations in a Bianchi type-IX space-time

    SciTech Connect

    Kolassis, C.A.; Le Denmat, G.

    1984-07-15

    It is proved that there exists no solution of the combined gravitational-neutrino field equations in general relativity if the space-time metric admits a group of isometries of Bianchi type IX and the neutrino field has geodesic and shearfree rays.

  4. Quantum field theory in the space-time of a cosmic string

    SciTech Connect

    Linet, B.

    1987-01-15

    For a massive scalar field in the static cylindrically symmetric space-time describing a cosmic string, we determine explicitly the Euclidean Green's function. We obtain also an alternative local form which allows us to calculate the vacuum energy-momentum tensor. In the case of a conformal scalar field, we carry out completely the calculations.

  5. Living in Space: Time, Space and Spirit--Keys to Scientific Literacy Series.

    ERIC Educational Resources Information Center

    Stonebarger, Bill

    The ideas of flight and space travel are not new, but the technologies that make them possible are very recent. This booklet considers time, space, and spirit related to living in space. Time refers to a sense of history; space refers to geography; and spirit refers to life and thought. Several chapters on the history and concepts of flight and…

  6. Bayesian Space-Time Patterns and Climatic Determinants of Bovine Anaplasmosis

    PubMed Central

    Hanzlicek, Gregg A.; Raghavan, Ram K.; Ganta, Roman R.; Anderson, Gary A.

    2016-01-01

    The space-time pattern and environmental drivers (land cover, climate) of bovine anaplasmosis in the Midwestern state of Kansas were retrospectively evaluated using Bayesian hierarchical spatio-temporal models and publicly available, remotely-sensed environmental covariate information. Cases of bovine anaplasmosis positively diagnosed at Kansas State Veterinary Diagnostic Laboratory (n = 478) between years 2005–2013 were used to construct the models, which included random effects for space, time and space-time interaction effects with defined priors, and fixed-effect covariates selected a priori using a univariate screening procedure. The Bayesian posterior median and 95% credible intervals for the space-time interaction term in the best-fitting covariate model indicated a steady progression of bovine anaplasmosis over time and geographic area in the state. Posterior median estimates and 95% credible intervals derived for covariates in the final covariate model indicated land surface temperature (minimum), relative humidity and diurnal temperature range to be important risk factors for bovine anaplasmosis in the study. Model performance, measured using the Area Under the Curve (AUC) value, was good for the covariate model (> 0.7). The relevance of climatological factors for bovine anaplasmosis is discussed. PMID:27003596

  7. The Space-Time CE/SE Method for Solving Maxwell's Equations in Time-Domain

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Chen, C. L.; Liu, Yen

    2002-01-01

    An innovative finite-volume-type numerical method named the space-time conservation element and solution element (CE/SE) method is applied to solve time-dependent Maxwell's equations in this paper. Test problems of electromagnetic scattering and antenna radiation are solved for validation. Numerical results are presented and compared with the analytical solutions, showing very good agreement.

  8. Holographic Dark Energy Model with Interaction and Cosmological Constant in the Flat Space-Time

    NASA Astrophysics Data System (ADS)

    Saadat, Hassan

    2012-06-01

    In this paper we consider a holographic dark energy model with interaction in flat space-time with a non-zero cosmological constant. We calculate the cosmic scale factor and Hubble expansion parameter by using the time-dependent dark energy density. Then, we obtain the phenomenological interaction between holographic dark energy and matter. We fix our solution by using the observational data.

  9. Texturing Space-Times in the Australian Curriculum: Cross-Curriculum Priorities

    ERIC Educational Resources Information Center

    Peacock, David; Lingard, Robert; Sellar, Sam

    2015-01-01

    The Australian curriculum, as a policy imagining what learning should take place in schools, and what that learning should achieve, involves the imagining and rescaling of social relations amongst students, their schools, the nation-state and the globe. Following David Harvey's theorisations of space-time and Norman Fairclough's operationalisation…

  10. High-Order Space-time Discontinuous Galerkin Cell Vertex Scheme toward Compressible Navier Stokes Equations

    DTIC Science & Technology

    2012-06-25

    the idea of enforcing the space-time flux conservation, consider the 1-D case shown in Fig. 4. Suppose the solution at the spacetime node (m + 1/2, n...the CE associated with the spacetime node (m + 1/2, n + 1/2) is divided into five sections 1, 2, 3, 4 and 5, as shown in Figure 4

  11. Geometrical properties of an internal local octonionic space in curved space time

    SciTech Connect

    Marques, S.; Oliveira, C.G.

    1986-04-01

    A geometrical treatment of a flat tangent space local to a generalized complex, quaternionic, and octonionic space-time is constructed. It is shown that it is possible to find an Einstein-Maxwell-Yang-Mills correspondence in this generalized (Minkowskian) tangent space. 9 refs.

  12. Perturbative Approaching for Boson Fields' System in a Lewis-Papapetrou Space-Time

    SciTech Connect

    Murariu, G.; Dariescu, M. A.; Dariescu, C.

    2010-08-04

    In this paper the first-order solutions of the coupled Klein-Gordon-Maxwell-Einstein system of equations are derived for boson fields in a Lewis-Papapetrou space-time. The results extend the previous static solutions obtained in the literature. A main outcome is the symbolic script built for this approach.

  13. A bootstrap based space-time surveillance model with an application to crime occurrences

    NASA Astrophysics Data System (ADS)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population at risk data to generate expected values, have resulting hotspots bounded by administrative area units and are of limited use for near-real time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population at risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance for continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, year 2000.
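
    The essential idea, using past occurrences rather than population at risk as the baseline and judging significance with bootstrap permutations, can be sketched for a single spatial cell as below. This is a loose illustration under assumed inputs (historical counts per period and a current count), not the authors' exact algorithm.

```python
import random

def bootstrap_pvalue(current_count, past_counts, n_boot=999, seed=0):
    """Rough sketch of a bootstrap surveillance test for one spatial cell:
    the expected level comes from past occurrence counts (not population at
    risk), and significance is judged against bootstrap resamples of them."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_boot):
        resample = [rng.choice(past_counts) for _ in past_counts]
        expected = sum(resample) / len(resample)   # baseline level from the past
        if expected >= current_count:
            exceed += 1
    # one-sided p-value: how often the resampled baseline reaches today's count
    return (exceed + 1) / (n_boot + 1)

# toy example: a cell with 2-4 crimes per period historically, 9 this period
history = [2, 3, 4, 2, 3, 3, 4, 2]
print(bootstrap_pvalue(9, history))
```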

  14. Improved Space-Time Forecasting of next Day Ozone Concentrations in the Eastern U.S.

    EPA Science Inventory

    There is an urgent need to provide accurate air quality information and forecasts to the general public and environmental health decision-makers. This paper develops a hierarchical space-time model for daily 8-hour maximum ozone concentration (O3) data covering much of the easter...

  15. Accelerated multiscale space-time finite element simulation and application to high cycle fatigue life prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Wen, Lihua; Naboulsi, Sam; Eason, Thomas; Vasudevan, Vijay K.; Qian, Dong

    2016-08-01

    A multiscale space-time finite element method based on a time-discontinuous Galerkin (TDG) and enrichment approach is presented in this work with a focus on improving the computational efficiency of high cycle fatigue simulations. While the robustness of the TDG-based space-time method has been extensively demonstrated, a critical barrier to its extensive application is the large computational cost due to the additional temporal dimension and the enrichment that are introduced. The present implementation focuses on two aspects: firstly, a preconditioned iterative solver is developed along with techniques for optimizing the matrix storage and operations. Secondly, parallel algorithms based on multi-core graphics processing units are established to accelerate the progressive damage model implementation. It is shown that the computing time and memory of the accelerated space-time implementation scale with the number of degrees of freedom N as approximately O(N^1.6) and O(N), respectively. Finally, we demonstrate the accelerated space-time FEM simulation through benchmark problems.

  16. A space-time hybridizable discontinuous Galerkin method for incompressible flows on deforming domains

    NASA Astrophysics Data System (ADS)

    Rhebergen, Sander; Cockburn, Bernardo

    2012-06-01

    We present the first space-time hybridizable discontinuous Galerkin (HDG) finite element method for the incompressible Navier-Stokes and Oseen equations. Major advantages of a space-time formulation are its excellent capabilities of dealing with moving and deforming domains and grids and its ability to achieve higher-order accurate approximations in both time and space by simply increasing the order of polynomial approximation in the space-time elements. Our formulation is related to the HDG formulation for incompressible flows introduced recently in, e.g., [N.C. Nguyen, J. Peraire, B. Cockburn, A hybridizable discontinuous Galerkin method for Stokes flow, Comput. Methods Appl. Mech. Eng. 199 (2010) 582-597]. However, ours is inspired in typical DG formulations for compressible flows which allow for a more straightforward implementation. Another difference is the use of polynomials of fixed total degree with space-time hexahedral and quadrilateral elements, instead of simplicial elements. We present numerical experiments in order to assess the quality of the performance of the methods on deforming domains and to experimentally investigate the behavior of the convergence rates of each component of the solution with respect to the polynomial degree of the approximations in both space and time.

  17. Space-time adaptive processing with sum and multiple difference beams for airborne radars

    NASA Astrophysics Data System (ADS)

    Maher, John E.; Zhang, Yuhong; Wang, Hong

    1999-07-01

    This paper describes some new results on a signal processing approach for airborne surveillance radars. This is a space-time adaptive processing technique that simultaneously processes temporal data from sum and difference (ΣΔ) beams to suppress clutter returns. The approach also includes employing spatial adaptive pre-suppression to suppress wideband noise jammers in a two-stage processor.

  18. Similarity solution to fractional nonlinear space-time diffusion-wave equation

    NASA Astrophysics Data System (ADS)

    Costa, F. Silva; Marão, J. A. P. F.; Soares, J. C. Alves; de Oliveira, E. Capelas

    2015-03-01

    In this article, the so-called fractional nonlinear space-time wave-diffusion equation is presented and discussed. This equation is solved by the similarity method using fractional derivatives in the Caputo, Riesz-Feller, and Riesz senses. Some particular cases are presented and the corresponding solutions are shown by means of 2-D and 3-D plots.

  19. Self-quartic interaction for a scalar field in an extended DFR noncommutative space-time

    NASA Astrophysics Data System (ADS)

    Abreu, Everton M. C.; Neves, M. J.

    2014-07-01

    The framework of Doplicher-Fredenhagen-Roberts (DFR) for a noncommutative (NC) space-time is considered as an alternative approach to study the NC space-time of the early Universe. In this formalism, the NC constant parameter, θ, is promoted to a coordinate of the space-time, and consequently we can describe a field theory in a space-time with extra dimensions. We will see that there is a canonical momentum associated with this new coordinate, in which the effects of new physics can emerge in the propagation of the fields along the extra dimensions. The Fourier space of this framework is automatically extended by the addition of the new momentum components. The main concept that we would like to emphasize from the outset is that the formalism demonstrated here will not be constructed by introducing a NC parameter in the system, as usual. It will be generated naturally from an already NC space. We will review that when the components of the new momentum are zero, the (extended) DFR approach is reduced to the usual (canonical) NC case, in which θ is an antisymmetric constant matrix. In this work we will study a scalar field action with self-quartic interaction ϕ^4_⋆ defined in the DFR NC space-time. We will obtain the Feynman rules in the Fourier space for the scalar propagator and vertex of the model. With these rules we are able to build the radiative corrections to one-loop order for the model propagator. The consequences of the NC scale, as well as the propagation of the field in extra dimensions, will be analyzed in the ultraviolet divergence scenario. We will investigate the actual possibility that this conjugate momentum k_{μν} has the property of healing the combination of IR/UV divergences that emerges in this recently proposed NC space-time quantum field theory.

  20. Is space-time symmetry a suitable generalization of parity-time symmetry?

    SciTech Connect

    Amore, Paolo; Fernández, Francisco M.; Garcia, Javier

    2014-11-15

    We discuss space-time symmetric Hamiltonian operators of the form H = H_0 + igH′, where H_0 is Hermitian and g real. H_0 is invariant under the unitary operations of a point group G while H′ is invariant under transformation by elements of a subgroup G′ of G. If G exhibits irreducible representations of dimension greater than unity, then it is possible that H has complex eigenvalues for sufficiently small nonzero values of g. In the particular case that H is parity-time symmetric, it appears to exhibit real eigenvalues for all 0 < g < g_c, where g_c is a critical value. We illustrate the main theoretical results and conclusions of this paper by means of two- and three-dimensional Hamiltonians exhibiting a variety of different point-group symmetries. - Highlights: • Space-time symmetry is a generalization of PT symmetry. • The eigenvalues of a space-time Hamiltonian are either real or appear as pairs of complex conjugate numbers. • In some cases all the eigenvalues are real for some values of a potential-strength parameter g. • At some value of g space-time symmetry is broken and complex eigenvalues appear. • Some multidimensional oscillators exhibit broken space-time symmetry for all values of g.
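
    The broken/unbroken behavior described above is easy to probe numerically for matrix models of the form H(g) = H_0 + igH′ by tracking the largest imaginary part of the spectrum as g grows. The sketch below uses a small, assumed PT-symmetric toy (a three-site chain with balanced gain and loss on the end sites), not one of the paper's Hamiltonians, to show a real spectrum at small g and complex-conjugate pairs beyond a critical value.

```python
import numpy as np

# Toy structure H(g) = H0 + i*g*Hp with H0 Hermitian and Hp real symmetric.
# These matrices are illustrative only (a PT-symmetric three-site chain).
H0 = np.array([[1.0, 0.3, 0.0],
               [0.3, 2.0, 0.3],
               [0.0, 0.3, 1.0]])
Hp = np.diag([1.0, 0.0, -1.0])   # balanced gain/loss on the end sites

for g in np.linspace(0.0, 0.2, 9):
    eig = np.linalg.eigvals(H0 + 1j * g * Hp)
    # for small g the printed imaginary parts are ~0 (unbroken symmetry);
    # beyond a critical g they become O(1) as complex-conjugate pairs appear
    print(f"g = {g:5.3f}   max |Im(E)| = {np.abs(eig.imag).max():.3e}")
```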

  1. Modeling of Dose Distribution for a Proton Beam Delivering System with the use of the Multi-Particle Transport Code 'Fluka'

    SciTech Connect

    Mumot, Marta; Agapov, Alexey

    2007-11-26

    We have developed a new delivery system for hadron therapy which uses a multileaf collimator and a range shifter. We simulate our beam delivery system with the multi-particle transport code 'Fluka'. From these simulations we obtained information about the dose distributions, about stars generated in the delivery system elements, and about the neutron flux. All the information obtained was analyzed from the point of view of radiation protection and of the homogeneity of the dose delivered to the patient body, and also in order to improve some of the modifiers used.

  2. A comparison study between Wiener and adaptive state estimation (STAP-ASE) algorithms for space time adaptive radar processing

    NASA Astrophysics Data System (ADS)

    Malek, Obaidul; Venetsanopoulos, Anastasios; Anpalagan, Alagan

    2010-08-01

    Space Time Adaptive Processing (STAP) is a multi-dimensional adaptive signal processing technique which processes the signal in the spatial and Doppler domains, for which a target detection hypothesis is to be formed. It is a sample-based technique that relies on the assumption of an adequate number of independent and identically distributed (i.i.d.) training data samples from the surrounding environment. The principal challenge of the radar processing arises when these underlying assumptions are violated due to severe dynamic heterogeneous clutter (hot clutter) and jammer effects. This in turn degrades the Signal to Interference-plus-Noise Ratio (SINR), and hence the signal detection performance. Classical Wiener filtering theory is inadequate for dealing with nonlinear and nonstationary interference, although the Wiener filtering approach is optimal for stationary and linear systems. These challenges can be overcome by the Adaptive Sequential State Estimation (ASSE) filtering technique.
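
    Under the stationary, linear assumptions mentioned above, the classical STAP baseline is a Wiener/MVDR-type weight vector w ∝ R⁻¹s, where R is the interference-plus-noise covariance estimated from i.i.d. training snapshots and s is the space-time steering vector of the detection hypothesis. The sketch below illustrates that baseline; the array dimensions, steering model, and interference model are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 8                 # antenna elements x Doppler taps -> snapshot length
NM = N * M
K = 10 * NM                 # number of i.i.d. (target-free) training snapshots

# space-time steering vector for an assumed spatial/Doppler frequency pair
def steering(f_spatial, f_doppler):
    a = np.exp(2j * np.pi * f_spatial * np.arange(N))
    b = np.exp(2j * np.pi * f_doppler * np.arange(M))
    return np.kron(b, a)

# training data: white noise plus one strong interferer (toy clutter/jammer model)
jam = steering(0.30, 0.10)
X = (rng.standard_normal((NM, K)) + 1j * rng.standard_normal((NM, K))) / np.sqrt(2)
X += 10 * jam[:, None] * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

R = X @ X.conj().T / K                     # sample covariance from i.i.d. snapshots
s = steering(0.10, -0.25)                  # hypothesized target
w = np.linalg.solve(R, s)                  # Wiener/MVDR-type weight  w ∝ R^{-1} s
w /= w.conj() @ s                          # unit gain on the target hypothesis
print("output interference-plus-noise power:", (w.conj() @ R @ w).real)
```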

  3. Quantification of space/time explicit fossil fuel CO2 emissions in urban domains

    NASA Astrophysics Data System (ADS)

    Gurney, K. R.; Razlivanov, I. N.; Song, Y.

    2013-05-01

    Quantification of fossil fuel CO2 emissions from the bottom-up perspective is a critical element in the development of a carbon monitoring system. A space/time explicit emissions data product can verify atmospheric CO2 measurements and offer practical information to authorities in order to optimize mitigation efforts. Here, we present the Hestia Project, an effort aimed at building a high resolution (e.g., building- and road-link-specific, hourly) fossil fuel CO2 emissions data product for the urban domain. A complete data product has been built for the city of Indianapolis and work is ongoing in Los Angeles. The work in Indianapolis is now part of a larger effort, INFLUX, aimed at a convergent top-down/bottom-up assessment of greenhouse gas emissions. The work in Los Angeles with JPL colleagues is aimed at building an operational carbon monitoring system with a focus on global megacities. Our urban-level quantification relies on a mixture of data and modeling structures. We start with the sector-specific Vulcan Project estimate and use Hestia to distribute emissions in space and time. Two components take the majority of the effort: buildings and onroad emissions. For the buildings, we utilize an energy building model constrained with multiple local data streams. For onroad emissions, we use a combination of traffic data and GIS road layers maintaining vehicle class information. In collaboration with our INFLUX colleagues, we are transporting these high resolution emissions through an atmospheric transport model for a forward comparison of the Hestia data product with atmospheric measurements collected on aircraft and cell towers. In collaboration with our JPL colleagues, we are testing the feasibility of quantifying a megacity domain and how it might integrate with remote sensing and in situ measurement systems. The Hestia effort also holds promise as a usable policy tool at the city scale. With detailed information on energy consumption and emissions with process

  4. Pacific Missile Test Center Information Resources Management Organization (code 0300): The ORACLE client-server and distributed processing architecture

    SciTech Connect

    Beckwith, A. L.; Phillips, J. T.

    1990-06-10

    Computing architectures using distributed processing and distributed databases are increasingly considered acceptable solutions for advanced data processing systems. This is occurring even though there is still considerable professional debate as to what "truly" distributed computing actually is, and despite the relative lack of advanced relational database management software (RDBMS) capable of meeting the database and system integrity requirements for developing reliable integrated systems. This study investigates the functionality of ORACLE database management software that is performing distributed processing between a MicroVAX/VMS minicomputer and three MS-DOS-based microcomputers. The ORACLE database resides on the MicroVAX and is accessed from the microcomputers with ORACLE SQL*NET, DECnet, and ORACLE PC TOOL PACKS. Data gathered during the study reveal that there is a demonstrable decrease in CPU demand on the MicroVAX, due to "distributed processing", when the ORACLE PC Tools are used to access the database as opposed to database access from "dumb" terminals. Also discovered were several hardware/software constraints that must be considered in implementing various software modules. The results of the study indicate that this distributed data processing architecture is becoming sufficiently mature and reliable and should be considered for developing applications that reduce processing on central hosts. 33 refs., 2 figs.

  5. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.

  6. Fuzzy geometry via the spinor bundle, with applications to holographic space-time and matrix theory

    SciTech Connect

    Banks, Tom; Kehayias, John

    2011-10-15

    We present a new framework for defining fuzzy approximations to geometry in terms of a cutoff on the spectrum of the Dirac operator, and a generalization of it that we call the Dirac-flux operator. This framework does not require a symplectic form on the manifold, and is completely rotation invariant on an arbitrary n-sphere. The framework is motivated by the formalism of holographic space-time, whose fundamental variables are sections of the spinor bundle over a compact Euclidean manifold. The strong holographic principle requires the space of these sections to be finite dimensional. We discuss applications of fuzzy spinor geometry to holographic space-time and to matrix theory.

  7. Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind.

    PubMed

    Matthaeus, W H; Weygand, J M; Dasso, S

    2016-06-17

    Single-point turbulence measurements cannot distinguish variations in space from variations in time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both the correlation time and the correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter, for which the frozen-in flow hypothesis may not be a useful approximation.

  8. Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind

    NASA Astrophysics Data System (ADS)

    Matthaeus, W. H.; Weygand, J. M.; Dasso, S.

    2016-06-01

    Single-point turbulence measurements cannot distinguish variations in space from variations in time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both the correlation time and the correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter, for which the frozen-in flow hypothesis may not be a useful approximation.
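
    The two-point part of such an ensemble estimate amounts to averaging lagged products of simultaneously measured, mean-removed signals from spacecraft pairs at a fixed separation r, giving C(r, τ). The sketch below does this for synthetic series; the data model and normalization are illustrative assumptions, not the paper's processing pipeline.

```python
import numpy as np

def space_time_correlation(b1, b2, max_lag):
    """Estimate C(r, tau) from two simultaneously sampled, mean-removed time
    series taken at spacecraft separated by a fixed distance r.  Each row of
    b1/b2 is one ensemble member (one interval); columns are time samples."""
    b1 = b1 - b1.mean(axis=1, keepdims=True)
    b2 = b2 - b2.mean(axis=1, keepdims=True)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.empty(lags.size)
    for i, tau in enumerate(lags):
        if tau >= 0:
            prod = b1[:, : b1.shape[1] - tau] * b2[:, tau:]
        else:
            prod = b1[:, -tau:] * b2[:, : b2.shape[1] + tau]
        corr[i] = prod.mean()                    # ensemble + time average at this lag
    return lags, corr / corr[lags.size // 2]     # normalize by the zero-lag value

# synthetic test: two noisy copies of the same signal, one delayed by 5 samples
rng = np.random.default_rng(1)
sig = rng.standard_normal((20, 500)).cumsum(axis=1)    # correlated 'turbulent' series
b1 = sig + 0.1 * rng.standard_normal(sig.shape)
b2 = np.roll(sig, 5, axis=1) + 0.1 * rng.standard_normal(sig.shape)
lags, corr = space_time_correlation(b1, b2, 20)
print(lags[np.argmax(corr)])    # peak near +5 reveals the time delay
```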

  9. Particle motion in Horava-Lifshitz black hole space-times

    SciTech Connect

    Enolskii, Victor; Hartmann, Betti; Sirimachan, Parinya; Kagramanova, Valeria; Kunz, Jutta; Laemmerzahl, Claus

    2011-10-15

    We study the particle motion in the space-time of a Kehagias-Sfetsos black hole which is a static spherically symmetric solution of a Horava-Lifshitz gravity model. This model reduces to general relativity in the infrared limit and deviates slightly from detailed balance. Taking the viewpoint that the model is essentially a (3+1)-dimensional modification of general relativity, we use the geodesic equation to determine the motion of massive and massless particles. We solve the geodesic equation exactly by using numerical techniques. We find that neither massless nor massive particles with nonvanishing angular momentum can reach the singularity at r=0. In addition to the bound and escape orbits that are also present in the Schwarzschild space-time, we find that new types of orbits exist: many-world bound orbits as well as two-world escape orbits. We also discuss observables such as the perihelion shift and the light deflection.

  10. Gauge-invariant extensions of the Proca model in a noncommutative space-time

    NASA Astrophysics Data System (ADS)

    Abreu, Everton M. C.; Neto, Jorge Ananias; Fernandes, Rafael L.; Mendes, Albert C. R.

    2016-09-01

    The gauge invariance analysis of theories described in noncommutative (NC) space-times can lead us to interesting results, since noncommutativity is one of the possible paths to investigate quantum effects in classical theories such as general relativity, for example. This theoretical possibility has motivated us to analyze the gauge invariance of the NC version of the Proca model, which is a second-class system in Dirac's classification, since its classical formulation (commutative space-time) has its gauge invariance broken by the mass term. To obtain such a gauge-invariant model, we have used the gauge unfixing method to construct a first-class NC version of the Proca model. We have also asked whether or not the gauge symmetries of NC theories are necessarily affected by the NC parameter. In this way, we have calculated its respective symmetries in a standard way via Poisson brackets.

  11. Wick rotation for quantum field theories on degenerate Moyal space(-time)

    SciTech Connect

    Grosse, Harald; Lechner, Gandalf; Ludwig, Thomas; Verch, Rainer

    2013-02-15

    In this paper the connection between quantum field theories on flat noncommutative space(-times) in Euclidean and Lorentzian signature is studied for the case that time is still commutative. By making use of the algebraic framework of quantum field theory and an analytic continuation of the symmetry groups which are compatible with the structure of Moyal space, a general correspondence between field theories on Euclidean space satisfying a time zero condition and quantum field theories on Moyal Minkowski space is presented ('Wick rotation'). It is then shown that field theories transferred to Moyal space(-time) by Rieffel deformation and warped convolution fit into this framework, and that the processes of Wick rotation and deformation commute.

  12. Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Diosady, Laslo T.; Murman, Scott M.

    2017-02-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high-order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
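
    The fast-diagonalization idea referred to above exploits the tensor-product structure of the discretization: a separable operator of the form I⊗A + B⊗I can be inverted exactly using only the one-dimensional eigendecompositions of A and B, which keeps the cost low at high polynomial order. The sketch below shows the mechanism for symmetric A and B; it is a generic illustration of the technique, not the paper's Navier-Stokes preconditioner.

```python
import numpy as np

def fast_diag_solve(A, B, C):
    """Solve (I (x) A + B (x) I) vec(X) = vec(C), i.e. A X + X B = C,
    for symmetric A (m x m) and B (n x n) via 1-D eigendecompositions."""
    la, Va = np.linalg.eigh(A)
    lb, Vb = np.linalg.eigh(B)
    Y = Va.T @ C @ Vb                       # transform the right-hand side
    Y /= la[:, None] + lb[None, :]          # divide by sums of 1-D eigenvalues
    return Va @ Y @ Vb.T                    # transform back

# small check against a dense Kronecker solve
rng = np.random.default_rng(0)
m, n = 5, 4
A = rng.standard_normal((m, m)); A = A @ A.T + np.eye(m)
B = rng.standard_normal((n, n)); B = B @ B.T + np.eye(n)
C = rng.standard_normal((m, n))
X = fast_diag_solve(A, B, C)
full = np.kron(np.eye(n), A) + np.kron(B, np.eye(m))
print(np.allclose(full @ X.flatten(order="F"), C.flatten(order="F")))
```

    In a preconditioning context the same kind of solve is applied blockwise as an approximate inverse inside the Krylov iteration, which is what makes the tensor-product formulation attractive at high order.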

  13. Gauge-invariant coupled gravitational, acoustical, and electromagnetic modes on most general spherical space-times

    NASA Astrophysics Data System (ADS)

    Gerlach, Ulrich H.; Sengupta, Uday K.

    1980-09-01

    The coupled Einstein-Maxwell system linearized away from an arbitrarily given spherically symmetric background space-time is reduced from its four-dimensional to a two-dimensional form expressed solely in terms of gauge-invariant geometrical perturbation objects. These objects, which besides the gravitational and electromagnetic, also include mass-energy degrees of freedom, are defined on the two-manifold spanned by the radial and time coordinates. For charged or uncharged arbitrary matter background the odd-parity perturbation equations for example, reduce to three second-order linear scalar equations driven by matter and charge inhomogeneities. These three equations describe the intercoupled gravitational, electromagnetic, and acoustic perturbational degrees of freedom. For a charged black hole in an asymptotically de Sitter space-time the gravitational and electromagnetic equations decouple into two inhomogeneous scalar wave equations.

  14. Self-Existing Objects and Auto-Generated Information in Chronology-Violating Space-Times

    NASA Astrophysics Data System (ADS)

    Romero, Gustavo E.; Torres, Diego F.

    Closed time-like curves (CTCs) naturally appear in a variety of chronology-violating space-times. In these space-times, the principle of self-consistency demands a harmony between local and global affairs that excludes grandfather-like paradoxes. However, self-existing objects trapped in CTCs are seemingly not avoided by the standard interpretation of this principle, usually constrained to a dynamical framework. In this letter we discuss whether we are committed to accepting an ontology with self-existing objects if CTCs actually occur in the universe. In addition, the epistemological status of the principle of self-consistency is analyzed and a discussion of the information flux through CTCs is presented.

  15. Space-time wiring specificity supports direction selectivity in the retina

    PubMed Central

    Zlateski, Aleksandar; Lee, Kisuk; Richardson, Mark; Turaga, Srinivas C.; Purcaro, Michael; Balkam, Matthew; Robinson, Amy; Behabadi, Bardia F.; Campos, Michael; Denk, Winfried; Seung, H. Sebastian

    2014-01-01

    How does the mammalian retina detect motion? This classic problem in visual neuroscience has remained unsolved for 50 years. In search of clues, we reconstructed Off-type starburst amacrine cells (SACs) and bipolar cells (BCs) in serial electron microscopic images with help from EyeWire, an online community of “citizen neuroscientists.” Based on quantitative analyses of contact area and branch depth in the retina, we found evidence that one BC type prefers to wire with a SAC dendrite near the SAC soma, while another BC type prefers to wire far from the soma. The near type is known to lag the far type in time of visual response. A mathematical model shows how such “space-time wiring specificity” could endow SAC dendrites with receptive fields that are oriented in space-time and therefore respond selectively to stimuli that move in the outward direction from the soma. PMID:24805243

  16. Space-time delta-sigma modulation for reception of multiple simultaneous independent RF beams

    NASA Astrophysics Data System (ADS)

    Rong, Guoguang; Black, Bruce A.; Siahmakoun, Azad Z.

    2005-09-01

    In this paper we introduce and analyze a multiple-RF-beam beamformer in receive mode utilizing the principle of space-time delta-sigma modulation. This principle is based on sampling input signals in both time and space and converting the sampled signals into a digital format by delta-sigma conversion. Noise shaping is achieved in the 2D frequency domain. We show that the modulator can receive signals of narrow and wide bandwidths with steering capability, can receive multiple beams, and can establish tradeoffs between sampling in time and in space. The ability of the modulator to trade off between time and space provides an effective way to sample high frequency RF signals without down-conversion. In addition, a space-time delta-sigma modulator has better performance than a solely temporal delta-sigma modulator (for the same filter order), as is typically used in communication systems to digitize the down-converted analog signals.

  17. Perturbative quantization of two-dimensional space-time noncommutative QED

    SciTech Connect

    Ghasemkhani, M.; Sadooghi, N.

    2010-02-15

    Using the method of perturbative quantization in the first order approximation, we quantize a nonlocal QED-like theory including fermions and bosons whose interactions are described by terms containing higher order space-time derivatives. As an example, the two-dimensional space-time noncommutative QED (NC-QED) is quantized perturbatively up to O(e^2, θ^3), where e is the NC-QED coupling constant and θ is the noncommutativity parameter. The resulting modified Lagrangian density is shown to include terms consisting of first order time-derivative and higher order space-derivatives of the modified field variables that satisfy the ordinary equal-time commutation relations up to O(e^2, θ^3). Using these commutation relations, the canonical current algebra of the modified theory is also derived.

  18. Evolution of a massless test scalar field on boson star space-times

    SciTech Connect

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzman, F. S.

    2010-07-15

    We numerically solve the massless test scalar field equation on the space-time background of boson stars and black holes. In order to do so, we use a numerical domain that contains future null infinity. We achieve this construction using a scri-fixing conformal compactification technique based on hyperboloidal constant mean curvature foliations of the space-time and solve the conformally invariant wave equation. We present two results: the scalar field shows oscillations of the quasinormal mode type found for black holes only for boson star configurations that are compact; and no signs of tail decay are found in the parameter space we explored. Even though our results do not correspond to the master equation of perturbations of boson star solutions, they indicate that the parameter space of boson stars as black hole mimickers is restricted to compact configurations.

  19. Pseudo-Z symmetric space-times with divergence-free Weyl tensor and pp-waves

    NASA Astrophysics Data System (ADS)

    Mantica, Carlo Alberto; Suh, Young Jin

    2016-12-01

    In this paper we present some new results about n(≥ 4)-dimensional pseudo-Z symmetric space-times. First we show that if the tensor Z satisfies the Codazzi condition then its rank is one, the space-time is a quasi-Einstein manifold, and the associated 1-form turns out to be null and recurrent. If this covector can be rescaled to a covariantly constant one, we obtain a Brinkmann wave; in any case, the metric belongs to a subclass of the Kundt metrics. Next we investigate pseudo-Z symmetric space-times with harmonic conformal curvature tensor: a complete classification of such spaces is obtained. They are necessarily quasi-Einstein and represent a perfect fluid space-time in the case of a time-like associated covector; in the case of a null associated covector they represent a pure radiation field. Further, if the associated covector is locally a gradient we get a Brinkmann-wave space-time for n > 4 and a pp-wave space-time for n = 4. In all cases an algebraic classification of the Weyl tensor is provided for n = 4 and higher dimensions. Then conformally flat pseudo-Z symmetric space-times are investigated. In the case of a null associated covector the space-time reduces to a plane wave and turns out to be generalized quasi-Einstein. In the case of a time-like associated covector we show that, under the condition of a divergence-free Weyl tensor, the space-time admits a proper concircular vector that can be rescaled to a time-like vector of concurrent form and is a conformal Killing vector. A recent result then shows that the metric is necessarily a generalized Robertson-Walker space-time. In particular we show that a conformally flat (PZS)n, n ≥ 4, space-time is conformal to the Robertson-Walker space-time.

  20. A Summary of the Space-Time Conservation Element and Solution Element (CESE) Method

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.

    2015-01-01

    The space-time Conservation Element and Solution Element (CESE) method for solving conservation laws is examined for its development motivation and design requirements. The characteristics of the resulting scheme are discussed. The discretization of the Euler equations is presented to show readers how to construct a scheme based on the CESE method. The differences and similarities between the CESE method and other traditional methods are discussed. The strengths and weaknesses of the method are also addressed.

  1. Space-time clustering of childhood cancers in Switzerland: A nationwide study.

    PubMed

    Kreis, Christian; Grotzer, Michael; Hengartner, Heinz; Spycher, Ben Daniel

    2016-05-01

    The aetiology of childhood cancers remains largely unknown. It has been hypothesized that infections may be involved and that mini-epidemics thereof could result in space-time clustering of incident cases. Most previous studies support spatio-temporal clustering for leukaemia, while results for other diagnostic groups remain mixed. Few studies have corrected for uneven regional population shifts, which can lead to spurious detection of clustering. We examined whether there is space-time clustering of childhood cancers in Switzerland, identifying cases diagnosed at age <16 years between 1985 and 2010 from the Swiss Childhood Cancer Registry. Knox tests were performed on geocoded residence at birth and diagnosis separately for leukaemia, acute lymphoid leukaemia (ALL), lymphomas, tumours of the central nervous system, neuroblastomas and soft tissue sarcomas. We used Baker's Max statistic to correct for multiple testing and randomly sampled time-, sex- and age-matched controls from the resident population to correct for uneven regional population shifts. We observed space-time clustering of childhood leukaemia at birth (Baker's Max p = 0.045) but not at diagnosis (p = 0.98). Clustering was strongest for a spatial lag of <1 km and a temporal lag of <2 years (observed/expected close pairs: 124/98; Knox test p = 0.003). A similar clustering pattern was observed for ALL, though overall evidence was weaker (Baker's Max p = 0.13). Little evidence of clustering was found for other diagnostic groups (p > 0.2). Our study suggests that childhood leukaemia tends to cluster in space-time due to an etiologic factor present in early life.
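
    For readers unfamiliar with the Knox statistic, the sketch below counts case pairs that are close in both space and time on synthetic coordinates and estimates a one-sided p-value by permuting event times; this simplification ignores the matched-control correction for population shifts used in the study, and the thresholds simply echo the lags reported above. All data here are synthetic.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      xy = rng.uniform(0, 100, size=(n, 2))      # synthetic residence coordinates (km)
      t = rng.uniform(1985, 2010, size=n)        # synthetic dates of birth (years)

      # Upper-triangular list of pairs close in space; the spatial part does not change
      # when event times are permuted, so it is computed once.
      iu = np.triu_indices(n, k=1)
      d_space = np.linalg.norm(xy[iu[0]] - xy[iu[1]], axis=1)
      close_space = d_space < 1.0                               # spatial lag < 1 km

      def knox_close_pairs(times):
          """Knox statistic: pairs close in space AND in time (temporal lag < 2 years)."""
          close_time = np.abs(times[iu[0]] - times[iu[1]]) < 2.0
          return int(np.sum(close_space & close_time))

      observed = knox_close_pairs(t)
      # Permutation null: shuffling times over locations destroys any space-time interaction.
      null = np.array([knox_close_pairs(rng.permutation(t)) for _ in range(999)])
      p = (1 + np.sum(null >= observed)) / (1 + len(null))
      print(f"observed close pairs: {observed}, expected: {null.mean():.1f}, p = {p:.3f}")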

  2. Arbitrary Dimension Convection-Diffusion Schemes for Space-Time Discretizations

    SciTech Connect

    Bank, Randolph E.; Vassilevski, Panayot S.; Zikatanov, Ludmil T.

    2016-01-20

    This note proposes embedding a time-dependent PDE into a convection-diffusion type PDE (in one space dimension higher) with a singularity, for which two discretization schemes, the classical streamline-diffusion scheme and the EAFE (edge average finite element) scheme, are investigated in terms of stability and error analysis. The EAFE scheme, in particular, is extended to arbitrary order, which is of interest in its own right. Numerical results in the combined space-time domain demonstrate the feasibility of the proposed approach.

  3. General regular charged space-times in teleparallel equivalent of general relativity

    NASA Astrophysics Data System (ADS)

    Nashed, G. G. L.

    2007-07-01

    Using a non-linear version of electrodynamics coupled to the teleparallel equivalent of general relativity (TEGR), we obtain new regular exact solutions. The non-linear theory reduces to the Maxwell one in the weak limit with the tetrad fields corresponding to a charged space-time. We then apply the energy-momentum tensor of the gravitational field, established in the Hamiltonian structure of the TEGR, to the solutions obtained.

  4. Assessing SaTScan ability to detect space-time clusters in wildfires

    NASA Astrophysics Data System (ADS)

    Costa, Ricardo; Pereira, Mário; Caramelo, Liliana; Vega Orozco, Carmen; Kanevski, Mikhail

    2013-04-01

    Besides classical cluster analysis techniques, which are able to analyse spatial and temporal data, the SaTScan software analyses space-time data using spatial, temporal or space-time scan statistics. This software requires the spatial coordinates of each fire, but since in the Portuguese Rural Fire Database (PRFD) (Pereira et al, 2011) the location of each fire is the parish where the ignition occurred, the coordinates of the parish centroid were used as the fire coordinates. Moreover, the northern region is generally characterized by a large number of small parishes, while the southern region comprises much larger parishes. The objectives of this study are: (i) to test the ability of SaTScan to detect the correct space-time clusters with respect to spatial and temporal location and size; and (ii) to evaluate the effect of the size of the parishes and of aggregating all fires that occurred in a parish into a single point. Results obtained with a synthetic database, in which clusters were artificially created with different densities, in different regions of the country and with different sizes and durations, allow us to assess the ability of SaTScan to correctly identify the clusters (location, shape, and spatial and temporal dimension) and to objectively evaluate the influence of the size of the parishes and of the windows used in space-time detection. Pereira, M. G., Malamud, B. D., Trigo, R. M., and Alves, P. I.: The history and characteristics of the 1980-2005 Portuguese rural fire database, Nat. Hazards Earth Syst. Sci., 11, 3343-3358, doi:10.5194/nhess-11-3343-2011, 2011. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).

  5. Global scale precipitation from monthly to centennial scales: empirical space-time scaling analysis, anthropogenic effects

    NASA Astrophysics Data System (ADS)

    de Lima, Isabel; Lovejoy, Shaun

    2016-04-01

    The characterization of precipitation scaling regimes represents a key contribution to the improved understanding of space-time precipitation variability, which is the focus here. We conduct space-time scaling analyses of spectra and Haar fluctuations in precipitation, using three global scale precipitation products (one instrument based, one reanalysis based, one satellite and gauge based), from monthly to centennial scales and from planetary down to several hundred kilometers in spatial scale. Results show the presence - similarly to other atmospheric fields - of an intermediate "macroweather" regime between the familiar weather and climate regimes: we systematically characterize the temporal, spatial, and joint space-time statistics and variability of macroweather precipitation, and the outer scale limit of temporal scaling. These regimes alternate qualitatively and quantitatively in the way fluctuations vary with scale. In the macroweather regime, the fluctuations diminish with time scale (this is important for seasonal, annual, and decadal forecasts) while anthropogenic effects increase with time scale. Our approach determines the time scale at which the anthropogenic signal can be detected above the natural variability noise: the critical scale is about 20-40 yr (depending on the product and on the spatial scale). This explains, for example, why studies that use data covering only a few decades do not easily give evidence of anthropogenic changes in precipitation as a consequence of warming: the period is too short. Overall, while showing that precipitation can be modeled with space-time scaling processes, our results clarify the different precipitation scaling regimes and further allow us to quantify the agreement (and lack of agreement) of the precipitation products as a function of space and time scales. Moreover, this work contributes to clarifying a basic problem in hydro-climatology, which is to measure precipitation trends at decadal and longer scales and to
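
    The Haar fluctuation at scale Δt is simply the absolute difference between the means of the first and second halves of an interval of length Δt; regressing the log of the mean fluctuation against log Δt gives a fluctuation exponent H, whose sign separates regimes in which fluctuations grow or diminish with scale. A minimal sketch on a synthetic anomaly series follows; it is illustrative only and does not use the precipitation products analysed above.

      import numpy as np

      def haar_fluctuations(series, scales):
          """Mean absolute Haar fluctuation of a 1-D series at each even scale (in samples)."""
          out = []
          for s in scales:
              half = s // 2
              fl = [abs(series[i:i + half].mean() - series[i + half:i + s].mean())
                    for i in range(0, len(series) - s + 1, half)]
              out.append(np.mean(fl))
          return np.array(out)

      rng = np.random.default_rng(1)
      series = 0.1 * np.cumsum(rng.normal(size=1200)) + rng.normal(size=1200)  # synthetic monthly anomalies
      scales = np.array([2, 4, 8, 16, 32, 64, 128, 256])
      flucts = haar_fluctuations(series, scales)
      H = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
      # Negative H corresponds to fluctuations decreasing with scale (macroweather-like behaviour).
      print("fluctuation exponent H = %.2f" % H)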

  6. Dynamic Modelling of Aquifer Level Using Space-Time Kriging and Sequential Gaussian Simulation

    NASA Astrophysics Data System (ADS)

    Varouchakis, Emmanouil A.; Hristopulos, Dionisis T.

    2016-04-01

    Geostatistical models are widely used in water resources management projects to represent and predict the spatial variability of aquifer levels. In addition, they can be applied as surrogates for numerical hydrological models when the hydrogeological data needed to calibrate the latter are not available. For space-time data, spatiotemporal geostatistical approaches can model the aquifer level variability by incorporating complex space-time correlations. A major advantage of such models is that they can improve the reliability of predictions compared to purely spatial or temporal models in areas with limited spatial and temporal data availability. The identification and incorporation of a spatiotemporal trend model can further increase the accuracy of groundwater level predictions. Our goal is to derive a geostatistical model of dynamic aquifer level changes in a sparsely gauged basin on the island of Crete (Greece). The available data consist of bi-annual (dry and wet hydrological period) groundwater level measurements at 11 monitoring locations for the period 1981 to 2010. We identify a spatiotemporal trend function that follows the overall drop of the aquifer level over the study period. The correlation of the residuals is modeled using a non-separable space-time variogram function based on the Spartan covariance family. The space-time Residual Kriging (STRK) method is then applied to combine the estimated trend and the residuals into dynamic predictions of groundwater level. Sequential Gaussian Simulation is also employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. This stochastic modelling approach produces multiple realizations, ranks the prediction results on the basis of specified criteria, and captures the range of the uncertainty. The model projections indicate that by 2032 part of the basin will be under serious threat, as the aquifer level will approach the sea level boundary.
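
    The residual-kriging workflow can be sketched in three steps: remove a trend, krige the detrended residuals with a space-time covariance, and recombine. The toy example below uses synthetic observations, a linear temporal trend, and a separable exponential covariance purely for illustration; the study itself fits a spatiotemporal trend and a non-separable Spartan covariance, neither of which is reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 60
      # Synthetic observations: columns are (x km, y km, t years); head declines with time.
      xyt = np.column_stack([rng.uniform(0, 20, n), rng.uniform(0, 20, n), rng.uniform(0, 30, n)])
      head = 100.0 - 0.8 * xyt[:, 2] + rng.normal(0, 1.0, n)

      # 1) Fit and remove a linear temporal trend (stand-in for the spatiotemporal trend model).
      coef = np.polyfit(xyt[:, 2], head, 1)
      resid = head - np.polyval(coef, xyt[:, 2])

      def st_cov(dh, dt, sill=1.0, rs=10.0, rt=8.0):
          # Separable exponential space-time covariance (illustrative stand-in only).
          return sill * np.exp(-dh / rs - dt / rt)

      # 2) Simple kriging of the residual at one target space-time location.
      target = np.array([10.0, 10.0, 32.0])
      dh_oo = np.linalg.norm(xyt[:, None, :2] - xyt[None, :, :2], axis=-1)
      dt_oo = np.abs(xyt[:, None, 2] - xyt[None, :, 2])
      C = st_cov(dh_oo, dt_oo) + 1e-6 * np.eye(n)            # small nugget for numerical stability
      c0 = st_cov(np.linalg.norm(xyt[:, :2] - target[:2], axis=-1), np.abs(xyt[:, 2] - target[2]))
      weights = np.linalg.solve(C, c0)

      # 3) Recombine the trend and the kriged residual into the dynamic prediction.
      prediction = np.polyval(coef, target[2]) + weights @ resid
      print(f"predicted aquifer level at t = {target[2]:.0f} yr: {prediction:.2f} m")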

  7. Divergence identities in curved space-time a resolution of the stress-energy problem

    NASA Astrophysics Data System (ADS)

    Yilmaz, Hüseyin

    1989-03-01

    It is noted that the joint use of two basic differential identities in curved space-time, namely, 1) the Einstein-Hilbert identity (1915), and 2) the identity of P. Freud (1939), permits a viable alternative to general relativity and a resolution of the "field stress-energy" problem of the gravitational theory. (A tribute to Eugene P. Wigner's 1957 presidential address to the APS)

  8. Exact solutions of the (3+1)-dimensional space-time fractional Jimbo-Miwa equation

    NASA Astrophysics Data System (ADS)

    Aksoy, Esin; Guner, Ozkan; Bekir, Ahmet; Cevikel, Adem C.

    2016-06-01

    Exact solutions of the (3+1)-dimensional space-time fractional Jimbo-Miwa equation are studied by the generalized Kudryashov method, the exp-function method and the (G'/G)-expansion method. The solutions obtained include hyperbolic, trigonometric and rational functions. These methods are effective and simple, and many types of solutions can be obtained at the same time.

  9. Twisting space-time: relativistic origin of seed magnetic field and vorticity.

    PubMed

    Mahajan, S M; Yoshida, Z

    2010-08-27

    We demonstrate that a purely ideal mechanism, originating in the space-time distortion caused by the demands of special relativity, can break the topological constraint (leading to helicity conservation) that would forbid the emergence of a magnetic field (a generalized vorticity) in an ideal nonrelativistic dynamics. The new mechanism, arising from the interaction between the inhomogeneous flow fields and inhomogeneous entropy, is universal and can provide a finite seed even for mildly relativistic flows.

  10. Quantum Theory of Antisymmetric Higher Rank Tensor Gauge Field in Higher Dimensional Space-Time

    NASA Astrophysics Data System (ADS)

    Kimura, T.

    1981-01-01

    In a higher dimensional space-time, the Lagrangian formalism and the canonical operator formalism of covariant quantization of the antisymmetric tensor gauge field of higher rank are formulated consistently by introducing a BRS transformation and Lagrangian multiplier fields. From the effective Lagrangian, the numbers of physical components and effective ghosts are counted correctly without referring to a special reference frame. The confinement of unphysical components is assured from the viewpoint of the "quartet mechanism" of Kugo and Ojima.

  11. Quantum effects and elimination of the conformal anomaly in anisotropic space-time

    SciTech Connect

    Grib, A.A.; Nesteruk, A.V.

    1988-03-01

    In homogeneous anisotropic space-time the connection between the problem of the elimination of infrared divergences and the conformal anomaly of the regularized energy-momentum tensor is studied. It is shown that removal of the infrared divergence by means of a cutoff leads to the absence of a conformal anomaly. A physical interpretation of the infrared cutoff as a shift in the particle-energy spectrum by an amount equal to the effective temperature of the gravitational field is proposed.

  12. Space-time visual analytics of eye-tracking data for dynamic stimuli.

    PubMed

    Kurzhals, Kuno; Weiskopf, Daniel

    2013-12-01

    We introduce a visual analytics method to analyze eye movement data recorded for dynamic stimuli such as video or animated graphics. The focus lies on the analysis of data from several viewers to identify trends in the general viewing behavior, including time sequences of attentional synchrony and objects with strong attentional focus. By using a space-time cube visualization in combination with clustering, the dynamic stimuli and associated eye gazes can be analyzed in a static 3D representation. Shot-based, spatiotemporal clustering of the data generates potential areas of interest that can be filtered interactively. We also facilitate data drill-down: the gaze points are shown with density-based color mapping and individual scan paths as lines in the space-time cube. The analytical process is supported by multiple coordinated views that allow the user to focus on different aspects of spatial and temporal information in eye gaze data. Common eye-tracking visualization techniques are extended to incorporate the spatiotemporal characteristics of the data. For example, heat maps are extended to motion-compensated heat maps, and trajectories of scan paths are included in the space-time visualization. Our visual analytics approach is assessed in a qualitative user study with expert users, which showed the usefulness of the approach and uncovered that the experts applied different analysis strategies supported by the system.

  13. A Stochastic Fractional Dynamics Model of Space-time Variability of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Travis, James E.

    2013-01-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed, based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.

  14. "Simplest Molecule" Clarifies Modern Physics I. CW Laser Space-Time Frame Dynamics

    NASA Astrophysics Data System (ADS)

    Reimer, Tyle; Harter, William

    2015-05-01

    Molecular spectroscopy makes very precise applications of quantum theory including GPS, BEC, and laser clocks. Now it can return the favor by shedding some light on modern physics mysteries by further unifying quantum theory and relativity. We first ask, "What is the simplest molecule?" Hydrogen H2 is the simplest stable molecule. Positronium is an electron-positron (e+e-) -pair. An even simpler "molecule" or "radical" is a photon-pair (γ, γ) that under certain conditions can create an (e+e-) -pair. To help unravel relativistic and quantum mysteries consider CW laser beam pairs or TE-waveguides. Remarkably, their wave interference immediately gives Minkowski space-time coordinates and clearly relates eight kinds of space-time wave dilations or contractions to shifts in Doppler frequency or wavenumber. Modern physics students may find this approach significantly simplifies and clarifies relativistic physics in space-time (x,ct) and inverse time-space (ω,ck). It resolves some mysteries surrounding super-constant c = 299,792,458 m/s by proving "Evenson's Axiom" named in honor of NIST metrologist Ken Evenson (1932-2002) whose spectroscopy established c to start a precision renaissance in spectroscopy and GPS metrology.

  15. Of magnitudes and metaphors: explaining cognitive interactions between space, time, and number.

    PubMed

    Winter, Bodo; Marghetis, Tyler; Matlock, Teenie

    2015-03-01

    Space, time, and number are fundamental to how we act within and reason about the world. These three experiential domains are systematically intertwined in behavior, language, and the brain. Two main theories have attempted to account for cross-domain interactions. A Theory of Magnitude (ATOM) posits a domain-general magnitude system. Conceptual Metaphor Theory (CMT) maintains that cross-domain interactions are manifestations of asymmetric mappings that use representations of space to structure the domains of number and time. These theories are often viewed as competing accounts. We propose instead that ATOM and CMT are complementary, each illuminating different aspects of cross-domain interactions. We argue that simple representations of magnitude cannot, on their own, account for the rich, complex interactions between space, time and number described by CMT. On the other hand, ATOM is better at accounting for low-level and language-independent associations that arise early in ontogeny. We conclude by discussing how magnitudes and metaphors are both needed to understand our neural and cognitive web of space, time and number.

  16. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
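
    The quantity being parallelized is the space-time kernel density itself; a minimal sequential sketch of it is shown below using Epanechnikov-type product kernels with separate spatial and temporal bandwidths (constants simplified). Data, bandwidths and grids are synthetic placeholders; the octree decomposition, buffering and load balancing described above are not reproduced.

      import numpy as np

      def st_kernel_density(events, grid_xy, grid_t, hs=10.0, ht=30.0):
          """Space-time kernel density at every (grid point, time slice) pair.

          events : (n, 3) array of (x, y, t); hs, ht : spatial / temporal bandwidths.
          """
          d_xy = np.linalg.norm(grid_xy[:, None, :] - events[None, :, :2], axis=-1)   # (m, n)
          d_t = np.abs(grid_t[:, None] - events[None, :, 2])                          # (k, n)
          ks = np.where(d_xy < hs, 0.75 * (1 - (d_xy / hs) ** 2), 0.0)                # spatial kernel
          kt = np.where(d_t < ht, 0.75 * (1 - (d_t / ht) ** 2), 0.0)                  # temporal kernel
          # Sum of the product kernel over events, normalised by the bandwidth volume.
          return ks @ kt.T / (len(events) * hs ** 2 * ht)

      rng = np.random.default_rng(3)
      events = np.column_stack([rng.normal(50, 5, 400), rng.normal(50, 5, 400),
                                rng.uniform(0, 365, 400)])        # synthetic cases (x, y, day)
      grid_xy = np.column_stack([np.repeat(np.arange(0.0, 100, 10), 10),
                                 np.tile(np.arange(0.0, 100, 10), 10)])
      grid_t = np.arange(0.0, 365, 30)
      density = st_kernel_density(events, grid_xy, grid_t)
      print("density grid shape (space points, time slices):", density.shape)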

  17. Land use and land cover change based on historical space-time model

    NASA Astrophysics Data System (ADS)

    Sun, Qiong; Zhang, Chi; Liu, Min; Zhang, Yongjing

    2016-09-01

    Land use and land cover change is a leading-edge topic in the current research field of global environmental change, and case studies of typical areas are an important approach to understanding global environmental changes. Taking the Qiantang River (Zhejiang, China) as an example, this study explores automatic classification of land use using remote sensing technology and analyzes historical space-time change by remote sensing monitoring. The study combines spectral angle mapping (SAM) with multi-source information and creates a convenient and efficient high-precision computer-automated land use classification method which meets the application requirements and is suitable for the complex landforms of the studied area. This work analyzes the historical space-time characteristics of land use and cover change in the Qiantang River basin in 2001, 2007 and 2014, in order to (i) verify the feasibility of studying land use change with remote sensing technology, (ii) accurately understand the change of land use and cover as well as the historical space-time evolution trend, (iii) provide a realistic basis for the sustainable development of the Qiantang River basin and (iv) provide strong information support and a new research method for optimizing the Qiantang River land use structure and achieving optimal allocation of land resources and scientific management.
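
    The core of spectral angle mapping is the angle between a pixel's spectrum and each reference spectrum, treated as vectors; the pixel is assigned to the class with the smallest angle. The sketch below shows this on a tiny synthetic image; the reference spectra, band count and angle threshold are placeholders, and the study's multi-source refinements are not reproduced.

      import numpy as np

      def spectral_angle(pixel, reference):
          """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
          cos_a = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
          return np.arccos(np.clip(cos_a, -1.0, 1.0))

      def sam_classify(image, references, max_angle=0.1):
          """Assign each pixel to the class with the smallest spectral angle (-1 = unclassified)."""
          pixels = image.reshape(-1, image.shape[-1])
          angles = np.array([[spectral_angle(p, r) for r in references] for p in pixels])
          labels = np.where(angles.min(axis=1) <= max_angle, angles.argmin(axis=1), -1)
          return labels.reshape(image.shape[:2])

      # Tiny synthetic example: 3 bands, 2 hypothetical reference spectra (e.g. water vs. vegetation).
      rng = np.random.default_rng(4)
      refs = np.array([[0.05, 0.04, 0.03], [0.04, 0.30, 0.20]])
      img = refs[rng.integers(0, 2, size=(50, 50))] * rng.uniform(0.9, 1.1, size=(50, 50, 1))
      print(np.unique(sam_classify(img, refs), return_counts=True))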

  18. Minkowski's Road to Space-Time, and its Consequences and an Alternative

    NASA Astrophysics Data System (ADS)

    Smith, Felix T.

    2014-03-01

    The road from Maxwell's equations to early relativity and then to Minkowski's space-time is traced through his Göttingen lecture in 1907 and his paper in 1908 that introduced the 4-dimensional tensor form of electrodynamics. This led to a puzzle: What is the reason for the time dependence in its position space geometry shown in the metric sum ds² = dx₁² + dx₂² + dx₃² - c²dt²? Having no physical explanation for this, Minkowski made the drastic move of enlarging 3-space into 4-dimensional space-time, advocating it powerfully in his paper "Space and Time" (1909). I will discuss the circumstances that led to its rapid acceptance (but not by Poincaré), and its consequences that emerged much later in the partial disconnect between relativity and the other domains of modern physics. Much later still, the Hubble expansion of our cosmos can now be shown to imply that the term -c²dt² is a direct concomitant of an expanding, negatively curved 3-space and does not require either a 4-dimensional space-time or multiple time dimensions for multiple particles.

  19. Measuring space-time fuzziness with high energy γ-ray detectors

    NASA Astrophysics Data System (ADS)

    Cattaneo, Paolo Walter; Rappoldi, Andrea

    2017-03-01

    There are several suggestions to probe space-time fuzziness (also known as space-time foam) arising from the quantum mechanical nature of space-time. These effects are predicted to be very small, being related to the Planck length, so that the only hope of detecting them experimentally is to look at particles propagating over cosmological distances. Some phenomenological approaches suggest that photons originating from point-like sources at cosmological distance experience path length fluctuations that could be detected. The direction of flight of such photons may also be subject to a dispersion, such that the image of a point-like source is blurred and detected as a disk. An experimentally accessible signature may therefore be images of point-like sources larger than the size expected from the Point Spread Function of the instrument. This additional broadening should increase with distance and photon energy. Some concrete examples that can be studied with the AGILE and FERMI-LAT γ-ray satellite experiments are discussed.

  20. Obstacle regions extraction method for unmanned aerial vehicles based on space-time tensor descriptor

    NASA Astrophysics Data System (ADS)

    Wu, Zhenglong; Li, Jie; Guan, Zhenyu; Yang, Huan

    2016-09-01

    Obstacle avoidance is an important and challenging task for the autonomous flight of unmanned aerial vehicles, and extracting obstacle regions from image sequences is a critical prerequisite for it. We propose an obstacle region extraction method based on a space-time tensor descriptor. In our method, the space-time tensor descriptor is first defined and a criterion function based on the descriptor is designed for extracting space-time interest points (STIPs). A self-adaptive clustering approach for the STIPs is then presented to locate possible obstacle regions. Finally, an improved level set algorithm is applied to the clustering result to extract the obstacle regions. We demonstrate obstacle region extraction with our method on image sequences captured in indoor simulated obstacle avoidance environments and in outdoor real-flight obstacle avoidance environments. Experimental results validate that our method can effectively extract and segment obstacle regions from captured images. Compared with state-of-the-art methods, our method extracts the contours of obstacle regions well overall and significantly improves segmentation speed.
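
    The abstract does not define the space-time tensor descriptor itself; a common building block for such descriptors is the 3D (x, y, t) structure tensor, formed from Gaussian-smoothed products of spatio-temporal gradients, whose eigenvalues flag space-time interest points. The sketch below is a generic version under that assumption, with a synthetic moving-square sequence; it is not the paper's exact descriptor, criterion function or clustering step.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def space_time_structure_tensor(seq, sigma=2.0):
          """Per-voxel 3x3 space-time structure tensor of an image sequence seq[t, y, x]."""
          gt, gy, gx = np.gradient(seq.astype(float))
          grads = (gx, gy, gt)
          tensor = np.empty(seq.shape + (3, 3))
          for i in range(3):
              for j in range(3):
                  # Smooth each gradient product over a local space-time neighbourhood.
                  tensor[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
          return tensor

      # Synthetic sequence: a bright square translating to the right over 20 frames.
      seq = np.zeros((20, 64, 64))
      for t in range(20):
          seq[t, 28:36, 10 + 2 * t:18 + 2 * t] = 1.0

      T = space_time_structure_tensor(seq)
      eigvals = np.linalg.eigvalsh(T)      # ordered eigenvalues per voxel
      # Voxels where several eigenvalues are large behave like space-time corners,
      # i.e. candidate interest points for locating moving (obstacle-like) regions.
      print("largest eigenvalue in the sequence: %.3f" % eigvals[..., -1].max())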

  1. ADER-WENO finite volume schemes with space-time adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Zanotti, Olindo; Hidalgo, Arturo; Balsara, Dinshaw S.

    2013-09-01

    We present the first high order one-step ADER-WENO finite volume scheme with adaptive mesh refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the message passing interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order in space and time has been confirmed via a numerical convergence study, and a detailed analysis of the computational speed-up with respect to highly refined uniform meshes is also presented. We also show test problems where the presented high order AMR scheme behaves clearly better than traditional second order AMR methods. The proposed scheme, which combines for the first time high order ADER methods with space-time adaptive grids in two and three space dimensions, is likely to become a useful tool in several fields of computational physics, applied mathematics and mechanics.

  2. Characterising the space-time structure of rainfall in the Sahel with a view to estimating IDAF curves

    NASA Astrophysics Data System (ADS)

    Panthou, G.; Vischel, T.; Lebel, T.; Quantin, G.; Molinié, G.

    2014-12-01

    Intensity-duration-area-frequency (IDAF) curves are increasingly in demand for characterising the severity of storms and for designing hydraulic structures. Their computation requires inferring areal rainfall distributions over the range of space scales and timescales that are the most relevant for hydrological studies at catchment scale. In this study, IDAF curves are computed for the first time in West Africa, based on the data provided by the AMMA-CATCH Niger network, composed of 30 recording rain gauges having operated since 1990 over a 16 000 km2 area in south-western Niger. The IDAF curves are obtained by separately considering the time (intensity-duration-frequency, IDF) and space (areal reduction factor, ARF) components of the extreme rainfall distribution. Annual maximum intensities are extracted for resolutions between 1 and 24 h in time and from point (rain gauge) to 2500 km2 in space. The IDF model used is based on the concept of scale invariance (simple scaling), which allows the maxima series at different temporal resolutions to be normalised before a single global generalised extreme value (GEV) distribution is fitted. This parsimonious framework allows one to use the concept of dynamic scaling to describe the ARF. The results show that coupling simple scaling in space and time with a dynamical scaling that relates space and time satisfactorily models the effect of space-time aggregation on the distribution of extreme rainfall.
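
    The simple-scaling step can be illustrated in a few lines: annual maximum intensities at several durations are rescaled by duration to the power of a scaling exponent onto a common distribution, one GEV is fitted to the pooled sample, and IDF quantiles for any duration follow from that single fit. In the sketch below the data are synthetic and the scaling exponent is assumed rather than estimated, unlike in the study; scipy is assumed to be available.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      durations = np.array([1, 3, 6, 12, 24])           # hours
      n_scaling = 0.6                                    # assumed simple-scaling exponent

      # Synthetic annual-maximum intensities (mm/h) obeying I_d = I_1 * d**(-n).
      years = 25
      base = stats.genextreme.rvs(c=-0.1, loc=30, scale=8, size=years, random_state=rng)
      annual_max = {d: base * d ** (-n_scaling) for d in durations}

      # Normalise every duration to the reference duration and fit one global GEV.
      pooled = np.concatenate([annual_max[d] * d ** n_scaling for d in durations])
      c, loc, scale = stats.genextreme.fit(pooled)

      # IDF quantile for any duration and return period follows from the single fit.
      T = 10                                             # return period (years)
      i_ref = stats.genextreme.ppf(1 - 1 / T, c, loc, scale)
      for d in durations:
          print(f"duration {d:>2} h, T = {T} yr: {i_ref * d ** (-n_scaling):.1f} mm/h")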

  3. Numerical solution of the wave equation on 1+1 Minkowski space-time with scri-fixing conformal compactification

    SciTech Connect

    Cruz-Osorio, A.; Lora-Clavijo, F. D.; Guzman, F. S.

    2010-07-12

    We solve numerically the wave equation on a fixed background space-time corresponding to 1+1 Minkowski space-time. In this case we use a scri-fixing conformal compactification and solve the wave equation on the conformal space-time. We draw space-time and conformal diagrams in order to describe the consistency of the results and the effects of the gauge choices. We are interested in containing scri+ (future null infinity) in the numerical domain, because this boundary is the future boundary of the wave function.

  4. Implementation of a double Gaussian source model for the BEAMnrc Monte Carlo code and its influence on small fields dose distributions.

    PubMed

    Doerner, Edgardo; Caprile, Paola

    2016-09-01

    The shape of the radiation source of a linac has a direct impact on the delivered dose distributions, especially in the case of small radiation fields. Traditionally, a single Gaussian source model is used to describe the electron beam hitting the target, although different studies have shown that the shape of the electron source can be better described by a mixed distribution consisting of two Gaussian components. Therefore, this study presents the implementation of a double Gaussian source model into the BEAMnrc Monte Carlo code. The impact of the double Gaussian source model for a 6 MV beam is assessed through the comparison of different dosimetric parameters calculated using a single Gaussian source, previously commissioned, the new double Gaussian source model and measurements, performed with a diode detector in a water phantom. It was found that the new source can be easily implemented into the BEAMnrc code and that it improves the agreement between measurements and simulations for small radiation fields. The impact of the change in source shape becomes less important as the field size increases and for increasing distance of the collimators to the source, as expected. In particular, for radiation fields delivered using stereotactic collimators located at a distance of 59 cm from the source, it was found that the effect of the double Gaussian source on the calculated dose distributions is negligible, even for radiation fields smaller than 5 mm in diameter. Accurate determination of the shape of the radiation source allows us to improve the Monte Carlo modeling of the linac, especially for treatment modalities such as IMRT, where the radiation beams used can be very narrow, becoming more sensitive to the shape of the source. PACS number(s): 87.53.Bn, 87.55.K, 87.56.B-, 87.56.jf.
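
    Conceptually, a double Gaussian source replaces a single Gaussian spot with a weighted mixture of a narrow and a broad component. The sketch below samples electron impact positions from such a mixture; the weights and widths are placeholders, not the commissioned values of this study or the exact BEAMnrc source parameters.

      import numpy as np

      def sample_double_gaussian_source(n, fwhm1=1.0, fwhm2=3.0, weight1=0.8, rng=None):
          """Sample (x, y) positions (mm) of electrons hitting the target from a
          mixture of two concentric Gaussians (a double Gaussian source model)."""
          rng = rng or np.random.default_rng()
          sigma = np.array([fwhm1, fwhm2]) / 2.355            # FWHM -> standard deviation
          broad = rng.random(n) >= weight1                    # choose narrow or broad component
          s = sigma[broad.astype(int)]
          return rng.normal(0.0, s[:, None], size=(n, 2))     # isotropic 2D spot

      pos = sample_double_gaussian_source(100000, rng=np.random.default_rng(6))
      print("RMS spot size: %.2f mm" % np.sqrt((pos ** 2).sum(axis=1).mean()))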

  5. The elliptical dimension of space-time atmospheric stratification of passive admixtures using lidar data

    NASA Astrophysics Data System (ADS)

    Radkevich, A.; Lovejoy, S.; Strawbridge, K.; Schertzer, D.

    2007-08-01

    State-of-the-art airborne lidar data of passive scalars have shown that the spatial stratification of the atmosphere is scaling: the vertical extent (Δz) of structures is typically ≈ Δx^(H_z), where Δx is the horizontal extent and H_z is a stratification exponent. Assuming horizontal isotropy, the volumes of the structures therefore vary as Δx·Δx·Δx^(H_z) = Δx^(D_s), where the "elliptical dimension" D_s characterizes the rate at which the volumes of typical non-intermittent structures vary with scale. Work on vertical cross-sections has shown that D_s = 2 + H_z = 2.55 ± 0.02 (close to the theoretical prediction 23/9). In this paper we extend these (x, z) analyses to (z, t). In the absence of overall advection, the lifetime Δt of a structure of size Δx varies as Δx^(H_t) with H_t = 2/3, so that the overall space-time dimension is D_st = 29/9 = 3.22... However, horizontal and vertical advection lead to new exponents: we argue that the temporal stratification exponent is H_t ≈ 1 or ≈ 0.7 depending on the relative importance of horizontal versus vertical advection velocities. We empirically test these space-time predictions using vertical-time (z, t) cross-sections of passive scalar surrogates (aerosol backscatter ratios from lidar) at ∼3 m resolution in the vertical and 0.5-30 s in time, spanning 3-4 orders of magnitude in scale, as well as new analyses of vertical (x, z) cross-sections (spanning over 3 orders of magnitude in both the x and z directions). In order to test the theory for density fluctuations at arbitrary displacements in (Δz, Δt) and (Δx, Δz) spaces, we developed and applied a new Anisotropic Scaling Analysis Technique (ASAT) based on nonlinear coordinate transformations. Applying this and other analyses to data spanning more than 3 orders of magnitude of space-time scales, we determined the anisotropic scaling of space-time, finding the empirical value D_st = 3.13 ± 0.16. The analyses also show that both cirrus clouds and aerosols had very similar space-time scaling

  6. A space-time multifractal analysis on radar rainfall sequences from central Poland

    NASA Astrophysics Data System (ADS)

    Licznar, Paweł; Deidda, Roberto

    2014-05-01

    Rainfall downscaling is among the most important tasks of modern hydrology. Especially from the perspective of urban hydrology, there is a real need for practical tools that generate possible rainfall scenarios. Rainfall scenarios at fine temporal scales, down to single minutes, are indispensable as inputs for hydrological models. The adoption of a probabilistic philosophy for the design and operation of drainage systems has led to the widespread application of hydrodynamic models in engineering practice. However, models like these, covering large areas, cannot be supplied with only uncorrelated point-rainfall time series. Rather, they should be supplied with space-time rainfall scenarios displaying the statistical properties of local natural rainfall fields. The implementation of a Space-Time Rainfall (STRAIN) model for hydrometeorological applications in Polish conditions, such as rainfall downscaling from the large scales of meteorological models to the scale of interest for rainfall-runoff processes, is the long-term aim of our research. As an introductory part of our study, we verify the following STRAIN model assumptions: rainfall fields are isotropic and statistically homogeneous in space; self-similarity holds (so that, after having rescaled time by the advection velocity, rainfall is a fully homogeneous and isotropic process in the space-time domain); and the statistical properties of rainfall are characterized by an "a priori" known multifractal behavior. We conduct a space-time multifractal analysis on radar rainfall sequences selected from the Polish national radar system POLRAD. Radar rainfall sequences covering an area of 256 km x 256 km, with an original spatial resolution of 2 km x 2 km and a temporal resolution of 15 minutes, are used as study material. Attention is mainly focused on the most severe summer convective rainfalls. It is shown that space-time rainfall can be considered, to a good approximation, to be a self-similar multifractal process. Multifractal

  7. The binary weight distribution of the extended (2^m, 2^m-4) code of the Reed-Solomon code over GF(2^m) with generator polynomial (x - α²)(x - α³)

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1987-01-01

    Consider an (n, k) linear code with symbols from GF(2^m). If each code symbol is represented by a binary m-tuple using a certain basis for GF(2^m), a binary (nm, km) linear code called a binary image of the original code is obtained. A lower bound is presented on the minimum weight enumerator for a binary image of the extended (2^m, 2^m - 4) code of the Reed-Solomon code over GF(2^m) with generator polynomial (x - α)(x - α²)(x - α³), and of its dual code, where α is a primitive element of GF(2^m).
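
    The "binary image" notion can be illustrated with a small worked example: every GF(2^m) symbol is expanded into its m-bit coordinate vector, so a length-n symbol vector becomes an nm-bit binary vector whose Hamming weight is the binary weight of interest. The sketch below only does this expansion for GF(2^4) with a hypothetical symbol vector; it does not construct the extended Reed-Solomon code or its weight distribution.

      import numpy as np

      M = 4                        # work in GF(2^m) with m = 4
      PRIM_POLY = 0b10011          # x^4 + x + 1, a primitive polynomial over GF(2)

      def gf_mul(a, b):
          """Carry-less multiplication in GF(2^4), reduced by the primitive polynomial."""
          r = 0
          while b:
              if b & 1:
                  r ^= a
              a <<= 1
              if a & (1 << M):
                  a ^= PRIM_POLY
              b >>= 1
          return r

      def binary_image(symbols):
          """Expand GF(2^m) symbols into their m-bit binary coordinate vectors."""
          return np.array([[(s >> i) & 1 for i in range(M)] for s in symbols])

      # Example symbol vector: successive powers of the primitive element alpha = x (integer 2).
      alpha_powers, a = [1], 1
      for _ in range(6):
          a = gf_mul(a, 2)
          alpha_powers.append(a)

      bits = binary_image(alpha_powers)
      print("symbols:", alpha_powers)
      print("binary Hamming weight of the image:", int(bits.sum()))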

  8. Distributions of secondary particles in proton and carbon-ion therapy: a comparison between GATE/Geant4 and FLUKA Monte Carlo codes.

    PubMed

    Robert, C; Dedes, G; Battistoni, G; Böhlen, T T; Buvat, I; Cerutti, F; Chin, M P W; Ferrari, A; Gueth, P; Kurz, C; Lestand, L; Mairani, A; Montarou, G; Nicolini, R; Ortega, P G; Parodi, K; Prezado, Y; Sala, P R; Sarrut, D; Testa, E

    2013-05-07

    Monte Carlo simulations play a crucial role for in-vivo treatment monitoring based on PET and prompt gamma imaging in proton and carbon-ion therapies. The accuracy of the nuclear fragmentation models implemented in these codes might affect the quality of the treatment verification. In this paper, we investigate the nuclear models implemented in GATE/Geant4 and FLUKA by comparing the angular and energy distributions of secondary particles exiting a homogeneous target of PMMA. The comparison was restricted to fragmentation of ¹⁶O and ¹²C. Despite the very simple target and set-up, substantial discrepancies were observed between the two codes. For instance, the number of high energy (>1 MeV) prompt gammas exiting the target was about twice as large with GATE/Geant4 as with FLUKA, both for proton and carbon ion beams. Such differences were not observed for the predicted annihilation photon production yields, for which ratios of 1.09 and 1.20 were obtained between GATE and FLUKA for the proton beam and the carbon ion beam, respectively. For neutrons and protons, discrepancies from 14% (exiting protons, carbon ion beam) to 57% (exiting neutrons, proton beam) were identified in the production yields as well as in the energy spectra for neutrons.

  9. A note on the electromagnetic irradiation in a holed spatial region: A space-time approach

    NASA Astrophysics Data System (ADS)

    Botelho, Luiz C. L.

    2017-02-01

    We study the role of the homological topological properties of a space-time with holes (a multiply connected manifold) in the formal solution of the electromagnetic irradiation problem taking place on these "holed" space-times. In addition to this main focus, we also present studies of the irradiation problem in other mathematical frameworks.

  10. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, that register the dose. In this work the influence of the bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in water, and using a precise model of the ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Marcus ionization chamber. The average local dose difference between the measured relative doses in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas over the full depth range this difference was 1.6%, for a maximum uncertainty in the calculations of less than 2.4% and a maximum measuring error of 1%. In the case of the relative doses calculated with the ionization chamber model, this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% over the full range of depths, for a maximum uncertainty in the calculations of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method.
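
    The comparison metric used above, an average local difference between relative depth-dose curves over a depth interval, is straightforward to compute; the sketch below evaluates it for two synthetic Bragg-like curves. The curve shapes and the built-in deviation are illustrative placeholders, not the MCNPX or measured data of the study.

      import numpy as np

      def mean_local_difference(dose_a, dose_b, depth, d_max=None):
          """Average local relative difference (%) between two relative depth-dose curves."""
          mask = np.ones_like(depth, dtype=bool) if d_max is None else depth <= d_max
          return 100.0 * np.mean(np.abs(dose_a[mask] - dose_b[mask]) / dose_b[mask])

      # Synthetic, Bragg-like relative depth-dose curves (illustrative shapes only).
      depth = np.linspace(0.0, 31.0, 200)                    # mm of water
      measured = 0.3 + 0.7 * np.exp(-((depth - 30.0) / 2.0) ** 2)
      simulated = measured * (1 + 0.015 * np.sin(depth))     # mimic a ~1.5% local deviation

      print("first 25 mm: %.1f %%" % mean_local_difference(simulated, measured, depth, d_max=25))
      print("full range : %.1f %%" % mean_local_difference(simulated, measured, depth))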

  11. Space-time extreme wind waves: Observation and analysis of shapes and heights

    NASA Astrophysics Data System (ADS)

    Benetazzo, Alvise; Barbariol, Francesco; Bergamasco, Filippo; Carniel, Sandro; Sclavo, Mauro

    2016-04-01

    We analyze here the temporal shape and the maximal height of extreme wind waves, which were obtained from an observational space-time sample of sea surface elevations during a mature and short-crested sea state (Benetazzo et al., 2015). Space-time wave data are processed to detect the largest waves of specific 3-D wave groups close to the apex of their development. First, maximal elevations of the groups are discussed within the framework of space-time (ST) extreme statistical models of random wave fields (Adler and Taylor, 2007; Benetazzo et al., 2015; Fedele, 2012). Results of ST models are also compared with observations and predictions of maxima based on time series of sea surface elevations. Second, the time profile of the extreme waves around the maximal crest height is analyzed and compared with the expectations of the linear (Boccotti, 1983) and second-order nonlinear extension (Arena, 2005) of the Quasi-Determinism (QD) theory. The main purpose is to verify to what extent, using the QD model results, one can estimate the shape and the crest-to-trough height of large waves in a random ST wave field. From the results presented, it emerges that, apart from the displacements around the crest apex, sea surface elevations of very high waves are greatly dispersed around a mean profile. Yet the QD model furnishes, on average, a fair prediction of the wave height of the maximal waves, especially when nonlinearities are taken into account. Moreover, the combination of ST and QD model predictions allows establishing, for a given sea condition, a framework for the representation of waves with very large crest heights. The results also have the potential to be implemented in a phase-averaged numerical wave model (see abstract EGU2016-14008 and Barbariol et al., 2015). - Adler, R.J., Taylor, J.E., 2007. Random fields and geometry. Springer, New York (USA), 448 pp. - Arena, F., 2005. On non-linear very large sea wave groups. Ocean Eng. 32, 1311-1331. - Barbariol, F., Alves, J

  12. Space-time cascades and the scaling of ECMWF reanalyses: Fluxes and fields

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; Schertzer, D.

    2011-07-01

    We consider the space-time scaling properties of the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis products for the wind (u, v, w), humidity (hs), temperature (T), and geopotentials (z) and their corresponding turbulent fluxes, using the daily 700 mbar products for the year 2006. Following previous studies on T, hs, and u, we show that the basic predictions of multiplicative cascade models are well respected over space-time scales below ˜5000 km and shorter than ˜5-10 days, providing a precise scale-by-scale determination of the reanalysis statistical properties (needed, for example, for stochastic parameterizations in ensemble forecasting systems). We innovate by including the meridional and vertical wind components (v, w) and the geopotential (z), by considering their horizontal anisotropies and their latitudinal variations and, perhaps most importantly, by directly analyzing the fields (not just the fluxes). Whereas the fluxes have nearly isotropic exponents in space-time with little latitudinal variation (displaying only scale independent "trivial" anisotropy), the fields have significant scaling horizontal anisotropies. These complicate the interpretation of standard isotropic spectra and are likely to be artifacts. Many of the new (nonconservation) exponents (H) are nonstandard and currently have no adequate theoretical explanation, although the key horizontal wind and temperature H exponents may be consequences of horizontal Kolmogorov scaling combined with sloping isobaric surfaces. In time, the scaling is broken at around 5-10 days, i.e., roughly the lifetime of planetary structures; lower frequencies are spectrally flatter: the "spectral plateau", i.e. the low-frequency weather regime.

  13. The periodic table of real geometric algebras, bits of space-time, and the Standard Model.

    NASA Astrophysics Data System (ADS)

    Marks, Dennis

    2007-04-01

    Real geometric algebras Rn;s in n dimensions with signature s are isomorphic to algebras of real, complex, or quaternionic matrices R(2^(n/2)), C(2^((n-1)/2)), or H(2^((n-2)/2)), or of block diagonal matrices ²R(2^((n-1)/2)) or ²H(2^((n-3)/2)), for |((s+3) mod 8) - 4| = 1, 2, 3, 0, or 4, respectively. Only for n = 2 or 4 and s = 0 or 2 is Rn;s isomorphic to the real n x n matrices R(n). R2;2 and R2;0 describe the Euclidean plane and the Minkowskian plane. Their direct product, R4;2 = R2;0 ⊗ R2;2, describes 4-d space-time with signature + + + - and with dynamical elements (position, spin, momentum, and action) that satisfy the Heisenberg commutation relations. Quantum mechanics emerges naturally. Electromagnetism, described by U(1) ⊂ R1;-1, has one time-like coordinate; the weak force, described by SU(2) (the double cover of SO(3)) ⊂ R3;3, has three space-like coordinates. Thus the real algebra of the symmetry group of the electro-weak force is isomorphic to the real algebra of space-time. Finally, R8;2 = R4;0 ⊗ R4;2 is isomorphic to R(16), into which can be fit three generations of weakly interacting Fermi doublets and three generations of three colors of quarks. Every 8 dimensions thereafter, geometric algebras factor into direct products of R(16), interpreted as a 4-d hexadecimal space-time lattice with four additional internal coordinates for the Standard Model.
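
    As a quick check of the labelling rule quoted above, the snippet below compares |((s+3) mod 8) - 4| against the standard Bott-periodicity classification of real Clifford algebras by signature s mod 8; that table is an assumption drawn from the general theory, not from the abstract itself.

      # Standard classification of real Clifford (geometric) algebras by s mod 8,
      # where s = (number of plus signs) - (number of minus signs).
      standard = {0: "R", 1: "2R", 2: "R", 3: "C", 4: "H", 5: "2H", 6: "H", 7: "C"}
      # Value of |((s+3) mod 8) - 4| that is claimed to label each matrix type.
      rule_value = {"R": 1, "C": 2, "H": 3, "2R": 0, "2H": 4}

      for s in range(8):
          assert abs((s + 3) % 8 - 4) == rule_value[standard[s]], s
      print("the rule reproduces the mod-8 classification for all signatures")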

  14. Relativistic Landau-Aharonov Quantization in Topological Defect Space-Time

    NASA Astrophysics Data System (ADS)

    Bakke, K.; Furtado, C.

    In this paper we study the Landau levels arising within the relativistic dynamics of a neutral particle which possesses a permanent magnetic dipole moment interacting with an external electric field in the curved space-time background with the presence of a torsion field. We use the Aharonov-Casher effect to couple this neutral particle with the electric field in this curved background. The eigenfunction and eigenvalues of the Hamiltonian are obtained. We show that the presence of the topological defect breaks the infinite degeneracy of the relativistic Landau levels arising in this system. We study the nonrelativistic limit of the eigenvalues and compare these results with cases studied earlier.

  15. Coupling gravity, electromagnetism and space-time for space propulsion breakthroughs

    NASA Technical Reports Server (NTRS)

    Millis, Marc G.

    1994-01-01

    Spaceflight would be revolutionized if it were possible to propel a spacecraft without rockets using the coupling between gravity, electromagnetism, and space-time (hence called 'space coupling propulsion'). New theories and observations about the properties of space are emerging which offer new approaches to consider this breakthrough possibility. To guide the search, evaluation, and application of these emerging possibilities, a variety of hypothetical space coupling propulsion mechanisms are presented to highlight the issues that would have to be resolved to enable such breakthroughs. A brief introduction to the emerging opportunities is also presented.

  16. Space-time airborne disease mapping applied to detect specific behaviour of varicella in Valencia, Spain.

    PubMed

    Iftimi, Adina; Montes, Francisco; Santiyán, Ana Míguez; Martínez-Ruiz, Francisco

    2015-01-01

    Airborne diseases are one of humanity's most feared sicknesses and have regularly caused concern among specialists. Varicella is an airborne disease which usually affects children before the age of 10. Because of its nature, varicella gives rise to interesting spatial, temporal and spatio-temporal patterns. This paper studies spatio-temporal exploratory analysis tools to detect specific behaviour of varicella in the city of Valencia, Spain, from 2008 to 2013. These methods have shown a significant association between the spatial and the temporal component, confirmed by the space-time models applied to the data. High relative risk of varicella is observed in economically disadvantaged regions, areas less involved in vaccination programmes.

  17. Preconditioned iterative methods for space-time fractional advection-diffusion equations

    NASA Astrophysics Data System (ADS)

    Zhao, Zhi; Jin, Xiao-Qing; Lin, Matthew M.

    2016-08-01

    In this paper, we propose practical numerical methods for solving a class of initial-boundary value problems for space-time fractional advection-diffusion equations. First, we propose an implicit method based on two-sided Grünwald formulae and discuss its stability and consistency. Then, we develop the preconditioned generalized minimal residual (preconditioned GMRES) method and the preconditioned conjugate gradient normal residual (preconditioned CGNR) method with easily constructed preconditioners. Importantly, because the resulting systems are Toeplitz-like, the fast Fourier transform can be applied to significantly reduce the computational cost. We perform numerical experiments to demonstrate the efficiency of our preconditioners, even in cases with variable coefficients.
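
    The fast-solver idea can be illustrated independently of the fractional discretisation: a Toeplitz system is solved with GMRES while a circulant approximation of the matrix, applied through the FFT, acts as the preconditioner. The sketch below uses a synthetic Toeplitz matrix with decaying off-diagonal weights and a Strang-type circulant; it is not the paper's discretisation or its specific preconditioners, and scipy is assumed to be available.

      import numpy as np
      from scipy.linalg import toeplitz
      from scipy.sparse.linalg import LinearOperator, gmres

      n = 512
      # Synthetic Toeplitz system: slowly decaying sub-diagonal weights plus a dominant diagonal.
      k = np.arange(n)
      col = -1.0 / (1.0 + k) ** 1.7
      col[0] = 3.0
      row = np.zeros(n)
      row[0], row[1] = col[0], -0.3
      A = toeplitz(col, row)
      b = np.ones(n)

      # Strang-type circulant preconditioner: copy the central Toeplitz diagonals into a
      # circulant whose inverse is applied with FFTs (O(n log n) per iteration).
      c = np.zeros(n)
      c[: n // 2] = col[: n // 2]
      c[-1] = row[1]
      eig = np.fft.fft(c)

      def apply_circulant_inverse(x):
          return np.real(np.fft.ifft(np.fft.fft(x) / eig))

      M = LinearOperator((n, n), matvec=apply_circulant_inverse)
      x, info = gmres(A, b, M=M)
      print("converged:", info == 0, " residual norm:", np.linalg.norm(A @ x - b))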

  18. Information content of nonautonomous free fields in curved space-time

    SciTech Connect

    Parreira, J. E.; Nemes, M. C.; Fonseca-Romero, K. M.

    2011-03-15

    We show that it is possible to quantify the information content of a nonautonomous free field state in curved space-time. A covariance matrix is defined and it is shown that, for symmetric Gaussian field states, the matrix is connected to the entropy of the state. This connection is maintained throughout a quadratic nonautonomous (including possible phase transitions) evolution. Although particle-antiparticle correlations are dynamically generated, the evolution is isoentropic. If the current standard cosmological model for the inflationary period is correct, in absence of decoherence such correlations will be preserved, and could potentially lead to observable effects, allowing for a test of the model.

  19. Stringy models of modified gravity: space-time defects and structure formation

    SciTech Connect

    Mavromatos, Nick E.; Sakellariadou, Mairi; Yusaf, Muhammad Furqaan E-mail: mairi.sakellariadou@kcl.ac.uk

    2013-03-01

    Starting from microscopic models of space-time foam, based on brane universes propagating in bulk space-times populated by D0-brane defects ("D-particles"), we arrive at effective actions used by a low-energy observer on the brane world to describe his/her observations of the Universe. These actions include, apart from the metric tensor field, also scalar (dilaton) and vector fields, the latter describing the interactions of low-energy matter on the brane world with the recoiling point-like space-time defect (D-particle). The vector field is proportional to the recoil velocity of the D-particle and as such it satisfies a certain constraint. The vector breaks locally Lorentz invariance, which however is assumed to be conserved on average in a space-time foam situation, involving the interaction of matter with populations of D-particle defects. In this paper we clarify the role of fluctuations of the vector field on structure formation and galactic growth. In particular we demonstrate that, already at the end of the radiation era, the (constrained) vector field associated with the recoil of the defects provides the seeds for a growing mode in the evolution of the Universe. Such a growing mode survives during the matter dominated era, provided the variance of the D-particle recoil velocities on the brane is larger than a critical value. We note that in this model, as a result of specific properties of D-brane dynamics in the bulk, there is no issue of overclosing the brane Universe for large defect densities. Thus, in these models, the presence of defects may be associated with large-structure formation. Although our string inspired models do have (conventional, from a particle physics point of view) dark matter components, nevertheless it is interesting that the role of "extra" dark matter is also provided by the population of massive defects. This is consistent with the weakly interacting character of the D-particle defects, which predominantly interact only

  20. Learning characteristics of a space-time neural network as a tether skiprope observer

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles

    1992-01-01

    The Software Technology Laboratory at JSC is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.

  1. Learning characteristics of a space-time neural network as a tether skiprope observer

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles

    1993-01-01

    The Software Technology Laboratory at the Johnson Space Center is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.

  2. Revised single-spacecraft method for determining wave vector k and resolving space-time ambiguity

    NASA Astrophysics Data System (ADS)

    Bellan, P. M.

    2016-09-01

    A practical method is proposed for determining the wave vector of waves from single-spacecraft measurements. This wave vector knowledge can then be used to remove the space-time ambiguity produced by frequency Doppler shift associated with spacecraft motion. The method involves applying the Wiener-Khinchin theorem to cross correlations of the current and magnetic field oscillations and to autocorrelations of the magnetic field oscillations. The method requires that each wave frequency component map to a unique wave vector, a condition presumed true in many spacecraft measurement situations. Examples validating the method are presented.
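
    The Wiener-Khinchin step referred to here amounts to estimating cross-spectra directly from the Fourier transforms of the fluctuating signals (the transform of the cross-correlation equals the product of the individual spectra). The following generic sketch shows that building block only; the function name is ours, and the paper's specific combination of current-field and field-field spectra that yields the wave vector k is not reproduced:

    ```python
    import numpy as np

    def cross_spectrum(a, b, dt):
        """Cross-spectral density estimate of two equally sampled time series.
        By the Wiener-Khinchin theorem this equals the Fourier transform of
        their cross-correlation."""
        a = np.asarray(a, float) - np.mean(a)
        b = np.asarray(b, float) - np.mean(b)
        freqs = np.fft.rfftfreq(len(a), dt)
        return freqs, np.conj(np.fft.rfft(a)) * np.fft.rfft(b) * dt / len(a)
    ```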

  3. New conserved currents for vacuum space-times in dimension four with a Killing vector

    NASA Astrophysics Data System (ADS)

    Gómez-Lobo, Alfonso García-Parrado

    2016-10-01

    A new family of conserved currents for vacuum space-times with a Killing vector is presented. The currents are constructed from the superenergy tensor of the Mars-Simon tensor and using the positivity properties of the former we find that the conserved charges associated to the currents have natural positivity properties in certain cases. Given the role played by the Mars-Simon tensor in local and semi-local characterisations of the Kerr solution, the currents presented in this work are useful to construct non-negative scalar quantities characterising Kerr initial data (known in the literature as non-Kerrness) which in addition are conserved charges.

  4. The simulation of space-time speckle and impact based on synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Xu, Qian; Liu, Liren; Zhou, Yu; Sun, Jianfeng; Wu, Yapeng

    2012-10-01

    In synthetic aperture imaging ladar (SAIL), spatially and temporally varying speckles result from the linearly wavelength-chirped laser signal. The random phase and amplitude of the space-time speckle are introduced into the heterodyne beat signal through antenna aperture integration. The numerical evolution of this effect is presented. Our research indicates that the random phase and amplitude are closely related to the ratio of the antenna aperture to the speckle scale. Based on computer simulation results, a scale design of the optical antenna aperture is proposed to reduce the image degradation.

  5. Absorption of a Massive Scalar Field by Wormhole Space-Times

    NASA Astrophysics Data System (ADS)

    Huang, Hai; Chen, Juhua; Wang, Yongjiu; Jin, Yao

    2017-04-01

    In this paper we consider the problem of a test massive scalar field propagating in the background of a class of wormhole space-times. Based on quantum scattering theory, we analyze the Schrödinger-type scalar wave equation and compute transmission coefficients for arbitrary coupling of the field to the background geometry within the WKB approximation. We numerically investigate the absorption cross section and analyze it in the high-frequency regime. We find that the absorption cross section oscillates about the geometric-optics value and that its limit is uniform in the high-frequency regime.

  6. Absorption of a Massive Scalar Field by Wormhole Space-Times

    NASA Astrophysics Data System (ADS)

    Huang, Hai; Chen, Juhua; Wang, Yongjiu; Jin, Yao

    2017-01-01

    In this paper we consider the problem of a test massive scalar field propagating in the background of a class of wormhole space-times. Based on quantum scattering theory, we analyze the Schrödinger-type scalar wave equation and compute transmission coefficients for arbitrary coupling of the field to the background geometry within the WKB approximation. We numerically investigate the absorption cross section and analyze it in the high-frequency regime. We find that the absorption cross section oscillates about the geometric-optics value and that its limit is uniform in the high-frequency regime.

  7. Realization of Cohen-Glashow very special relativity on noncommutative space-time.

    PubMed

    Sheikh-Jabbari, M M; Tureanu, A

    2008-12-31

    We show that the Cohen-Glashow very special relativity (VSR) theory [A. G. Cohen and S. L. Glashow, Phys. Rev. Lett. 97, 021601 (2006)] can be realized as the part of the Poincaré symmetry preserved on a noncommutative Moyal plane with lightlike noncommutativity. Moreover, we show that the three subgroups relevant to VSR can also be realized in the noncommutative space-time setting. In all three cases, the noncommutativity parameter θ_{μν} should be lightlike (θ_{μν} θ^{μν} = 0). We discuss some physical implications of this realization of the Cohen-Glashow VSR.

  8. Oscillating dark energy model in plane symmetric space-time with time periodic varying deceleration parameter

    NASA Astrophysics Data System (ADS)

    She, M.; Jiang, L. P.

    2014-12-01

    In this paper, an oscillating dark energy model is presented in an isotropic but inhomogeneous plane symmetric space-time by considering a time-periodic varying deceleration parameter. We find three different types of new solutions which describe different scenarios of an oscillating universe. The first two solutions show an oscillating universe with singularities. For the third one, the universe is singularity-free during the whole evolution. Moreover, the Hubble parameter oscillates and remains positive, which opens an interesting possibility of unifying the early inflation and the late-time acceleration of the universe.

  9. Energy-efficient space-time modulation for indoor MISO visible light communications.

    PubMed

    Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun; Wang, Tao

    2016-01-15

    We consider an indoor multi-input single-output (MISO) visible light communication (VLC) system without channel state information at the transmitter. For such a system, an energy-efficient time-collaborative modulation (TCM) constellation is first designed by minimizing the total optical power subject to a fixed minimum Euclidean distance. Then, a new space-time transmission scheme is proposed. Comprehensive computer simulations indicate that our proposed design always has better average error performance within the illumination coverage area than the currently available schemes for this application.

  10. The perturbation of gravitational waves in plasma in the FRW space-time

    NASA Astrophysics Data System (ADS)

    Youssef, Manal H.

    2016-01-01

    In this paper we study the perturbation of gravitational waves in plasma, using the relativistic hydromagnetic equation in the so-called Cowling approximation and considering a Friedmann-Robertson-Walker (FRW) cosmological model. It is assumed that the gravitational field and the weak magnetic field do not break the homogeneity and isotropy of the considered FRW space-time. Applying the formalism proposed by Zel'dovich and Novikov (The Structure and Evolution of the Universe, Volume II, 1983), Brandenburg et al. (Phys. Rev. D 54:1291, 1996) and Weinberg (Gravitation and Cosmology, 1972), we verify that the density fluctuation may be obtained.

  11. Intuitive analysis of space-time focusing with double-ABCD calculation

    PubMed Central

    Durfee, Charles G.; Greco, Michael; Block, Erica; Vitek, Dawn; Squier, Jeff A.

    2012-01-01

    We analyze the structure of space-time focusing of spatially-chirped pulses using a technique where each frequency component of the beam follows its own Gaussian beamlet that in turn travels as a ray through the system. The approach leads to analytic expressions for the axially-varying pulse duration, pulse-front tilt, and the longitudinal intensity profile. We find that an important contribution to the intensity localization obtained with spatial-chirp focusing arises from the evolution of the geometric phase of the beamlets. PMID:22714487

  12. Tracking and visualization of space-time activities for a micro-scale flu transmission study

    PubMed Central

    2013-01-01

    Background Infectious diseases pose increasing threats to public health with increasing population density and more and more sophisticated social networks. While efforts continue in studying the large-scale dissemination of contagious diseases, individual-based activity and behaviour studies benefit not only disease transmission modelling but also control, containment, and prevention decision making at the local scale. The potential for using tracking technologies to capture detailed space-time trajectories and model individual behaviour is increasing rapidly, as technological advances enable the manufacture of small, lightweight, highly sensitive, and affordable receivers and the routine use of location-aware devices has become widespread (e.g., smart cellular phones). The use of low-cost tracking devices in medical research has also been proven effective by a growing number of studies. This study describes the use of tracking devices to collect space-time trajectory data and the spatiotemporal processing of such data to facilitate a micro-scale flu transmission study. We also report preliminary findings on activity patterns related to chances of influenza infection in a pilot study. Methods Specifically, this study employed A-GPS tracking devices to collect data on a university campus. Spatiotemporal processing was conducted for data cleaning and segmentation. Processed data were validated against traditional activity diaries. The A-GPS data set was then used for visual explorations including density surface visualization and connection analysis to examine space-time activity patterns in relation to chances of influenza infection. Results When compared to diary data, the segmented tracking data proved to be an effective alternative and showed greater accuracy in time as well as in the details of routes taken by participants. A comparison of space-time activity patterns between participants who caught seasonal influenza and those who did not revealed interesting

  13. XSOR codes users manual

    SciTech Connect

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  14. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    SciTech Connect

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Proof-of-principle achievement of this goal has been demonstrated in research carried out under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-average Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma-edge-oriented Fokker-Planck code which has been constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. A more efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full-coupling of these two Fokker

  15. High Efficiency Integrated Space Conditioning, Water Heating and Air Distribution System for HUD-Code Manufactured Housing

    SciTech Connect

    Henry DeLima; Joe Akin; Joseph Pietsch

    2008-09-14

    Recognizing the need for new space conditioning and water heating systems for manufactured housing, DeLima Associates assembled a team to develop a space conditioning system that would enhance comfort conditions while also reducing energy usage at the systems level. The product, Comboflair®, was defined as a result of a needs analysis of project sponsors and industry stakeholders. An integrated system would be developed that would combine a packaged air-conditioning system with a small-duct, high-velocity air distribution system. In its basic configuration, the source for space heating would be a gas water heater. The complete system would be installed at the manufactured home factory and would require no site installation work at the homesite, as is now required with conventional split-system air conditioners. Several prototypes were fabricated and tested before a field test unit was completed in October 2005. The Comboflair® system, complete with ductwork, was installed in a 1,984-square-foot, double-wide manufactured home built by Palm Harbor Homes in Austin, TX. After the home was transported and installed at a Palm Harbor dealer lot in Austin, TX, a data acquisition system was installed for remote data collection. Over 60 parameters were continuously monitored, and measurements were transmitted to a remote site every 15 minutes for performance analysis. The Comboflair® system was field tested from February 2006 until April 2007. The cooling system performed in accordance with the design specifications. The heating system initially could not provide the needed capacity at peak heating conditions until the water heater was replaced with a higher-capacity standard water heater. All system comfort goals were then met. As a result of field testing, we have identified improvements to be made to specific components for incorporation into production models. The Comboflair® system will be manufactured by Unico, Inc. at their new production facility in St. Louis

  16. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    SciTech Connect

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-15

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
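
    For orientation, fitting a Gaussian to a one-dimensional correlation function can be reduced to a linear least-squares problem in q^2. The sketch below is an illustrative toy (our own naming and one-dimensional simplification), not the analytic Gaussian-fit algorithm described in the abstract:

    ```python
    import numpy as np

    def fit_hbt_radius(q, C):
        """Fit C(q) = 1 + exp(-R^2 q^2) to sampled correlation values by
        linear least squares on ln(C - 1) = -R^2 q^2, and return R."""
        q, C = np.asarray(q, float), np.asarray(C, float)
        mask = C > 1.0                      # keep only points with a correlation excess
        y = np.log(C[mask] - 1.0)
        q2 = q[mask] ** 2
        R_squared = -np.sum(y * q2) / np.sum(q2 ** 2)
        return np.sqrt(R_squared)
    ```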

  17. GNSS space-time interference mitigation and attitude determination in the presence of interference signals.

    PubMed

    Daneshmand, Saeed; Jahromi, Ali Jafarnia; Broumandan, Ali; Lachapelle, Gérard

    2015-05-26

    The use of Space-Time Processing (STP) in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its effectiveness for both narrowband and wideband interference suppression. However, the resulting distortion and bias on the cross correlation functions due to space-time filtering is a major limitation of this technique. Employing the steering vector of the GNSS signals in the filter structure can significantly reduce the distortion on cross correlation functions and lead to more accurate pseudorange measurements. This paper proposes a two-stage interference mitigation approach in which the first stage estimates an interference-free subspace before the acquisition and tracking phases and projects all received signals into this subspace. The next stage estimates array attitude parameters based on detecting and employing GNSS signals that are less distorted due to the projection process. Attitude parameters enable the receiver to estimate the steering vector of each satellite signal and use it in the novel distortionless STP filter to significantly reduce distortion and maximize Signal-to-Noise Ratio (SNR). GPS signals were collected using a six-element antenna array under open sky conditions to first calibrate the antenna array. Simulated interfering signals were then added to the digitized samples in software to verify the applicability of the proposed receiver structure and assess its performance for several interference scenarios.
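
    The first stage described above is, at its core, an orthogonal projection onto the complement of an estimated interference subspace. A minimal numpy sketch of that idea follows (function and variable names are ours; the paper's receiver adds attitude estimation and a distortionless space-time filter on top of this):

    ```python
    import numpy as np

    def interference_projector(snapshots, n_interferers):
        """Estimate the dominant (interference) subspace from array snapshots of
        shape (n_antennas, n_samples) and return the projector onto its complement."""
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
        w, V = np.linalg.eigh(R)
        # The strongest eigenvectors span the interference subspace, since GNSS
        # signals lie below the noise floor before despreading.
        U = V[:, np.argsort(w)[::-1][:n_interferers]]
        return np.eye(R.shape[0]) - U @ U.conj().T

    # usage: cleaned = interference_projector(x, n_interferers=2) @ x
    ```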

  18. Is the local linearity of space-time inherited from the linearity of probabilities?

    NASA Astrophysics Data System (ADS)

    Müller, Markus P.; Carrozza, Sylvain; Höhn, Philipp A.

    2017-02-01

    The appearance of linear spaces, describing physical quantities by vectors and tensors, is ubiquitous in all of physics, from classical mechanics to the modern notion of local Lorentz invariance. However, as natural as this seems to the physicist, most computer scientists would argue that something like a ‘local linear tangent space’ is not very typical and in fact a quite surprising property of any conceivable world or algorithm. In this paper, we take the perspective of the computer scientist seriously, and ask whether there could be any inherently information-theoretic reason to expect this notion of linearity to appear in physics. We give a series of simple arguments, spanning quantum information theory, group representation theory, and renormalization in quantum gravity, that supports a surprising thesis: namely, that the local linearity of space-time might ultimately be a consequence of the linearity of probabilities. While our arguments involve a fair amount of speculation, they have the virtue of being independent of any detailed assumptions on quantum gravity, and they are in harmony with several independent recent ideas on emergent space-time in high-energy physics.

  19. On magnitudes in memory: An internal clock account of space-time interaction.

    PubMed

    Cai, Zhenguang G; Connell, Louise

    2016-07-01

    Traditionally, research on time perception has diverged into a representational approach that focuses on the interaction between time and non-temporal magnitude information like spatial distance, and a mechanistic approach that emphasizes the workings and timecourse of components within an internal clock. We combined these approaches in order to identify the locus of space-time interaction effects in the mechanistic framework of the internal clock model. In three experiments, we contrasted the effects of spatial distance (a long- vs. short-distance line) on time perception with those of visual flicker (a flickering vs. static stimulus) in a duration reproduction paradigm. We found that both a flickering stimulus and a long-distance line lengthened reproduced time when presented during time encoding. However, when presented during time reproduction, a flickering stimulus shortened reproduced time but a long-distance line had no effect. The results thus show that, while visual flicker affects duration accumulation itself, spatial distance instead biases the memory of the accumulated duration. These findings are consistent with a clock-magnitude account of space-time interaction whereby both temporal duration and spatial distance are represented as mental magnitudes that can interfere with each other while being kept in memory, and they place the locus of interaction between temporal and non-temporal magnitude dimensions at the memory maintenance stage of the internal clock model.

  20. Towards a new synthesis for atmospheric dynamics: Space-time cascades

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; Schertzer, D.

    2010-04-01

    In spite of the unprecedented quantity and quality of meteorological data and numerical models, there is still no consensus about the atmosphere's elementary statistical properties as functions of scale in either time or in space. This review paper proposes a new synthesis based on a) advances in the last 25 years in nonlinear dynamics, b) a critical re-analysis of empirical aircraft and vertical sonde data, c) the systematic scale by scale, space-time exploitation of high resolution remotely sensed data and d) the systematic re-analysis of the outputs of numerical models of the atmosphere including reanalyses, e) a new turbulent model for the emergence of the climate from "weather" and climate variability. We conclude that Richardson's old idea of scale by scale simplicity — today embodied in multiplicative cascades — can accurately explain the statistical properties of the atmosphere and its models over most of the meteorologically significant range of scales, as well as at least some of the climate range. The resulting space-time cascade model combines these nonlinear developments with modern statistical analyses, it is based on strongly anisotropic and intermittent generalizations of the classical turbulence laws of Kolmogorov, Corrsin, Obukhov, and Bolgiano.

  1. Environmental Controls on Space-Time Biodiversity Patterns in the Amazon

    NASA Astrophysics Data System (ADS)

    Porporato, A. M.; Bonetti, S.; Feng, X.

    2014-12-01

    The Amazon/Andes territory is characterized by the highest biodiversity on Earth, and understanding how all these ecological niches and different species originated and developed is an open challenge. The niche perspective assumes that species have evolved to occupy deterministically different roles within their environment. This view differs from that of the neutral theories, which assume ecological equivalence between all species but incorporate stochastic demographic processes along with long-term migration and speciation rates. Both approaches have demonstrated tremendous power in predicting aspects of species biodiversity. By combining tools from both approaches, we use modified birth and death processes to simulate plant species diversification in the Amazon/Andes and their space-time ecohydrological controls. By defining parameters related to births and deaths as functions of available resources, we incorporate the role of space-time resource variability on niche formation and community composition. We also explicitly include the role of a heterogeneous landscape and topography. The results are discussed in relation to transect datasets from neotropical forests.
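
    A resource-dependent birth-death process of the kind alluded to can be simulated with a simple Gillespie loop. The sketch below is a hypothetical toy (the rate forms and parameter names are ours, chosen only for illustration), not the authors' model:

    ```python
    import numpy as np

    def simulate_abundance(n0, resource, b0=1.0, d0=0.5, t_max=100.0, seed=0):
        """Gillespie simulation of a single species' abundance under a birth-death
        process whose per-capita birth rate saturates with the available resource."""
        rng = np.random.default_rng(seed)
        t, n = 0.0, n0
        while t < t_max and n > 0:
            birth = b0 * resource / (1.0 + resource) * n   # resource-limited births
            death = d0 * n
            total = birth + death
            t += rng.exponential(1.0 / total)              # waiting time to next event
            n += 1 if rng.random() < birth / total else -1
        return n
    ```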

  2. BIPM/IAU Joint Committee on relativity for space-time reference systems and metrology

    NASA Astrophysics Data System (ADS)

    Petit, G.

    At the Kyoto General Assembly, the IAU endorsed, by its Resolution B3 (1997), the creation of the Joint Committee on Relativity for space-time reference systems and metrology (JCR), which was also approved by the Comité International des Poids et Mesures (CIPM) at its 86th meeting in September 1997. Its tasks are "to establish definitions and conventions to provide a coherent relativistic frame ... and to develop the adopted definitions and conventions for practical application by the user." The web site of the JCR (www.bipm.fr/WG/CCTF/JCR) contains the headlines of the JCR work. The BIPM/IAU JCR has worked in collaboration with the IAU Working Group on relativity for celestial mechanics and astrometry (RCMA) on the problems of astronomical relativistic space-time reference frames. A document was established in common (document jcrissue.html on the JCR web site) discussing as much as possible all topics that may be addressed by the two groups. The RCMA has specified a consistent framework for defining the barycentric and geocentric celestial reference systems at the first post-Newtonian level. Because new clock technology and space missions will necessitate the application of this framework for time and frequency measurements in the solar system, the JCR focused on these applications. The paper outlines the conclusions of the work and the proposed IAU resolutions that were discussed at IAU Colloquium 180 in March 2000.

  3. Self-similar space-time evolution of an initial density discontinuity

    SciTech Connect

    Rekaa, V. L.; Pécseli, H. L.; Trulsen, J. K.

    2013-07-15

    The space-time evolution of an initial step-like plasma density variation is studied. We give particular attention to formulating the problem in a way that opens the possibility of realizing the conditions experimentally. After a short transient time interval of the order of the electron plasma period, the solution is self-similar, as illustrated by a video where the space-time evolution reduces to a function of the ratio x/t. Solutions of this form are usually found for problems without characteristic length and time scales, in our case the quasi-neutral limit. By introducing ion collisions with neutrals into the numerical analysis, we introduce a length scale, the collisional mean free path. We study the breakdown of the self-similarity of the solution as the mean free path is made shorter than the system length. Analytical results are presented for charge-exchange collisions, demonstrating a short-time collisionless evolution with an ensuing long-time diffusive relaxation of the initial perturbation. For large times, we find a diffusion equation as the limiting analytical form for a charge-exchange collisional plasma, with a diffusion coefficient defined as the square of the ion sound speed divided by the (constant) ion collision frequency. The ion-neutral collision frequency acts as a parameter that allows a collisionless result to be obtained in one limit, while the solution of a diffusion equation is recovered in the opposite limit of large collision frequencies.
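
    In symbols, the large-time limit quoted in the abstract is a diffusion equation for the density n (our transcription of the stated result):

    ```latex
    \frac{\partial n}{\partial t} = D\,\frac{\partial^{2} n}{\partial x^{2}},
    \qquad D = \frac{C_{s}^{2}}{\nu_{in}},
    ```

    where C_s is the ion sound speed and ν_in is the (constant) ion-neutral collision frequency.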

  4. A short essay on quantum black holes and underlying noncommutative quantized space-time

    NASA Astrophysics Data System (ADS)

    Tanaka, Sho

    2017-01-01

    We emphasize the importance of noncommutative geometry or Lorenz-covariant quantized space-time towards the ultimate theory of quantum gravity and Planck scale physics. We focus our attention on the statistical and substantial understanding of the Bekenstein-Hawking area-entropy law of black holes in terms of the kinematical holographic relation (KHR). KHR manifestly holds in Yang’s quantized space-time as the result of kinematical reduction of spatial degrees of freedom caused by its own nature of noncommutative geometry, and plays an important role in our approach without any recourse to the familiar hypothesis, so-called holographic principle. In the present paper, we find a unified form of KHR applicable to the whole region ranging from macroscopic to microscopic scales in spatial dimension d  =  3. We notice a possibility of nontrivial modification of area-entropy law of black holes which becomes most remarkable in the extremely microscopic system close to Planck scale.

  5. Space-time super-resolution using graph-cut optimization.

    PubMed

    Mudenagudi, Uma; Banerjee, Subhashis; Kalra, Prem Kumar

    2011-05-01

    We address the problem of super-resolution—obtaining high-resolution images and videos from multiple low-resolution inputs. The increased resolution can be in the spatial or temporal dimensions, or even in both. We present a unified framework which uses a generative model of the imaging process and can address spatial super-resolution, space-time super-resolution, image deconvolution, single-image expansion, removal of noise, and image restoration. We model a high-resolution image or video as a Markov random field and obtain the maximum a posteriori estimate as the final solution using a graph-cut optimization technique. We derive insights into what super-resolution magnification factors are possible and the conditions necessary for super-resolution. We demonstrate spatial super-resolution reconstruction results with magnifications higher than predicted limits of magnification. We also formulate a scheme for selective super-resolution reconstruction of videos to obtain a simultaneous increase of resolution in both spatial and temporal directions. We show that it is possible to achieve space-time magnification factors beyond what has been suggested in the literature by selectively applying super-resolution constraints. We present results on both synthetic and real input sequences.

  6. What could the LHC teach us on the structure of space-time?

    NASA Astrophysics Data System (ADS)

    Triantaphyllou, George

    2016-11-01

    Collision energies of proton beams now available at the LHC increase the probability of discovering the inner works of the Brout-Englert-Higgs (BEH) mechanism within the foreseeable future. Nevertheless, they are still several orders of magnitude below the scale where a possible non-trivial structure of space-time would be detectable. Apart from remaining completely silent on the issue of the fundamental nature of elementary particles and the space in which they propagate, one may try to speculate on this matter by carefully extrapolating existing scientific methods and knowledge to Planck energies. In this talk, an effort is made to logically link some potential discoveries at the LHC with specific space-time structures. Since such links are inevitably weak due to the huge energy hierarchy between the electro-weak and the Planck scales, our goal does not exceed a mere presentation of naturalness and self-consistency arguments in favor of some of the possible outcomes, placing particular emphasis on the scenario of the mirror world.

  7. Scalar Field Theory on κ-MINKOWSKI Space-Time and Translation and Lorentz Invariance

    NASA Astrophysics Data System (ADS)

    Meljanac, S.; Samsarov, A.

    We investigate the properties of κ-Minkowski space-time by using representations of the corresponding deformed algebra in terms of the undeformed Heisenberg-Weyl algebra. The deformed algebra consists of the κ-Poincaré algebra extended with the generators of the deformed Weyl algebra. The part of the deformed algebra generated by the rotation, boost and momentum generators is described by a Hopf algebra structure. The approach used in our considerations is completely Lorentz covariant. We further use an advantage of this approach to consistently construct a star product which has the property that, under the integration sign, it can be replaced by a standard pointwise multiplication, a property previously known to hold for the Moyal but not for the κ-Minkowski space-time. This star product also has generalized trace and cyclic properties, and the construction itself is accomplished by considering a classical Dirac operator representation of the deformed algebra and requiring it to be Hermitian. We find that the obtained star product is not translationally invariant, leading to the conclusion that the classical Dirac operator representation is the one in which translation invariance cannot be implemented simultaneously with hermiticity. However, due to the integral property satisfied by the star product, the noncommutative free scalar field theory does not have a problem with translation symmetry breaking and can be shown to reduce to an ordinary free scalar field theory of essentially the same form, without nonlocal features or tachyonic modes. The issue of Lorentz invariance of the theory is also discussed.

  8. On the usefulness of relativistic space-times for the description of the Earth's gravitational field

    NASA Astrophysics Data System (ADS)

    Soffel, Michael; Frutos, Francisco

    2016-12-01

    The usefulness of relativistic space-times for the description of the Earth's gravitational field is investigated. A variety of exact vacuum solutions of Einstein's field equations (Schwarzschild, Erez and Rosen, Gutsunayev and Manko, Hernández-Pastora and Martín, Kerr, Quevedo, and Mashhoon) are investigated in that respect. It is argued that, because of their multipole structure and influences from external bodies, all these exact solutions are not really useful for the central problem. Then, approximate space-times resulting from an MPM or post-Newtonian approximation are considered. Only in the DSX formalism, which is of first post-Newtonian order, can all aspects of the problem be tackled: a relativistic description (a) of the Earth's gravity field in a well-defined geocentric reference system (GCRS), (b) of the motion of solar system bodies in a barycentric reference system (BCRS), and (c) of inertial and tidal terms in the geocentric metric describing the external gravitational field. A relativistic SLR theory is also discussed with respect to our central problem. Orders of magnitude of many effects related to the Earth's gravitational field and SLR are given. It is argued that a formalism with accuracies better than the first post-Newtonian order is not yet available.

  9. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    NASA Astrophysics Data System (ADS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  10. 100 years of relativity. Space-time structure: Einstein and beyond

    NASA Astrophysics Data System (ADS)

    Ashtekar, Abhay

    2005-11-01

    Thanks to Einstein's relativity theories, our notions of space and time underwent profound revisions about 100 years ago. The resulting interplay between geometry and physics has dominated all of fundamental physics since then. This volume contains contributions from leading researchers, worldwide, who have thought deeply about the nature and consequences of this interplay. The articles take a long-range view of the subject and distill the most important advances in broad terms, making them easily accessible to non-specialists. The first part is devoted to a summary of how relativity theories were born (J. Stachel). The second part discusses the most dramatic ramifications of general relativity, such as black holes (P. Chrusciel and R. Price), space-time singularities (H. Nicolai and A. Rendall), gravitational waves (P. Laguna and P. Saulson), the large scale structure of the cosmos (T. Padmanabhan); experimental status of this theory (C. Will) as well as its practical application to the GPS system (N. Ashby). The last part looks beyond Einstein and provides glimpses into what is in store for us in the 21st century. Contributions here include summaries of radical changes in the notions of space and time that are emerging from quantum field theory in curved space-times (Ford), string theory (T. Banks), loop quantum gravity (A. Ashtekar), quantum cosmology (M. Bojowald), discrete approaches (Dowker, Gambini and Pullin) and twistor theory (R. Penrose).

  11. Space-time patterns of Campylobacter spp. colonization in broiler flocks, 2002-2006.

    PubMed

    Jonsson, M E; Norström, M; Sandberg, M; Ersbøll, A K; Hofshagen, M

    2010-09-01

    This study was performed to investigate space-time patterns of Campylobacter spp. colonization in broiler flocks in Norway. Data on the Campylobacter spp. status at the time of slaughter of 16 054 broiler flocks from 580 farms between 2002 and 2006 were included in the study. Spatial relative risk maps together with maps of space-time clustering were generated, the latter by using spatial scan statistics. These maps identified, almost every year, the same areas in which a broiler flock had a higher risk of testing positive for Campylobacter spp. during the summer months. A modified K-function analysis showed significant clustering at distances between 2.5 and 4 km within different years. The identification of geographical areas with higher risk for Campylobacter spp. colonization in broilers indicates that there are risk factors associated with Campylobacter spp. colonization in broiler flocks varying with region and time, e.g. climate, landscape or geography. These need to be further explored. The results also showed clustering at shorter distances, indicating that there are risk factors for Campylobacter spp. acting at a narrower scale as well.

  12. GNSS Space-Time Interference Mitigation and Attitude Determination in the Presence of Interference Signals

    PubMed Central

    Daneshmand, Saeed; Jahromi, Ali Jafarnia; Broumandan, Ali; Lachapelle, Gérard

    2015-01-01

    The use of Space-Time Processing (STP) in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its effectiveness for both narrowband and wideband interference suppression. However, the resulting distortion and bias on the cross correlation functions due to space-time filtering is a major limitation of this technique. Employing the steering vector of the GNSS signals in the filter structure can significantly reduce the distortion on cross correlation functions and lead to more accurate pseudorange measurements. This paper proposes a two-stage interference mitigation approach in which the first stage estimates an interference-free subspace before the acquisition and tracking phases and projects all received signals into this subspace. The next stage estimates array attitude parameters based on detecting and employing GNSS signals that are less distorted due to the projection process. Attitude parameters enable the receiver to estimate the steering vector of each satellite signal and use it in the novel distortionless STP filter to significantly reduce distortion and maximize Signal-to-Noise Ratio (SNR). GPS signals were collected using a six-element antenna array under open sky conditions to first calibrate the antenna array. Simulated interfering signals were then added to the digitized samples in software to verify the applicability of the proposed receiver structure and assess its performance for several interference scenarios. PMID:26016909

  13. Space-Time Modelling of Groundwater Level Using Spartan Covariance Function

    NASA Astrophysics Data System (ADS)

    Varouchakis, Emmanouil; Hristopulos, Dionissios

    2014-05-01

    Geostatistical models often need to handle variables that change in space and in time, such as the groundwater level of aquifers. A major advantage of space-time observations is that a higher number of data supports parameter estimation and prediction. In a statistical context, space-time data can be considered as realizations of random fields that are spatially extended and evolve in time. The combination of spatial and temporal measurements in sparsely monitored watersheds can provide very useful information by incorporating spatiotemporal correlations. Spatiotemporal interpolation is usually performed by applying the standard Kriging algorithms extended in a space-time framework. Spatiotemporal covariance functions for groundwater level modelling, however, have not been widely developed. We present a new non-separable theoretical spatiotemporal variogram function which is based on the Spartan covariance family and evaluate its performance in spatiotemporal Kriging (STRK) interpolation. The original spatial expression (Hristopulos and Elogne 2007) that has been successfully used for the spatial interpolation of groundwater level (Varouchakis and Hristopulos 2013) is modified by defining the normalized space-time distance h = sqrt(h_r^2 + α h_τ^2), with h_r = |r|/ξ_r and h_τ = |τ|/ξ_τ, where r is the spatial lag vector, τ the temporal lag, ξ_r the correlation length in position space, ξ_τ the correlation length in time, h the Euclidean norm of the normalized space-time lag, and α the coefficient that determines the relative weight of the time lag. The space-time experimental semivariogram is determined from the biannual (wet and dry period) time series of groundwater level residuals (obtained from the original series after trend removal) between the years 1981 and 2003 at ten sampling stations located in the Mires hydrological basin on the island of Crete (Greece). After the hydrological year 2002-2003 there is a significant
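
    In code, the normalized space-time lag reconstructed above is straightforward to evaluate. The helper below is a small illustration with our own naming; the Spartan variogram model itself is not reproduced here:

    ```python
    import numpy as np

    def spacetime_lag(r, tau, xi_r, xi_tau, alpha):
        """Normalized space-time lag h = sqrt(h_r**2 + alpha * h_tau**2),
        with h_r = |r| / xi_r and h_tau = |tau| / xi_tau."""
        h_r = np.linalg.norm(r) / xi_r
        h_tau = abs(tau) / xi_tau
        return np.sqrt(h_r ** 2 + alpha * h_tau ** 2)
    ```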

  14. Modeling the space-time evolution of pore pressure in layered shallow covers

    NASA Astrophysics Data System (ADS)

    Salciarini, Diana; Cuomo, Sabatino; Castorino, Giuseppe; Fanelli, Giulia; Tamagnini, Claudio

    2015-04-01

    In most of the available models for the prediction of shallow landslide susceptibility, the potentially unstable soil cover is considered uniform and homogeneous, over an impervious underlying bedrock (see, e.g., Baum et al. 2008; Salciarini et al. 2006, 2012). However, in several case studies this assumption was shown not to hold, for example in the case of pyroclastic soil covers, where two clearly separated layers are detectable (Cascini et al., 2008, 2011). Taking into account the detailed configuration of the soil cover allows a more accurate estimate of the potentially unstable volumes, which significantly modify the intensity of the considered phenomena. To account for the possibility of soil-cover layers with different permeability, the existing routines of the TRIGRS code (Baum et al. 2008) devoted to hydrologic process modelling have been modified. The closed-form solution by Srivastava & Yeh (1991) implemented in TRIGRS was replaced by a numerical solution of the mass balance equation governing the infiltration process. A parametric analysis was carried out by varying the permeability ratio between the two layers, with the aim of examining the influence of this parameter on the pore-pressure distribution along the vertical profile. As expected, as the permeability ratio increases, the underlying layer tends to behave as an impervious boundary. This increases the chance that only the most superficial soil layer fails. An analysis of routine performance and efficiency was also carried out to investigate the response of the model to different tolerances, different time steps of the integration procedure, and different spatial discretizations along the vertical profile.

  15. ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION

    SciTech Connect

    Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie

    2015-05-15

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  16. Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad

    2015-05-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  17. Vacuum Fluctuation Force on a Rigid Casimir Cavity in de Sitter and Schwarzschild-De Sitter Space-Time

    NASA Astrophysics Data System (ADS)

    Chen, Xiang

    2012-11-01

    We investigate the net force on a rigid Casimir cavity generated by vacuum fluctuations of the electromagnetic field in three cases: de Sitter space-time, de Sitter space-time with a weak gravitational field, and Schwarzschild-de Sitter space-time. In de Sitter space-time the resulting net force follows the inverse-square law, but unfortunately it is too weak to be measurable due to the large radius of the universe. By introducing a weak gravitational field into the de Sitter space-time, we find that the net force can now be split into two parts: one is the gravitational force due to the induced effective mass between the two plates, and the other is generated by the metric structure of de Sitter space-time. In order to investigate the vacuum fluctuation force on the rigid cavity under a strong gravitational field, we perform a similar analysis in Schwarzschild-de Sitter space-time, and results are obtained in three different limits. The most interesting case is when the cavity approaches the horizon of a black hole: the inverse-square law is recovered, and the repulsive force due to the negative energy/mass of the cavity now has an observable strength. More importantly, the force changes from repulsive to attractive when the cavity crosses the event horizon, so that the energy/mass of the cavity switches sign, which suggests an unusual time direction inside the event horizon.

  18. The distribution and chemical coding of intramural neurons supplying the porcine stomach - the study on normal pigs and on animals suffering from swine dysentery.

    PubMed

    Kaleczyc, J; Klimczuk, M; Franke-Radowiecka, A; Sienkiewicz, W; Majewski, M; Łakomy, M

    2007-06-01

    The present study was designed to investigate the expression of biologically active substances by intramural neurons supplying the stomach in normal (control) pigs and in pigs suffering from dysentery. Eight juvenile female pigs were used. Both dysenteric (n = 4; inoculated with Brachyspira hyodysenteriae) and control (n = 4) animals were deeply anaesthetized, transcardially perfused with buffered paraformaldehyde, and tissue samples comprising all layers of the wall of the ventricular fundus were collected. The cryostat sections were processed for double-labelling immunofluorescence to study the distribution of the intramural nerve structures (visualized with antibodies against protein gene-product 9.5) and their chemical coding using antibodies against vesicular acetylcholine (ACh) transporter (VAChT), nitric oxide synthase (NOS), galanin (GAL), vasoactive intestinal polypeptide (VIP), somatostatin (SOM), Leu(5)-enkephalin (LENK), substance P (SP) and calcitonin gene-related peptide (CGRP). In both inner and outer submucosal plexuses of the control pigs, the majority of neurons were SP (55% and 58%, respectively)- or VAChT (54%)-positive. Many neurons also stained for CGRP (43% and 45%) or GAL (20% and 18%), and solitary perikarya were NOS-, SOM- or VIP-positive. The myenteric plexus neurons stained for NOS (20%), VAChT (15%), GAL (10%), VIP (7%), SP (6%) or CGRP (solitary neurons), but they were SOM-negative. No intramural neurons immunoreactive to LENK were found. The most remarkable difference in the chemical coding of enteric neurons between the control and dysenteric pigs was a markedly increased number of GAL- and VAChT-positive nerve cells (up to 61% and 85%, respectively) in the submucosal plexuses of the infected animals. The present results suggest that GAL and ACh have a specific role in local neural circuits of the inflamed porcine stomach in the course of swine dysentery.

  19. The impact of space-time speckle to the resolution in range and azimuth direction on synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Xu, Qian; Zhou, Yu; Sun, Jianfeng; Zhi, Ya'nan; Ma, Xiaoping; Sun, Zhiwei; Lu, Dong; Liu, Liren

    2013-09-01

    As synthetic aperture imaging ladar (SAIL) employs a linearly chirped laser signal, it is inevitably affected by the space-time varying speckle effect. In many SAIL two-dimensional reconstructed images, the laser speckle effect severely reduces the image quality. In this paper, we analyze and simulate the influence of the space-time speckle effect on resolution-element imaging in both the range and azimuth directions. Expressions for the two-dimensional data collection containing the space-time speckle effect are obtained, and computer simulation results of the resolution degradation in both the range and cross-range directions are presented.

  20. Modeling of 3d Space-time Surface of Potential Fields and Hydrogeologic Modeling of Nuclear Waste Disposal Sites

    NASA Astrophysics Data System (ADS)

    Shestopalov, V.; Bondarenko, Y.; Zayonts, I.; Rudenko, Y.

    extracted from the total vertical and horizontal gradients, respectively, both shaded from 5° northeast to 355° northwest. The dip of the multi-layer surfaces indicates the down-"gradient" direction in the fields. The methodology of 3D STSI is based on the analysis of the vertical and horizontal anisotropy of the gravity and magnetic fields, as well as on a multi-layer 3D space-time surface model (3D STSM) of the stress fields. The 3D STSM is a multi-layer topological structure of lineaments or gradients (edges) and surfaces calculated from uniform matrices of the geophysical fields. One of the information components of the stress-field characteristics is the aspects and slopes for compressive and tensile stresses. Overlaying the 3D STSI and lineaments with maps of multi-layer gradients enables the creation of a highly reliable 3D Space-Time Kinematic Model (3D STKM). The analysis of the 3D STKM included: the space-time reconstruction of force directions and the strain distribution scheme during the formation of geological structures and structural paragenesis (lineaments) of potential fields; and the prediction of the actual location of expected tectonic dislocations, zones of rock fracturing and disintegration, and mass-stable blocks. Based on these data, the 3D STSM are drawn, which reflect the geodynamics of territory development on the basis of paleotectonic reconstruction of the successive activity stages that formed the present-day lithosphere. Thus the three-dimensional STSM allows the unmixing of geodynamic processes in any fixed space-time interval in coordinates x, y, t(z). The integration of the 3D STSM and 3D seismic models also enables the creation of structural-kinematic and geodynamic maps of the Earth's crust at different depths. As a result, the CNPP areas are classified into zones of compressive and tensile stresses characterized by enhanced rock permeability, and zones of consolidation with minimal rock permeability. In addition, the vertically alternating zones of