Censored Distributed Space-Time Coding for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Yiu, S.; Schober, R.
2007-12-01
We consider the application of distributed space-time coding in wireless sensor networks (WSNs). In particular, sensors use a common noncoherent distributed space-time block code (DSTBC) to forward their local decisions to the fusion center (FC), which makes the final decision. We show that the performance of distributed space-time coding is negatively affected by erroneous sensor decisions caused by observation noise. To overcome this error propagation, we introduce censored distributed space-time coding, where only reliable decisions are forwarded to the FC. The optimum noncoherent maximum-likelihood (ML) FC decision rule and a low-complexity, suboptimum generalized likelihood ratio test (GLRT) FC decision rule are derived, and the performance of the GLRT rule is analyzed. Based on this performance analysis, we derive a gradient algorithm for optimizing the local decision/censoring threshold. Numerical and simulation results show the effectiveness of the proposed censoring scheme, making distributed space-time coding a prime candidate for signaling in WSNs.
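The censoring idea above reduces to a simple per-sensor rule: transmit a hard decision only when the local log-likelihood ratio (LLR) clears a threshold. A minimal Python sketch, in which the LLR values and the threshold are illustrative assumptions (the paper optimizes the threshold with a gradient algorithm, which is not shown here):

```python
def censor_decision(llr, threshold):
    """Forward a hard decision only if the local log-likelihood
    ratio (LLR) is reliable enough; otherwise censor (stay silent)."""
    if abs(llr) >= threshold:
        return 1 if llr > 0 else 0   # reliable: transmit the decision
    return None                      # censored: sensor does not transmit

# Illustrative LLRs at four sensors; only two clear the threshold.
llrs = [2.3, -0.4, -3.1, 0.9]
decisions = [censor_decision(l, threshold=1.0) for l in llrs]
print(decisions)   # [1, None, 0, None]
```

Only the confident sensors transmit, so decision errors from noisy observations are kept out of the DSTBC codeword.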
NASA Astrophysics Data System (ADS)
Lenkeit, Florian; Wübben, Dirk; Dekorsy, Armin
2013-12-01
In this article, distributed interleave-division multiplexing space-time codes (dIDM-STCs) are applied to multi-user two-hop decode-and-forward (DF) relay networks. If decoding errors at the relays propagate to the destination, severe performance degradation can occur, since the original detection scheme for common IDM-STCs does not take any reliability information about the first hop into account. Here, a novel reliability-aware iterative detection scheme (RAID) for dIDM-STCs is proposed. This new detection scheme takes the decoding reliability of the relays for each user into account in the detection at the destination. Performance evaluations show that the proposed RAID scheme clearly outperforms the original detection scheme, and that in certain scenarios it even achieves better performance than adaptive relaying schemes.
Coding space-time stimulus dynamics in auditory brain maps
Wang, Yunyan; Gutfreund, Yoram; Peña, José L.
2014-01-01
Sensory maps are often distorted representations of the environment, in which ethologically important ranges are magnified. The implications of a biased representation extend beyond the increased acuity conferred by dedicating more neurons to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparisons across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high-order processing can be approached in the map of auditory space of the owl's midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior. PMID:24782781
A High-Rate Space-Time Block Code with Full Diversity
NASA Astrophysics Data System (ADS)
Gao, Zhenzhen; Zhu, Shihua; Zhong, Zhimeng
A new high-rate space-time block code (STBC) with full transmit diversity gain for four transmit antennas, based on a generalized Alamouti code structure, is proposed. The proposed code has lower maximum likelihood (ML) decoding complexity than the Double ABBA scheme. Constellation rotation is used to maximize the diversity product. With the optimally rotated constellations, the proposed code significantly outperforms known high-rate STBCs of similar complexity and the same spectral efficiency.
CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution
NASA Astrophysics Data System (ADS)
Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo
2012-02-01
CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.
Novel space-time trellis codes for free-space optical communications using transmit laser selection.
García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2015-09-21
In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s·Hz), a new code design criterion based on the use of the largest order statistics is proposed here for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the L available lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and to pointing error effects, following a misalignment fading model in which the effects of beam width, detector size, and jitter variance are considered. The results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results further confirm the analytical results. PMID:26406626
A New Quaternion Design for Space-Time-Polarization Block Code with Full Diversity
NASA Astrophysics Data System (ADS)
Ma, Huanfei; Kan, Haibin; Imai, Hideki
Construction of quaternion designs for Space-Time-Polarization Block Codes (STPBCs) is a hot but difficult topic. This letter introduces a novel way to construct high-dimensional quaternion designs from any existing low-dimensional quaternion orthogonal designs (QODs) for STPBCs, while preserving the merits of the original QODs, such as full diversity and simple decoding. Furthermore, it provides a specific scheme to achieve full diversity and maximized coding gain by signal constellation rotation on the polarization plane.
A novel repetition space-time coding scheme for mobile FSO systems
NASA Astrophysics Data System (ADS)
Li, Ming; Cao, Yang; Li, Shu-ming; Yang, Shao-wen
2015-03-01
Considering the influence of more random atmospheric turbulence, worse pointing errors, and a highly dynamic link on the transmission performance of mobile multiple-input multiple-output (MIMO) free-space optics (FSO) communication systems, this paper establishes a channel model for the mobile platform. Based on the combination of Alamouti space-time coding and time-hopping ultra-wideband (TH-UWB) communications, a novel repetition space-time coding (RSTC) method for mobile 2×2 free-space optical communications with pulse position modulation (PPM) is developed. In particular, two decoding methods, equal gain combining (EGC) maximum likelihood detection (MLD) and correlation matrix detection (CMD), are derived. When a quasi-static fading and weak turbulence channel model is considered, simulation results show that, whether or not the channel state information (CSI) is known, the coded system achieves significantly better symbol error rate (SER) performance than the uncoded one. In other words, transmit diversity can be achieved while conveying the information only through the time delays of the modulated signals transmitted from different antennas. CMD achieves almost the same signal-combining effect as maximal ratio combining (MRC). However, when the channel correlation increases, the SER performance of the coded 2×2 system degrades significantly.
Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea
2010-04-01
The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.
Mei, Zhe; Wu, Tsung-Feng; Pion-Tonachini, Luca; Qiao, Wen; Zhao, Chao; Liu, Zhiwen; Lo, Yu-Hwa
2011-09-01
An "optical space-time coding method" was applied to microfluidic devices to detect the forward and large angle light scattering signals for unlabelled bead and cell detection. Because of the enhanced sensitivity by this method, silicon pin photoreceivers can be used to detect both forward scattering (FS) and large angle (45-60°) scattering (LAS) signals, the latter of which has been traditionally detected by a photomultiplier tube. This method yields significant improvements in coefficients of variation (CV), producing CVs of 3.95% to 10.05% for FS and 7.97% to 26.12% for LAS with 15 μm, 10 μm, and 5 μm beads. These are among the best values ever demonstrated with microfluidic devices. The optical space-time coding method also enables us to measure the speed and position of each particle, producing valuable information for the design and assessment of microfluidic lab-on-a-chip devices such as flow cytometers and complete blood count devices. PMID:21915241
Space-Time Coded MC-CDMA: Blind Channel Estimation, Identifiability, and Receiver Design
NASA Astrophysics Data System (ADS)
Sun, Wei; Li, Hongbin
2003-12-01
Integrating the strengths of multicarrier (MC) modulation and code division multiple access (CDMA), MC-CDMA systems are of great interest for future broadband transmissions. This paper considers the problem of channel identification and signal combining/detection schemes for MC-CDMA systems equipped with multiple transmit antennas and space-time (ST) coding. In particular, a subspace-based blind channel identification algorithm is presented. Identifiability conditions are examined and specified which guarantee unique and perfect (up to a scalar) channel estimation when knowledge of the noise subspace is available. Several popular single-user signal combining schemes, namely the maximum ratio combining (MRC) and the equal gain combining (EGC), which are often utilized in conventional single-transmit-antenna MC-CDMA systems, are extended to the present ST-coded MC-CDMA (STC-MC-CDMA) system to perform joint combining and decoding. In addition, a linear multiuser minimum mean-squared error (MMSE) detection scheme is presented, which is shown to outperform the MRC and EGC at the cost of somewhat increased computational complexity. Numerical examples are presented to evaluate and compare the proposed channel identification and signal detection/combining techniques.
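The two single-user combiners compared above differ only in their branch weights: MRC weights each branch by the conjugate channel gain, while EGC only co-phases the branches. A minimal single-symbol sketch (the channel gains are illustrative assumptions; the paper applies such combiners per subcarrier in the ST-coded MC-CDMA system):

```python
import numpy as np

def mrc(y, h):
    """Maximum ratio combining: weight each receive branch by the
    conjugate of its channel gain (maximizes post-combining SNR)."""
    return np.vdot(h, y)                 # sum_k conj(h_k) * y_k

def egc(y, h):
    """Equal gain combining: co-phase the branches but weight them
    equally (only the channel phases are needed)."""
    return np.vdot(h / np.abs(h), y)

# Toy example: one BPSK symbol s = +1 over two branches, no noise.
s = 1.0
h = np.array([0.8 + 0.6j, 1.2 - 0.5j])  # illustrative channel gains
y = h * s                                # received branch samples
print(np.sign(mrc(y, h).real), np.sign(egc(y, h).real))   # 1.0 1.0
```

In the noise-free case both combiners recover the symbol; they differ once the branch SNRs are unequal, which is where MRC's amplitude weighting pays off.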
Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.
Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K
2014-02-01
Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution. PMID:24356347
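The hardware constraint the authors adhere to, each pixel integrating the scene through its own on/off exposure pattern, can be mimicked in a few lines. A toy sketch in which each pixel is open for exactly one frame (a simplifying assumption; the actual exposure patterns and the dictionary-based reconstruction are beyond this snippet):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy space-time volume: T frames of an H x W scene.
T, H, W = 8, 4, 4
video = rng.random((T, H, W))

# Pixel-wise coded exposure: each pixel is "open" during one randomly
# chosen frame (illustrative; real masks span longer exposure windows).
open_frame = rng.integers(0, T, size=(H, W))
mask = np.zeros((T, H, W))
mask[open_frame, np.arange(H)[:, None], np.arange(W)] = 1.0

# The sensor integrates the masked volume into a single coded image.
coded_image = (mask * video).sum(axis=0)
print(coded_image.shape)   # (4, 4)
```

Reconstruction then amounts to recovering the T x H x W volume from this single H x W measurement using the learned overcomplete dictionary as a sparsity prior.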
Robust image transmission over MIMO space-time coded wireless systems
NASA Astrophysics Data System (ADS)
Song, Daewon; Chen, Chang W.
2006-05-01
We present in this paper an integrated robust image transmission scheme using space-time block codes (STBC) over multi-input multi-output (MIMO) wireless systems. First, in order to achieve an excellent error-resilience capability, multiple bitstreams are generated based on wavelet trees along the spatial orientations. The spatial-orientation trees in the wavelet domain are individually encoded using SPIHT, so error propagation is limited to within each bitstream. Then, Reed-Solomon (R-S) codes are adopted as forward error correction (FEC) to combat transmission errors over error-prone wireless channels and to detect residual errors, so as to avoid error propagation within each bitstream. FEC can reduce the bit error rate at the expense of an increased data rate. However, it is often difficult to design an optimal FEC scheme for a time-varying multi-path fading channel that may fluctuate beyond the capacity of the adopted FEC scheme. Therefore, to overcome this difficulty, we propose to alleviate the effect of multi-path fading by employing STBC for spatial diversity, with the assumption that channel state information (CSI) is perfectly estimated at the receiver. Experimental results demonstrate that the proposed scheme achieves much improved performance in terms of PSNR over a Rayleigh flat fading channel as compared with a wireless system without spatial diversity.
Computer code for space-time diagnostics of nuclear safety parameters
Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.
2012-07-01
The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of reactor cores and databases for RBMK-1000, on the basis of analytical methods for interrelating nuclear safety parameters. The code algorithms are based on the analysis of deviations between physically measured values and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between the measured and calculated signals indicate a mismatch between the behavior of the physical device and its simulator. The diagnostics system can solve the following problems: identifying the occurrence and timing of inconsistent results, localizing failures, and identifying and quantifying the causes of inconsistencies. These problems can be effectively solved only when the computer code works in real time, which raises the requirements on code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1, is used for the neutron-physical calculation in ECRAN 3D. (authors)
Riemer, Martin; Diersch, Nadine; Bublatzky, Florian; Wolbers, Thomas
2016-04-01
The mental representations of space, time, and number magnitude are inherently linked. The right posterior parietal cortex (PPC) has been suggested to contain a general magnitude system that underlies the overlap between various perceptual dimensions. However, comparative studies including spatial, temporal, and numerical dimensions are missing. In a unified paradigm, we compared the impact of right PPC inhibition on associations with spatial response codes (i.e., Simon, SNARC, and STARC effects) and on congruency effects between space, time, and numbers. Prolonged cortical inhibition was induced by continuous theta-burst stimulation (cTBS), a protocol for transcranial magnetic stimulation (TMS), at the right intraparietal sulcus (IPS). Our results show that congruency effects, but not response code associations, are affected by right PPC inhibition, indicating different neuronal mechanisms underlying these effects. Furthermore, the results demonstrate that interactions between space and time perception are reflected in congruency effects, but not in an association between time and spatial response codes. Taken together, these results indicate that the congruency between purely perceptual dimensions is processed in PPC areas along the IPS, while the congruency between percepts and behavioral responses is independent of this region. PMID:26808331
A new MIMO SAR system based on Alamouti space-time coding scheme and OFDM-LFM waveform design
NASA Astrophysics Data System (ADS)
Shi, Xiaojin; Zhang, Yunhua
2015-10-01
In recent years, multi-input multi-output (MIMO) radar has attracted much attention from researchers and institutions. MIMO radar transmits multiple signals and receives the backscattered signals reflected from the targets. In contrast with conventional phased-array radar and SAR systems, MIMO radar has significant potential advantages for achieving higher system SNR, more accurate parameter estimation, or higher resolution radar images. In this paper, we propose a new MIMO SAR system based on the Alamouti space-time coding scheme and orthogonal frequency division multiplexing linear frequency modulated (OFDM-LFM) waveforms for obtaining a higher system signal-to-noise ratio (SNR) and better range resolution of the SAR image.
Space-time signal processing for distributed pattern detection in sensor networks
NASA Astrophysics Data System (ADS)
Paffenroth, Randy C.; Du Toit, Philip C.; Scharf, Louis L.; Jayasumana, Anura P.; Banadara, Vidarshana; Nong, Ryan
2012-05-01
We present a theory and algorithm for detecting and classifying weak, distributed patterns in network data that provide actionable information with quantifiable measures of uncertainty. Our work demonstrates the effectiveness of space-time inference on graphs, robust matrix completion, and second order analysis for the detection of distributed patterns that are not discernible at the level of individual nodes. Motivated by the importance of the problem, we are specifically interested in detecting weak patterns in computer networks related to Cyber Situational Awareness. Our focus is on scenarios where the nodes (terminals, routers, servers, etc.) are sensors that provide measurements (of packet rates, user activity, central processing unit usage, etc.) that, when viewed independently, cannot provide a definitive determination of the underlying pattern, but when fused with data from across the network both spatially and temporally, the relevant patterns emerge. The approach is applicable to many types of sensor networks including computer networks, wireless networks, mobile sensor networks, and social networks, as well as in contexts such as databases and disease outbreaks.
C-Field Cosmological Model for Barotropic Fluid Distribution with Varying Λ in FRW Space Time
NASA Astrophysics Data System (ADS)
Bali, Raj; Saraf, Seema
2013-05-01
A cosmological model for barotropic fluid distribution in creation-field cosmology with varying cosmological constant (Λ) in FRW space-time is investigated. To get a deterministic model satisfying the conservation equation, we have assumed Λ = 1/R², as considered by Chen and Wu (Phys. Rev. D 41:695, 1990), where R is the scale factor. We find that the creation field (C) increases with time, which matches the result of Hoyle-Narlikar theory (Hoyle and Narlikar, Proc. R. Astron. Soc. A 282:178, 1964); that Λ ∼ 1/t²; that the spatial volume increases with time; and that the deceleration parameter q < 0, which shows that the universe is accelerating. This result matches recent observations. The inflationary scenario exists in the models, and the results so obtained match astronomical observations. The various special cases of the model (21), viz. the dust-filled universe (γ = 0), the radiation-dominated era (ρ = 3p), and the stiff-fluid universe (ρ = p), are also discussed. The models are free from the horizon problem.
Blind and semi-blind ML detection for space-time block-coded OFDM wireless systems
NASA Astrophysics Data System (ADS)
Zaib, Alam; Al-Naffouri, Tareq Y.
2014-12-01
This paper investigates the joint maximum likelihood (ML) data detection and channel estimation problem for Alamouti space-time block-coded (STBC) orthogonal frequency-division multiplexing (OFDM) wireless systems. Joint ML estimation and data detection is generally considered a hard combinatorial optimization problem. We propose an efficient low-complexity algorithm based on a branch-estimate-bound strategy that renders the exact joint ML solution. However, the computational complexity of the blind algorithm becomes critical at low signal-to-noise ratio (SNR) as the number of OFDM carriers and the constellation size increase, especially in multiple-antenna systems. To overcome this problem, a semi-blind algorithm based on a new framework for reducing the complexity is proposed, relying on subcarrier reordering and decoding the carriers with different levels of confidence using a suitable reliability criterion. In addition, it is shown that by utilizing the inherent structure of Alamouti coding, either an estimation performance improvement or a complexity reduction can be achieved. The proposed algorithms can reliably track the wireless Rayleigh fading channel without requiring any channel statistics. Simulation results, compared against perfect coherent detection, demonstrate the effectiveness of the blind and semi-blind algorithms over frequency-selective channels with different fading characteristics.
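The Alamouti STBC whose structure the algorithms above exploit has a simple closed form. A sketch of the encoder and the standard linear ML combiner with perfect CSI (the symbols and channel gains are illustrative; the paper's setting is precisely that the channel is unknown, so this is only the coherent baseline):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti 2x2 space-time block code: rows are time slots,
    columns are transmit antennas."""
    return np.array([[s1,            s2],
                     [-np.conj(s2),  np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear ML combining for one receive antenna with perfect CSI.
    r1, r2: received samples in the two time slots."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

# Noise-free sanity check with QPSK-like symbols.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = 0.7 - 0.2j, -0.4 + 0.9j        # illustrative channel gains
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]        # time slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]        # time slot 2
print(alamouti_decode(r1, r2, h1, h2))  # recovers (s1, s2)
```

The orthogonality that makes this combiner decouple the two symbols is the same structure the semi-blind algorithm uses to trade estimation accuracy against complexity.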
2011-01-01
Background Hemorrhagic fever with renal syndrome (HFRS) is a rodent-borne disease caused by Hantavirus, characterized by fever, hemorrhage, kidney damage, and hypotension. HFRS is recognized as a notifiable public health problem in China, and Liaoning Province is one of the most seriously affected areas, with the most cases in China. It is necessary to investigate the spatial, temporal, and space-time distribution of confirmed cases of HFRS in Liaoning Province, China for future research into risk factors. Methods A cartogram map was constructed, and spatial autocorrelation analysis and spatial, temporal, and space-time cluster analyses were conducted for Liaoning Province, China over the period 1988-2001. Results With the number of permutations set to 999, Moran's I was 0.3854 and was significant at the 0.001 significance level. Spatial cluster analysis identified one most likely cluster and four secondary likely clusters. Temporal cluster analysis identified 1998-2001 as the most likely cluster. Space-time cluster analysis identified one most likely cluster and two secondary likely clusters. Conclusions Spatial, temporal, and space-time scan statistics may be useful in monitoring the occurrence of HFRS in Liaoning Province, China. The results of this study can not only help health departments develop better prevention strategies but also potentially increase the effectiveness of public health interventions. PMID:21867563
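Moran's I, the global spatial autocorrelation statistic reported above, is a weighted cross-product of deviations from the mean. A minimal sketch on a toy four-region map (the weight matrix and values are illustrative assumptions, not the Liaoning data):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I spatial autocorrelation statistic.
    x: values at n locations; w: n x n spatial weight matrix."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()            # deviations from the mean
    n = len(x)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Toy example: 4 regions on a line; neighbours share an edge.
x = np.array([1.0, 2.0, 3.0, 4.0])        # spatially ordered values
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(x, w), 3))   # 0.333 (positive autocorrelation)
```

The significance test used in the study then compares the observed I against a permutation distribution (999 permutations in the abstract).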
Angular distribution of cosmological parameters as a probe of space-time inhomogeneities
NASA Astrophysics Data System (ADS)
Carvalho, C. Sofia; Marques, Katrine
2016-08-01
We develop a method based on the angular distribution on the sky of cosmological parameters to probe the inhomogeneity of large-scale structure and cosmic acceleration. We demonstrate this method on the largest type Ia supernova (SN) data set available to date, as compiled by the Joint Light-curve Analysis (JLA) collaboration and, hence, consider the cosmological parameters that affect the luminosity distance. We divide the SN sample into equal surface area pixels and estimate the cosmological parameters that minimize the chi-square of the fit to the distance modulus in each pixel, hence producing maps of the cosmological parameters {Ω_M, Ω_Λ, H_0}. In poorly sampled pixels, the measured fluctuations are mostly due to an inhomogeneous coverage of the sky by the SN surveys; in contrast, in well-sampled pixels, the measurements are robust enough to suggest a real fluctuation. We also measure the anisotropy of the parameters by computing the power spectrum of the corresponding maps of the parameters up to ℓ = 3. For an analytical toy model of an inhomogeneous ensemble of homogeneous pixels, we derive the backreaction term in the deceleration parameter due to the fluctuations of H_0 across the sky and measure it to be of order 10^-3 times the corresponding average over the pixels in the absence of backreaction. We conclude that, for the toy model considered, backreaction is not a viable dynamical mechanism to emulate cosmic acceleration.
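The per-pixel estimation described above amounts to minimizing a chi-square of the distance modulus over the cosmological parameters. A toy sketch that grid-searches H_0 alone in a flat model on synthetic, noise-free data (the JLA analysis fits all parameters with the full covariance matrix; the redshifts, grid, and equal-error assumption here are all illustrative):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def distance_modulus(z, omega_m, omega_l, h0):
    """mu(z) = 5 log10(d_L / Mpc) + 25 for a flat FLRW model
    (omega_m + omega_l = 1 assumed for simplicity)."""
    zs = np.linspace(0.0, z, 1001)
    f = 1.0 / np.sqrt(omega_m * (1 + zs) ** 3 + omega_l)
    integral = (f[:-1] + f[1:]).sum() * (zs[1] - zs[0]) / 2   # trapezoid rule
    dc = C_KM_S / h0 * integral       # comoving distance, Mpc
    return 5 * np.log10((1 + z) * dc) + 25

def best_h0(zs, mus, omega_m=0.3, omega_l=0.7):
    """Grid-search H0 minimizing an (unweighted) chi-square in one pixel."""
    grid = np.arange(60.0, 80.0, 0.5)
    chi2 = [sum((mu - distance_modulus(z, omega_m, omega_l, h0)) ** 2
                for z, mu in zip(zs, mus)) for h0 in grid]
    return grid[int(np.argmin(chi2))]

# Synthetic pixel generated with H0 = 70: the fit recovers it exactly.
zs = [0.1, 0.3, 0.5]
mus = [distance_modulus(z, 0.3, 0.7, 70.0) for z in zs]
print(best_h0(zs, mus))   # 70.0
```

Repeating this fit pixel by pixel yields the parameter maps whose low-ℓ power spectra quantify the anisotropy.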
Seismicity along the Main Marmara Fault, Turkey: from space-time distribution to repeating events
NASA Astrophysics Data System (ADS)
Schmittbuhl, Jean; Karabulut, Hayrullah; Lengliné, Olivier; Bouchon, Michel
2016-04-01
The North Anatolian Fault (NAF) poses a significant hazard for the large cities surrounding the Marmara Sea region, particularly the megalopolis of Istanbul. Indeed, the NAF presently hosts a long unruptured segment below the Sea of Marmara. This seismic gap is approximately 150 km long and corresponds to the Main Marmara Fault (MMF). The seismicity along the MMF below the Marmara Sea is analyzed here over the 2007-2012 period to provide insights into the recent evolution of this important regional seismic gap. High-precision locations show that seismicity varies strongly along strike and depth, providing fine details of the fault behavior that are inaccessible from geodetic inversions. The activity clusters strongly at the transitions between basins. The Central basin shows significant seismicity located below the shallow locking depth inferred from GPS measurements. Its b-value is low and the average seismic slip is high. Interestingly, we also found several long-term repeating earthquakes in this domain. Using a template-matching technique, we identified two new families of repeaters: a first family that typically belongs to aftershock sequences, and a second family of long-lasting repeaters with a multi-month recurrence period. All observations are consistent with deep creep of this segment. On the contrary, the Kumburgaz basin at the center of the fault shows sparse seismicity with the hallmarks of a locked segment. In the eastern Marmara Sea, the seismicity distribution along the Princes Island segment in the Cinarcik basin is consistent with the geodetic locking depth of 10 km and a low contribution to the regional seismic energy release. The assessment of the locked segment areas provides an estimate of about 7.3 for the magnitude of the main forthcoming event, assuming that the rupture does not enter significantly into creeping domains.
Space-time distribution of afterslip following the 2009 L'Aquila earthquake
NASA Astrophysics Data System (ADS)
D'Agostino, N.; Cheloni, D.; Fornaro, G.; Giuliani, R.; Reale, D.
2012-02-01
The inversion of multitemporal DInSAR and GPS measurements unravels the coseismic and postseismic (afterslip) slip distributions associated with the 2009 M_W 6.3 L'Aquila earthquake and provides insights into the rheological properties and long-term behavior of the responsible structure, the Paganica fault. Well-resolved patches of high postseismic slip (10-20 cm) appear to surround the main coseismic patch (maximum slip ≈1 m) through the entire seismogenic layer above the hypocenter, without any obvious depth-dependent control. Time series of postseismic displacement are well reproduced by an exponential function with best-fit decay constants in the range of 20-40 days. A sudden discontinuity in the evolution of released postseismic moment ≈130 days after the main shock does not correlate with independent seismological and geodetic data and is attributed to residual noise in the InSAR time series. The data are unable to resolve migration of afterslip along the fault, probably because of the time interval (six days) between the main shock and the first radar acquisition. Surface fractures observed along the Paganica fault follow the steepest gradients of postseismic line-of-sight satellite displacements and are consistent with a sudden, delayed failure of the shallow layer in response to upward tapering of slip. The occurrence of afterslip at various levels through the entire seismogenic layer argues against exclusively depth-dependent variations of frictional properties on the fault, supporting the hypothesis of significant horizontal frictional heterogeneities and/or geometrical complexities. We support the hypothesis that such heterogeneities and complexities may be at the origin of the long-term variable behavior suggested by paleoseismological studies. Rupture of fault patches with dimensions similar to that activated in 2009 appears to have a ≈500 year recurrence interval, as documented by paleoseismic and historical studies.
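The exponential form fitted to the postseismic time series above is u(t) = U(1 - exp(-t/τ)) with τ of 20-40 days. A minimal sketch (the total afterslip amplitude and τ value are illustrative assumptions within the ranges quoted in the abstract):

```python
import numpy as np

def afterslip(t_days, u_total, tau_days):
    """Postseismic displacement u(t) = U * (1 - exp(-t / tau)),
    the exponential form used to fit the GPS/DInSAR time series."""
    return u_total * (1.0 - np.exp(-np.asarray(t_days, float) / tau_days))

# With tau = 30 days, ~63% of the afterslip is released after one tau
# and ~95% after three.
u = afterslip([30, 90], u_total=0.15, tau_days=30.0)   # metres
print(np.round(u, 3))
```

Fitting τ per pixel of the displacement maps is what yields the 20-40 day range of decay constants reported.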
UNIX code management and distribution
Hung, T.; Kunz, P.F.
1992-09-01
We describe a code management and distribution system based on tools freely available for UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS-mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site, such that remote developers are true peers in the code development process.
Typical BWR/4 MSIV closure ATWS analysis using RAMONA-3B code with space-time neutron kinetics
Neymotin, L.; Saha, P.
1984-01-01
A best-estimate analysis of a typical BWR/4 MSIV closure ATWS has been performed using the RAMONA-3B code with three-dimensional neutron kinetics. All safety features, namely, the safety and relief valves, recirculation pump trip, high pressure safety injections and the standby liquid control system (boron injection), were assumed to work as designed. No other operator action was assumed. The results show a strong spatial dependence of reactor power during the transient. After the initial peak of pressure and reactor power, the reactor vessel pressure oscillated between the relief valve set points, and the reactor power oscillated between 20% and 50% of the steady-state power until the hot shutdown condition was reached at approximately 1400 seconds. The suppression pool bulk water temperature at this time was predicted to be approximately 96°C (205°F). In view of code performance and reasonable computer running time, the RAMONA-3B code is recommended for further best-estimate analyses of ATWS-type events in BWRs.
Beal, Jacob; Viroli, Mirko
2015-07-28
Computation increasingly takes place not on an individual device, but distributed throughout a material or environment, whether it be a silicon surface, a network of wireless devices, a collection of biological cells or a programmable material. Emerging programming models embrace this reality and provide abstractions inspired by physics, such as computational fields, that allow such systems to be programmed holistically, rather than in terms of individual devices. This paper aims to provide a unified approach for the investigation and engineering of computations programmed with the aid of space-time abstractions, by bringing together a number of recent results, as well as to identify critical open problems. PMID:26078346
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1992-01-01
Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.
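The filter-as-weight idea can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: the function names, the three-tap filters, and the logistic output stage are our assumptions. Each synapse stores a short tap vector and convolves it with the input's recent history, which is what gives the network its distributed temporal memory:

```python
import math

# Hypothetical sketch (not the authors' implementation): each synaptic
# "weight" is a short FIR filter applied to the input's recent history,
# giving the neuron a distributed temporal memory.
def fir_synapse(history, taps):
    """Dot product of the most recent samples (newest first) with the taps."""
    return sum(h * t for h, t in zip(reversed(history), taps))

def spacetime_neuron(input_histories, tap_sets, bias=0.0):
    """Sum all filtered synaptic contributions, then squash logistically."""
    activation = bias + sum(fir_synapse(h, t)
                            for h, t in zip(input_histories, tap_sets))
    return 1.0 / (1.0 + math.exp(-activation))

# Two inputs, each remembered over the last three time steps
histories = [[0.0, 1.0, 0.5], [1.0, 0.0, 0.25]]
taps = [[0.5, 0.25, 0.1], [0.2, 0.3, 0.4]]
out = spacetime_neuron(histories, taps)
```

Replacing each tap vector by a single scalar recovers an ordinary back-propagation neuron, which is the sense in which digital filters generalize the conventional synaptic weights.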
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Draper, Terrence; Horváth, Ivan; Streuer, Thomas
2011-08-01
We propose a framework for quantitative evaluation of dynamical tendency for polarization in an arbitrary random variable that can be decomposed into a pair of orthogonal subspaces. The method uses measures based on comparisons of given dynamics to its counterpart with statistically independent components. The formalism of previously considered X-distributions is used to express the aforementioned comparisons, in effect putting the former approach on solid footing. Our analysis leads to the definition of a suitable correlation coefficient with clear statistical meaning. We apply the method to the dynamics induced by pure-glue lattice QCD in local left-right components of overlap Dirac eigenmodes. It is found that, in finite physical volume, there exists a non-zero physical scale in the spectrum of eigenvalues such that eigenmodes at smaller (fixed) eigenvalues exhibit convex X-distribution (positive correlation), while at larger eigenvalues the distribution is concave (negative correlation). This chiral polarization scale thus separates a regime where dynamics enhances chirality relative to statistical independence from a regime where it suppresses it, and gives an objective definition to the notion of "low" and "high" Dirac eigenmode. We propose to investigate whether the polarization scale remains non-zero in the infinite volume limit, in which case it would represent a new kind of low energy scale in QCD.
NASA Astrophysics Data System (ADS)
Diakogianni, Georgia; Papadopoulos, Gerassimos; Fokaefs, Anna; Papageorgiou, Antonia; Triantafyllou, Ioanna
2015-04-01
We have compiled a new tsunami catalogue covering the entire European and Mediterranean (EM) region from pre-historical times up to the present. The catalogue is of increased completeness and homogeneity with respect to previous ones, containing more than 370 events with reliability assignments for all the events listed. New historical events were inserted, while revised parameters of historical tsunamigenic earthquakes were extensively adopted, particularly for the most active region of the eastern Mediterranean. In association with the catalogue, an inventory of tsunami impact was created, with the main attributes being the numbers of people killed and injured and the damage to buildings, vessels, cultivated land and other property. The inventory also includes a record of the tsunami environmental impact, such as soil erosion, geomorphological changes, boulder displacement and tsunami sediment deposits. Data on the tsunami impact were used to assign tsunami intensity on the 12-point Papadopoulos-Imamura (2001) scale for the majority of the events listed. The tsunami impact was studied in terms of its space and time distribution. In space, the tsunami impact was mapped in terms of tsunami intensity and impact zones were determined. The time distribution of the tsunami impact was examined for each of the impact zones. Leaving aside large pre-historical tsunamis, such as the one produced by the LBA or Minoan eruption of the Thera (Santorini) volcano, due to the lack of certain impact data, it has been found that the main impact comes from extreme tsunamigenic earthquakes, such as the ones of AD 365 in Crete, 551 in Lebanon, 1303 in Crete and 1755 in Lisbon. However, high impact may also occur from events of lower magnitude, such as the 1908 tsunami in the Messina straits and the 1956 tsunami in the South Aegean, which underlines the strong dependence of the impact on community exposure. Another important finding is that the cumulative impact of relatively moderate or even small
NASA Astrophysics Data System (ADS)
Scradeanu, Daniel; Mezincescu, Matei; Pagnejer, Mihaela
2010-05-01
The monitoring and control of the humidity distribution of unsaturated soil in time and space allows forecasting of two processes essential in the management of groundwater resources:
• recharge of aquifers;
• multiphase migration of fluids associated with groundwater.
In order to obtain a wide range of information on fluid migration in heterogeneous zones, a sensitive informatic system was developed. The main components of the system are:
• a rain gauge to measure rainfall in the experiment area;
• batteries of sensors recording temperature, electrical conductivity of water, and suction potential;
• a drip irrigation system;
• pressure transducers installed in boreholes made for hydrostatic level monitoring of aquifers;
• software and an adequate computer system.
The experiment was conducted in an agricultural area, and the results will be used to optimize the irrigation system in the context of global climate change affecting surface and underground water resources. Construction and installation of the monitoring system were preceded by the creation of a 3D lithologic model that allows processes to be differentiated by soil type. The placement of the sensor batteries was chosen based on preliminary results of a geophysical exploration that allowed assessment of the initial state of the monitored soil volume. The sensor battery configuration ensures measurement of temperature, electrical conductivity, moisture and suction potential at various depths and in real time. The data were integrated in a stochastic model that allows forecasting of fluid dynamics in the unsaturated zone and assessment of the uncertainties associated with different areas and periods of time.
A distributed particle simulation code in C++
Forslund, D.W.; Wingate, C.A.; Ford, P.S.; Junkins, J.S.; Pope, S.C.
1992-01-01
Although C++ has been successfully used in a variety of computer science applications, it has just recently begun to be used in scientific applications. We have found that the object-oriented properties of C++ lend themselves well to scientific computations by making maintenance of the code easier, by making the code easier to understand, and by providing a better paradigm for distributed memory parallel codes. We describe here aspects of developing a particle plasma simulation code using object-oriented techniques for use in a distributed computing environment. We initially designed and implemented the code for serial computation and then used the distributed programming toolkit ISIS to run it in parallel. In this connection we describe some of the difficulties presented by using C++ for doing parallel and scientific computation.
Distribution Coding in the Visual Pathway
Sanderson, A. C.; Kozak, W. M.; Calvert, T. W.
1973-01-01
Although a variety of types of spike interval histograms have been reported, little attention has been given to the spike interval distribution as a neural code and to how different distributions are transmitted through neural networks. In this paper we present experimental results showing spike interval histograms recorded from retinal ganglion cells of the cat. These results exhibit a clear correlation between spike interval distribution and stimulus condition at the retinal ganglion cell level. The averaged mean rates of the cells studied were nearly the same in light as in darkness whereas the spike interval histograms were much more regular in light than in darkness. We present theoretical models which illustrate how such a distribution coding at the retinal level could be “interpreted” or recorded at some higher level of the nervous system such as the lateral geniculate nucleus. Interpretation is an essential requirement of a neural code which has often been overlooked in modeling studies. Analytical expressions are derived describing the role of distribution coding in determining the transfer characteristics of a simple interaction model and of a lateral inhibition network. Our work suggests that distribution coding might be interpreted by simply interconnected neural networks such as relay cell networks, in general, and the primary thalamic sensory nuclei in particular. PMID:4697235
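As a toy illustration of the distinction drawn above (our construction, not the authors' recordings or model), the snippet below computes interspike intervals and their coefficient of variation, a standard statistic that separates a regular train from an irregular one even when the mean rates are comparable:

```python
# Toy illustration (our construction, not the authors' data): two spike
# trains whose interval *distributions* differ markedly in regularity.
def intervals(spike_times):
    """Interspike intervals of a sorted list of spike times."""
    return [b - a for a, b in zip(spike_times, spike_times[1:])]

def cv(xs):
    """Coefficient of variation (std/mean): ~0 for clock-like firing,
    near 1 for Poisson-like irregular firing."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / m

regular = [0.01 * i for i in range(100)]                       # "light": regular
irregular = [0.0, 0.002, 0.03, 0.031, 0.07, 0.12, 0.121, 0.2]  # "dark": bursty
```

A rate code alone would treat two such trains as similar; the interval distribution, as the article argues, carries the extra information.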
NASA Astrophysics Data System (ADS)
Garattini, Remo
In the context of a model of space-time foam made of N wormholes, we discuss the possibility of having a foam formed by different configurations. An equivalence between Schwarzschild and Schwarzschild-Anti-de Sitter wormholes in terms of Casimir energy is shown. An argument to discriminate which configuration could represent a foamy vacuum, coming from Schwarzschild black hole transition frequencies, is used. The case of a positive cosmological constant is also discussed. Finally, a discussion involving charged wormholes leads to the conclusion that they cannot be used to represent a ground state of the foamy type.
Distributed transform coding via source-splitting
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2012-12-01
Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal-transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach however requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.
NASA Astrophysics Data System (ADS)
Chapline, George
It has been shown that a nonlinear Schrödinger equation in 2+1 dimensions equipped with an SU(N) Chern-Simons gauge field can provide an exact description of certain self-dual Einstein spaces in the limit N → ∞. Ricci flat Einstein spaces can then be viewed as arising from a quantum pairing of the classical self-dual and anti-self-dual solutions. In this chapter, we will outline how this theory of empty space-time might be generalized to include matter and vacuum energy by transplanting the nonlinear Schrödinger equation used to construct Einstein spaces to the 25+1-dimensional Lorentzian Leech lattice. If the distinguished 2 spatial dimensions underlying the construction of Einstein spaces are identified with a hexagonal lattice section of the Leech lattice, the wave-function becomes an 11 × 11 matrix that can represent fermion and boson degrees of freedom (DOF) associated with 2-form and Yang-Mills gauge symmetries. The resulting theory of gravity and matter in 3+1 dimensions is not supersymmetric, which provides an entry for a vacuum energy. Indeed, in the case of a Lemaître cosmological model, the emergent space-time will naturally have a vacuum energy on the order of the observed cosmological constant.
Time coded distribution via broadcasting stations
NASA Technical Reports Server (NTRS)
Leschiutta, S.; Pettiti, V.; Detoma, E.
1979-01-01
The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide area coverage and inexpensive receivers, but the signals are radiated a limited number of times per day, are not usually available during the night, and no full and automatic synchronization of a remote clock is possible. As an attempt to overcome some of these problems, a time-coded signal with complete date information is disseminated by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.
Maia, M.D.
1981-03-01
The concept of contact between manifolds is applied to space-times of general relativity. For a given background space-time, a contact approximation of second order is defined and interpreted both from the point of view of a metric perturbation and of a higher order tangent manifold. In the first case, an application to the high frequency gravitational wave hypothesis is suggested. In the second case, a constant curvature tangent bundle is constructed and suggested as a means to define a ten-parameter local space-time symmetry.
The weight distribution and randomness of linear codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1989-01-01
Finding the weight distributions of block codes is a problem of theoretical and practical interest. Yet the weight distributions of most block codes are still unknown except for a few classes of block codes. Here, by using the inclusion and exclusion principle, an explicit formula is derived which enumerates the complete weight distribution of an (n,k,d) linear code using a partially known weight distribution. This expression is analogous to the Pless power-moment identities - a system of equations relating the weight distribution of a linear code to the weight distribution of its dual code. Also, an approximate formula for the weight distribution of most linear (n,k,d) codes is derived. It is shown that for a given linear (n,k,d) code over GF(q), the ratio of the number of codewords of weight u to the number of words of weight u approaches the constant Q = q^-(n-k) as u becomes large. A relationship between the randomness of a linear block code and the minimum distance of its dual code is given, and it is shown that most linear block codes with rigid algebraic and combinatorial structure also display certain random properties which make them similar to random codes with no structure at all.
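As a concrete toy instance of the quantity discussed, the snippet below brute-forces the complete weight distribution of a small binary linear code; the (7,4) Hamming code and its generator matrix are our example choices, not taken from the article:

```python
from itertools import product

# Toy example (our choice, not from the article): the complete weight
# distribution of the (7,4) Hamming code by brute-force enumeration of
# all 2^k codewords from a generator matrix G.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def weight_distribution(G):
    n = len(G[0])
    A = [0] * (n + 1)  # A[u] will count codewords of Hamming weight u
    for msg in product([0, 1], repeat=len(G)):
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G)]
        A[sum(codeword)] += 1
    return A

A = weight_distribution(G)
```

For codes too large to enumerate, formulas like the one derived in the article are exactly what replaces this brute-force count; the asymptotic ratio A_u / (number of weight-u words) tending to Q = q^-(n-k) can only be checked meaningfully on larger codes than this toy.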
Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets
NASA Technical Reports Server (NTRS)
Cheung, K-M.; Smyth, P.
1993-01-01
Revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
Optimal source codes for geometrically distributed integer alphabets
NASA Technical Reports Server (NTRS)
Gallager, R. G.; Van Voorhis, D. C.
1975-01-01
An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
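The optimal codes Gallager and Van Voorhis obtained for this geometric case are the Golomb codes, with power-of-two parameters giving the Rice subcodes mentioned in the previous entry. A minimal encoder sketch follows; the function name and the bit-string output convention are our assumptions:

```python
# Sketch of a Golomb encoder (our function name and bit-string convention);
# Gallager and Van Voorhis showed these codes are optimal for geometrically
# distributed non-negative integers, and m a power of two gives a Rice code.
def golomb_encode(n, m):
    """Encode non-negative integer n: unary quotient, then the remainder
    in plain binary (m a power of two) or truncated binary (otherwise)."""
    q, r = divmod(n, m)
    code = "1" * q + "0"                 # unary part: q ones, a terminating zero
    b = m.bit_length() - 1               # floor(log2(m))
    if (1 << b) == m:                    # power of two: plain b-bit remainder
        code += format(r, "0{}b".format(b)) if b else ""
    else:                                # general m: truncated binary remainder
        cutoff = (1 << (b + 1)) - m
        if r < cutoff:
            code += format(r, "0{}b".format(b))
        else:
            code += format(r + cutoff, "0{}b".format(b + 1))
    return code
```

For example, with m = 4 the integer 9 encodes as "11001": quotient 2 in unary ("110") followed by remainder 1 in two bits ("01"). Choosing m as a function of the geometric parameter is what the optimality proof prescribes.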
NASA Astrophysics Data System (ADS)
Meyers, Ronald E.; Deacon, Keith S.; Tunick, Arnold
2013-09-01
We report on an experimental demonstration of quantum imaging where the images are stored in both space and time. Quantum images of remote objects are produced with rotating ground glass induced chaotic laser light and two sensors measuring at different space-time points. Quantum images are observed to move depending on the time delay between the sensor measurements. The experiments provide a new testbed for exploring the time and space scale fundamental physics of quantum imaging and suggest new pathways for quantum information storage and processing. The moved quantum images are in fact new images that are stored in a space-time virtual memory process. The images are stored within the same quantum imaging data sets and thus quantum imaging can produce more information per photon measured than was previously realized.
NASA Astrophysics Data System (ADS)
Wong, Wing-Chun Godwin
This dissertation focused on Kant's conception of physical matter in the Opus postumum. In this work, Kant postulates the existence of an ether which fills the whole of space and time with its moving forces. Kant's arguments for the existence of an ether in the so-called Ubergang have been acutely criticized by commentators. Guyer, for instance, thinks that Kant pushes the technique of transcendental deduction too far in trying to deduce the empirical ether. In defense of Kant, I held that it is not the actual existence of the empirical ether, but the concept of the ether as a space-time filler that is subject to a transcendental deduction. I suggested that Kant is doing three things in the Ubergang: First, he deduces the pure concept of a space-time filler as a conceptual hybrid of the transcendental object and permanent substance to replace the category of substance in the Critique. Then he tries to prove the existence of such a space-time filler as a reworking of the First Analogy. Finally, he takes into consideration the empirical determinations of the ether by adding the concept of moving forces to the space-time filler. In reconstructing Kant's proofs, I pointed out that Kant is absolutely committed to the impossibility of action-at-a-distance. If we add this new principle of no-action-at-a-distance to the Third Analogy, the existence of a space-time filler follows. I argued with textual evidence that Kant's conception of ether satisfies the basic structure of a field: (1) the ether is a material continuum; (2) a physical quantity is definable on each point in the continuum; and (3) the ether provides a medium to support the continuous transmission of action. The thrust of Kant's conception of ether is to provide a holistic ontology for the transition to physics, which can best be understood from a field-theoretical point of view. This is the main thesis I attempted to establish in this dissertation.
NASA Astrophysics Data System (ADS)
Giménez-Forcada, Elena
2014-09-01
A new method has been developed to recognize and understand the temporal and spatial evolution of seawater intrusion in a coastal alluvial aquifer. The study takes into account that seawater intrusion is a dynamic process, and that seasonal and inter-annual variations in the balance of the aquifer cause changes in groundwater chemistry. Analysis of the main processes, by means of the Hydrochemical Facies Evolution Diagram (HFE-Diagram), provides essential knowledge about the main hydrochemical processes. Subsequently, analysis of the spatial distribution of hydrochemical facies using heatmaps helps to identify the general state of the aquifer with respect to seawater intrusion during different sampling periods. This methodology has been applied to the pilot area of the Vinaroz Plain, on the Mediterranean coast of Spain. The results appear to be very successful for differentiating variations through time in the salinization processes caused by seawater intrusion into the aquifer, distinguishing the phase of seawater intrusion from the phase of recovery, and their respective evolutions. The method shows that hydrochemical variations can be read in terms of the pattern of seawater intrusion, groundwater quality status, aquifer behaviour and hydrodynamic conditions. This leads to a better general understanding of the aquifers and a potential for improvement in the way they are managed.
Binary weight distributions of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1992-01-01
The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-decoding algorithms presently under development.
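The MacWilliams transform used above can be written down directly. The sketch below is our illustration over GF(2), with the (7,4) Hamming code as the example rather than one of the RS codes treated in the article:

```python
from math import comb

# Our illustration of the MacWilliams identity: given the weight
# distribution A of a linear code over GF(q), compute that of its dual
# via Krawtchouk polynomials. The (7,4) Hamming code is the example here,
# not one of the Reed-Solomon codes from the article.
def macwilliams_dual(A, n, q=2):
    M = sum(A)  # |C| = q^k, the total number of codewords
    def krawtchouk(j, i):
        return sum((-1) ** t * (q - 1) ** (j - t)
                   * comb(i, t) * comb(n - i, j - t)
                   for t in range(j + 1))
    # B_j = (1/|C|) * sum_i A_i * K_j(i); exact integer for a valid A
    return [sum(A[i] * krawtchouk(j, i) for i in range(n + 1)) // M
            for j in range(n + 1)]

A_hamming = [1, 0, 0, 7, 7, 0, 0, 1]   # (7,4) Hamming code
B = macwilliams_dual(A_hamming, 7)      # its dual, the (7,3) simplex code
```

The dual of the Hamming code is the simplex code, all of whose nonzero words have weight 4, which the transform recovers without enumerating the dual at all.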
NASA Astrophysics Data System (ADS)
Field, F.; Goodbun, J.; Watson, V.
Architects have a role to play in interplanetary space that has barely yet been explored. The architectural community is largely unaware of this new territory, for which there is still no agreed method of practice. There is moreover a general confusion, in scientific and related fields, over what architects might actually do there today. Current extra-planetary designs generally fail to explore the dynamic and relational nature of space-time, and often reduce human habitation to a purely functional problem. This is compounded by a crisis over the representation (drawing) of space-time. The present work returns to first principles of architecture in order to realign them with current socio-economic and technological trends surrounding the space industry. What emerges is simultaneously the basis for an ecological space architecture, and the representational strategies necessary to draw it. We explore this approach through a work of design-based research that describes the construction of Ocean; a huge body of water formed by the collision of two asteroids at the Translunar Lagrange Point (L2), that would serve as a site for colonisation, and as a resource to fuel future missions. Ocean is an experimental model for extra-planetary space design and its representation, within the autonomous discipline of architecture.
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel
2011-01-01
Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest, and to obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use them to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
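In drastically simplified scalar form (our illustration, not the STDF/STRE model itself: no footprints, no spatial or temporal correlation), the "minimum mean squared error and unbiased" combination of two instruments reduces to inverse-variance weighting:

```python
# Scalar toy version of optimal data fusion (our construction, not STDF):
# two unbiased instruments observe the same quantity with different noise
# variances; the minimum-variance unbiased linear combination weights each
# observation by its inverse variance.
def fuse(y1, var1, y2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * y1 + w2 * y2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)   # never larger than either input variance
    return estimate, variance

# Hypothetical readings (numbers chosen purely for illustration)
est, var = fuse(400.0, 4.0, 402.0, 1.0)
```

The fused variance is always below that of the better instrument alone, which is the scalar shadow of the benefit STDF obtains by modeling full space-time correlation structure.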
Codon Distribution in Error-Detecting Circular Codes
Fimmel, Elena; Strüngmann, Lutz
2016-01-01
In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C3 and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C3 codes to maximal self-complementary circular codes. PMID:26999215
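As a small executable illustration of the ancestor notion mentioned above (our sketch; the codon sets are hypothetical and the check is for Crick's comma-free property, the stronger condition that circular codes relax):

```python
# Illustrative check (our sketch) of Crick's comma-free property: no codon
# of X may appear in a shifted reading frame of any concatenation of two
# codons of X, so frame shifts are immediately detectable.
def is_comma_free(X):
    for a in X:
        for b in X:
            w = a + b                       # six letters, correct frame a|b
            if w[1:4] in X or w[2:5] in X:  # the two shifted-frame codons
                return False
    return True

# Hypothetical codon sets, not taken from the article
ok = is_comma_free({"AAC", "GTC"})   # no shifted-frame collisions
bad = is_comma_free({"AAA"})         # "AAAAAA" reads as AAA in every frame
```

Circular codes weaken this condition from single concatenations to decompositions of words written on a circle, which is why they can be larger and, as Arquès and Michel found, actually occur in genomes.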
Error resiliency of distributed video coding in wireless video communication
NASA Astrophysics Data System (ADS)
Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj
2008-08-01
Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.
Distributed Turbo Product Codes with Multiple Vertical Parities
NASA Astrophysics Data System (ADS)
Obiedat, Esam A.; Chen, Guotai; Cao, Lei
2009-12-01
We propose a Multiple Vertical Parities Distributed Turbo Product Code (MVP-DTPC) over a cooperative network using block Bose Chaudhuri Hocquenghem (BCH) codes as component codes. The source broadcasts extended BCH coded frames to the destination and nearby relays. After decoding the received sequences, each relay constructs a product code by arranging the corrected bit sequences in rows and re-encoding them vertically using BCH as component codes to obtain an Incremental Redundancy (IR) for the source's data. To obtain independent vertical parities from each relay in the same code space, we propose a new Circular Interleaver for the source's data; different circular interleavers are used to interleave BCH rows before re-encoding vertically. Maximum A posteriori Probability (MAP) decoding is achieved by applying maximum transfer of extrinsic information between the multiple decoding stages. This is employed in the modified turbo product decoder, which is proposed to cope with multiple parities. The a posteriori output from a vertical decoding stage is used to derive the soft extrinsic information, which is used as a priori input for the next horizontal decoding stage. Simulation results in an Additive White Gaussian Noise (AWGN) channel using network scenarios show a 0.3-0.5 dB improvement in Bit Error Rate (BER) performance over non-cooperative Turbo Product Codes (TPC).
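The row-then-column construction can be illustrated with single parity checks standing in for the BCH component codes. This is a toy sketch of the product-code structure only; the interleaving, relaying, and iterative MAP decoding of the actual scheme are omitted:

```python
# Toy sketch of a product code with single parity checks standing in for
# the BCH component codes: information bits are arranged in rows, each row
# gets a horizontal parity (the source's code), then each column is
# re-encoded with a vertical parity (the relay's incremental redundancy).
def add_parity(bits):
    """Append an even-parity bit to a list of bits."""
    return bits + [sum(bits) % 2]

def product_encode(rows):
    horizontal = [add_parity(r) for r in rows]                  # row code
    vertical = [add_parity(list(c)) for c in zip(*horizontal)]  # column code
    return [list(r) for r in zip(*vertical)]                    # row-major

code = product_encode([[1, 0, 1], [0, 1, 1]])
# Every row and every column of the result satisfies even parity
```

In the MVP-DTPC scheme each relay contributes an independent set of vertical parities over an interleaved copy of the rows, which is what the single column-parity layer here stands in for.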
Static spherically symmetric space-times with six Killing vectors
Qadir, A.; Ziad, M.
1988-11-01
It had been proved earlier that spherically symmetric, static space-times have ten, seven, six, or four independent Killing vectors (KV's), but there are no cases in between. The case of six KV's is investigated here. It is shown that the space-time corresponds to the product of a hyperboloid and a sphere, reminiscent of Kaluza-Klein theory, with a compactification from four down to two dimensions. In effect, there is a unique metric for this space-time, corresponding to a uniform mass distribution over all space.
Distributed Inference in Tree Networks Using Coding Theory
NASA Astrophysics Data System (ADS)
Kailkhura, Bhavya; Vempaty, Aditya; Varshney, Pramod K.
2015-07-01
In this paper, we consider the problem of distributed inference in tree based networks. In the framework considered in this paper, distributed nodes make a 1-bit local decision regarding a phenomenon before sending it to the fusion center (FC) via intermediate nodes. We propose the use of coding theory based techniques to solve this distributed inference problem in such structures. Data is progressively compressed as it moves towards the FC. The FC makes the global inference after receiving data from intermediate nodes. Data fusion at nodes as well as at the FC is implemented via error correcting codes. In this context, we analyze the performance for a given code matrix and also design the optimal code matrices at every level of the tree. We address the problems of distributed classification and distributed estimation separately and develop schemes to perform these tasks in tree networks. The proposed schemes are of practical significance due to their simple structure. We study the asymptotic inference performance of our schemes for two different classes of tree networks: fixed height tree networks, and fixed degree tree networks. We show that the proposed schemes are asymptotically optimal under certain conditions.
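The fusion step can be sketched in a flat (non-tree) form. The 7-node, 2-hypothesis code matrix below is a hypothetical example of our own, not one of the optimized matrices designed in the paper, and the intermediate-node compression is omitted:

```python
# Sketch of code-matrix fusion (our simplified, flat version of the tree
# scheme): row j of matrix C is the pattern of 1-bit local decisions the
# nodes would send under hypothesis j; the FC decodes the received bits to
# the closest row in Hamming distance, tolerating some node errors.
def fc_decide(received, C):
    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))
    return min(range(len(C)), key=lambda j: hamming(received, C[j]))

C = [
    [0, 0, 0, 0, 0, 0, 0],  # bit pattern under hypothesis 0
    [1, 1, 1, 1, 1, 1, 1],  # bit pattern under hypothesis 1
]

# Two of the seven local decisions arrive flipped, yet the FC still
# recovers hypothesis 1 because the rows are far apart in Hamming distance.
decision = fc_decide([1, 0, 1, 1, 1, 0, 1], C)
```

Designing C so that its rows are maximally separated, level by level in the tree, is precisely the code-matrix optimization problem the paper addresses.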
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Born, U.
1970-01-01
A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity needs of the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
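The adaptive choice between the two coding modes can be illustrated with a toy encoder; the mismatch threshold, the mismatch-list stand-in for true syndrome coding, and the truncated SHA-1 hash are all assumptions for illustration, not the paper's protocol:

```python
import hashlib

def compress_block(src, ref, max_mismatch=2):
    """Toy adaptive encoder: cheap diff-style coding when src is close to the
    reference, otherwise a short hash that a decoder would match against
    candidate subsequences. Names and thresholds are illustrative."""
    mismatches = [(i, s) for i, (s, r) in enumerate(zip(src, ref)) if s != r]
    if len(mismatches) <= max_mismatch:
        return ("diff", mismatches)          # decodable directly from ref
    return ("hash", hashlib.sha1(src.encode()).hexdigest()[:8])
```

Either way the client sends far fewer symbols than the raw subsequence, which is the low-complexity, low-rate behavior the protocol targets.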
Distributed wavefront coding for wide angle imaging system
NASA Astrophysics Data System (ADS)
Larivière-Bastien, Martin; Zhang, Hu; Thibault, Simon
2011-10-01
The emerging paradigm of imaging systems known as wavefront coding, which employs joint optimization of both the optical system and the digital post-processing system, has not only increased the degrees of design freedom but also brought several significant system-level benefits. The effectiveness of wavefront coding has been demonstrated by several proof-of-concept systems in the reduction of focus-related aberrations and the extension of depth of focus. While previous research on wavefront coding was mainly targeted at imaging systems having a small or modest field of view (FOV), we present a preliminary study on wavefront coding applied to panoramic optical systems. Unlike traditional wavefront coding systems, which only require the constancy of the modulation transfer function (MTF) over an extended focus range, wavefront-coded panoramic systems particularly emphasize the mitigation of significant off-axis aberrations such as field curvature, coma, and astigmatism. The restrictions of using a traditional generalized cubic polynomial pupil phase mask for wide-angle systems are studied in this paper. It is shown that a traditional approach can be used when the variation of the off-axis aberrations remains modest. Consequently, we propose to study how a distributed wavefront coding approach, where two surfaces are used for encoding the wavefront, can be applied to wide-angle lenses. A few cases designed using Zemax are presented and discussed.
Affine conformal vectors in space-time
NASA Astrophysics Data System (ADS)
Coley, A. A.; Tupper, B. O. J.
1992-05-01
All space-times admitting a proper affine conformal vector (ACV) are found. By using a theorem of Hall and da Costa, it is shown that such space-times either (i) admit a covariantly constant vector (timelike, spacelike, or null) and the ACV is the sum of a proper affine vector and a conformal Killing vector or (ii) the space-time is 2+2 decomposable, in which case it is shown that no ACV can exist (unless the space-time decomposes further). Furthermore, it is proved that all space-times admitting an ACV and a null covariantly constant vector (which are necessarily generalized pp-wave space-times) must have Ricci tensor of Segré type {2,(1,1)}. It follows that, among space-times admitting proper ACV, the Einstein static universe is the only perfect fluid space-time, there are no non-null Einstein-Maxwell space-times, and only the pp-wave space-times are representative of null Einstein-Maxwell solutions. Otherwise, the space-times can represent anisotropic fluids and viscous heat-conducting fluids, but only with restricted equations of state in each case.
Space-time formulation of quantum transitions
NASA Astrophysics Data System (ADS)
Petrosky, T.; Ordonez, G.; Prigogine, I.
2001-12-01
In a previous paper we have studied dressed excited states in the Friedrichs model, which describes a two-level atom interacting with radiation. In our approach, excited states are distributions (or generalized functions) in the Liouville space. These states decay in a strictly exponential way. In contrast, the states one may construct in the Hilbert space of wave functions always present deviations from exponential decay. We have considered the momentum representation, which is applicable to global quantities (trace, energy transfer). Here we study the space-time description of local quantities associated with dressed unstable states, such as, the intensity of the photon field. In this situation the excited states become factorized in Gamow states. To go from local quantities to global quantities, we have to proceed to an integration over space, which is far from trivial. There are various elements that appear in the space-time evolution of the system: the unstable cloud that surrounds the bare atom, the emitted real photons and the ``Zeno photons,'' which are associated with deviations from exponential decay. We consider a Hilbert space approximation to our dressed excited state. This approximation leads already to decay close to exponential in the field surrounding the atom, and to a line shape different from the Lorentzian line shape. Our results are compared with numerical simulations. We show that the time evolution of an unstable state satisfies a Boltzmann-like H theorem. This is applied to emission and absorption as well as scattering. The existence of a microscopic H theorem is not astonishing. The excited states are ``nonequilibrium'' states and their time evolution leads to the emission of photons, which distributes the energy of the unstable state among the field modes.
An introduction to curved space-times.
NASA Astrophysics Data System (ADS)
Williams, R. M.
1991-07-01
These lectures focus on understanding relativity from a geometrical viewpoint, based on the use of space-time diagrams and without the tools of tensor calculus. After a brief discussion of flat space-times, curved space-times are introduced and it is shown how many of their properties may be deduced from their metric interval. The space-time around a spherically symmetric star and its possible collapse to form a black hole is described. Finally, some simple cosmological models are discussed, with emphasis on their causal properties and the existence of horizons. The titles of the lectures are: I. Flat space-times. II. Curved space-times. III. Spherical stars and stellar collapse. IV. Some simple cosmological models.
A Model of Classical Space-Times.
ERIC Educational Resources Information Center
Maudlin, Tim
1989-01-01
Discusses some historically important reference systems including those by Newton, Leibniz, and Galileo. Provides models illustrating space-time relationship of the reference systems. Describes building models. (YP)
NASA Astrophysics Data System (ADS)
Thomas, C. K.; Selker, J. S.; Zeeman, M. J.
2011-12-01
We present a novel approach to observing the two-dimensional thermal structure of atmospheric near-surface turbulent and non-turbulent flows by measuring air temperatures in a vertical plane at a high resolution (0.25 m, every approximately 2 s) using distributed temperature sensing (DTS). Air temperature observations obtained from a fiber optics array of approximate dimensions 8 by 8 m and sonic anemometer data from two levels were collected for a period of 23 days over a short grass field located in the flat bottom of a wide valley with moderate surface heterogeneity. In addition to evaluating the DTS technique to resolve the rapidly changing gradients and small-scale perturbations associated with turbulence in the atmosphere for convective and stable boundary layers, the objective was to analyze the space-time dynamics of transient cold-air pools in the stable boundary layer. The time response and precision of the fiber temperatures were adequate to resolve individual sub-meter sized turbulent and non-turbulent structures of time scales >= 3 s and enabled calculation of meaningful sensible heat fluxes when combined with vertical wind observations. The small turbulence scales associated with strong vertical shear and low measurement heights pose limitations to the technique. The top of the transient cold-air pool was highly non-stationary. The thermal structure of the near-surface air is generally a superposition of various perturbations of different time and length scales, whereas no preferred scales were identified. Vertical length scales for turbulence in the strongly stratified transient cold-air pool directly derived from the DTS data agreed well with buoyancy length scales parameterized using the vertical velocity variance and the Brunt-Vaisala frequency, while scales for weak stratification disagreed. The high-resolution DTS technique opens a new window into spatially sampling geophysical fluid flows including turbulent energy exchange with a broad
Space-time disarray and visual awareness
Koenderink, Jan; Richards, Whitman; van Doorn, Andrea J
2012-01-01
Local space-time scrambling of optical data leads to violent jerks and dislocations. On masking these, visual awareness of the scene becomes cohesive, with dislocations discounted as amodally occluding foreground. Such cohesive space-time of awareness is technically illusory because ground truth is jumbled whereas awareness is coherent. Apparently the visual field is a construction rather than a (veridical) perception. PMID:23145276
Pseudo-Z symmetric space-times
Mantica, Carlo Alberto; Suh, Young Jin
2014-04-15
In this paper, we investigate Pseudo-Z symmetric space-time manifolds. First, we deal with elementary properties showing that the associated form A{sub k} is closed: in the case the Ricci tensor results to be Weyl compatible. This notion was recently introduced by one of the present authors. The consequences of the Weyl compatibility on the magnetic part of the Weyl tensor are pointed out. This determines the Petrov types of such space times. Finally, we investigate some interesting properties of (PZS){sub 4} space-time; in particular, we take into consideration perfect fluid and scalar field space-time, and interesting properties are pointed out, including the Petrov classification. In the case of scalar field space-time, it is shown that the scalar field satisfies a generalized eikonal equation. Further, it is shown that the integral curves of the gradient field are geodesics. A classical method to find a general integral is presented.
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek
2016-04-01
of land, the 4-digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths for each individual in a cohort. Preliminary results show considerable differences in a person's exposure across these various approaches to space-time path aggregation, presumably because air pollution shows large variation over short distances.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T(sub par), of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
Distributed magnetic field positioning system using code division multiple access
NASA Technical Reports Server (NTRS)
Prigge, Eric A. (Inventor)
2003-01-01
An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large `building-sized` coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
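The despreading step that separates the summed field into per-beacon components can be sketched with orthogonal Walsh-Hadamard spreading codes (code length, beacon count, and amplitudes below are illustrative, not the patented system's parameters):

```python
import numpy as np

def walsh(n):
    """Orthogonal Walsh-Hadamard codes of length n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(4)                              # one spreading code per beacon
amplitudes = np.array([0.5, 2.0, 0.0, 1.0])   # per-beacon field strength at the sensor
measured = amplitudes @ codes                 # sensor observes only the coded sum field
# Despreading: correlate the sum against each code to recover the component fields.
recovered = measured @ codes.T / codes.shape[1]
```

Because the codes are mutually orthogonal, each correlation isolates one beacon's contribution even though all beacons transmit simultaneously; the recovered component amplitudes are what a position solver would then invert against the dipole field model.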
Energy efficient wireless sensor networks using asymmetric distributed source coding
NASA Astrophysics Data System (ADS)
Rao, Abhishek; Kulkarni, Murlidhar
2013-01-01
Wireless Sensor Networks (WSNs) are networks of sensor nodes deployed over a geographical area to perform a specific task. WSNs pose many design challenges, and energy conservation is one such design issue. In the literature, a wide range of solutions addressing this issue have been proposed. Generally, WSNs are densely deployed, so nodes in close proximity are likely to observe the same data. Transmission of such non-aggregated data may lead to inefficient energy management. Hence, data fusion has to be performed at the nodes so as to combine the redundant information into a single data unit. Distributed source coding is an efficient approach to achieving this task. In this paper an attempt has been made at modeling such a system. Various energy-efficient codes were considered for the analysis, and system performance has been evaluated in terms of energy efficiency.
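A classic way to realize asymmetric distributed source coding is syndrome coding with a linear block code. The sketch below uses a (7,4) Hamming code and assumes the sensor's reading and the fusion center's side information differ in at most one bit; the code choice is illustrative, not necessarily the paper's:

```python
import numpy as np

# (7,4) Hamming parity-check matrix: column i is the binary representation of
# i+1 (low bit in row 0), so a single-bit discrepancy is located by its syndrome.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):
    """Sensor transmits only the 3-bit syndrome of its 7-bit reading."""
    return H @ x % 2

def decode(syndrome, side_info):
    """FC recovers x from its own correlated reading (at most one differing bit)."""
    s = (syndrome + H @ side_info) % 2
    if not s.any():
        return side_info.copy()
    pos = int(s[0] + 2 * s[1] + 4 * s[2]) - 1   # column index of the flipped bit
    x_hat = side_info.copy()
    x_hat[pos] ^= 1
    return x_hat
```

The energy saving is exactly the asymmetry: the transmitting node sends 3 bits instead of 7, while all decoding effort sits at the (mains-powered) fusion center.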
Selective video encryption of a distributed coded bitstream using LDPC codes
NASA Astrophysics Data System (ADS)
Um, Hwayoung; Delp, Edward J.
2006-02-01
Selective encryption is a technique used to minimize computational complexity or enable system functionality by encrypting only a portion of a compressed bitstream while still achieving reasonable security. For selective encryption to work, we need to rely not only on the beneficial effects of redundancy reduction, but also on the characteristics of the compression algorithm to concentrate the important data representing the source in a relatively small fraction of the compressed bitstream. These important elements of the compressed data become candidates for selective encryption. In this paper, we combine encryption and distributed video source coding to consider which types of bits are most effective for selective encryption of a video sequence that has been compressed using a distributed source coding method based on LDPC codes. Instead of encrypting the entire video stream bit by bit, we encrypt only the highly sensitive bits. By combining the compression and encryption tasks and thus reducing the number of bits encrypted, we can achieve a reduction in system complexity.
MEST- avoid next extinction by a space-time effect
NASA Astrophysics Data System (ADS)
Cao, Dayong
2013-03-01
The Sun's companion, a dark hole, seasonally takes its dark comet belt and much dark matter to impact near our earth, and some of them probably hit our earth. This model thus kept and triggered periodic mass extinctions on our earth every 25 to 27 million years. After every impact, many dark comets with very special tilted orbits were arrested and lurked in the solar system. When the dark hole (Tyche) comes near the solar system again, they will impact near planets. The Tyche, the dark comets, and the Oort Cloud have their space-time center, because space-time is the frequency and amplitude square of a wave. Because a wave (space-time) can make a field, and gas has more wave and fluctuation, they are like a dense gas ball and a dark dense field. They can absorb space-time and waves, so they are ``dark,'' like the dark matter which can break the genetic codes of our lives by a dark space-time effect. The upcoming next impact will thus cause the current ``biodiversity loss.'' Dark matter can change dead plants and animals into coal, oil, and natural gas, which are used as energy but break our living environment. According to our experiments, in which consciousness can use thought waves remotely to change the systemic model between electron clouds and electron holes of a P-N junction and can change the output voltages of solar cells by a life information technology and a space-time effect, we hope to find a new method to determine the orbit of the Tyche to avoid the next extinction. (see Dayong Cao, BAPS.2011.APR.K1.17 and BAPS.2012.MAR.P33.14) Supported by AEEA
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
Weight distributions for turbo codes using random and nonrandom permutations
NASA Technical Reports Server (NTRS)
Dolinar, S.; Divsalar, D.
1995-01-01
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
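The distinction between self-terminating and non-self-terminating weight-2 inputs can be seen with a small recursive systematic constituent encoder; the generator pair below (feedback 1+D+D^2, feedforward 1+D^2) is a common textbook choice, not necessarily the one analyzed in the article:

```python
def rsc_parity(bits):
    """Rate-1/2 recursive systematic convolutional encoder with feedback
    polynomial 1+D+D^2 and feedforward 1+D^2; returns the parity stream
    and the final register state."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        a = b ^ s1 ^ s2          # feedback term
        parity.append(a ^ s2)    # feedforward (parity) output
        s1, s2 = a, s1
    return parity, (s1, s2)

# Weight-2 input with its ones separated by the feedback period (3): the
# encoder returns to the all-zero state, so the encoded weight stays low.
_, final_a = rsc_parity([1, 0, 0, 1, 0, 0])
# Separation 2: the encoder never re-terminates and accumulates parity weight
# until the block is artificially terminated.
_, final_b = rsc_parity([1, 0, 1, 0, 0, 0])
```

This is why permutation design concentrates on weight-2 inputs: a good permuter ensures that when the two ones are self-terminating at one encoder's input, they are spread far apart at the other's.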
Practical distributed video coding in packet lossy channels
NASA Astrophysics Data System (ADS)
Qing, Linbo; Masala, Enrico; He, Xiaohai
2013-07-01
Improving error resilience of video communications over packet lossy channels is an important and tough task. We present a framework to optimize the quality of video communications based on distributed video coding (DVC) in practical packet lossy network scenarios. The peculiar characteristics of DVC indeed require a number of adaptations to take full advantage of its intrinsic robustness when dealing with data losses of typical real packet networks. This work proposes a new packetization scheme, an investigation of the best error-correcting codes to use in a noisy environment, a practical rate-allocation mechanism, which minimizes decoder feedback, and an improved side-information generation and reconstruction function. Performance comparisons are presented with respect to a conventional packet video communication using H.264/advanced video coding (AVC). Although currently the H.264/AVC rate-distortion performance in case of no loss is better than state-of-the-art DVC schemes, under practical packet lossy conditions, the proposed techniques provide better performance with respect to an H.264/AVC-based system, especially at high packet loss rates. Thus the error resilience of the proposed DVC scheme is superior to the one provided by H.264/AVC, especially in the case of transmission over packet lossy networks.
Space-Time from Topos Quantum Theory
NASA Astrophysics Data System (ADS)
Flori, Cecilia
One of the main challenges in theoretical physics in the past 50 years has been to define a theory of quantum gravity, i.e. a theory which consistently combines general relativity and quantum theory in order to define a theory of space-time itself seen as a fluctuating field. As such, a definition of space-time is of paramount importance, but it is precisely the attainment of such a definition which is one of the main stumbling blocks in quantum gravity. One of the striking features of quantum gravity is that although both general relativity and quantum theory treat space-time as a four-dimensional (4D) manifold equipped with a metric, quantum gravity would suggest that, at the microscopic scale, space-time is somewhat discrete. Therefore the continuum structure of space-time suggested by the two main ingredients of quantum gravity seems to be thrown into discussion by quantum gravity itself. This seems quite an odd predicament, but it might suggest that perhaps a different mathematical structure other than a smooth manifold should model space-time. These considerations seem to shed doubts on the use of the continuum in general in a possible theory of quantum gravity. An alternative would be to develop a mathematical formalism for quantum gravity in which no fundamental role is played by the continuum and where a new concept of space-time, not modeled on a differentiable manifold, will emerge. This is precisely one of the aims of the topos theory approach to quantum theory and quantum gravity put forward by Isham, Butterfield, and Doering and subsequently developed by other authors. The aim of this article is to precisely elucidate how such an approach gives rise to a new definition of space-time which might be more appropriate for quantum gravity.
Metastring theory and modular space-time
NASA Astrophysics Data System (ADS)
Freidel, Laurent; Leigh, Robert G.; Minic, Djordje
2015-06-01
String theory is canonically accompanied with a space-time interpretation which determines S-matrix-like observables, and connects to the standard physics at low energies in the guise of local effective field theory. Recently, we have introduced a reformulation of string theory which does not rely on an a priori space-time interpretation or a pre-assumption of locality. This metastring theory is formulated in such a way that stringy symmetries (such as T-duality) are realized linearly. In this paper, we study metastring theory on a flat background and develop a variety of technical and interpretational ideas. These include a formulation of the moduli space of Lorentzian worldsheets, a careful study of the symplectic structure and consequently consistent closed and open boundary conditions, and the string spectrum and operator algebra. What emerges from these studies is a new quantum notion of space-time that we refer to as a quantum Lagrangian or equivalently a modular space-time. This concept embodies the standard tenets of quantum theory and implements in a precise way a notion of relative locality. The usual string backgrounds (non-compact space-time along with some toroidally compactified spatial directions) are obtained from modular space-time by a limiting procedure that can be thought of as a correspondence limit.
Sparsey™: event recognition via deep hierarchical sparse distributed codes
Rinkus, Gerard J.
2014-01-01
The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal
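The idea that code overlap encodes similarity can be stated in a few lines; the unit indices and code size below are toy values, and this is not Sparsey's learning algorithm, only the SDC representation it builds on:

```python
def overlap_similarity(code_a, code_b):
    """Similarity of two sparse distributed codes = fraction of shared active units."""
    return len(code_a & code_b) / len(code_a)

# Toy mac of 100 units; each item is coded by a small active subset (5 units).
cat = {3, 17, 42, 58, 91}
dog = {3, 17, 42, 66, 80}   # similar item: large code overlap
car = {5, 29, 31, 74, 99}   # dissimilar item: no overlap
```

In a localist code the cat/dog relation would be invisible (distinct single units), whereas here graded similarity falls out of the representation itself, which is what permits fixed-time best-match retrieval over all stored items.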
Robust video transmission with distributed source coded auxiliary channel.
Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan
2009-12-01
We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints. PMID:19703801
Behavioral correlates of the distributed coding of spatial context.
Anderson, Michael I; Killing, Sarah; Morris, Caitlin; O'Donoghue, Alan; Onyiagha, Dikennam; Stevenson, Rosemary; Verriotis, Madeleine; Jeffery, Kathryn J
2006-01-01
Hippocampal place cells respond heterogeneously to elemental changes of a compound spatial context, suggesting that they form a distributed code of context, whereby context information is shared across a population of neurons. The question arises as to what this distributed code might be useful for. The present study explored two possibilities: one, that it allows contexts with common elements to be disambiguated, and the other, that it allows a given context to be associated with more than one outcome. We used two naturalistic measures of context processing in rats, rearing and thigmotaxis (boundary-hugging), to explore how rats responded to contextual novelty and to relate this to the behavior of place cells. In experiment 1, rats showed dishabituation of rearing to a novel reconfiguration of familiar context elements, suggesting that they perceived the reconfiguration as novel, a behavior that parallels that of place cells in a similar situation. In experiment 2, rats were trained in a place preference task on an open-field arena. A change in the arena context triggered renewed thigmotaxis, and yet navigation continued unimpaired, indicating simultaneous representation of both the altered contextual and constant spatial cues. Place cells similarly exhibited a dual population of responses, consistent with the hypothesis that their activity underlies spatial behavior. Together, these experiments suggest that heterogeneous context encoding (or "partial remapping") by place cells may function to allow the flexible assignment of associations to contexts, a faculty that could be useful in episodic memory encoding. PMID:16921500
Pressure distribution based optimization of phase-coded acoustical vortices
Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong
2014-02-28
Based on the acoustic radiation of a point source, the physical mechanism of phase-coded acoustical vortices is investigated, with derivations of formulae for the acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results prove that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and lower fluctuations of the circular pressure distributions can be produced for more sources. With the increase of source frequency, the acoustic pressure of acoustical vortices increases accordingly with decreased vortex radius. Meanwhile, an increased vortex radius with reduced acoustic pressure is also achieved for longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured, which agree well with the results of the numerical simulations. The favorable results of the acoustic pressure distributions provide a theoretical basis for further studies of acoustical vortices.
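The phase-coded source model can be sketched as a sum of free-field monopole point sources on a circle with an azimuthal phase ramp; the radius, frequency, and sound speed below are illustrative values, though the 6-source count matches the experimental system described:

```python
import numpy as np

def vortex_pressure(x, y, z, n_src=6, radius=0.05, freq=40e3, c=343.0, charge=1):
    """Complex pressure at (x, y, z) from n_src phase-coded monopoles on a
    circle of the given radius in the z=0 plane (parameters illustrative)."""
    k = 2 * np.pi * freq / c
    p = 0.0 + 0.0j
    for m in range(n_src):
        theta = 2 * np.pi * m / n_src
        sx, sy = radius * np.cos(theta), radius * np.sin(theta)
        r = np.sqrt((x - sx) ** 2 + (y - sy) ** 2 + z ** 2)
        p += np.exp(1j * (k * r + charge * theta)) / r   # coded phase delay
    return p
```

On the axis all source distances are equal, so the coded phases sum to zero and the pressure vanishes: the null at the vortex core that the radial pressure distributions in the abstract are built around.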
Dynamic algorithm for correlation noise estimation in distributed video coding
NASA Astrophysics Data System (ADS)
Thambu, Kuganeswaran; Fernando, Xavier; Guan, Ling
2010-01-01
Low-complexity encoders at the expense of high-complexity decoders are advantageous in wireless video sensor networks. Distributed video coding (DVC) achieves this complexity balance: the receiver computes side information (SI) by interpolating the key frames, and the SI is modeled as a noisy version of the input video frame. In practice, correlation noise estimation at the receiver is a difficult problem; currently, the noise is estimated from the residual variance between pixels of the key frames, and this fixed variance is then used to calculate the bit-metric values. In this paper, we introduce a new variance estimation technique that relies on the bit pattern of each pixel and is calculated dynamically over the entire motion environment, yielding the soft-value information required by the decoder. Our results show that the proposed bit-based dynamic variance estimation significantly improves the peak signal-to-noise ratio (PSNR) performance.
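A minimal sketch of the fixed-variance baseline the paper improves on may help: side information as the key-frame average, a single residual variance, and a Laplacian-model soft value per bit plane. The toy pixel rows and the coarse Laplacian integration are illustrative assumptions, not the paper's implementation:

```python
import math

def estimate_correlation_noise(key1, key2):
    """Baseline DVC correlation-noise estimate (a sketch): side information
    is the average of two key frames and the noise variance is taken from
    their residual, as in the fixed-variance approach the paper improves on."""
    si = [(a + b) / 2.0 for a, b in zip(key1, key2)]
    residual = [(a - b) / 2.0 for a, b in zip(key1, key2)]
    mean = sum(residual) / len(residual)
    var = sum((r - mean) ** 2 for r in residual) / len(residual)
    return si, var

def bit_llr(si_pixel, var, bitplane=7):
    """Soft value (log-likelihood ratio) for one bit plane of a pixel under
    a Laplacian noise model with the given variance."""
    alpha = math.sqrt(2.0 / max(var, 1e-9))
    p0 = p1 = 1e-12
    for x in range(256):                 # coarse sum over the 8-bit range
        w = 0.5 * alpha * math.exp(-alpha * abs(x - si_pixel))
        if (x >> bitplane) & 1:
            p1 += w
        else:
            p0 += w
    return math.log(p0 / p1)

key1 = [100, 102, 98, 101, 99, 103]      # toy key-frame pixel rows
key2 = [104, 100, 102, 99, 101, 97]
si, var = estimate_correlation_noise(key1, key2)
llr = bit_llr(si[0], var)                # MSB of the first SI pixel
```

The proposed method replaces the single `var` with a per-pixel, bit-pattern-dependent estimate; the soft-value computation stays structurally the same.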
NASA Astrophysics Data System (ADS)
Stachel, John
2003-01-01
A brief survey of the space-time structures used in theoretical physics from Newton to Einstein is followed by a discussion of the ways in which the space-time structure of general relativity differs radically from that of all previous theories by virtue of its dynamization of chrono-geometry and the consequent loss of any possibility of a kinematical coordinatization of the points of space-time. After a discussion of the extent to which these features of general relativity can be generalized and extended to any future fundamental theory, a principle of general permutation invariance is proposed and used to evaluate some current attempts to develop a theory of quantum gravity.
Space-time framework of internal measurement
NASA Astrophysics Data System (ADS)
Matsuno, Koichiro
1998-07-01
Measurement internal to material bodies is ubiquitous. The internal observer has its own local space-time framework that enables it to distinguish, even to the slightest degree, the material bodies that fall within that framework. Internal measurement proceeding among the internal observers comes to negotiate the construction of a more encompassing local framework of space and time. The construction takes place through friction among the internal observers. Emergent phenomena are related to the enlargement of the local space-time framework through this frictional negotiation among the material participants serving as internal observers. Unless such a negotiation is reached, the internal observers would have to move around in local space-time frameworks of their own that are mutually incommensurable. The enhancement of material organization demonstrated in biological evolutionary processes manifests an inexhaustible negotiation for enlarging the local space-time framework available to the internal observers. In contrast, the Newtonian space-time framework, which remains absolute and all-encompassing, is an asymptote at which no further emergent phenomena could be expected. It is thus ironic to expect something to emerge within the framework of Newtonian absolute space and time. Rather than being a complex and organized configuration of interaction appearing within a global space-time framework, emergent phenomena are a consequence of negotiation among the local space-time frameworks available to internal measurement. Most indicative of this negotiation is the emergence of a conscious self grounded in the reflexive nature of perceptions, that is, a self-consciousness, which certainly goes beyond the Kantian transcendental subject. Accordingly, a synthetic discourse on securing consciousness upon the ground of self-consciousness can be developed, though linguistic exposition of consciousness upon self
Non-coding RNAs and complex distributed genetic networks
NASA Astrophysics Data System (ADS)
Zhdanov, Vladimir P.
2011-08-01
In eukaryotic cells, the mRNA-protein interplay can be dramatically influenced by non-coding RNAs (ncRNAs). Although this new paradigm is now widely accepted, an understanding of the effect of ncRNAs on complex genetic networks is lacking. To clarify what may happen in this case, we propose a mean-field kinetic model describing the influence of ncRNA on a complex genetic network with a distributed architecture including mutual protein-mediated regulation of many genes transcribed into mRNAs. ncRNA is considered to associate with mRNAs and inhibit their translation and/or facilitate degradation. Our results are indicative of the richness of the kinetics under consideration. The main complex features are found to be bistability and oscillations. One could expect to find kinetic chaos as well. The latter feature has however not been observed in our calculations. In addition, we illustrate the difference in the regulation of distributed networks by mRNA and ncRNA.
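To make the setup concrete, here is a toy mean-field Euler integration (hypothetical rate constants and regulation functions, not the model equations of the paper) in which an ncRNA associates with one mRNA species and thereby lowers its steady-state level:

```python
def simulate(k_assoc=0.5, n_steps=20000, dt=0.01):
    """Euler integration of a toy mean-field network (hypothetical rate
    constants, not the equations of the paper): two mutually activating
    genes; an ncRNA s binds mRNA m1 and removes it from translation."""
    m1, m2, p1, p2, s = 0.1, 0.1, 0.0, 0.0, 0.0
    hill = lambda p: p ** 2 / (1.0 + p ** 2)    # protein-mediated regulation
    for _ in range(n_steps):
        dm1 = (0.2 + hill(p2)) - 0.5 * m1 - k_assoc * s * m1
        dm2 = (0.2 + hill(p1)) - 0.5 * m2
        dp1 = m1 - 0.3 * p1
        dp2 = m2 - 0.3 * p2
        ds = 0.4 - 0.2 * s - k_assoc * s * m1   # ncRNA synthesis/decay/binding
        m1 += dt * dm1; m2 += dt * dm2
        p1 += dt * dp1; p2 += dt * dp2
        s += dt * ds
    return m1, m2, p1, p2, s

no_nc = simulate(k_assoc=0.0)     # network without ncRNA regulation
with_nc = simulate(k_assoc=0.5)   # ncRNA associates with mRNA 1
```

Comparing the two runs shows the ncRNA-mRNA association suppressing the level of mRNA 1 relative to the ncRNA-free network; the richer behavior discussed in the abstract (bistability, oscillations) requires the full distributed architecture.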
Space, Time, Matter: 1918-2012
NASA Astrophysics Data System (ADS)
Veneziano, Gabriele
2013-12-01
Almost a century has elapsed since Hermann Weyl wrote his famous book "Space, Time, Matter". After recalling some amazingly premonitory writings by him and by Wolfgang Pauli in the fifties, I will try to assess the present status of the problems they were so much concerned with.
Relativistic positioning in Schwarzschild space-time
NASA Astrophysics Data System (ADS)
Puchades, Neus; Sáez, Diego
2015-04-01
In the Schwarzschild space-time created by an idealized static spherically symmetric Earth, two approaches, both based on relativistic positioning, may be used to estimate the user position from the proper times broadcast by four satellites. In the first approach, satellites move in the Schwarzschild space-time and the photons emitted by the satellites follow null geodesics of the Minkowski space-time asymptotic to the Schwarzschild geometry. This assumption leads to positioning errors since the photon world lines are not geodesics of any Minkowski geometry. In the second approach, the most coherent one, satellites and photons move in the Schwarzschild space-time. This approach is a first-order one in the dimensionless parameter GM/R (with the speed of light c=1). The two approaches give different inertial coordinates for a given user. The differences are estimated and appropriately represented for users located inside a large region surrounding Earth. The resulting values (errors) are small enough to justify the use of the first approach, which is the simplest and the most manageable one. The satellite evolution mimics that of the GALILEO global navigation satellite system.
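In flat space-time, the first approach amounts to solving four null-propagation conditions for the user event. A minimal sketch (hypothetical satellite geometry, units with c = 1, Newton iteration; not the paper's Schwarzschild computation) is:

```python
import math

def gauss_solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting (4x4 here)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_user_event(sats, guess, iters=30):
    """Newton iteration for the user event (t, x, y, z) from four emission
    events (t_i, x_i, y_i, z_i) in flat space-time with c = 1.
    Each satellite contributes one condition |x - x_i| - (t - t_i) = 0."""
    t, x, y, z = guess
    for _ in range(iters):
        F, J = [], []
        for ti, xi, yi, zi in sats:
            d = math.sqrt((x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2)
            F.append(d - (t - ti))
            J.append([-1.0, (x - xi) / d, (y - yi) / d, (z - zi) / d])
        dt, dx, dy, dz = gauss_solve(J, [-f for f in F])
        t, x, y, z = t + dt, x + dx, y + dy, z + dz
    return t, x, y, z

# Hypothetical geometry: user at the origin at t = 10; each satellite emitted
# its signal at t_i = 10 - (distance from satellite to user).
positions = [(7, 0, 1), (0, 7, 2), (-7, 1, 0), (1, -7, 3)]
sats = [(10 - math.sqrt(X * X + Y * Y + Z * Z), X, Y, Z) for X, Y, Z in positions]
t, x, y, z = solve_user_event(sats, guess=(9.5, 0.5, 0.5, 0.5))
```

The second approach replaces these straight null rays with Schwarzschild null geodesics, which is where the differences studied in the paper arise.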
Adaptive distributed video coding with correlation estimation using expectation propagation
NASA Astrophysics Data System (ADS)
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-01
Distributed video coding (DVC) is rapidly increasing in popularity by shifting the complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with the decoding of the factor-graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity compared with sampling methods.
FPGA based digital phase-coding quantum key distribution system
NASA Astrophysics Data System (ADS)
Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu
2015-12-01
Quantum key distribution (QKD) is a technology with the potential capability to achieve information-theoretic security. Phase coding is an important approach to developing practical QKD systems over fiber channels. In order to improve the phase-coding modulation rate, we propose a new digital-modulation method in this paper and construct a compact and robust prototype QKD system, using currently available components in our lab, to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and operated continuously for 87 h without manual intervention. The quantum bit error rate (QBER) of the system was stable, with an average value of 3.22%, and the secure key generation rate was 8.91 kbps. Although the modulation rate of the photons in the demo system was only 200 MHz, limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field-programmable gate array (FPGA) based electronics scheme have great potential for high-speed QKD systems with gigabit-per-second modulation rates.
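As a rough plausibility check on such figures, the asymptotic Shor-Preskill bound R = R_sifted (1 - 2 h2(e)) relates QBER to secure-key rate. The sketch below uses a hypothetical sifted-key rate and ignores the finite-size and implementation corrections that a real system applies:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def asymptotic_key_rate(sifted_rate_bps, qber):
    """Asymptotic BB84-style estimate R = R_sifted * (1 - 2 * h2(e)), the
    Shor-Preskill bound with ideal error correction; a back-of-the-envelope
    check only -- the reported 8.91 kbps comes from the real protocol with
    finite-size and implementation corrections."""
    return sifted_rate_bps * max(0.0, 1.0 - 2.0 * h2(qber))

rate = asymptotic_key_rate(50e3, 0.0322)   # hypothetical 50 kbps sifted rate
```

At a QBER of 3.22% the entropy penalty is modest; the bound goes to zero near the familiar 11% threshold.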
Space-time separation of electronic correlations
NASA Astrophysics Data System (ADS)
Tomczak, Jan M.; Schäfer, Thomas; Klebel, Benjamin; Toschi, Alessandro
While second-order phase transitions always cause strong nonlocal fluctuations, their effect on spectral properties crucially depends on the dimensionality. First, we show that for the important case of three dimensions the electron self-energy is well separable into a local dynamical part and static nonlocal contributions. In particular, using the dynamical vertex approximation for the doped 3D Hubbard model, we demonstrate that the quasiparticle weight remains essentially momentum independent, despite overall large nonlocal corrections to the self-energy when approaching the spin-ordered state. This generalizes earlier empirical findings of this property in the iron pnictides and transition metal oxides based on Hedin's GW approximation. With this insight, we here propose a ''space-time-separated'' scheme for many-body perturbation theory that is up to ten times more efficient than current implementations. Finally, we discuss limits of the space-time separation of correlation effects by studying the crossover from three to two dimensions.
Covariant non-commutative space-time
NASA Astrophysics Data System (ADS)
Heckman, Jonathan J.; Verlinde, Herman
2015-05-01
We introduce a covariant non-commutative deformation of 3+1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space-time isometries. The non-commutative algebra is defined on space-times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes an active part in the non-commutative algebra, which for dS4 takes the form of so(5, 1), while for AdS4 it assembles into so(4, 2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.
NASA Astrophysics Data System (ADS)
Géré, Antoine; Hack, Thomas-Paul; Pinamonti, Nicola
2016-05-01
We develop a renormalisation scheme for time-ordered products in interacting field theories on curved space-times that consists of an analytic regularisation of Feynman amplitudes and a minimal subtraction of the resulting pole parts. This scheme is directly applicable to space-times with Lorentzian signature, manifestly generally covariant, invariant under any space-time isometries present, and constructed to all orders in perturbation theory. Moreover, the scheme correctly captures the nongeometric state-dependent contribution of Feynman amplitudes, and it is well suited for practical computations. To illustrate this last point, we compute explicit examples on a generic curved space-time and demonstrate how momentum space computations in cosmological space-times can be performed in our scheme. In this work, we discuss only scalar fields in four space-time dimensions, but we argue that the renormalisation scheme can be directly generalised to other space-time dimensions and field theories with higher spin as well as to theories with local gauge invariance.
Hypermotion due to space-time deformation
NASA Astrophysics Data System (ADS)
Fil'Chenkov, Michael; Laptev, Yuri
2016-03-01
A superluminal motion (hypermotion) via M. Alcubierre's warp drive is considered, and the parameters of the warp drive are estimated. The equations of the starship geodesics have been solved. The starship velocity is shown to exceed the speed of light, while its local velocity relative to the deformed space-time remains below it. Hawking radiation proves not to affect the ship interior considerably. Difficulties related to a practical realization of hypermotion are indicated.
The Space-Time Model According to Dimensional Continuous Space-Time Theory
NASA Astrophysics Data System (ADS)
Martini, Luiz Cesar
2014-04-01
This article builds on the Dimensional Continuous Space-Time Theory, whose introduction was presented in [1]. A theoretical model of the continuous space-time is presented. The wave equation of time in an absolutely stationary empty-space referential is described in detail. The complex time, that is, the time fixed in the referential of infinite phase-time speed, is deduced from the New View of Relativity Theory, which is being submitted simultaneously with this article to this congress. Finally, considering the inseparable space-time, the wave-particle duality equation is presented.
Optical Properties of Quantum Vacuum. Space-Time Engineering
Gevorkyan, A. S.; Gevorkyan, A. A.
2011-03-28
The propagation of electromagnetic waves in the vacuum is considered, taking quantum fluctuations into account within Maxwell-Langevin (ML) type stochastic differential equations. For a 'white noise' model of the fluctuations, a second-order partial differential equation describing the quantum distribution of virtual particles in vacuum is obtained from the ML equations. It is proved that, in order to satisfy observed facts such as the Lamb shift, the virtual particles should be quantized in the unperturbed vacuum. It is shown that the quantized virtual particles are largely (approximately 86 percent) condensed on the 'ground state' energy level. It is proved that the extension of Maxwell electrodynamics to include quantum vacuum fluctuations may be constructed on a 6D space-time continuum, where 4D is the Minkowski space-time and 2D is a compactified subspace. The vacuum's refractive indices under the influence of external electromagnetic fields are studied in detail.
Space-time complexity in Hamiltonian dynamics.
Afraimovich, V; Zaslavsky, G M
2003-06-01
New notions of the complexity function C(ε; t, s) and entropy function S(ε; t, s) are introduced to describe systems with nonzero or zero Lyapunov exponents or systems that exhibit strong intermittent behavior with "flights," trappings, weak mixing, etc. The important part of the new notions is the first appearance of ε-separation of initially close trajectories. The complexity function is similar to the propagator p(t_0, x_0; t, x) with x replaced by the natural length s of trajectories, and its introduction does not assume space-time independence in the evolution of the system. Special stress is placed on the choice of variables: the replacements t → η = ln t and s → ξ = ln s make it possible to consider time-algebraic and space-algebraic complexity and some mixed cases. It is shown that in typical cases the entropy function S(ε; ξ, η) possesses invariants (α, β) that describe the fractal dimensions of the space-time structures of trajectories. The invariants (α, β) can be linked to the transport properties of the system, on one side, and to the Riemann invariants for simple waves, on the other. This analogy provides a new meaning for the transport exponent μ, which can be considered as the speed of a Riemann wave in the log-phase space of the log-space-time variables. Some other applications of the new notions are considered and numerical examples are presented. PMID:12777116
Null surfaces in static space-times
NASA Astrophysics Data System (ADS)
Vollick, Dan N.
2015-07-01
In this paper I consider surfaces in a space-time with a Killing vector ξ^α that is time-like and hypersurface-orthogonal on one side of the surface. The Killing vector may be either time-like or space-like on the other side of the surface. It has been argued that the surface is null if ξ^α ξ_α → 0 as the surface is approached from the static region. This implies that, in a coordinate system adapted to ξ, surfaces with g_tt = 0 are null. In spherically symmetric space-times the condition g^rr = 0 instead of g_tt = 0 is sometimes used to locate null surfaces. In this paper I examine the arguments that lead to these two different criteria and show that both arguments are incorrect. A surface ξ^α ξ_α = const has a normal vector whose norm is proportional to ξ^α ξ_α. This leads to the conclusion that surfaces with ξ^α ξ_α = 0 are null. However, the proportionality factor generally diverges when g_tt = 0, leading to a different condition for the norm to be null. In static spherically symmetric space-times this condition gives g^rr = 0, not g_tt = 0. The problem with the condition g^rr = 0 is that the coordinate system is singular on the surface. One can either use a nonsingular coordinate system or examine the induced metric on the surface to determine whether it is null. Using these approaches it is shown that the correct criterion is g_tt = 0. I also examine the condition required for the surface to be nonsingular.
NASA Astrophysics Data System (ADS)
Bertolami, Orfeu
Since the nineteenth century, it has been known, through the work of Lobatchevski, Riemann, and Gauss, that spaces need not have vanishing curvature. This was certainly a revolution in its own right; however, from the point of view of these mathematicians, the space of our day-to-day experience, the physical space, was still an essentially a priori concept that preceded all experience and was independent of any physical phenomena. That was actually also the view of Newton and Kant with respect to time, even though, for these two space-time explorers, the world was Euclidean.
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1991-01-01
Introduced here is a novel technique which adds the dimension of time to the well-known back-propagation neural network algorithm. Several reasons are cited why the inclusion of automated spatial and temporal associations is crucial to effective systems modeling. An overview of other works which also model spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.
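The core idea, giving each synapse temporal structure rather than a single scalar weight, can be sketched as follows. This toy unit uses a tap-delay (FIR) synapse and a finite-difference gradient step; it illustrates the concept only and is not the authors' algorithm:

```python
import math

class SpaceTimeNeuron:
    """Toy unit whose synapse is a tap-delay (FIR) filter over the last
    n_taps inputs, so it associates across time as well as space.
    A sketch of the idea only, not the algorithm of the paper."""

    def __init__(self, n_taps=3):
        self.w = [0.1] * n_taps

    def forward(self, seq):
        """Output at step t mixes the last n_taps inputs through tanh."""
        out = []
        for t in range(len(seq)):
            s = sum(self.w[k] * seq[t - k]
                    for k in range(len(self.w)) if t - k >= 0)
            out.append(math.tanh(s))
        return out

    def train_step(self, seq, target, lr=0.1, eps=1e-5):
        """One finite-difference gradient-descent step on squared error;
        returns the loss before the step."""
        def loss():
            return sum((o - y) ** 2 for o, y in zip(self.forward(seq), target))
        base = loss()
        grads = []
        for k in range(len(self.w)):
            self.w[k] += eps
            grads.append((loss() - base) / eps)
            self.w[k] -= eps
        for k, g in enumerate(grads):
            self.w[k] -= lr * g
        return base

# Teach the unit to fire on the rising edge of a repeating binary pattern.
seq = [0, 0, 1, 1, 0, 0, 1, 1]
target = [0, 0, 0.9, 0.5, 0, 0, 0.9, 0.5]
neuron = SpaceTimeNeuron()
losses = [neuron.train_step(seq, target) for _ in range(300)]
```

Because the response at time t depends on the recent input history, the unit can discriminate temporal patterns that a memoryless perceptron cannot.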
NASA Astrophysics Data System (ADS)
García, José A.; Alvarez, Samantha; Flores, Alejandro; Govezensky, Tzipe; Bobadilla, Juan R.; José, Marco V.
2004-10-01
The genetic code is considered to be universal. In order to test whether some statistical properties of the coding bacterial genome are due to inherent properties of the genetic code, we compared the autocorrelation function, the scaling properties, and the maximum entropy of the distribution of distances of amino acids in sequences obtained by translating protein-coding regions of the genome of Borrelia burgdorferi under different genetic codes. Overall, our results indicate that these properties are very stable to perturbations made by altering the genetic code. We also discuss the likely evolutionary implications of the present results.
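The kind of perturbation experiment described can be sketched directly. Below, a sequence is translated under the standard code and under a hypothetical altered code (swapping the codon assignments of leucine and isoleucine), and the amino-acid distance distribution is extracted; the toy gene is illustrative, not a Borrelia sequence:

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code (NCBI translation table 1); codons ordered
# TTT, TTC, TTA, TTG, TCT, ... with the first base varying slowest.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
STANDARD = {a + b + c: aa for (a, b, c), aa in zip(product(BASES, repeat=3), AA)}

# A hypothetical altered code for illustration: swap the codon assignments
# of leucine (L) and isoleucine (I).
ALTERED = {c: {"L": "I", "I": "L"}.get(aa, aa) for c, aa in STANDARD.items()}

def translate(dna, code):
    return "".join(code[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

def distance_distribution(protein, aa):
    """Distances between consecutive occurrences of one amino acid."""
    pos = [i for i, x in enumerate(protein) if x == aa]
    return [b - a for a, b in zip(pos, pos[1:])]

gene = "ATGTTTCTTATAGGCCTTTTTATA" * 3      # toy coding sequence
std_prot = translate(gene, STANDARD)
alt_prot = translate(gene, ALTERED)
```

Repeating statistics such as the distance distribution across many such code perturbations is the essence of the stability test reported in the abstract.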
Syndrome Surveillance Using Parametric Space-Time Clustering
KOCH, MARK W.; MCKENNA, SEAN A.; BILISOLY, ROGER L.
2002-11-01
As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect the patient's privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a ''health-index'' of the people living in this area. As a surrogate to bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic up to 21 to 28 days earlier than a conventional periodic regression technique. We have also tested using simulated anthrax attack data on top of a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.
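The SPRT framework itself is compact. The sketch below applies it to daily case counts with illustrative Poisson likelihoods for the endemic and outbreak rates; the actual system operates on space-time cluster significance over aggregated data rather than raw counts:

```python
import math

def sprt_poisson(counts, mu0, mu1, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on daily case counts: H0 is the
    endemic Poisson rate mu0, H1 the outbreak rate mu1 > mu0. Returns the
    decision and the day on which it was reached."""
    upper = math.log((1 - beta) / alpha)    # cross above: accept H1
    lower = math.log(beta / (1 - alpha))    # cross below: accept H0
    llr = 0.0
    for day, n in enumerate(counts, start=1):
        # log-likelihood ratio of one Poisson count (factorials cancel)
        llr += n * math.log(mu1 / mu0) - (mu1 - mu0)
        if llr >= upper:
            return "outbreak", day
        if llr <= lower:
            return "normal", day
    return "undecided", len(counts)

decision, day = sprt_poisson([6, 14, 18, 21], mu0=5.0, mu1=15.0)
```

The sequential character is what enables early detection: the test can declare an outbreak as soon as the accumulated evidence crosses the threshold, rather than waiting for a fixed sample.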
Emergence of space-time from topologically homogeneous causal networks
NASA Astrophysics Data System (ADS)
Mauro D'Ariano, Giacomo; Tosini, Alessandro
2013-08-01
In this paper we study the emergence of Minkowski space-time from a discrete causal network representing a classical information flow. Unlike previous approaches, we require the network to be topologically homogeneous, so that the metric is derived from pure event-counting. Emergence from events has an operational motivation in requiring that every physical quantity, including space-time, be defined through precise measurement procedures. Topological homogeneity is a requirement for having the space-time metric emerge from the pure topology of causal connections, whereas physical homogeneity corresponds to the universality of the physical law. We analyze in detail the case of 1+1 dimensions. If we consider the causal connections as an exchange of classical information, we can establish coordinate systems via an Einsteinian protocol, and this leads to a digital version of the Lorentz transformations. In a computational analogy, the foliation construction can be regarded as the synchronization with a global clock of the calls to independent subroutines (corresponding to the causally independent events) in a parallel distributed computation. Thus the Lorentz time-dilation emerges as an increased density of leaves within a single tick of a clock, whereas space-contraction results from the corresponding decrease in the density of events per leaf. The operational procedure of building up the coordinate system introduces an in-principle indistinguishability between neighboring events, resulting in a network that is coarse-grained, the thickness of an event being a function of the observer's clock. The illustrated simple classical construction can be extended to space dimensions greater than one, at the price of anisotropy of the maximal speed, due to the Weyl-tiling problem. This issue is cured if the causal network is quantum, as e.g. in a quantum cellular automaton, and isotropy is recovered by quantum coherence via superposition of causal paths. We thus argue
Space-time geometry of topological phases
Burnell, F.J.; Simon, Steven H.
2010-11-15
The 2+1 dimensional lattice models of Levin and Wen (2005) provide the most general known microscopic construction of topological phases of matter. Because these models are based heavily on the mathematical structure of category theory, many of their special properties are not obvious. In the current paper, we present a geometrical space-time picture of the partition function of the Levin-Wen models, which can be described as doubles (two copies with opposite chiralities) of underlying anyon theories. Our space-time picture describes the partition function as a knot invariant of a complicated link, where both the lattice variables of the microscopic Levin-Wen model and the terms of the Hamiltonian are represented as labeled strings of this link. This complicated link, previously studied in the mathematical literature and known as Chain-Mail, can be related directly to known topological invariants of 3-manifolds such as the Turaev-Viro invariant and the Witten-Reshetikhin-Turaev invariant. We further consider quasi-particle excitations of the Levin-Wen models and show how they can be understood by adding additional strings to the Chain-Mail link representing quasi-particle world-lines. Our construction gives particularly important new insight into how a doubled theory arises from these microscopic models.
Casimir energy in Kerr space-time
NASA Astrophysics Data System (ADS)
Sorge, F.
2014-10-01
We investigate the vacuum energy of a scalar massless field confined in a Casimir cavity moving in a circular equatorial orbit in the exact Kerr space-time geometry. We find that both the orbital motion of the cavity and the underlying space-time geometry conspire in lowering the absolute value of the (renormalized) Casimir energy ⟨ɛvac⟩ren , as measured by a comoving observer, with respect to whom the cavity is at rest. This, in turn, causes a weakening in the attractive force between the Casimir plates. In particular, we show that the vacuum energy density ⟨ɛvac⟩ren→0 when the orbital path of the Casimir cavity comes close to the corotating or counter-rotating circular null orbits (possibly geodesic) allowed by the Kerr geometry. Such an effect could be of some astrophysical interest on relevant orbits, such as the Kerr innermost stable circular orbits, being potentially related to particle confinement (as in some interquark models). The present work generalizes previous results obtained by several authors in the weak field approximation.
On the binary weight distribution of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
Consider an (n,k) linear code with symbols from GF(2^m). If each code symbol is represented by an m-tuple over GF(2) using a certain basis for GF(2^m), a binary (nm, km) linear code is obtained. The weight distribution of binary linear codes obtained in this manner is investigated. Weight enumerators are presented for the binary linear codes obtained from Reed-Solomon codes over GF(2^m) generated by the polynomials (X-α), (X-1)(X-α), (X-α)(X-α²), and (X-1)(X-α)(X-α²), and from their extended codes, where α is a primitive element of GF(2^m). Binary codes derived from Reed-Solomon codes are often used for correcting multiple bursts of errors.
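The symbol-to-binary mapping is easy to reproduce in a small field. The sketch below enumerates the (7, 6) Reed-Solomon code over GF(8) generated by g(X) = X + α and tallies the binary weight distribution of the resulting (21, 18) code; GF(8) is chosen only to keep the enumeration small, not because it appears in the paper:

```python
from itertools import product

# GF(8) built on the primitive polynomial x^3 + x + 1; alpha = 0b010.
def gf8_mul(a, b):
    p = 0
    for _ in range(3):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
    return p

ALPHA = 0b010
MUL_A = [gf8_mul(ALPHA, x) for x in range(8)]   # multiply-by-alpha table
POPC = [bin(x).count("1") for x in range(8)]    # binary weight of a symbol

def binary_weight_distribution():
    """Binary weight distribution of the (7,6) Reed-Solomon code over GF(8)
    generated by g(X) = X + alpha, viewed as a (21,18) binary code via the
    polynomial basis. All 8**6 messages are enumerated."""
    dist = [0] * 22
    for m in product(range(8), repeat=6):
        # c(X) = m(X) * (X + alpha): convolve m with [alpha, 1]
        c = [MUL_A[m[0]]]
        c += [m[i - 1] ^ MUL_A[m[i]] for i in range(1, 6)]
        c.append(m[5])
        dist[sum(POPC[s] for s in c)] += 1
    return dist

dist = binary_weight_distribution()
```

The paper's contribution is closed-form weight enumerators for such binary images, which brute-force enumeration can only confirm for small fields like this one.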
Space-time structure of climate variability
NASA Astrophysics Data System (ADS)
Laepple, Thomas; Reschke, Maria; Huybers, Peter; Rehfeld, Kira
2016-04-01
The spatial scale of climate variability is closely linked to the temporal scale. Whereas fast variations such as weather are regional, glacial-interglacial cycles appear to be globally coherent. Quantifying the relationship between local and large-scale climate variations is essential for mapping the extent of past climate changes. Larger spatial scales of climate variations on longer time scales are expected if one views the atmosphere and oceans as primarily diffusive with respect to heat. On the other hand, the interaction of a dynamical system with spatially variable boundary conditions --- for example: topography, gradients in insolation, and variations in rotational effects --- will lead to spatially heterogeneous structures that are largely independent of time scale. It has been argued that the increase in spatial scales continues across all time scales [Mitchell, 1976], but up to now, the space-time structure of variations beyond the decadal scale is basically unexplored. Here, we attempt to estimate the spatial extent of temperature changes up to millennial time-scales using instrumental observations, paleo-observations and climate model simulations. Although instrumental and climate model data show an increase in spatial scale towards slower variations, paleo-proxy data, if interpreted as temperature signals, lead to ambiguous results. An analysis of a global Holocene stack [Marcott et al., 2013], for example, suggests a jump towards more localized patterns when leaving the instrumental time scale. Localization contradicts physical expectations and may instead reflect the presence of various types of noise. Turning the problem around, and imposing a consistent space-time structure across instruments and proxy records allows us to constrain the interpretation of the climate signal in proxy records. In the case of the Holocene stack, preliminary results suggest that the time-uncertainty on the Holocene records would have to be much larger than published in
Space-time formulation for finite element modeling of superconductors
Ashworth, Stephen P; Grilli, Francesco; Sirois, Frederic; Laforest, Marc
2008-01-01
In this paper we present a new model for computing the current density and field distributions in superconductors by means of a periodic space-time formulation for finite elements (FE). By treating time as an additional space dimension, we can use a static model to solve a time-dependent problem. This makes it possible to overcome one of the major problems of FE modeling of superconductors: the length of the simulations, even for relatively simple cases. We present our first results and compare them to those obtained with a 'standard' time-dependent method and with analytical solutions.
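The essence of the formulation, solving all time steps at once as a single static system with periodic coupling in time, can be illustrated with a finite-difference analogue (1D diffusion with a time-periodic source; grid sizes and coefficients are illustrative, and this is not the superconductor model itself):

```python
import math

def gauss(A, b):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def solve_space_time(nx=8, nt=8, D=1.0, L=1.0, T=1.0):
    """u_t = D u_xx with u = 0 at both ends of x and a time-periodic source,
    discretized on the whole (x, t) grid at once: ONE static linear system
    with periodic coupling in t, a finite-difference analogue of the
    space-time idea."""
    dx, dt = L / (nx + 1), T / nt
    n = nx * nt
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    idx = lambda i, j: (j % nt) * nx + i        # periodic time index j
    for j in range(nt):
        for i in range(nx):
            r = idx(i, j)
            # backward-Euler time derivative couples to the previous slice
            A[r][idx(i, j)] += 1.0 / dt + 2.0 * D / dx ** 2
            A[r][idx(i, j - 1)] -= 1.0 / dt
            if i > 0:
                A[r][idx(i - 1, j)] -= D / dx ** 2
            if i < nx - 1:
                A[r][idx(i + 1, j)] -= D / dx ** 2
            x_pos, t_pos = (i + 1) * dx, j * dt
            b[r] = math.sin(math.pi * x_pos) * (1.0 + math.cos(2 * math.pi * t_pos / T))
    u = gauss(A, b)
    return A, b, u

A, b, u = solve_space_time()
residual = max(abs(sum(Ar[k] * u[k] for k in range(len(u))) - br)
               for Ar, br in zip(A, b))
```

One solve yields the periodic steady state directly, instead of time-stepping through many cycles until transients decay, which is the speed-up the space-time formulation targets.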
Space-Time in Quantum Gravity: Does Space-Time have Quantum Properties?
NASA Astrophysics Data System (ADS)
Hedrich, Reiner
The conceptual incompatibility between General Relativity and Quantum Mechanics is generally seen as sufficient motivation for the development of a theory of Quantum Gravity. If -- so a typical argument goes -- Quantum Mechanics gives a universally valid basis for the description of the dynamical behavior of all natural systems, then the gravitational field should have quantum properties, like all other fundamental interaction fields. And if General Relativity can be seen as an adequate description of the classical aspects of gravity and space-time -- and their mutual relation -- this leads, together with the rather convincing arguments against semi-classical theories of gravity, to a strategy which takes a quantization of General Relativity as the natural avenue to a theory of Quantum Gravity. And because in General Relativity, the gravitational field is represented by the space-time metric, a quantization of the gravitational field would in some sense correspond to a quantization of geometry. Space-time would have quantum properties...
Phantom space-times in fake supergravity
NASA Astrophysics Data System (ADS)
Bu Taam, Maryam; Sabra, Wafic A.
2015-12-01
We discuss phantom metrics admitting Killing spinors in fake N = 2, D = 4 supergravity coupled to vector multiplets. The Abelian U (1) gauge fields in the fake theory have kinetic terms with the wrong sign. We solve the Killing spinor equations for the standard and fake theories in a unified fashion by introducing a parameter which distinguishes between the two theories. The solutions found are fully determined in terms of algebraic conditions, the so-called stabilisation equations, in which the symplectic sections are related to a set of functions. These functions are harmonic in the case of the standard supergravity theory and satisfy the wave-equation in flat (2 + 1)-space-time in the fake theory. Explicit examples are given for the minimal models with quadratic prepotentials.
Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks
ERIC Educational Resources Information Center
Yu, Chao
2013-01-01
In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…
SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters
Leal, L.C.
1995-01-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
A New Solution of Distributed Disaster Recovery Based on Raptor Code
NASA Astrophysics Data System (ADS)
Deng, Kai; Wang, Kaiyun; Ma, Danyang
To address the high cost and low data availability of multi-node storage, and the poor intrusion tolerance of traditional disaster recovery based on simple replication, this paper puts forward a distributed disaster recovery scheme based on Raptor codes. The article introduces the principle of Raptor codes, analyses their coding advantages, and compares this solution with traditional solutions in terms of redundancy, data availability, and intrusion tolerance. The results show that the distributed disaster recovery solution based on Raptor codes achieves higher data availability as well as better intrusion tolerance under the premise of lower redundancy.
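Raptor codes build on LT (fountain) codes: each coded packet is the XOR of a random subset of source blocks, so any sufficiently large subset of packets can reconstruct the data, which is what yields high availability at low redundancy. The following is a minimal illustrative sketch under simplifying assumptions (integer-valued blocks, a uniform degree distribution, no outer pre-code); real Raptor codes use a robust soliton degree distribution plus a pre-code, and all function names here are illustrative.

```python
import random

def lt_encode(blocks, num_packets, seed=0):
    """Fountain-style encoding: each packet is the XOR of a random subset
    of source blocks (a real Raptor code adds an outer pre-code and draws
    degrees from a robust soliton distribution)."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(num_packets):
        degree = rng.randint(1, k)
        idxs = frozenset(rng.sample(range(k), degree))
        value = 0
        for i in idxs:
            value ^= blocks[i]
        packets.append((idxs, value))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: substitute already-known blocks until packets of
    degree one reveal new blocks. Undecoded positions come back as None."""
    pkts = [[set(idxs), value] for idxs, value in packets]
    decoded = {}
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for p in pkts:
            for i in [i for i in p[0] if i in decoded]:
                p[0].discard(i)
                p[1] ^= decoded[i]
            if len(p[0]) == 1:
                i = next(iter(p[0]))
                if i not in decoded:
                    decoded[i] = p[1]
                    progress = True
    return [decoded.get(i) for i in range(k)]
```

Because any storage node holds packets that mix many source blocks, losing a few nodes (or having them compromised) still leaves enough packets to peel the data back out, which is the intrusion-tolerance argument of the abstract.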
Joint Channel-Network Coding (JCNC) for Distributed Storage in Wireless Network
NASA Astrophysics Data System (ADS)
Wang, Ning; Lin, Jiaru
We propose to construct a joint channel-network coding scheme (known as random linear coding) based on improved turbo codes for distributed storage in a wireless communication network with k data nodes and s storage nodes (k < s), extending distributed storage from the erasure channel to AWGN and fading channel scenarios. We investigate how the throughput performance of the Joint Channel-Network Coding (JCNC) system benefits from network coding, compared with that of a system without network coding based only on the store-and-forward (S-F) approach. Another relevant parameter, the node degree (L), indicates how many storage nodes one data packet should fall onto; L characterizes the en/decoding complexity of the system. Moreover, the proposed framework can easily be extended to ad-hoc and sensor networks.
Beyond Archimedean Space-Time Structure
NASA Astrophysics Data System (ADS)
Rosinger, Elemér E.; Khrennikov, Andrei
2011-03-01
It took two millennia after Euclid, until the early 1880s, to go beyond the ancient axiom of parallels and inaugurate geometries of curved spaces. In less than one more century, General Relativity followed. At present, physical thinking is still beholden to the yet deeper and equally ancient Archimedean assumption. In view of that, it is argued, with some rather easily accessible mathematical support, that Theoretical Physics may at last venture into the non-Archimedean realms. In this introductory paper we stress two fundamental consequences of the non-Archimedean approach to Theoretical Physics: one for quantum theory and another for relativity theory. From the non-Archimedean viewpoint, the assumption of the existence of minimal quanta of light (of fixed frequency) is an artifact of the present Archimedean mathematical basis of quantum mechanics. In the same way, the assumption of the existence of a maximal velocity, the velocity of light, is a feature of the real space-time structure, which is fundamentally Archimedean. Neither assumption is justified in the corresponding non-Archimedean models.
On Adler space-time extremes during ocean storms
NASA Astrophysics Data System (ADS)
Romolo, Alessandra; Arena, Felice
2015-04-01
The paper concerns the statistical properties of extreme ocean waves in the space-time domain. In this regard, a solution for the exceedance probability of the maximum crest height during a sea state over a certain area is obtained. The approach is based on Adler's solution for the extremal probability of Gaussian random processes in a multidimensional domain. The method is able to include the effects of spatial variability of three-dimensional sea waves on short-term prediction, both over an assigned area XY and in a given direction. Next, storm-term predictions in space-time are investigated. For this purpose, the exceedance probability of the maximum crest height η_max during an ocean storm over an assigned area A is derived. This solution generalizes to space-time Borgman's time-based model for nonstationary processes. The validity of the model is assessed from wave data of two buoys of the NOAA-NDBC network located along the Pacific and Atlantic U.S. coasts. The results show that the size of the spatial domain A remarkably influences the expected maximum crest height during a sea storm. Indeed, the exceedance probabilities of the maximum crest height during an ocean storm over a certain area significantly deviate from the classical Borgman model in time as the area increases. Then, to account for nonlinear contributions to crest height, the proposed model is exploited jointly with Forristall's distribution for nonlinear crest amplitudes in a given sea state. Finally, Monte Carlo simulations of a sea storm are performed, showing very good agreement with theoretical results.
Utilities for master source code distribution: MAX and Friends
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray), or different operating systems (for example, VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed in several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
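The core idea of a master-source preprocessor can be sketched in a few lines: lines carrying a version tag are emitted only when that tag is active, everything else passes through. The directive syntax below (`*TAG` at the start of a line) is a hypothetical stand-in, not MAX's actual directive language, and `expand_master` is an illustrative name.

```python
def expand_master(master_lines, active_tags):
    """Produce one compilable instance from a master source.

    Hypothetical directive syntax (not MAX's actual one): a line that
    begins with '*TAG ' is kept, with the tag stripped, only when TAG
    is in active_tags; untagged lines pass through unchanged.
    """
    out = []
    for line in master_lines:
        if line.startswith('*'):
            tag, _, rest = line[1:].partition(' ')
            if tag in active_tags:
                out.append(rest)
        else:
            out.append(line)
    return out
```

Running the same master through `expand_master` with different tag sets yields the per-machine or per-precision instances, which is how a single master copy avoids the version lag problem.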
Parameter Estimation for a Model of Space-Time Rainfall
NASA Astrophysics Data System (ADS)
Smith, James A.; Karr, Alan F.
1985-08-01
In this paper, parameter estimation procedures, based on data from a network of rainfall gages, are developed for a class of space-time rainfall models. The models, which are designed to represent the spatial distribution of daily rainfall, have three components, one that governs the temporal occurrence of storms, a second that distributes rain cells spatially for a given storm, and a third that determines the rainfall pattern within a rain cell. Maximum likelihood and method of moments procedures are developed. We illustrate that limitations on model structure are imposed by restricting data sources to rain gage networks. The estimation procedures are applied to a 240-mi2 (621 km2) catchment in the Potomac River basin.
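As a toy illustration of the method-of-moments idea, consider a much simpler occurrence model than the paper's three-component one: storm occurrence on each day is Bernoulli(p) and wet-day totals are exponential with mean mu. Both parameters are then estimated by matching sample moments. This is a sketch under those stated assumptions, not the paper's estimator, and the function name is illustrative.

```python
def estimate_rainfall_params(daily_rain, threshold=0.0):
    """Method-of-moments estimates for a toy daily-rainfall model:
    occurrence ~ Bernoulli(p), wet-day depth ~ Exponential(mean mu).
    The paper's model adds spatial components estimated from a gage network."""
    wet = [r for r in daily_rain if r > threshold]
    p_hat = len(wet) / len(daily_rain)           # fraction of wet days
    mu_hat = sum(wet) / len(wet) if wet else 0.0  # mean wet-day depth
    return p_hat, mu_hat
```

For the Bernoulli and exponential families the method-of-moments and maximum-likelihood estimates coincide, which is why the two procedures in the abstract can be compared on equal footing for the occurrence component.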
Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding
Kronberg, D. A.; Molotkov, S. N.
2010-07-15
A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.
Source coding with escort distributions and Rényi entropy bounds
NASA Astrophysics Data System (ADS)
Bercher, J.-F.
2009-08-01
We discuss the interest of escort distributions and Rényi entropy in the context of source coding. We first recall a source coding theorem by Campbell relating a generalized measure of length to the Rényi-Tsallis entropy. We show that the associated optimal codes can be obtained using considerations on escort-distributions. We propose a new family of measure of length involving escort-distributions and we show that these generalized lengths are also bounded below by the Rényi entropy. Furthermore, we obtain that the standard Shannon codes lengths are optimum for the new generalized lengths measures, whatever the entropic index. Finally, we show that there exists in this setting an interplay between standard and escort distributions.
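The two objects the abstract works with are straightforward to compute. The escort distribution of order q reweights a distribution as P_i = p_i^q / Σ_j p_j^q, and the Rényi entropy of order α generalizes Shannon entropy (recovered as α → 1). A minimal sketch:

```python
import math

def escort(p, q):
    """Escort distribution of order q: P_i = p_i**q / sum_j p_j**q."""
    w = [pi ** q for pi in p]
    s = sum(w)
    return [wi / s for wi in w]

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha, in bits; alpha -> 1 gives Shannon entropy."""
    if alpha == 1:
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)
```

Note that the q = 0 escort of any full-support distribution is uniform, and for a uniform source every Rényi order gives the same value log2 n, consistent with the entropy bounds discussed above.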
TEMPEST code simulations of hydrogen distribution in reactor containment structures. Final report
Trent, D.S.; Eyler, L.L.
1985-03-01
The mass transport version of the TEMPEST computer code was used to simulate hydrogen distribution in geometric configurations relevant to reactor containment structures. Predicted results of Battelle-Frankfurt hydrogen distribution tests 1 to 6, and 12 are presented. Agreement between predictions and experimental data is good. Best agreement is obtained using the k-epsilon turbulence model in TEMPEST in flow cases where turbulent diffusion and stable stratification are dominant mechanisms affecting transport. The code's general analysis capabilities are summarized.
Parallel Processing of Distributed Video Coding to Reduce Decoding Time
NASA Astrophysics Data System (ADS)
Tonomura, Yoshihide; Nakachi, Takayuki; Fujii, Tatsuya; Kiya, Hitoshi
This paper proposes a parallelized DVC framework that treats each bitplane independently to reduce the decoding time. Unfortunately, simple parallelization generates inaccurate bit probabilities because additional side information is not available for the decoding of subsequent bitplanes, which degrades encoding efficiency. Our solution is an effective estimation method that calculates the bit probability as accurately as possible by index assignment, without recourse to side information. Moreover, we improve the coding performance of Rate-Adaptive LDPC (RA-LDPC), which is used in the parallelized DVC framework. This proposal selects a fitting sparse matrix for each bitplane according to syndrome rate estimation at the encoder side. Simulations show that our parallelization method reduces the decoding time by up to 35% and achieves a bit rate reduction of about 10%.
Physics in space-time with scale-dependent metrics
NASA Astrophysics Data System (ADS)
Balankin, Alexander S.
2013-10-01
We construct a three-dimensional space R^3_γ with a scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with scale-dependent fractal (D_H) and spectral (D_S) dimensions. Local derivatives based on scale-dependent metrics are defined and a differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of the flat space-time M^4_{1/3,1} with a scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 on scales ℓ ≪ ℓ_0 to D_H = 4 in the infrared limit ℓ ≫ ℓ_0, where ℓ_0 is a characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 on all scales. Possible applications of the scale-dependent-metric approach to systems of different nature are briefly discussed.
Context-based lossless image compression with optimal codes for discretized Laplacian distributions
NASA Astrophysics Data System (ADS)
Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin
2003-05-01
Lossless image compression has become an important research topic, especially in relation with the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied for the quantized Laplacian sources which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution the Huffman iterations are possible to be carried out analytically, using the concept of reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction, for noiseless compression of images. To reduce further the average code length, we design Escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when comparing with JPEG-LS.
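The classical table-free codes for one-sided geometric sources are the Golomb codes (the two-sided, Laplacian-like case in the abstract is handled analogously tail by tail). A Golomb codeword is a unary quotient followed by a truncated-binary remainder, so encoding needs only arithmetic, no stored code tables; the sketch below shows the standard construction, not the paper's specific two-tail variant.

```python
def golomb_encode(n, m):
    """Golomb codeword (as a bit string) for integer n >= 0 with parameter m:
    unary-coded quotient, then truncated-binary remainder. Optimal for
    one-sided geometric sources; no coding tables are stored."""
    q, r = divmod(n, m)
    bits = '1' * q + '0'              # unary part for the quotient
    b = (m - 1).bit_length()          # ceil(log2 m)
    if b == 0:                        # m == 1: no remainder bits needed
        return bits
    cutoff = (1 << b) - m             # truncated-binary threshold
    if r < cutoff:
        return bits + format(r, 'b').zfill(b - 1)
    return bits + format(r + cutoff, 'b').zfill(b)
```

When m is a power of two this reduces to the Rice code used in JPEG-LS, where the remainder always takes log2 m bits.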
Computer code for the calculation of the temperature distribution of cooled turbine blades
NASA Astrophysics Data System (ADS)
Tietz, Thomas A.; Koschel, Wolfgang W.
A generalized computer code for the calculation of the temperature distribution in a cooled turbine blade is presented. Using an iterative procedure, this program especially allows the coupling of the aerothermodynamic values of the internal flow with the corresponding temperature distribution of the blade material. The temperature distribution of the turbine blade is calculated using a fully three-dimensional finite element computer code, so that the radial heat flux is taken into account. This code was extended to 4-node tetrahedral elements enabling an adaptive grid generation. To facilitate the mesh generation of the usually complex blade geometries, a computer program was developed, which performs the grid generation of blades having basically arbitrary shape on the basis of two-dimensional cuts. The performance of the code is demonstrated with reference to a typical cooling configuration of a modern turbine blade.
Analysis of a mixed space-time diffusion equation
NASA Astrophysics Data System (ADS)
Momoniat, Ebrahim
2015-06-01
An energy method is used to analyze the stability of solutions of a mixed space-time diffusion equation that has application in the unidirectional flow of a second-grade fluid and the distribution of a compound Poisson process. Solutions to the model equation satisfying Dirichlet boundary conditions are proven to dissipate total energy and are hence stable. The stability of asymptotic solutions satisfying Neumann boundary conditions coincides with the condition for the positivity of numerical solutions of the model equation from a Crank-Nicolson scheme. The Crank-Nicolson scheme is proven to yield stable numerical solutions for both Dirichlet and Neumann boundary conditions for positive values of the critical parameter. Numerical solutions are compared to analytical solutions that are valid on a finite domain.
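A Crank-Nicolson step of the kind analyzed above is easy to exhibit for the classical heat equation; the sketch below is an illustrative stand-in for the paper's mixed space-time diffusion equation (which it does not implement), assuming homogeneous Dirichlet ends and a uniform grid.

```python
import numpy as np

def crank_nicolson_step(u, dt, dx, nu=1.0):
    """One Crank-Nicolson step for the classical heat equation u_t = nu * u_xx
    with homogeneous Dirichlet boundary conditions: solve A u_new = B u,
    where A and B average the implicit and explicit second differences."""
    n = len(u)
    r = nu * dt / (2 * dx ** 2)
    A = np.eye(n) * (1 + 2 * r)
    B = np.eye(n) * (1 - 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    # pin the boundary values to zero (Dirichlet)
    A[0], A[-1] = 0, 0
    A[0, 0] = A[-1, -1] = 1
    B[0], B[-1] = 0, 0
    return np.linalg.solve(A, B @ u)
```

Consistent with the energy argument in the abstract, the discrete L2 norm of the solution is non-increasing from step to step for the Dirichlet problem.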
Linear operation of PRIZ space-time light modulators
NASA Astrophysics Data System (ADS)
Bryskin, L. I.; Korovin, L. I.; Petrov, M. P.
1984-08-01
A theory is presented for describing the dynamics of the field and charge distributions in a PRIZ space-time light modulator (STLM) using the internal transverse electrooptic effect. The PRIZ STLM consists of transparent electrodes deposited on the front and back sides of a photorefractive crystal wafer and operates at the writing (input) light wavelengths of 0.44 to 0.48 microns. The diffraction efficiency of the time-linear modulator is obtained for a case when the phase difference between the ordinary and the extraordinary rays is proportional to the exposure to the writing light. It is noted that a dielectric film placed between the sample and the metal electrode increases the diffraction efficiency at low frequencies, however requiring larger voltages to be applied. The efficiency is also analyzed with respect to the spatial modulation frequency of the writing light.
High-capacity quantum Fibonacci coding for key distribution
NASA Astrophysics Data System (ADS)
Simon, David S.; Lawrence, Nate; Trevino, Jacob; Dal Negro, Luca; Sergienko, Alexander V.
2013-03-01
Quantum cryptography and quantum key distribution (QKD) have been the most successful applications of quantum information processing, highlighting the unique capability of quantum mechanics, through the no-cloning theorem, to securely share encryption keys between two parties. Here, we present an approach to high-capacity, high-efficiency QKD by exploiting cross-disciplinary ideas from quantum information theory and the theory of light scattering of aperiodic photonic media. We propose a unique type of entangled-photon source, as well as a physical mechanism for efficiently sharing keys. The key-sharing protocol combines entanglement with the mathematical properties of a recursive sequence to allow a realization of the physical conditions necessary for implementation of the no-cloning principle for QKD, while the source produces entangled photons whose orbital angular momenta (OAM) are in a superposition of Fibonacci numbers. The source is used to implement a particular physical realization of the protocol by randomly encoding the Fibonacci sequence onto entangled OAM states, allowing secure generation of long keys from few photons. Unlike in polarization-based protocols, reference frame alignment is unnecessary, while the required experimental setup is simpler than other OAM-based protocols capable of achieving the same capacity and its complexity grows less rapidly with increasing range of OAM used.
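The mathematical property of the recursive sequence that the protocol leans on is Zeckendorf's theorem: every positive integer has a unique representation as a sum of non-consecutive Fibonacci numbers, so key values can be mapped onto sets of Fibonacci-valued OAM states. A greedy sketch of that decomposition (illustrative of the arithmetic only, not of the quantum protocol itself):

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition: represent n > 0 as a sum of
    non-consecutive Fibonacci numbers (the greedy choice is provably
    non-consecutive, since the remainder after taking F_k is < F_{k-1})."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    rep = []
    for f in reversed(fibs):
        if f <= n:
            rep.append(f)
            n -= f
    return rep
```

Because the representation is unique, the decomposition is invertible by simple summation, which is what makes it usable as a high-capacity alphabet.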
Complex phylogenetic distribution of a non-canonical genetic code in green algae
2010-01-01
Background A non-canonical nuclear genetic code, in which TAG and TAA have been reassigned from stop codons to glutamine, has evolved independently in several eukaryotic lineages, including the ulvophycean green algal orders Dasycladales and Cladophorales. To study the phylogenetic distribution of the standard and non-canonical genetic codes, we generated sequence data of a representative set of ulvophycean green algae and used a robust green algal phylogeny to evaluate different evolutionary scenarios that may account for the origin of the non-canonical code. Results This study demonstrates that the Dasycladales and Cladophorales share this alternative genetic code with the related order Trentepohliales and the genus Blastophysa, but not with the Bryopsidales, which is sister to the Dasycladales. This complex phylogenetic distribution whereby all but one representative of a single natural lineage possesses an identical deviant genetic code is unique. Conclusions We compare different evolutionary scenarios for the complex phylogenetic distribution of this non-canonical genetic code. A single transition to the non-canonical code followed by a reversal to the canonical code in the Bryopsidales is highly improbable due to the profound genetic changes that coincide with codon reassignment. Multiple independent gains of the non-canonical code, as hypothesized for ciliates, are also unlikely because the same deviant code has evolved in all lineages. Instead we favor a stepwise acquisition model, congruent with the ambiguous intermediate model, whereby the non-canonical code observed in these green algal orders has a single origin. We suggest that the final steps from an ambiguous intermediate situation to a non-canonical code have been completed in the Trentepohliales, Dasycladales, Cladophorales and Blastophysa but not in the Bryopsidales. We hypothesize that in the latter lineage an initial stage characterized by translational ambiguity was not followed by final
NASA Astrophysics Data System (ADS)
Muanenda, Yonas; Oton, Claudio J.; Faralli, Stefano; Di Pasquale, Fabrizio
2015-07-01
We propose and experimentally demonstrate a Distributed Acoustic Sensor exploiting cyclic Simplex coding in a phase-sensitive OTDR on standard single mode fibers based on direct detection. Suitable design of the source and use of cyclic coding is shown to improve the SNR of the coherent back-scattered signal by up to 9 dB, reducing fading due to modulation instability and enabling accurate long-distance measurement of vibrations with minimal post-processing.
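Simplex coding pulses the fiber with rows of an S-matrix derived from a Hadamard matrix and inverts the responses, trading many low-peak-power measurements for SNR. The sketch below shows the standard S-matrix construction; the paper's cyclic variant (cyclic shifts of a single codeword) is not reproduced here.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def simplex_matrix(n):
    """L x L S-matrix (L = n - 1): drop the first row and column of H_n and
    map +1 -> 0, -1 -> 1. Probing with the rows of S and inverting the
    responses gives an SNR (coding) gain of (L + 1) / (2 * sqrt(L))
    over single-pulse probing."""
    H = hadamard(n)
    return (1 - H[1:, 1:]) // 2
```

Each S-matrix row contains n/2 ones, so every coded shot launches half the slots, which is where the averaging gain comes from; for L = 255 the formula above gives roughly 8 dB, of the same order as the 9 dB improvement reported.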
Domain structure of black hole space-times
Harmark, Troels
2009-07-15
We introduce the domain structure for stationary black hole space-times. The domain structure lives on the submanifold of fixed points of the Killing vector fields. Depending on which Killing vector field has fixed points, the submanifold is naturally divided into domains. The domain structure provides invariants of the space-time, both topological and continuous. It is defined for any space-time dimension and any number of Killing vector fields. We examine the domain structure for asymptotically flat space-times and find a canonical form for the metric of such space-times. The domain structure generalizes the rod structure introduced for space-times with D-2 commuting Killing vector fields. We analyze in detail the domain structure for Minkowski space, the Schwarzschild-Tangherlini black hole and the Myers-Perry black hole in six and seven dimensions. Finally, we consider the possible domain structures for asymptotically flat black holes in six and seven dimensions.
Space-time transformation sky brightness at a horizontal position of the sun
NASA Astrophysics Data System (ADS)
Galileiskii, Viktor P.; Elizarov, Alexey I.; Kokarev, Dmitrii V.; Morozov, Aleksandr M.
2015-11-01
This report discusses simulation results for the angular distribution of sky brightness in the case of molecular scattering in the atmosphere, in support of the study of space-time changes of this distribution during civil twilight.
NASA Astrophysics Data System (ADS)
Li, Li; Hu, Xiao; Zeng, Rui
2007-11-01
The development of practical distributed video coding schemes builds on the information-theoretic bounds established in the 1970s by Slepian and Wolf for distributed lossless coding, and by Wyner and Ziv for lossy coding with decoder side information. In distributed video compression it is hard to accurately describe the non-stationary behavior of the virtual correlation channel between X and the side information Y, although it plays a very important role in overall system performance. In this paper, we implement a practical asymmetric Slepian-Wolf distributed video compression system using irregular LDPC codes. Moreover, by exploiting the dependencies of previously decoded bit planes from the video frame X and the side information Y, we present improved schemes that divide the data into regions of different reliability. Our simulation results show that schemes exploiting the dependencies between previously decoded bit planes achieve better overall encoding rate performance as the BER approaches zero. We also show that, compared with the BSC model, the BC channel model is more suitable for the distributed video compression scenario because of the non-stationary properties of the virtual correlation channel, and that adaptively estimating channel model parameters from previously decoded adjacent bit planes provides more accurate initial belief messages from the channel at the LDPC decoder.
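Slepian-Wolf coding with syndromes can be shown on a toy scale: instead of the paper's irregular LDPC codes, the sketch below uses the (7,4) Hamming parity-check matrix, and assumes the source X and the side information Y differ in at most one bit, so the encoder sends only a 3-bit syndrome instead of the 7-bit word. Function names are illustrative.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j holds the binary
# representation of j + 1, so a single-bit error's syndrome names its position.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def sw_encode(x):
    """Slepian-Wolf encoder: transmit only the 3-bit syndrome of the source x."""
    return H @ x % 2

def sw_decode(y, s):
    """Decoder with side information y, assuming x and y differ in <= 1 bit:
    H y + s = H (x xor y), and the syndrome locates the differing bit."""
    synd = (H @ y + s) % 2
    pos = synd[0] + 2 * synd[1] + 4 * synd[2]
    x_hat = y.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat
```

The same principle, with long irregular LDPC codes and soft belief propagation instead of hard Hamming decoding, underlies the system described in the abstract.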
Space-time correlations in urban sprawl.
Hernando, A; Hernando, R; Plastino, A
2014-02-01
Understanding demographic and migrational patterns constitutes a great challenge. Millions of individual decisions, motivated by economic, political, demographic, rational and/or emotional reasons, underlie the high complexity of demographic dynamics. Significant advances in quantitatively understanding such complexity have been registered in recent years, such as those involving the growth of cities, but many fundamental issues still defy comprehension. We present here compelling empirical evidence of a high level of regularity regarding time and spatial correlations in urban sprawl, unravelling patterns about the inertia in the growth of cities and their interaction with each other. By using one of the world's most exhaustive extant demographic databases--that of the Spanish Government's Institute INE, with records covering 111 years and (in 2011) 45 million people, distributed among more than 8000 population nuclei--we show that the inertia of city growth has a characteristic time of 15 years, and its interaction with the growth of other cities has a characteristic distance of 80 km. Distance is shown to be the main factor that entangles two cities (60% of total correlations). The power of our current social theories is thereby enhanced. PMID:24258159
Newman-Penrose constants of stationary electrovacuum space-times
Zhang Xiangdong; Gao Sijie; Wu Xiaoning
2009-05-15
A theorem related to the Newman-Penrose constants is proven. The theorem states that all the Newman-Penrose constants of asymptotically flat, stationary, asymptotically algebraically special electrovacuum space-times are zero. Straightforward application of this theorem shows that all the Newman-Penrose constants of the Kerr-Newman space-time must vanish.
Photoelectric Effect for Twist-deformed Space-time
NASA Astrophysics Data System (ADS)
Daszkiewicz, M.
In this article, we investigate the impact of twisted space-time on the photoelectric effect, i.e., we derive the θ-deformed threshold frequency. In this way we indicate that space-time noncommutativity strongly enhances the photoelectric process.
Modelling dose distribution in tubing and cable using CYLTRAN and ACCEPT Monte Carlo simulation code
Weiss, D.E.; Kensek, R.P.
1993-12-31
One of the difficulties in the irradiation of non-slab geometries, such as a tube, is the uneven penetration of the electrons. A simple model of the distribution of dose in a tube or cable in relationship to voltage, composition, wall thickness and diameter can be mapped using the cylinder geometry provided for in the ITS/CYLTRAN code, complete with automatic subzoning. The reality of more complex 3D geometry to include effects of window foil, backscattering fixtures and beam scanning angles can be more completely accounted for by using the ITS/ACCEPT code with a line source update and a system of intersecting wedges to define input zones for mapping dose distributions in a tube. Thus, all of the variables that affect dose distribution can be modelled without the need to run time consuming and costly factory experiments. The effects of composition changes on dose distribution can also be anticipated.
A distributed code for color in natural scenes derived from center-surround filtered cone signals
Kellner, Christian J.; Wachtler, Thomas
2013-01-01
In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289
Space-Time Characteristics of Rainfall Diurnal Variations
NASA Technical Reports Server (NTRS)
Yang, Song; Kummerow, Chris; Olson, Bill; Smith, Eric A.; Einaudi, Franco (Technical Monitor)
2001-01-01
The space-time features of the diurnal variation of precipitation are systematically investigated using the Tropical Rainfall Measuring Mission (TRMM) precipitation products retrieved from the TRMM microwave imager (TMI), precipitation radar (PR), and combined TMI/PR algorithms. Results demonstrate that the diurnal variability of precipitation is pronounced over tropical regions. The dominant feature of the rainfall diurnal cycle over ocean is a consistent rainfall peak in the early morning, while over land there is a consistent peak in the mid-to-late afternoon. Seasonal variation in the intensity of the rainfall diurnal cycle is clearly evidenced. Horizontal distributions of rainfall diurnal variations indicate a clear early-morning peak, with a secondary peak in the middle-to-late afternoon, in ocean rainfall at latitudes dominated by large-scale convergence and deep convection. There is an analogous early-morning peak in land rainfall, along with a stronger afternoon peak forced by surface heating. Amplitude analysis shows that the patterns of the rainfall diurnal cycle and their evolution closely follow the rainfall distribution pattern and its evolution. These results indicate that rainfall diurnal variations are strongly associated with large-scale convective systems and climate/weather systems. Phase studies clearly present the regional and seasonal features of rainfall diurnal activity. Further studies of convective and stratiform rainfall show different diurnal-cycle characteristics; their spatial and temporal variations indicate that the mechanisms for rainfall diurnal variations vary with time and space.
Frequency-coded quantum key distribution using amplitude-phase modulation
NASA Astrophysics Data System (ADS)
Morozov, Oleg G.; Gabdulkhakov, Il'daris M.; Morozov, Gennady A.; Zagrieva, Aida R.; Sarvarova, Lutsia M.
2016-03-01
Design principles of a universal microwave photonics system for quantum key distribution with frequency coding are considered. The concept is based on the possibility of creating multi-functional units that implement the most commonly used frequency-coding technologies: amplitude, phase, and combined amplitude-phase modulation and re-modulation of the optical carrier. The characteristics of advanced systems based on classical approaches, and the prospects for their development using a combination of amplitude modulation and phase commutation, are discussed. We evaluate how to build advanced frequency-coded quantum key distribution systems, in both symmetric and asymmetric configurations, using mechanisms for passive detection of photon polarization states based on wavelength-division-multiplexing filters for the side components of the modulated optical carrier.
Offset Manchester coding for Rayleigh noise suppression in carrier-distributed WDM-PONs
NASA Astrophysics Data System (ADS)
Xu, Jing; Yu, Xiangyu; Lu, Weichao; Qu, Fengzhong; Deng, Ning
2015-07-01
We propose a novel offset Manchester coding scheme for upstream transmission that simultaneously realizes Rayleigh noise suppression and differential detection in a carrier-distributed wavelength-division-multiplexed passive optical network. Error-free transmission of 2.5-Gb/s upstream signals over 50 km of standard single-mode fiber is experimentally demonstrated, with a 7-dB improvement in tolerance to Rayleigh noise.
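For reference, baseline Manchester coding maps each data bit to a two-chip transition, so every bit interval contains a level change that a differential receiver can detect. A minimal sketch of the standard scheme (the paper's offset variant, which biases the two chip levels to obtain the reported Rayleigh noise suppression, is not reproduced here):

```python
def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> high-low (1,0), 1 -> low-high (0,1)."""
    chips = []
    for b in bits:
        chips.extend((0, 1) if b else (1, 0))
    return chips

def manchester_decode(chips):
    """Decode from the direction of the mid-bit transition; a differential
    receiver needs only the transition sign, not an absolute level."""
    return [1 if chips[i] < chips[i + 1] else 0
            for i in range(0, len(chips), 2)]
```

Because the information sits in the transition rather than the absolute level, the format is inherently DC-balanced, which is what makes it attractive for carrier-distributed upstream links.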
Ricci collineation vectors in fluid space-times
NASA Astrophysics Data System (ADS)
Tsamparlis, M.; Mason, D. P.
1990-07-01
The properties of fluid space-times that admit a Ricci collineation vector (RCV) parallel to the fluid unit four-velocity vector ua are briefly reviewed. These properties are expressed in terms of the kinematic quantities of the timelike congruence generated by ua. The cubic equation derived by Oliver and Davis [Ann. Inst. Henri Poincaré 30, 339 (1979)] for the equation of state p=p(μ) of a perfect fluid space-time that admits an RCV, which does not degenerate to a Killing vector, is solved for physically realistic fluids. Necessary and sufficient conditions for a fluid space-time to admit a spacelike RCV parallel to a unit vector na orthogonal to ua are derived in terms of the expansion, shear, and rotation of the spacelike congruence generated by na. Perfect fluid space-times are studied in detail and analogues of the results for timelike RCVs parallel to ua are obtained. Properties of imperfect fluid space-times for which the energy flux vector qa vanishes and na is a spacelike eigenvector of the anisotropic stress tensor πab are derived. Fluid space-times with anisotropic pressure are discussed as a special case of imperfect fluid space-times for which na is an eigenvector of πab.
A Space-Time Adaptive Method for Simulating Complex Cardiac Dynamics
NASA Astrophysics Data System (ADS)
Cherry, E. M.; Greenside, H. S.; Henriquez, C. S.
2000-03-01
A new space-time adaptive mesh refinement algorithm (AMRA) is presented and analyzed which, by automatically adding and deleting local patches of higher-resolution Cartesian meshes, can simulate quantitatively accurate models of cardiac electrical dynamics efficiently in large domains. We find in two space dimensions that the AMRA is able to achieve a factor of 5 speedup and a factor of 5 reduction in memory while achieving the same accuracy compared to a code based on a uniform space-time mesh at the highest resolution of the AMRA method. We summarize applications of the code to the Luo-Rudy 1 cardiac model in large two- and three-dimensional domains and discuss the implications of our results for understanding the initiation of arrhythmias.
Gravitation theory in a fractal space-time
Agop, M.; Gottlieb, I.
2006-05-15
Assimilating physical space-time to a fractal, a general theory is built. For a fractal dimension D=2, the virtual geodesics of this space-time imply a generalized Schroedinger-type equation. Subsequently, a geometric formulation of gravitation theory on a fractal space-time is given. A connection is then introduced on the tangent bundle, and the connection coefficients, the Riemann curvature tensor, and the Einstein field equations are calculated. By means of a dilation operator, the equivalence of this model with quantum Einstein gravity follows.
Quaternion wave equations in curved space-time
NASA Technical Reports Server (NTRS)
Edmonds, J. D., Jr.
1974-01-01
The quaternion formulation of relativistic quantum theory is extended to include curvilinear coordinates and curved space-time in order to provide a framework for a unified quantum/gravity theory. Six basic quaternion fields and the four-vector basis quaternions are identified in curved space-time, and the necessary covariant derivatives are obtained. Invariant field equations are derived, and a general invertible coordinate transformation is developed. The results yield a way of writing quaternion wave equations in curvilinear coordinates and curved space-time, as well as a natural framework for solving the problem of second quantization for gravity.
Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Rizk, Yehia M.
1999-01-01
The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed-memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable for distributed-shared-memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary-condition update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is greater than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter touches only the outer shell. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones to a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which run in parallel. The total volume of grid points in each
Age-space-time CAR models in Bayesian disease mapping.
Goicoa, T; Ugarte, M D; Etxeberria, J; Militino, A F
2016-06-30
Mortality counts are usually aggregated over age groups assuming similar effects of both time and region, yet the spatio-temporal evolution of cancer mortality rates may depend on changing age structures. In this paper, mortality rates are analyzed by region, time period and age group, and models including space-time, space-age, and age-time interactions are considered. The integrated nested Laplace approximation method, known as INLA, is adopted for model fitting and inference in order to reduce computing time in comparison with Markov chain Monte Carlo (McMC) methods. The methodology provides full posterior distributions of the quantities of interest while avoiding complex simulation techniques. The proposed models are used to analyze prostate cancer mortality data in 50 Spanish provinces over the period 1986-2010. The results reveal a decline in mortality since the late 1990s, particularly in the age group [65,70), probably because of the inclusion of the PSA (prostate-specific antigen) test and better treatment of early-stage disease. The decline is not clearly observed in the oldest age groups. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26814019
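The CAR (conditional autoregressive) priors underlying such disease-mapping models are specified through a sparse precision matrix built from the adjacency structure of the regions; the space-time, space-age, and age-time interactions in the paper combine several such terms. A minimal sketch of the proper-CAR construction, with illustrative parameter names:

```python
import numpy as np

def car_precision(W, rho, tau):
    """Precision matrix of a proper CAR prior: Q = tau * (D - rho * W),
    where W is a symmetric 0/1 spatial adjacency matrix and
    D = diag(number of neighbors). For |rho| < 1 and tau > 0,
    Q is symmetric positive definite, so it defines a valid Gaussian prior."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)
```

In an INLA fit the sparsity of Q is what keeps computation fast: the posterior only couples regions that share a border, so the Cholesky factor of Q stays sparse even for hundreds of areas.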
A hybrid quantum key distribution protocol based on extended unitary operations and fountain codes
NASA Astrophysics Data System (ADS)
Lai, Hong; Xue, Liyin; Orgun, Mehmet A.; Xiao, Jinghua; Pieprzyk, Josef
2015-02-01
In 1984, Bennett and Brassard designed the first quantum key distribution protocol, whose security is based on quantum indeterminacy. Since then, there has been growing research activity aimed at designing new, more efficient and secure key distribution protocols. This work presents a novel hybrid quantum key distribution protocol: the key is derived from both quantum and classical data, which is why it is called hybrid. The protocol applies extended unitary operations derived from four basic unitary operations, together with distributed fountain codes. Compared to other protocols published so far, the new one is more secure (providing authentication of the parties and detection of eavesdropping) and more efficient. Moreover, the protocol still works over noisy and lossy channels.
Tensor analysis and curvature in quantum space-time
Namsrai, K.
1987-03-01
Introducing quantum space-time into physics by means of the transformation language of noncommuting coordinates gives a simple scheme for generalizing tensor analysis. The general covariance principle for the quantum space-time case is discussed, within which one can obtain the covariant structure of basic tensor quantities and the equation of motion for a particle in a gravitational field. Definitions of covariant derivatives and curvature are also generalized to this case. It turns out that the covariant structure of the Riemann-Christoffel curvature tensor is not preserved in quantum space-time. However, if the curvature tensor R_{μνλχ}(z) is redetermined up to the value of the L² term, then its covariant structure is recovered, which in turn allows one to reconstruct the Einstein equation in quantum space-time.
Geodesic Structure of Janis-Newman-Winicour Space-time
NASA Astrophysics Data System (ADS)
Zhou, Sheng; Zhang, Ruanjing; Chen, Juhua; Wang, Yongjiu
2015-08-01
In the present paper we study the geodesic structure of the Janis-Newman-Winicour (JNW) space-time, which contains a strong-curvature naked singularity. This metric is an extension of the Schwarzschild geometry that includes a massless scalar field. We find that the strength parameter μ of the scalar field affects the geodesic structure of the JNW space-time. By solving the geodesic equation and analyzing the behavior of the effective potential, we investigate all geodesic types of the test particle and the photon in the JNW space-time. We also simulate all the geodesic orbits corresponding to the energy levels of the effective potential.
Quantized Space-Time and Black Hole Entropy
NASA Astrophysics Data System (ADS)
Ma, Meng-Sen; Li, Huai-Fan; Zhao, Ren
2014-06-01
On the basis of Snyder’s idea of quantized space-time, we derive a new generalized uncertainty principle and a new modified density of states. Accordingly, we obtain a corrected black hole entropy with a logarithmic correction term by employing the new generalized uncertainty principle. In addition, we recalculate the entropy of spherically symmetric black holes using statistical mechanics. Because of the use of the minimal length in quantized space-time as a natural cutoff, the entanglement entropy we obtained does not have the usual form A/4 but has a coefficient dependent on the minimal length, which shows differences between black hole entropy in quantized space-time and that in continuous space-time.
Electrodynamics on κ-Minkowski space-time
Harikumar, E.; Juric, T.; Meljanac, S.
2011-10-15
In this paper, we derive Lorentz force and Maxwell's equations on kappa-Minkowski space-time up to the first order in the deformation parameter. This is done by elevating the principle of minimal coupling to noncommutative space-time. We also show the equivalence of minimal coupling prescription and Feynman's approach. It is shown that the motion in kappa space-time can be interpreted as motion in a background gravitational field, which is induced by this noncommutativity. In the static limit, the effect of kappa deformation is to scale the electric charge. We also show that the laws of electrodynamics depend on the mass of the charged particle, in kappa space-time.
Differentiating space-time optical signals using resonant nanophotonics structures
NASA Astrophysics Data System (ADS)
Emelyanov, S. V.; Bykov, D. A.; Golovastikov, N. V.; Doskolovich, L. L.; Soifer, V. A.
2016-03-01
A theoretical description of the space-time transformation of an optical signal passing through resonant gratings and Bragg gratings with a defect is proposed. The problem of differentiating a space-time optical signal using a resonant grating is solved. A rigorous solution of Maxwell's equations by the Fourier modal method is used to determine the parameters of the transfer function of the resonant diffraction structure and to carry out numerical modeling, which confirms the proposed theoretical description.
Building up Space-Time with Quantum Entanglement
NASA Astrophysics Data System (ADS)
van Raamsdonk, Mark
In this essay, we argue that the emergence of classically connected space-times is intimately related to the quantum entanglement of degrees of freedom in a nonperturbative description of quantum gravity. Disentangling the degrees of freedom associated with two regions of space-time results in these regions pulling apart and pinching off from each other in a way that can be quantified by standard measures of entanglement.
Non-contact assessment of melanin distribution via multispectral temporal illumination coding
NASA Astrophysics Data System (ADS)
Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.
2015-03-01
Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone and protects against harmful UV effects. Abnormal melanin distribution is often an indicator of melanoma. We propose a novel approach for non-contact assessment of melanin distribution via multispectral temporal illumination coding, estimating the two-dimensional melanin distribution from its absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements, and to determine the melanin distribution in an observed region based on the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of an individual's skin type. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
NASA Astrophysics Data System (ADS)
Reale, F.; Barbera, M.; Sciortino, S.
1992-11-01
We illustrate a general and straightforward approach to develop FORTRAN parallel two-dimensional data-domain applications on distributed-memory systems, such as those based on transputers. We have aimed at achieving flexibility for different processor topologies and processor numbers, non-homogeneous processor configurations and coarse load-balancing. We have assumed a master-slave architecture as basic programming model in the framework of a domain decomposition approach. After developing a library of high-level general network and communication routines, based on low-level system-dependent libraries, we have used it to parallelize some specific applications: an elementary 2-D code, useful as a pattern and guide for other more complex applications, and a 2-D hydrodynamic code for astrophysical studies. Code parallelization is achieved by splitting the original code into two independent codes, one for the master and the other for the slaves, and then by adding coordinated calls to network setting and message-passing routines into the programs. The parallel applications have been implemented on a Meiko Computing Surface hosted by a SUN 4 workstation and running CSTools software package. After the basic network and communication routines were developed, the task of parallelizing the 2-D hydrodynamic code took approximately 12 man hours. The parallel efficiency of the code ranges between 98% and 58% on arrays between 2 and 20 T800 transputers, on a relatively small computational mesh (≈3000 cells). Arrays consisting of a limited number of faster Intel i860 processors achieve a high parallel efficiency on large computational grids (> 10000 grid points) with performances in the class of minisupercomputers.
Examination of nanoparticle dispersion using a novel GPU based radial distribution function code
NASA Astrophysics Data System (ADS)
Rosch, Thomas; Wade, Matthew; Phelan, Frederick
We have developed a novel GPU-based code that rapidly calculates the radial distribution function (RDF) for an entire system, with no cutoff, ensuring accuracy. On top of this code, we have built tools to calculate the second virial coefficient (B2) and the structure factor from the RDF, two properties that are directly related to the dispersion of nanoparticles in nanocomposite systems. We validate the RDF calculations by comparison with previously published results, and show how our code, which takes into account bonding in polymeric systems, enables more accurate predictions of g(r) than state-of-the-art GPU-based RDF codes currently available for these systems. In addition, our code reduces the computational time by approximately an order of magnitude compared to CPU-based calculations. We demonstrate the application of our toolset by examining a coarse-grained nanocomposite system and show how different surface energies between particle and polymer lead to different dispersion states and affect properties such as viscosity, yield strength, elasticity, and thermal conductivity.
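A reference CPU implementation of the cutoff-free RDF computation described above fits in a few lines of NumPy. This is an illustrative brute-force sketch with minimum-image periodic boundaries, not the authors' GPU code; function and parameter names are assumptions.

```python
import numpy as np

def radial_distribution(positions, box, dr, r_max):
    """g(r) for particles in a cubic periodic box of side `box`,
    normalized by the ideal-gas expectation (no neighbor cutoff).
    Requires r_max <= box/2 for the minimum-image convention to hold."""
    n = len(positions)
    rho = n / box**3
    # all pairwise minimum-image distances (upper triangle, no self-pairs)
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.sqrt((diff**2).sum(-1))[np.triu_indices(n, k=1)]
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(dist, bins=edges)
    # expected pair counts per spherical shell for an ideal gas
    shell_vol = (4.0 / 3.0) * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * n / 2.0
    return edges[:-1] + dr / 2.0, counts / ideal
```

For ideally dispersed (uniformly random) particles g(r) fluctuates around 1; aggregation shows up as a large contact peak, which is what links the RDF to the dispersion states and B2 values discussed in the abstract.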
Further results on fault-tolerant distributed classification using error-correcting codes
NASA Astrophysics Data System (ADS)
Wang, Tsang-Yi; Han, Yunghsiang S.; Varshney, Pramod K.
2004-04-01
In this paper, we consider the distributed classification problem in wireless sensor networks. The DCFECC-SD approach employing a binary code matrix has recently been proposed to cope with the errors caused by both sensor faults and the effects of fading channels; it extends the DCFECC approach by using soft-decision decoding to combat channel fading. However, the performance of a system employing a binary code matrix can degrade if the distance between different hypotheses cannot be kept large, which can happen when the number of sensors is small or the number of hypotheses is large. In this paper, we design the DCFECC-SD approach employing a D-ary code matrix, where D>2. Simulation results show that the DCFECC-SD approach employing a D-ary code matrix outperforms the one employing a binary code matrix. Performance evaluation of DCFECC-SD using different numbers of bits of local decision information is also provided when the total channel energy output from each sensor node is fixed.
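The code-matrix idea can be illustrated with its hard-decision variant: each hypothesis is assigned a row of the code matrix, each sensor contributes one symbol, and the fusion center picks the hypothesis whose row is closest in Hamming distance to the received decision vector, so a few faulty sensors are outvoted. (The DCFECC-SD scheme in the paper instead uses soft-decision decoding; names below are illustrative.)

```python
import numpy as np

def fuse_decisions(code_matrix, received):
    """Minimum-Hamming-distance fusion: return the index of the hypothesis
    whose code row best matches the vector of local sensor decisions."""
    dists = (code_matrix != received).sum(axis=1)
    return int(np.argmin(dists))
```

With rows at pairwise Hamming distance d, up to floor((d-1)/2) faulty or faded sensor decisions can be tolerated, which is the fault-tolerance mechanism the abstract refers to.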
Trent, D.S.; Eyler, L.L.
1982-09-01
In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.
Kawamura, E.; Verboncoeur, J.P.; Birdsall, C.K.
1996-12-31
The goal is to obtain the ion angular and energy distributions at the wafer of inductive and capacitive discharges. Doing this on a standard uniform mesh with particle codes alone would be impractical because of the long time scale of the problem (i.e., 10^6 time steps). A solution is to use a fluid code to simulate the bulk source region, while using a particle-in-cell code to simulate the sheath region. Induct95 is a 2d fluid code which can simulate inductive and capacitive discharges. Though it does not resolve the sheath region near the wafer, it provides diagnostics for the collisional bulk plasma (i.e., potentials, temperatures, fluxes, etc.). Also, fluid codes converge to equilibrium much faster than particle codes in collisional regimes. PDP1 is a 1d3v particle-in-cell code which can simulate rf discharges. It can resolve the sheath region and obtain the ion angular and energy distributions at the wafer target. The overall running time is expected to be that of the fluid code.
Kim, Steve M.; Ganguli, Surya; Frank, Loren M.
2012-01-01
Hippocampal place cells convey spatial information through a combination of spatially-selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit. PMID:22915100
Turbulence-free space-time quantum imaging
NASA Astrophysics Data System (ADS)
Meyers, Ronald E.; Deacon, Keith S.; Tunick, Arnold
2013-09-01
We experimentally demonstrate turbulence-free space-time quantum imaging. Quantum images of remote objects are produced with two sensors measuring at different space-time points under turbulent conditions. The quantum images generated move depending on the time delay between the two sensor measurements and the speed of a rotating ground glass that is part of a chaotic laser light source. For small delay times, turbulence has virtually no adverse effect on the moving quantum images. The experimental setup and findings contribute to understanding the fundamentals of multi-photon quantum interference in complex media. Furthermore, the space-time memory demonstrated in our research provides important new pathways for investigating quantum imaging, quantum information storage, and quantum computing. The turbulence-free space-time quantum imaging procedure greatly increases the information content of each photon measured. The moved quantum images are in fact new images that are stored in a space-time virtual memory process. The images are stored within the same quantum imaging data sets, and thus quantum imaging can produce more information per photon measured than was previously realized.
A Reparametrization Approach for Dynamic Space-Time Models
Lee, Hyeyoung; Ghosh, Sujit K.
2009-01-01
Researchers in diverse areas such as environmental and health sciences are increasingly working with data collected across space and time. The space-time processes that are generally used in practice are often complicated in the sense that the auto-dependence structure across space and time is non-trivial, often non-separable and non-stationary in space and time. Moreover, the dimension of such data sets across both space and time can be very large leading to computational difficulties due to numerical instabilities. Hence, space-time modeling is a challenging task and in particular parameter estimation based on complex models can be problematic due to the curse of dimensionality. We propose a novel reparametrization approach to fit dynamic space-time models which allows the use of a very general form for the spatial covariance function. Our modeling contribution is to present an unconstrained reparametrization method for a covariance function within dynamic space-time models. A major benefit of the proposed unconstrained reparametrization method is that we are able to implement the modeling of a very high dimensional covariance matrix that automatically maintains the positive definiteness constraint. We demonstrate the applicability of our proposed reparametrized dynamic space-time models for a large data set of total nitrate concentrations. PMID:21593998
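A standard route to the kind of unconstrained covariance parametrization advocated above is the log-Cholesky map: parametrize the Cholesky factor, exponentiate its diagonal, and every real-valued parameter vector then yields a positive-definite covariance automatically. This is a minimal sketch of the core idea with hypothetical names; the paper's reparametrization for dynamic space-time models is more elaborate.

```python
import numpy as np

def unconstrained_to_cov(theta, d):
    """Map an unconstrained vector theta of length d*(d+1)/2 to a symmetric
    positive-definite d x d covariance matrix via the log-Cholesky
    factorization: fill the lower triangle of L with theta, exponentiate
    the diagonal so it is strictly positive, and return L @ L.T."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta
    L[np.diag_indices(d)] = np.exp(np.diag(L))  # force positive diagonal
    return L @ L.T
```

Because positive definiteness holds for any theta, an optimizer or MCMC sampler can move freely in Euclidean space, which is what makes very high-dimensional covariance matrices tractable.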
Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.
2015-01-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126
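The overdispersion that motivates the negative binomial model — count data whose variance exceeds its mean, which ordinary Poisson or linear regression mishandles — can be seen in a simple method-of-moments fit. This sketch uses illustrative names and the NB(r, p) convention with mean m = r(1-p)/p and variance v = r(1-p)/p²; the paper itself recommends full weighted negative binomial regression as available in standard statistical packages.

```python
import numpy as np

def nb_method_of_moments(counts):
    """Method-of-moments fit of a negative binomial NB(r, p).
    From  m = r(1-p)/p  and  v = r(1-p)/p**2  it follows that
    p = m/v  and  r = m**2 / (v - m), valid only when v > m."""
    m = counts.mean()
    v = counts.var(ddof=1)
    if v <= m:
        raise ValueError("no overdispersion detected; Poisson is adequate")
    p = m / v
    r = m * m / (v - m)
    return r, p
```

When v is several times m, as is typical for behavior counts conflated with session length, the fitted r is small, signaling strong overdispersion and the need for a negative binomial (rather than Poisson or Gaussian) model.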
Reliability of Calderbank Shor Steane codes and security of quantum key distribution
NASA Astrophysics Data System (ADS)
Hamada, Mitsuru
2004-08-01
After Mayers (1996 Advances in Cryptography: Proc. Crypto'96 pp 343-57; 2001 J. Assoc. Comput. Mach. 48 351-406) gave a proof of the security of the Bennett-Brassard (1984 Proc. IEEE Int. Conf. on Computers, Systems and Signal Processing (Bangalore, India) pp 175-9) (BB84) quantum key distribution protocol, Shor and Preskill (2000 Phys. Rev. Lett. 85 441-4) made a remarkable observation that a Calderbank-Shor-Steane (CSS) code had been implicitly used in the BB84 protocol, and suggested its security could be proved by bounding the fidelity, say F_n, of the incorporated CSS code of length n in the form 1 − F_n ≤ exp[−nE + o(n)] for some positive number E. This work presents such a number E = E(R) as a function of the code rate R, and a threshold R_0 such that E(R) > 0 whenever R < R_0, which is larger than the achievable rate based on the Gilbert-Varshamov bound that is essentially given by Shor and Preskill. The codes in the present work are robust against fluctuations of channel parameters, which is needed to establish the security rigorously and had not previously been proved in the literature for rates above the Gilbert-Varshamov rate. As a byproduct, the security of a modified BB84 protocol against any joint (coherent) attacks is proved quantitatively.
FLRW cosmology in Weyl-integrable space-time
Gannouji, Radouane; Nandan, Hemwati; Dadhich, Naresh
2011-11-01
We investigate the Weyl space-time extension of general relativity (GR) for studying the FLRW cosmology through focusing and defocusing of the geodesic congruences. We have derived the equations of evolution for expansion, shear and rotation in the Weyl space-time. In particular, we consider the Starobinsky modification, f(R) = R + βR² − 2Λ, of gravity in the Einstein-Palatini formalism, which turns out to reduce to the Weyl integrable space-time (WIST) with the Weyl vector being a gradient. The modified Raychaudhuri equation takes the form of a Hill-type equation, which is then analysed to study the formation of caustics. In this model it is possible to have a Big Bang singularity-free cyclic Universe, but unfortunately the periodicity turns out to be extremely short.
Dynamic analysis of space time effects in the ISU RACE configuration
NASA Astrophysics Data System (ADS)
Kulik, Viktoriya V.; Lee, John C.; Beller, Denis E.
2006-06-01
We present reactor physics analyses for the accelerator-driven thermal reactor configuration of the Reactor-Accelerator Coupling Experiments Project (RACE) at Idaho State University. A full-core model is developed using the ERANOS deterministic code coupled with the JEF2.2 nuclear data library. A pulsed-source experiment is simulated to test the performance of the traditional point kinetics and space-time α-methods for reactivity determination. Analysis of the simulated ²³⁵U detector responses to a neutron source pulse indicates the inability of point kinetics theory to correctly determine the reactivity for the subcritical RACE configuration. The ERANOS simulation indicates that the traditional α-method overestimates k_eff by 4.5%, whereas the space-time α-method could yield significant improvements if sufficiently accurate simulations are available.
Visual Data Mining of Large, Multivariate Space-Time Data
NASA Astrophysics Data System (ADS)
Cook, D.
2001-12-01
Interest in understanding global climate change is generating monitoring efforts that yield a huge amount of multivariate space-time data. While analytical methods for univariate space-time data may be mature and substantial, methods for multivariate space-time data analysis are still in their infancy. The urgency of understanding climate change on a global scale begs for input from data analysts, and to work effectively they need new tools to explore multivariate aspects of climate. This talk describes interactive and dynamic visual tools for mining information from multivariate space-time data. Methods for small amounts of data will be discussed, followed by approaches to scaling up methods for large quantities of data. We focus on the "multiple views" approach for viewing multivariate data, and how these views extend to include space-time contextual information. We also describe dynamic graphics methods such as tours in the space-time context. Data mining is the current terminology for exploratory analyses of data, typically associated with large databases. Exploratory analysis has the goal of finding anomalies, quirks and deviations from a trend, and basically extracting unexpected information from data. It often emphasizes model-free methods, although model-based approaches are also integral components of the analysis process. Visual data mining concentrates on the use of visual tools in the exploratory process. As such it often involves highly interactive and dynamic graphics environments which facilitate quick queries and visual responses. Visual methods are especially important in exploratory analysis because they provide an interface for using the human eye to digest complex information. A good plot can convey far more information than a numerical summary. Visual tools enhance the chances of discovering the unexpected and of detecting anomalous events.
Space-time curvature signatures in Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Matos, Tonatiuh; Gomez, Eduardo
2015-05-01
We derive a generalized Gross-Pitaevskii (GP) equation for a Bose-Einstein condensate (BEC) immersed in a weak gravitational field, starting from the covariant complex Klein-Gordon field in a curved space-time. We compare it with the traditional GP equation where the gravitational field is added by hand as an external potential. We show that there is a small difference of order gz/c² between them that could be measured in the future using Bose-Einstein condensates. This represents the next-order correction to Newtonian gravity in a curved space-time.
Quantum Detectors in Generic Non Flat FLRW Space-Times
NASA Astrophysics Data System (ADS)
Rabochaya, Yevgeniya; Zerbini, Sergio
2016-05-01
We discuss a quantum field theoretical approach, in which a quantum probe is used to investigate the properties of generic non-flat FLRW space-times. The probe is identified with a conformally coupled massless scalar field defined on a space-time with horizon and the procedure to investigate the local properties is realized by the use of Unruh-DeWitt detector and by the evaluation of the regularized quantum fluctuations. In the case of de Sitter space, the coordinate independence of our results is checked, and the Gibbons-Hawking temperature is recovered. A possible generalization to the electromagnetic probe is also briefly indicated.
Quinlan, D; Barany, G; Panas, T
2007-08-30
Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.
Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes
NASA Astrophysics Data System (ADS)
Tavallaei, Saeed Ebadi; Falahati, Abolfazl
Due to the low level of security of public key cryptosystems based on number theory, and fundamental difficulties such as "key escrow" in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. The idea is realized through modifications of the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of general block-code decoding. Using ECC also provides the capability of generating public keys of variable length, which suits different applications. Given the decreasing security of cryptosystems based on number theory and the increasing lengths of their keys, the use of such code-based cryptosystems seems unavoidable in the future.
LineCast: line-based distributed coding and transmission for broadcasting satellite images.
Wu, Feng; Peng, Xiulian; Xu, Jizheng
2014-03-01
In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly with the line-scanning cameras that are widely adopted in orbiting satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by a linear least-squares estimator from the received data. The image line is then reconstructed by scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast enables a large number of receivers to reach qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction. PMID:24474371
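As a hedged illustration of the modulo-quantization idea, here is a toy 1-D sketch (not the paper's transform-domain scheme with power allocation): the encoder transmits only the quantization index modulo m, and the decoder resolves the resulting ambiguity using side information predicted from previously decoded lines.

```python
def modulo_encode(x, step, m):
    """Quantize x with step size `step` and keep only the index modulo m
    (the transmitted label)."""
    return round(x / step) % m

def modulo_decode(label, side_info, step, m):
    """Pick the reconstruction congruent to `label` (mod m) that lies
    closest to the side information predicted from decoded lines."""
    base = round(side_info / step)
    # Candidate indices sharing the label, near the side-information index.
    cands = [base + ((label - base) % m) - k * m for k in (-1, 0, 1)]
    best = min(cands, key=lambda q: abs(q * step - side_info))
    return best * step
```

Decoding succeeds as long as the side-information error stays within the modulo range m·step, which is why accurate line-based prediction matters in such schemes.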
A proposal test of the space-time metricity.
NASA Astrophysics Data System (ADS)
Grassi, A. M.; Strini, G.
Among the standard hypotheses about gravitational theories is the "metricity" hypothesis for the space-time metric. Hehl, McCrea, Ne'eman and others have proposed non-metricity. With the help of simple additional hypotheses, based on a previous experiment by Harris et al., the authors propose a metricity test by means of spectroscopic tests on meteorites.
Space-time metrical fluctuations induced by cosmic turbulence
NASA Technical Reports Server (NTRS)
Rosen, G.
1980-01-01
For a stochastic stress-energy tensor associated with cosmic turbulence, it is observed that Einstein's equations imply fluctuations in the space-time metric tensor. Such metrical fluctuations are shown to engender modified values for the average effective proper density and total pressure and thus to alter the solutions to the Friedman equations.
Parabosonic string and space-time non-commutativity
Seridi, M. A.; Belaloui, N.
2012-06-27
We investigate the para-quantum extension of the bosonic strings in a non-commutative space-time. We calculate the trilinear relations between the mass-center variables and the modes and we derive the Virasoro algebra where a new anomaly term due to the non-commutativity is obtained.
Confinement from gluodynamics in curved space-time
Gaete, Patricio; Spallucci, Euro
2008-01-15
We determine the static potential for a heavy quark-antiquark pair from gluodynamics in curved space-time. Our calculation is done within the framework of the gauge-invariant, path-dependent, variables formalism. The potential energy is the sum of a Yukawa and a linear potential, leading to the confinement of static charges.
Confinement from gluodynamics in curved space-time
NASA Astrophysics Data System (ADS)
Gaete, Patricio; Spallucci, Euro
2008-01-01
We determine the static potential for a heavy quark-antiquark pair from gluodynamics in curved space-time. Our calculation is done within the framework of the gauge-invariant, path-dependent, variables formalism. The potential energy is the sum of a Yukawa and a linear potential, leading to the confinement of static charges.
Spin fluids in stationary axis-symmetric space-times
NASA Astrophysics Data System (ADS)
Krisch, J. P.
1987-07-01
The relations establishing the equivalence of an ordinary perfect fluid stress-energy tensor and a spin fluid stress-energy tensor are derived for stationary axis-symmetric space-times in general relativity. Spin fluid sources for the Gödel cosmology and the van Stockum metric are given.
Complete Distributed Hyper-Entangled-Bell-State Analysis and Quantum Super Dense Coding
NASA Astrophysics Data System (ADS)
Zheng, Chunhong; Gu, Yongjian; Li, Wendong; Wang, Zhaoming; Zhang, Jiying
2016-02-01
We propose a protocol to implement the distributed hyper-entangled-Bell-state analysis (HBSA) for photonic qubits with weak cross-Kerr nonlinearities, QND photon-number-resolving detection, and some linear optical elements. The distinct feature of our scheme is that the BSA for two different degrees of freedom can be implemented deterministically and nondestructively. Based on the present HBSA, we achieve quantum super dense coding with double information capacity, which makes our scheme more significant for long-distance quantum communication.
Joint space-time geostatistical model for air quality surveillance
NASA Astrophysics Data System (ADS)
Russo, A.; Soares, A.; Pereira, M. J.
2009-04-01
Air pollution and people's generalized concern about air quality are nowadays considered to be a global problem. Although the introduction of rigid air pollution regulations has reduced pollution from industry and power stations, the growing number of cars on the road poses a new pollution problem. Considering the characteristics of atmospheric circulation and the residence times of certain pollutants in the atmosphere, a generalized and growing interest in air quality issues has led to intensified research and the publication of several articles with quite different levels of scientific depth. Like most natural phenomena, air quality can be seen as a space-time process, where space-time relationships usually have quite different characteristics and levels of uncertainty. As a result, the simultaneous integration of space and time is not an easy task to perform. This problem is overcome by a variety of methodologies. The use of stochastic models and neural networks to characterize the space-time dispersion of air quality is becoming common practice. The main objective of this work is to produce an air quality model which allows forecasting critical concentration episodes of a certain pollutant by means of a hybrid approach, based on the combined use of neural network models and stochastic simulations. A stochastic simulation of the spatial component with a space-time trend model is proposed to characterize critical situations, taking into account data from the past and a space-time trend from the recent past. To identify near-future critical episodes, predicted values from neural networks are used at each monitoring station. In this paper, we describe the design of a hybrid forecasting tool for ambient NO2 concentrations in Lisbon, Portugal.
NASA Astrophysics Data System (ADS)
Ioan, M.-R.
2016-08-01
In experiments involving ionizing radiation, precise knowledge of the parameters involved is very important. Some of these experiments use electromagnetic ionizing radiation such as gamma rays and X-rays; others use energetic charged or uncharged particles such as protons, electrons and neutrons, or even larger accelerated particles such as helium or deuterium nuclei. In all these cases, the beam used to irradiate a target must first be collimated and precisely characterized. In this paper, a novel method involving Matlab code is proposed to determine the distribution of the collimated beam. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions are to be determined, taking high-quality pictures of them, and then digitally processing the resulting images. This method also yields information on the doses absorbed in the volume of the exposed samples.
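The paper's Matlab processing is not reproduced here; the following hypothetical Python/NumPy sketch conveys the general idea of extracting a beam-width estimate (FWHM of a projected intensity profile) from an image of a darkened sample, using a synthetic Gaussian spot in place of a real photograph.

```python
import numpy as np

def beam_profile_fwhm(img, axis=0):
    """Estimate the full-width-at-half-maximum (in pixels) of a beam spot
    by projecting the image intensity onto one axis (simple sketch)."""
    profile = img.sum(axis=axis).astype(float)
    profile -= profile.min()          # crude background removal
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return above[-1] - above[0] + 1   # width in pixels

# Synthetic "darkened glass" image: Gaussian spot on a flat background.
y, x = np.mgrid[0:64, 0:64]
img = 100 + 500 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 6.0 ** 2))
```

For a Gaussian spot with sigma = 6 px, the FWHM is about 2.355 × 6 ≈ 14 px; converting pixels to millimetres would require the camera's spatial calibration.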
3-D model-based frame interpolation for distributed video coding of static scenes.
Maitre, Matthieu; Guillemot, Christine; Morin, Luce
2007-05-01
This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content. PMID:17491456
Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng
2012-01-01
The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of harsh external environmental conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without massive redundancy. However, due to the poor intermediate performance of LTCDS, a serious 'cliff effect' may appear during the decoding period, and source data are hard to recover from sink nodes before sufficient encoded packets are collected. In this paper, the influence of the coding degree distribution strategy on the 'cliff effect' is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the degree distribution of the data stored at sensor nodes increases with the predefined degree level, and persistent data packets can be delivered to the sink node in order of degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve data collection performance and decoding efficiency, while data persistence is not notably affected. PMID:23235451
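PLTD-ALPHA's prioritized degree distribution is not given here; as background, LT-type schemes such as LTCDS draw encoding degrees from soliton-type distributions. A minimal sketch of the ideal soliton distribution (a standard textbook construction, not the paper's):

```python
import random

def ideal_soliton(k):
    """Ideal soliton distribution over degrees 1..k:
    P(1) = 1/k, P(d) = 1/(d*(d-1)) for d = 2..k."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def sample_degree(k, rng=random):
    """Draw an encoding degree (how many source packets are XORed into
    one encoded packet) from the ideal soliton distribution."""
    u, acc = rng.random(), 0.0
    for d, p in enumerate(ideal_soliton(k), start=1):
        acc += p
        if u < acc:
            return d
    return k
```

In practice the robust soliton variant is used because the ideal distribution is fragile; a prioritized scheme like PLTD-ALPHA would additionally bias which source packets enter high-degree combinations.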
Trajectory data analyses for pedestrian space-time activity study.
Qi, Feng; Du, Fei
2013-01-01
It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission(1-3). An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty and thus the reason for paucity of studies of infectious disease transmission at the micro scale arise from the lack of detailed individual mobility data. Previously in transportation and tourism research detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants and collaboration from the participants greatly affects the quality of data(4). Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, is not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an
NASA Astrophysics Data System (ADS)
Rubert Godoy, A.; Nykanen, D.
2003-04-01
Characterizing the space-time scaling and dynamics of convective precipitation in mountainous terrain and the development of downscaling methods to transfer precipitation fields from one scale to another is the overall motivation for this research. The development of a space-time statistical downscaling model for orographic convective precipitation, based on the interplay between meteorological forcings and topographic influences on the scale-invariant properties of precipitation, will be assessed. Substantial progress has been made on characterizing the space-time organization of mid-western convective systems and tropical rainfall, which has led to the development of statistical/dynamical downscaling models. Space-time analysis and downscaling of orographic precipitation has received much less attention due to the complexities of topographic influences. This study uses multi-scale statistical analysis to investigate the space-time scaling of organized thunderstorms that produced heavy rainfall and catastrophic flooding in mountainous regions. Focus is placed on the eastern and western slopes of the Appalachian region and the Front Range of the Rocky Mountains. Parameter estimates are analyzed over time, and focus is placed on linking changes in the multi-scale parameters with meteorological forcings and orographic influences on the rainfall. Influences of geographic region (e.g., western versus eastern United States) and predominant orographic controls (e.g., windward versus leeward forcing) on trends in multi-scale properties of precipitation are investigated. Spatial resolutions from 1 km to 50 km and temporal integrations from 5 minutes to 3 hours are considered. This range of space-time scales is needed to bridge typical scale gaps between distributed hydrologic models and numerical weather prediction (NWP) forecasts, and attempts to address the open research problem of scaling organized thunderstorms and convection in mountainous terrain down to 1-4 km scales. The potential for
Inflation on a non-commutative space-time
NASA Astrophysics Data System (ADS)
Calmet, Xavier; Fritz, Christopher
2015-07-01
We study inflation on a non-commutative space-time within the framework of enveloping algebra approach which allows for a consistent formulation of general relativity and of the standard model of particle physics. We show that within this framework, the effects of the non-commutativity of spacetime are very subtle. The dominant effect comes from contributions to the process of structure formation. We describe the bound relevant to this class of non-commutative theories and derive the tightest bound to date of the value of the non-commutative scale within this framework. Assuming that inflation took place, we get a model independent bound on the scale of space-time non-commutativity of the order of 19 TeV.
Measuring Space-Time Geometry over the Ages
Stebbins, Albert (Fermilab)
2012-05-01
Theorists are often told to express things in the 'observational plane'. One can do this for space-time geometry, considering 'visual' observations of matter in our universe by a single observer over time, with no assumptions about isometries, initial conditions, nor any particular relation between matter and geometry, such as Einstein's equations. Using observables as coordinates naturally leads to a parametrization of space-time geometry in terms of other observables, which in turn prescribes an observational program to measure the geometry. Under the assumption of vorticity-free matter flow we describe this observational program, which includes measurements of gravitational lensing, proper motion, and redshift drift. Only 15% of the curvature information can be extracted without long time baseline observations, and this increases to 35% with observations that will take decades. The rest would likely require centuries of observations. The formalism developed is exact, non-perturbative, and more general than the usual cosmological analysis.
Mediterranean space-time extremes of wind wave sea states
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro; Marcello Falcieri, Francesco; Bonaldo, Davide; Bergamasco, Andrea; Benetazzo, Alvise
2014-05-01
Traditionally, wind wave sea states during storms have been observed, modeled, and predicted mostly in the time domain, i.e. at a fixed point. In fact, the standard statistical models used in ocean wave analysis rely on the implicit assumption of long-crested waves. Nevertheless, waves in storms are mainly short-crested. Hence, spatio-temporal features of the wave field are crucial to accurately model the sea state characteristics and to provide reliable predictions, particularly of wave extremes. Indeed, the experimental evidence provided by novel instrumentation, e.g. WASS (Wave Acquisition Stereo System), showed that the maximum sea surface elevation gathered in time over an area, i.e. the space-time extreme, is larger than that measured in time at a point, i.e. the time extreme. Recently, stochastic models used to estimate maxima of multidimensional Gaussian random fields have been applied to ocean wave statistics. These models are based either on Piterbarg's theorem or on Adler and Taylor's Euler characteristic approach. Besides the probability of exceeding a certain threshold, they can provide the expected space-time extreme of a sea state, as long as the space-time wave features (i.e. some parameters of the directional variance density spectrum) are known. These models have recently been validated against WASS observations from fixed and moving platforms. In this context, our focus was modeling and predicting extremes of wind waves during storms. Thus, to intensively gather space-time extreme data over the Mediterranean region, we used directional spectra provided by the numerical wave model SWAN (Simulating WAves Nearshore). We set up a 6×6 km² resolution grid covering most of the Mediterranean Sea and forced it with COSMO-I7 high-resolution (7×7 km²) hourly wind fields over the 2007-2013 period. To obtain the space-time features, i.e. the spectral parameters, at each grid node and over the 6 simulated years, we developed a modified version
Ultrafast Optical Signal Processing Based Upon Space-Time Dualities
NASA Astrophysics Data System (ADS)
van Howe, James; Xu, Chris
2006-07-01
The last two decades have seen a wealth of optical instrumentation based upon the concepts of space-time duality. A historical overview of how this beautiful framework has been exploited to develop instruments for optical signal processing is presented. The power of this framework is then demonstrated by reviewing four devices in detail based upon space-time dualities that have been experimentally demonstrated: 1) a time-lens timing-jitter compensator for ultralong-haul dense-wavelength-division-multiplexed dispersion-managed soliton transmission, 2) a multiwavelength pulse generator using time-lens compression, 3) a programmable ultrafast optical delay line by use of a time-prism pair, and 4) an enhanced ultrafast optical delay line by use of soliton propagation between a time-prism pair.
k-Inflation in noncommutative space-time
NASA Astrophysics Data System (ADS)
Feng, Chao-Jun; Li, Xin-Zhou; Liu, Dao-Jun
2015-02-01
The power spectra of the scalar and tensor perturbations in the noncommutative k-inflation model are calculated in this paper. In this model, all the modes are created when the stringy space-time uncertainty relation is satisfied, and they are generated inside the sound/Hubble horizon during inflation for the scalar/tensor perturbations. It turns out that a linear term describing the noncommutative space-time effect contributes to the power spectra of the scalar and tensor perturbations. Confronting the general noncommutative k-inflation model with the latest results from Planck and BICEP2, and taking and as free parameters, we find that it is well consistent with observations. However, for the two specific models, i.e. the tachyon and DBI inflation models, it is found that the DBI model is not favored, while the tachyon model lies inside the contour, when the e-folding number is assumed to be around.
Effect of Heat on Space-Time Correlations in Jets
NASA Technical Reports Server (NTRS)
Bridges, James
2006-01-01
Measurements of space-time correlations of velocity, acquired in jets from acoustic Mach number 0.5 to 1.5 and static temperature ratios up to 2.7 are presented and analyzed. Previous reports of these experiments concentrated on the experimental technique and on validating the data. In the present paper the dataset is analyzed to address the question of how space-time correlations of velocity are different in cold and hot jets. The analysis shows that turbulent kinetic energy intensities, lengthscales, and timescales are impacted by the addition of heat, but by relatively small amounts. This contradicts the models and assumptions of recent aeroacoustic theory trying to predict the noise of hot jets. Once the change in jet potential core length has been factored out, most one- and two-point statistics collapse for all hot and cold jets.
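The two-point statistics discussed above are built from normalized space-time correlations between velocity signals at separated probes; a minimal estimator sketch (hypothetical signals, not NASA's processing chain) in which the lag of the correlation peak gives the eddy convection time between probes:

```python
import numpy as np

def space_time_corr(u1, u2, max_lag):
    """Normalized cross-correlation R(tau) between velocity fluctuations
    at two probes, over integer lags -max_lag..max_lag. The lag of the
    peak estimates the convection time of turbulent structures."""
    u1 = u1 - u1.mean()
    u2 = u2 - u2.mean()
    norm = np.sqrt((u1 ** 2).mean() * (u2 ** 2).mean())
    lags = range(-max_lag, max_lag + 1)
    out = []
    for lag in lags:
        if lag >= 0:
            a, b = u1[: len(u1) - lag], u2[lag:]
        else:
            a, b = u1[-lag:], u2[: len(u2) + lag]
        out.append((a * b).mean() / norm)
    return np.array(list(lags)), np.array(out)
```

Dividing the probe separation by the peak lag (times the sampling interval) yields a convection velocity, one of the quantities whose sensitivity to jet temperature is at issue above.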
Micro-Macro Duality and Space-Time Emergence
Ojima, Izumi
2011-03-28
The microscopic origin of space-time geometry is explained on the basis of an emergence process associated with the condensation of an infinite number of microscopic quanta responsible for symmetry breakdown, which implements the basic essence of 'Quantum-Classical Correspondence' and of the forcing method in physical and mathematical contexts, respectively. From this viewpoint, the space-time dependence of physical quantities arises from the 'logical extension' that changes 'constant objects' into 'variable objects' by tagging the order parameters associated with the condensation onto 'constant objects'; the logical direction here, from a value y to a domain variable x (to materialize the basic mechanism behind the Gel'fand isomorphism), is just opposite to that common in the usual definition of a function f: x → f(x) from its domain variable x to a value y = f(x).
Static Isotropic Space-Times with Radially Imperfect Fluids
NASA Astrophysics Data System (ADS)
Konopka, Tomasz
When one is solving the equations of general relativity in a symmetric sector, it is natural to consider the same symmetry for the geometry and stress-energy. This implies that for static and isotropic space-times, the most general natural stress-energy tensor is a sum of a perfect fluid and a radially imperfect fluid component. In the special situations where the perfect fluid component vanishes or is a space-time constant, the solutions to Einstein's equations can be thought of as modified Schwarzschild and Schwarzschild-de Sitter spaces. Exact solutions of this type are derived and it is shown that whereas deviations from the unmodified solutions can be made small, among the manifestations of the imperfect fluid component is a shift in angular momentum scaling for orbiting test bodies at large radius. Based on this effect, the question of whether the imperfect fluid component can feasibly describe dark matter phenomenology is addressed.
Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms
Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.
2013-01-01
Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most of the clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, the X and Y jaws, and the applicator contribute up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom; in the presence of heterogeneities, agreement was in the range of 1-3%, generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162
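The 2%/2 mm figure above is a combined dose-difference/distance-to-agreement (gamma) criterion. As a rough illustration only (not the evaluation tool used in the study), a minimal 1D gamma pass-rate check might look like the sketch below; the function name, global dose normalisation, and default tolerances are assumptions:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.02, dist_tol=2.0):
    """1D gamma analysis: a reference point passes if some evaluated point
    satisfies sqrt((dose_diff/dose_tol)^2 + (dist/dist_tol)^2) <= 1."""
    ref_dose = np.asarray(ref_dose, float)
    eval_dose = np.asarray(eval_dose, float)
    positions = np.asarray(positions, float)
    d_max = ref_dose.max()                            # global dose normalisation
    passed = 0
    for x_r, d_r in zip(positions, ref_dose):
        dd = (eval_dose - d_r) / (dose_tol * d_max)   # dose-difference term
        dx = (positions - x_r) / dist_tol             # distance term (mm)
        passed += np.sqrt(dd**2 + dx**2).min() <= 1.0
    return passed / len(ref_dose)
```

A depth-dose curve compared against itself (or a version shifted by less than the tolerances) yields a 100% pass rate, while gross disagreement drives the rate below 1.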
Uniqueness of Kerr space-time near null infinity
Wu Xiaoning; Bai Shan
2008-12-15
We re-express the Kerr metric in standard Bondi-Sachs coordinates near null infinity I⁺. Using the uniqueness result of the characteristic initial value problem, we prove that the Kerr metric is the only asymptotically flat, stationary, axially symmetric, type-D solution of the vacuum Einstein equation. The Taylor series of the Kerr space-time is expressed in terms of Bondi-Sachs coordinates, and the Newman-Penrose constants have been calculated.
Detecting space-time cancer clusters using residential histories
NASA Astrophysics Data System (ADS)
Jacquez, Geoffrey M.; Meliker, Jaymie R.
2007-04-01
Methods for analyzing geographic clusters of disease typically ignore the space-time variability inherent in epidemiologic datasets, do not adequately account for known risk factors (e.g., smoking and education) or covariates (e.g., age, gender, and race), and do not permit investigation of the latency window between exposure and disease. Our research group recently developed Q-statistics for evaluating space-time clustering in cancer case-control studies with residential histories. This technique relies on time-dependent nearest neighbor relationships to examine clustering at any moment in the life-course of the residential histories of cases relative to that of controls. In addition, in place of the widely used null hypothesis of spatial randomness, each individual's probability of being a case is instead based on his/her risk factors and covariates. Case-control clusters will be presented using residential histories of 220 bladder cancer cases and 440 controls in Michigan. In preliminary analyses of this dataset, smoking, age, gender, race and education were sufficient to explain the majority of the clustering of residential histories of the cases. Clusters of unexplained risk, however, were identified surrounding the business address histories of 10 industries that emit known or suspected bladder cancer carcinogens. The clustering of 5 of these industries began in the 1970s and persisted through the 1990s. This systematic approach for evaluating space-time clustering has the potential to generate novel hypotheses about environmental risk factors. These methods may be extended to detect differences in space-time patterns of any two groups of people, making them valuable for security intelligence and surveillance operations.
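The nearest-neighbor idea behind Q-statistics can be sketched for a single time slice as a count, for each case, of how many of its k nearest neighbors are also cases. This is a simplified illustration, not the authors' implementation; the function name, Euclidean metric, and choice of k are assumptions, and significance would be assessed by Monte Carlo using the covariate-adjusted case probabilities described in the abstract:

```python
import numpy as np

def local_Q(coords, is_case, k=5):
    """For each case, count how many of its k nearest neighbours are cases.
    coords: (n, 2) residence locations at one time slice; is_case: boolean (n,)."""
    n = len(coords)
    Q = np.zeros(n, int)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        d[i] = np.inf                       # exclude self
        nn = np.argsort(d)[:k]              # indices of k nearest neighbours
        Q[i] = is_case[nn].sum()
    return np.where(is_case, Q, 0)          # Q_i is defined for cases only
```

Summing the local statistics over cases, and repeating the calculation at each time point of the residential histories, gives the time-dependent global statistic.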
Class of Einstein-Maxwell-dilaton-axion space-times
Matos, Tonatiuh; Miranda, Galaxia; Sanchez-Sanchez, Ruben; Wiederhold, Petra
2009-06-15
We use the harmonic maps ansatz to find exact solutions of the Einstein-Maxwell-dilaton-axion (EMDA) equations. The solutions are harmonic maps invariant under the symplectic real group in four dimensions, Sp(4,R) ≈ O(5). We find solutions of the EMDA field equations for the one- and two-dimensional subspaces of the symplectic group. In particular, to illustrate the method, we find space-times that generalize the Schwarzschild solution with dilaton, axion, and electromagnetic fields.
Corrected Hawking Temperature in Snyder's Quantized Space-time
NASA Astrophysics Data System (ADS)
Ma, Meng-Sen; Liu, Fang; Zhao, Ren
2015-06-01
In the quantized space-time of Snyder, the generalized uncertainty relation and noncommutativity are both present. In this paper we analyze the possible form of the corrected Hawking temperature and derive it from both effects. It is shown that the corrected Hawking temperature has a form similar to that of the noncommutative-geometry-inspired Schwarzschild black hole, albeit with a requirement on the noncommutative parameter θ and the minimal length a.
Space-time structure of weak and electromagnetic interactions
Hestenes, D.
1982-02-01
The generator of electromagnetic gauge transformations in the Dirac equation has a unique geometric interpretation and a unique extension to the generators of the gauge group SU(2) x U(1) for the Weinberg-Salam theory of weak and electromagnetic interactions. It follows that internal symmetries of the weak interactions can be interpreted as space-time symmetries of spinor fields in the Dirac algebra. The possibilities for interpreting strong interaction symmetries in a similar way are highly restricted.
The wave equation on static singular space-times
NASA Astrophysics Data System (ADS)
Mayerhofer, Eberhard
2008-02-01
The first part of my thesis lays the foundations of generalized Lorentz geometry. The basic algebraic structure of finite-dimensional modules over the ring of generalized numbers is investigated. The motivation for this part of my thesis evolved from the main topic, the wave equation on singular space-times. The second and main part of my thesis is devoted to establishing a local existence and uniqueness theorem for the wave equation on singular space-times. The singular Lorentz metric subject to our discussion is modeled within the special algebra on manifolds in the sense of Colombeau. Inspired by an approach to generalized hyperbolicity of conical space-times due to Vickers and Wilson, we succeed in establishing certain energy estimates, which by a further elaborated equivalence of energy integrals and Sobolev norms allow us to prove existence and uniqueness of local generalized solutions of the wave equation with respect to a wide class of generalized metrics. The third part of my thesis treats three different point value and uniqueness questions in algebras of generalized functions.
Review of software for space-time disease surveillance
2010-01-01
Disease surveillance makes use of information technology at almost every stage of the process, from data collection and collation, through to analysis and dissemination. Automated data collection systems enable near-real-time analysis of incoming data. This context places a heavy burden on software used for space-time surveillance. In this paper, we review software programs capable of space-time disease surveillance analysis, and outline some of their salient features, shortcomings, and usability. Programs with space-time methods were selected for inclusion, limiting our review to ClusterSeer, SaTScan, GeoSurveillance and the Surveillance package for R. We structure the review around stages of analysis: preprocessing, analysis, technical issues, and output. Simulated data were used to review each of the software packages. SaTScan was found to be the best equipped package for use in an automated surveillance system. ClusterSeer is more suited to data exploration, and learning about the different methods of statistical surveillance. PMID:20226054
Relativistic helicity and link in Minkowski space-time
Yoshida, Z.; Kawazura, Y.; Yokoyama, T.
2014-04-15
A relativistic helicity has been formulated in four-dimensional Minkowski space-time. Whereas the relativistic distortion of space-time violates the conservation of the conventional helicity, the newly defined relativistic helicity is conserved in a barotropic fluid or plasma, dictating a fundamental topological constraint. The relation between the helicity and the vortex-line topology has been delineated by analyzing the linking number of vortex filaments, which are singular differential forms representing the pure states of a Banach algebra. While the dimension of space-time is four, vortex filaments link because vorticities are primarily 2-forms and the corresponding 2-chains link in four dimensions; the relativistic helicity measures the linking number of vortex filaments that are proper-time cross-sections of the vorticity 2-chains. A thermodynamic force yields an additional term in the vorticity, by which the vortex filaments on a reference-time plane are no longer pure states. However, the vortex filaments on a proper-time plane remain pure states if the thermodynamic force is exact (barotropic); thus, the linking number of vortex filaments is conserved.
Space-time coordinated metadata for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Rots, A. H.
2007-08-01
Space-time coordinate metadata are at the very core of understanding astronomical data and information. This aspect of data description requires very careful consideration. The design needs to be sufficiently general that it can adequately represent the many coordinate systems and conventions that are in use in the community. On the other hand, the most basic requirement is that the space-time metadata for queries, for resource descriptions, and for data be complete and self-consistent. It is important to keep in mind that space, time, redshift, and spectrum are strongly intertwined coordinates: time has little meaning without knowing the location, and vice versa; redshift and spectral data require position and velocity for correct interpretation. The design of the metadata structure has been completed at this time and will support most, if not all, coordinate systems and transformations between them for the Virtual Observatory, either immediately or through extensions. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
Space-time modeling of a rainfall field ; Application to daily rainfall in the Loire basin
NASA Astrophysics Data System (ADS)
Lepioufle, Jean-Marie; Leblois, Etienne; Creutin, Jean-Dominique
2010-05-01
Water resources management for a watershed requires assessing both high flow volumes and the impact of management practices on different stakeholders (hydropower, irrigation, ecology...). To test different management strategies, hydrologists have developed distributed hydrological models incorporating several computational objects such as a digital elevation model, sub-basins, and distances to the basin outlet. A good characterization of rainfall variability in space and time is crucial for the relevance of a hydrological model as a basis for the choice of a water management strategy. Climatological references of rainfall hazard must be built from observations over decades. Daily rainfall measurements from raingauge networks are therefore still an invaluable source of information for a precise representation of precipitation hazard, despite the recent availability of radar estimates. Based on either raingauge or radar observations, it is possible to model a rainfall field mathematically as a space-time intermittent process (the superposition of an inner variability field and a rainfall indicator field, both influenced by advection). Geostatistics makes it possible to investigate the link between the space-time structure of an instantaneous process and the evolution of its spatial structure with time aggregation. A method is proposed to infer a relevant instantaneous process from observed rainfall statistics. After fitting the parameters of the instantaneous space-time variogram with the simplex method, spatial variograms for different durations, consistent with the time-aggregated variograms, are calculated. On this basis, an avenue is open to simulate homogeneous rainfall fields which respect the major statistical characteristics relevant to hydrologists: the expectation and variance of the rainfall distribution and the spatial variogram for different durations. Benefits and limits of this approach are investigated using daily rainfall data from the Loire basin in France. Two sub-regions are highlighted. A downstream zone
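The link between time aggregation and spatial structure can be illustrated by computing an empirical spatial variogram of raingauge data at several aggregation durations. This is a hedged sketch of the general idea only (not the authors' inference method, which fits an instantaneous variogram model by the simplex method); the function names, distance bins, and durations are assumptions:

```python
import numpy as np

def empirical_variogram(values, coords, bins):
    """Isotropic empirical semi-variogram: gamma(h) = 0.5 * mean (z_i - z_j)^2
    over station pairs whose separation distance falls in each bin."""
    n = len(values)
    i, j = np.triu_indices(n, k=1)
    h = np.linalg.norm(coords[i] - coords[j], axis=1)   # pair separations
    sq = 0.5 * (values[i] - values[j]) ** 2             # half squared increments
    which = np.digitize(h, bins)
    return np.array([sq[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])

def variograms_by_duration(daily, coords, bins, durations=(1, 2, 4)):
    """Average per-period variograms of rainfall aggregated over each duration.
    daily: (days, stations) array of daily totals."""
    out = {}
    for d in durations:
        m = (daily.shape[0] // d) * d
        agg = daily[:m].reshape(-1, d, daily.shape[1]).sum(axis=1)  # d-day totals
        out[d] = np.nanmean([empirical_variogram(period, coords, bins)
                             for period in agg], axis=0)
    return out
```

Comparing the curves across durations shows how aggregation smooths the field, which is the empirical counterpart of the variogram-aggregation relation exploited in the article.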
A MAPLE Package for Energy-Momentum Tensor Assessment in Curved Space-Time
Murariu, Gabriel; Praisler, Mirela
2010-01-21
One of the most interesting problems that has remained unsolved since the birth of the General Theory of Relativity (GR) is energy-momentum localization. All our reflections are within the Lagrange formalism of field theory. The concept of the energy-momentum tensor for gravitational interactions has a long history, and there have been various attempts to find a generally accepted expression. This paper is dedicated to the investigation of the energy-momentum problem in the theory of General Relativity. We use the Einstein [1], Landau-Lifshitz [2], Bergmann-Thomson [3] and Moller [4] prescriptions to evaluate the energy-momentum distribution. In order to handle the huge volume of computation, and with the aim of a general approach for different space-time configurations, a MAPLE application for studying the energy-momentum tensor was built. In the second part of the paper, comparative results are presented for two space-time configurations.
Inter-bit prediction based on maximum likelihood estimate for distributed video coding
NASA Astrophysics Data System (ADS)
Klepko, Robert; Wang, Demin; Huchet, Grégory
2010-01-01
Distributed Video Coding (DVC) is an emerging video coding paradigm for the systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on the maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bitplane at a time, starting from the most significant bit-plane. Results provided from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
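The inter-bit prediction step can be illustrated with a toy maximum-likelihood bit-plane predictor: each bit of the plane being decoded is predicted from the co-located bits of the already-decoded, more significant planes, choosing the value that was most frequent under that context in previously decoded data. This is a hedged sketch, not the paper's algorithm; the context definition, Laplace smoothing, and function name are assumptions:

```python
import numpy as np

def predict_bitplane(decoded_planes, counts):
    """Predict each bit of the next (lower) bit-plane.
    decoded_planes: list of (h, w) uint8 arrays, most significant plane first.
    counts[ctx]: (n_zero, n_one) occurrences of the target bit under context
    ctx in previously decoded data; the ML choice is the more frequent value."""
    h, w = decoded_planes[0].shape
    pred = np.zeros((h, w), np.uint8)
    for y in range(h):
        for x in range(w):
            ctx = tuple(int(p[y, x]) for p in decoded_planes)  # higher-plane bits
            c0, c1 = counts.get(ctx, (1, 1))                   # smoothed counts
            pred[y, x] = 1 if c1 > c0 else 0
    return pred
```

A good prediction leaves fewer bit errors in the side information, so fewer parity bits are needed for Wyner-Ziv error correction, which is the source of the rate savings reported in the abstract.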
Photoplus: auxiliary information for printed images based on distributed source coding
NASA Astrophysics Data System (ADS)
Samadani, Ramin; Mukherjee, Debargha
2008-01-01
A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.
Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu
2014-01-01
We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550
2010-01-01
HGT and intra-genomic shuffling. Conclusions: We describe novel features of PARCELs (Palindromic Amphipathic Repeat Coding ELements), a set of widely distributed repeat protein domains and coding sequences that were likely acquired through HGT by diverse unicellular microbes, further mobilized and diversified within genomes, and co-opted for expression in the membrane proteome of some taxa. Disseminated by multiple gene-centric vehicles, ORFs harboring these elements enhance accessory gene pools as part of the "mobilome" connecting the genomes of various clades in taxa sharing common niches. PMID:20626840
Space-time investigation of the effects of fishing on fish populations.
Ono, Kotaro; Shelton, Andrew O; Ward, Eric J; Thorson, James T; Feist, Blake E; Hilborn, Ray
2016-03-01
Species distribution models (SDMs) are important statistical tools for obtaining ecological insight into species-habitat relationships and providing advice for natural resource management. Many SDMs have been developed over the past decades, with a focus on space- and more recently, time-dependence. However, most of these studies have been on terrestrial species and applications to marine species have been limited. In this study, we used three large spatio-temporal data sources (habitat maps, survey-based fish density estimates, and fishery catch data) and a novel space-time model to study how the distribution of fishing may affect the seasonal dynamics of a commercially important fish species (Pacific Dover sole, Microstomus pacificus) off the west coast of the USA. Dover sole showed a large scale change in seasonal and annual distribution of biomass, and its distribution shifted from mid-depth zones to inshore or deeper waters during late summer/early fall. In many cases, the scale of fishery removal was small compared to these broader changes in biomass, suggesting that seasonal dynamics were primarily driven by movement and not by fishing. The increasing availability of appropriate data and space-time modeling software should facilitate extending this work to many other species, particularly those in marine ecosystems, and help tease apart the role of growth, natural mortality, recruitment, movement, and fishing on spatial patterns of species distribution in marine systems. PMID:27209782
NASA Astrophysics Data System (ADS)
Lake, Kayll
2010-12-01
The title immediately brings to mind a standard reference of almost the same title [1]. The authors are quick to point out the relationship between these two works: they are complementary. The purpose of this work is to explain what is known about a selection of exact solutions. As the authors state, it is often much easier to find a new solution of Einstein's equations than it is to understand it. Even at first glance it is very clear that great effort went into the production of this reference. The book is replete with beautifully detailed diagrams that reflect deep geometric intuition. In many parts of the text there are detailed calculations that are not readily available elsewhere. The book begins with a review of basic tools that allows the authors to set the notation. Then follows a discussion of Minkowski space with an emphasis on the conformal structure and applications such as simple cosmic strings. The next two chapters give an in-depth review of de Sitter space and then anti-de Sitter space. Both chapters contain a remarkable collection of useful diagrams. The standard model in cosmology these days is the ΛCDM model, and whereas the chapter on the Friedmann-Lemaître-Robertson-Walker space-times contains much useful information, I found the discussion of the currently popular Λ representation rather too brief. After a brief but interesting excursion into electrovacuum, the authors consider the Schwarzschild space-time. This chapter does mention the Swiss cheese model but the discussion is too brief and certainly dated. Space-times related to Schwarzschild are covered in some detail and include not only the addition of charge and the cosmological constant but also the addition of radiation (the Vaidya solution). Just prior to a discussion of the Kerr space-time, static axially symmetric space-times are reviewed. Here one can find a very interesting discussion of the Curzon-Chazy space-time. The chapter on rotating black holes is rather brief and, for
Re-Examination of Globally Flat Space-Time
NASA Astrophysics Data System (ADS)
Feldman, Michael R.
2013-11-01
In the following, we offer a novel approach to modeling the observed effects currently attributed to the theoretical concepts of "dark energy," "dark matter," and "dark flow." Instead of assuming the existence of these theoretical concepts, we take an alternative route and choose to redefine what we consider to be inertial motion as well as what constitutes an inertial frame of reference in flat space-time. We adopt none of the features of our current cosmological models except for the requirement that special and general relativity be local approximations within our revised definition of inertial systems. Implicit in our ideas is the assumption that at "large enough" scales one can treat objects within these inertial systems as point-particles having an insignificant effect on the curvature of space-time. We then proceed under the assumption that time and space are fundamentally intertwined such that time- and spatial-translational invariance are not inherent symmetries of flat space-time (i.e., observable clock rates depend upon both relative velocity and spatial position within these inertial systems) and take the geodesics of this theory in the radial Rindler chart as the proper characterization of inertial motion. With this commitment, we are able to model solely with inertial motion the observed effects expected to be the result of "dark energy," "dark matter," and "dark flow." In addition, we examine the potential observable implications of our theory in a gravitational system located within a confined region of an inertial reference frame, subsequently interpreting the Pioneer anomaly as support for our redefinition of inertial motion. As well, we extend our analysis into quantum mechanics by quantizing for a real scalar field and find a possible explanation for the asymmetry between matter and antimatter within the framework of these redefined inertial systems.
High Resolution Space-Time Ozone Modeling for Assessing Trends
Sahu, Sujit K.; Gelfand, Alan E.; Holland, David M.
2008-01-01
The assessment of air pollution regulatory programs designed to improve ground level ozone concentrations is a topic of considerable interest to environmental managers. To aid this assessment, it is necessary to model the space-time behavior of ozone for predicting summaries of ozone across spatial domains of interest and for the detection of long-term trends at monitoring sites. These trends, adjusted for the effects of meteorological variables, are needed for determining the effectiveness of pollution control programs in terms of their magnitude and uncertainties across space. This paper proposes a space-time model for daily 8-hour maximum ozone levels to provide input to regulatory activities: detection, evaluation, and analysis of spatial patterns of ozone summaries and temporal trends. The model is applied to analyzing data from the state of Ohio, which has been chosen because it contains a mix of urban, suburban, and rural ozone monitoring sites in several large cities separated by large rural areas. The proposed space-time model is auto-regressive and incorporates the most important meteorological variables observed at a collection of ozone monitoring sites as well as at several weather stations where ozone levels have not been observed. This problem of misalignment of ozone and meteorological data is overcome by spatial modeling of the latter. In so doing we adopt an approach based on the successive daily increments in meteorological variables. With regard to modeling, the increment (or change-in-meteorology) process proves more attractive than working directly with the meteorology process, without sacrificing any desired inference. The full model is specified within a Bayesian framework and is fitted using MCMC techniques. Hence, full inference is available for model unknowns as well as for predictions in time and space, evaluation of annual summaries, and assessment of trends. PMID:19759840
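The auto-regressive backbone of such a model can be sketched, at a single site and without the Bayesian spatial machinery, as ozone today regressed on ozone yesterday plus meteorological covariates. This is a deliberately simplified least-squares stand-in for the hierarchical MCMC model described above; the function name and model reduction are assumptions:

```python
import numpy as np

def fit_ar1_with_covariates(y, X):
    """Least-squares fit of y_t = rho * y_{t-1} + X_t @ beta + eps_t,
    a simplified single-site version of an auto-regressive space-time model.
    y: (T,) response series; X: (T, p) covariates (e.g. meteorology)."""
    Z = np.column_stack([y[:-1], X[1:]])           # lagged response + covariates
    coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
    return coef[0], coef[1:]                       # rho, beta
```

In the full model the covariates would themselves be spatially interpolated (via the change-in-meteorology process) before entering the regression, and all unknowns would receive priors.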
Space-Time Cluster Analysis of Invasive Meningococcal Disease
de Melker, Hester; Spanjaard, Lodewijk; Dankert, Jacob; Nagelkerke, Nico
2004-01-01
Clusters are recognized when meningococcal cases of the same phenotypic strain (markers: serogroup, serotype, and subtype) occur in spatial and temporal proximity. The incidence of such clusters was compared to the incidence that would be expected by chance by using space-time nearest-neighbor analysis of 4,887 confirmed invasive meningococcal cases identified in the 9-year surveillance period 1993–2001 in the Netherlands. Clustering beyond chance only occurred among the closest neighboring cases (comparable to secondary cases) and was small (3.1%, 95% confidence interval 2.1%–4.1%). PMID:15498165
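The record above does not spell out the mechanics of the space-time nearest-neighbor analysis; as an illustrative sketch of the same idea, a Knox-style close-pair count can be compared against a Monte Carlo permutation expectation. All function names and thresholds below are hypothetical, not taken from the study:

```python
import random

def knox_statistic(events, d_space, d_time):
    """Count event pairs that are close in both space and time."""
    count = 0
    for i in range(len(events)):
        xi, yi, ti = events[i]
        for j in range(i + 1, len(events)):
            xj, yj, tj = events[j]
            near_space = (xi - xj) ** 2 + (yi - yj) ** 2 <= d_space ** 2
            near_time = abs(ti - tj) <= d_time
            if near_space and near_time:
                count += 1
    return count

def permutation_pvalue(events, d_space, d_time, n_perm=200, seed=1):
    """Estimate how often random time relabellings match or exceed the
    observed close-pair count (a Monte Carlo p-value for clustering)."""
    rng = random.Random(seed)
    observed = knox_statistic(events, d_space, d_time)
    times = [t for _, _, t in events]
    exceed = 0
    for _ in range(n_perm):
        shuffled = times[:]
        rng.shuffle(shuffled)
        permuted = [(x, y, t) for (x, y, _), t in zip(events, shuffled)]
        if knox_statistic(permuted, d_space, d_time) >= observed:
            exceed += 1
    return observed, exceed / n_perm
```

With a handful of events stacked in one place and week, the observed count far exceeds anything the permutations produce, mirroring the paper's "clustering beyond chance" among closest neighboring cases.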
Particle propagation and effective space-time in gravity's rainbow
NASA Astrophysics Data System (ADS)
Garattini, Remo; Mandanici, Gianluca
2012-01-01
Based on the results obtained in our previous study on gravity’s rainbow, we determine the quantum corrections to the space-time metric for the Schwarzschild and the de Sitter background, respectively. We analyze how quantum fluctuations alter these metrics, inducing modifications on the propagation of test particles. Significantly enough, we find that quantum corrections can become relevant not only for particles approaching the Planck energy but, due to the one-loop contribution, even for low-energy particles as far as Planckian length scales are considered. We briefly compare our results with others obtained in similar studies and with the recent experimental OPERA announcement of superluminal neutrino propagation.
Fuzzy Space-Time Geometry and Particle's Dynamics
NASA Astrophysics Data System (ADS)
Mayburov, S. N.
2010-12-01
The quantum space-time with Dodson-Zeeman topological structure is studied. In its framework, the states of a massive particle m correspond to the elements of a fuzzy ordered set (Foset), i.e. to fuzzy points. Due to their partial ordering, the space coordinate x of m acquires a principal uncertainty σ_x. The Schroedinger formalism of quantum mechanics is derived from consideration of the evolution of m in fuzzy phase space, with a minimal number of additional axioms. Possible particle interactions on a fuzzy manifold are studied and shown to be gauge invariant.
Energy-momentum localization in Marder space-time
NASA Astrophysics Data System (ADS)
Saygün, S.; Saygün, M.; Tarhan, I.
2007-01-01
Considering the Einstein, Møller, Bergmann-Thomson, Landau-Lifshitz (LL), Papapetrou, Qadir-Sharif and Weinberg definitions in general relativity, we find the momentum four-vector of the closed Universe based on Marder space-time. The momentum four-vector (due to matter plus field) is found to be zero. These results support the viewpoints of Banerjee-Sen, Xulu and Aydogdu-Salti. Another point is that our study agrees with the previous works of Cooperstock-Israelit, Rosen, Johri et al.
Naked singularities in higher dimensional Vaidya space-times
Ghosh, S. G.; Dadhich, Naresh
2001-08-15
We investigate the end state of the gravitational collapse of a null fluid in higher-dimensional space-times. Both naked singularities and black holes are shown to be developing as the final outcome of the collapse. The naked singularity spectrum in a collapsing Vaidya region (4D) gets covered with the increase in dimensions and hence higher dimensions favor a black hole in comparison to a naked singularity. The cosmic censorship conjecture will be fully respected for a space of infinite dimension.
MAPLE Procedures For Boson Fields System On Curved Space - Time
Murariu, Gabriel
2007-04-23
Systems of interacting boson fields have been an important subject in recent years. From the problem of dark matter to the study of boson stars, boson fields are involved. In the general configuration, we consider a Klein-Gordon-Maxwell-Einstein field system for a complex scalar field minimally coupled to a gravitational one. The necessity of studying a larger number of space-time configurations, and the huge volume of computations required for each particular situation, are the reasons for building a set of MAPLE procedures for this kind of system.
Canonical quantization of general relativity in discrete space-times.
Gambini, Rodolfo; Pullin, Jorge
2003-01-17
It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang. PMID:12570532
Spatial and Lorentzian surfaces in Robertson-Walker space times
NASA Astrophysics Data System (ADS)
Chen, Bang-Yen; Van der Veken, Joeri
2007-07-01
Let L_1^4(f,c) = (I ×_f S^3(c), g_{f,c}) be a Robertson-Walker space time which does not contain any open subset of constant curvature. In this paper, we provide a general study of nondegenerate surfaces in L_1^4(f,c). First, we prove the nonexistence of marginally trapped surfaces with positive relative nullity. Then, we classify totally geodesic submanifolds. Finally, we classify the family of surfaces with parallel second fundamental form and the family of totally umbilical surfaces with parallel mean curvature vector.
Iodine photodissociation laser with an intracavity space-time light modulator
Kachalin, G N; Pevny, S N; Pivkin, A N; Safronov, A S
2012-08-31
A scheme of an iodine laser with two different intracavity space-time modulators based on electrooptic PLZT ceramics is experimentally studied. It is shown that lasing can occur in different angular directions with the use of both modulators. The output laser energy is 10 mJ with a pulse duration of 200 μs and a beam divergence of 6.3 × 10⁻⁴ rad. The laser field of view (5.1 × 10⁻³ rad) consists of a discrete set of 8 × 8 directions. (control of laser radiation parameters)
Numerical Relativity in Higher-Dimensional Space-Times
NASA Astrophysics Data System (ADS)
Witek, Helvi
2013-09-01
Black holes are among the most exciting phenomena predicted by General Relativity and play a key role in fundamental physics. Many interesting phenomena involve dynamical black hole configurations in the high curvature regime of gravity. In these lecture notes I will summarize the main numerical relativity techniques to explore highly dynamical phenomena, such as black hole collisions, in generic D-dimensional space-times. The present notes are based on my lectures given at the NR/HEP2 spring school at IST/Lisbon (Portugal) from March 11-14, 2013.
Founding Gravitation in 4D Euclidean Space-Time Geometry
Winkler, Franz-Guenter
2010-11-24
The Euclidean interpretation of special relativity which has been suggested by the author is a formulation of special relativity in ordinary 4D Euclidean space-time geometry. The natural and geometrically intuitive generalization of this view involves variations of the speed of light (depending on location and direction) and a Euclidean principle of general covariance. In this article, a gravitation model by Jan Broekaert, which implements a view of relativity theory in the spirit of Lorentz and Poincare, is reconstructed and shown to fulfill the principles of the Euclidean approach after an appropriate reinterpretation.
NASA Astrophysics Data System (ADS)
Swanekamp, S. B.; Oliver, B. V.; Grossmann, J. M.; Smithe, D.; Ludeking, L.
1996-11-01
The current understanding of plasma opening switch (POS) operation is as follows. During the conduction phase the switch plasma is redistributed by MHD forces. This redistribution of mass leads to the formation of a low density region in the switch where a 1-3 mm gap in the plasma is believed to form as the switch opens and magnetic energy is transferred between the primary storage inductor and the load. The processes of gap formation and power delivery are not very well understood. It is generally accepted that the assumptions of MHD theory are not valid during the gap formation and power delivery processes because electron inertia and the lack of space-charge neutrality are expected to play a key role. To study non-MHD processes during the gap formation process and power delivery phase of the POS, we have developed a technique for importing an arbitrary state of an MHD code into the PIC code MAGIC. At present the plasma kinetic pressure is ignored during the initialization of particles. Work supported by Defense Nuclear Agency.
Space-time reference with an optical link
NASA Astrophysics Data System (ADS)
Berceau, P.; Taylor, M.; Kahn, J.; Hollberg, L.
2016-07-01
We describe a concept for realizing a high performance space-time reference using a stable atomic clock in a precisely defined orbit and synchronizing the orbiting clock to high-accuracy atomic clocks on the ground. The synchronization would be accomplished using a two-way lasercom link between ground and space. The basic approach is to take advantage of the highest-performance cold-atom atomic clocks at national standards laboratories on the ground and to transfer that performance to an orbiting clock that has good stability and that serves as a ‘frequency-flywheel’ over time-scales of a few hours. The two-way lasercom link would also provide precise range information and thus precise orbit determination. With a well-defined orbit and a synchronized clock, the satellite could serve as a high-accuracy space-time reference, providing precise time worldwide, a valuable reference frame for geodesy, and independent high-accuracy measurements of GNSS clocks. Under reasonable assumptions, a practical system would be able to deliver picosecond timing worldwide and millimeter orbit determination, and could serve as an enabling subsystem for other proposed space-gravity missions, which are briefly reviewed.
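The two-way synchronization described above rests on a standard identity: with transmit and receive timestamps taken at both ends of the up-and-down link, and assuming a reciprocal (equal up/down) path delay, the clock offset and the propagation delay separate cleanly. A minimal sketch of that bookkeeping (illustrative only, not the authors' implementation):

```python
def two_way_sync(t1, t2, t3, t4):
    """Two-way time transfer between a ground clock and a satellite clock.

    t1: ground transmit time (read on the ground clock)
    t2: satellite receive time (read on the satellite clock)
    t3: satellite transmit time (read on the satellite clock)
    t4: ground receive time (read on the ground clock)

    With a reciprocal path delay, the satellite-minus-ground clock
    offset and the one-way propagation delay are recovered exactly.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # satellite clock minus ground clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way propagation time
    return offset, delay
```

The recovered delay is what gives the precise range (and hence orbit) information mentioned in the abstract, while the offset steers the orbiting "frequency-flywheel" clock.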
Space time neural networks for tether operations in space
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles
1993-01-01
A space shuttle flight scheduled for 1992 will attempt to prove the feasibility of operating tethered payloads in Earth orbit. Due to the interaction between the Earth's magnetic field and current pulsing through the tether, the tethered system may exhibit a circular transverse oscillation referred to as the 'skiprope' phenomenon. Effective damping of skiprope motion depends on rapid and accurate detection of skiprope magnitude and phase. Because of non-linear dynamic coupling, the satellite attitude behavior has characteristic oscillations during the skiprope motion. Since the satellite attitude motion has many other perturbations, the relationship between the skiprope parameters and the attitude time history is very involved and non-linear. We propose a Space-Time Neural Network implementation for filtering satellite rate gyro data to rapidly detect and predict skiprope magnitude and phase. Training and testing of the skiprope detection system will be performed using a validated Orbital Operations Simulator and Space-Time Neural Network software developed in the Software Technology Branch at NASA's Lyndon B. Johnson Space Center.
Brain system for mental orientation in space, time, and person
Peer, Michael; Salomon, Roy; Goldberg, Ilan; Blanke, Olaf; Arzy, Shahar
2015-01-01
Orientation is a fundamental mental function that processes the relations between the behaving self to space (places), time (events), and person (people). Behavioral and neuroimaging studies have hinted at interrelations between processing of these three domains. To unravel the neurocognitive basis of orientation, we used high-resolution 7T functional MRI as 16 subjects compared their subjective distance to different places, events, or people. Analysis at the individual-subject level revealed cortical activation related to orientation in space, time, and person in a precisely localized set of structures in the precuneus, inferior parietal, and medial frontal cortex. Comparison of orientation domains revealed a consistent order of cortical activity inside the precuneus and inferior parietal lobes, with space orientation activating posterior regions, followed anteriorly by person and then time. Core regions at the precuneus and inferior parietal lobe were activated for multiple orientation domains, suggesting also common processing for orientation across domains. The medial prefrontal cortex showed a posterior activation for time and anterior for person. Finally, the default-mode network, identified in a separate resting-state scan, was active for all orientation domains and overlapped mostly with person-orientation regions. These findings suggest that mental orientation in space, time, and person is managed by a specific brain system with a highly ordered internal organization, closely related to the default-mode network. PMID:26283353
Visceral leishmaniasis in the state of Sao Paulo, Brazil: spatial and space-time analysis
Cardim, Marisa Furtado Mozini; Guirado, Marluci Monteiro; Dibo, Margareth Regina; Chiaravalloti, Francisco
2016-01-01
ABSTRACT OBJECTIVE To perform both space and space-time evaluations of visceral leishmaniasis in humans in the state of Sao Paulo, Brazil. METHODS The population considered in the study comprised autochthonous cases of visceral leishmaniasis and deaths resulting from it in Sao Paulo, between 1999 and 2013. The analysis considered the western region of the state as its studied area. Thematic maps were created to show visceral leishmaniasis dissemination in humans in the municipality. Spatial analysis tools Kernel and Kernel ratio were used to respectively obtain the distribution of cases and deaths and the distribution of incidence and mortality. Scan statistics were used in order to identify spatial and space-time clusters of cases and deaths. RESULTS The visceral leishmaniasis cases in humans, during the studied period, were observed to occur in the western portion of Sao Paulo, and their territorial extension mainly followed the eastbound course of the Marechal Rondon highway. The incidences were characterized as two sequences of concentric ellipses of decreasing intensities. The first and more intense one was found to have its epicenter in the municipality of Castilho (where the Marechal Rondon highway crosses the border of the state of Mato Grosso do Sul) and the second one in Bauru. Mortality was found to have a similar behavior to incidence. The spatial and space-time clusters of cases were observed to coincide with the two areas of highest incidence. Both the space-time clusters identified, even without coinciding in time, were started three years after the human cases were detected and had the same duration, that is, six years. CONCLUSIONS The expansion of visceral leishmaniasis in Sao Paulo has been taking place in an eastbound direction, focusing on the role of highways, especially Marechal Rondon, in this process. The space-time analysis detected the disease occurred in cycles, in different spaces and time periods. These meetings, if considered, may
Extreme wave analysis in the space-time domain: from observations to applications
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Alves, Jose-Henrique; Benetazzo, Alvise; Bergamasco, Filippo; Carniel, Sandro; Chao, Yung Y.; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro
2016-04-01
To this end, analytical directional spectra that explicitly depend upon the wind forcing (e.g. Pierson-Moskowitz or JONSWAP frequency spectra, combined with a cos² directional distribution) have been integrated to provide kinematic and geometric parameters of the sea state as a function of the wind speed and fetch length. Then, the SWAN numerical wave model has been modified in order to compute kinematic and geometric properties of the sea state, and run under different wave-current conditions and bathymetric gradients. In doing so, it has been possible to estimate the contributions to the variation of space-time extremes due to the wind input, the current speed and the depth gradients. Weather forecasting applications consist of using spectra simulated by wave forecasting models to compute space-time extremes. In this context, we have recently implemented the space-time extremes computation (according to the second-order Fedele model) within the WAVEWATCH III numerical wave model. New output products (i.e. the maximum expected crest and wave heights) have been validated against space-time stereo-photogrammetric measurements, proving the concept that powerful tools providing space-time extreme forecasts over extended domains may be developed for applications beneficial to the marine community.
Recursive evaluation of space-time lattice Green's functions
NASA Astrophysics Data System (ADS)
de Hon, Bastiaan P.; Arnold, John M.
2012-09-01
Up to a multiplicative constant, the lattice Green's function (LGF) as defined in condensed matter physics and lattice statistical mechanics is equivalent to the Z-domain counterpart of the finite-difference time-domain Green's function (GF) on a lattice. Expansion of a well-known integral representation for the LGF on a ν-dimensional hyper-cubic lattice in powers of Z-1 and application of the Chu-Vandermonde identity results in ν - 1 nested finite-sum representations for discrete space-time GFs. Due to severe numerical cancellations, these nested finite sums are of little practical use. For ν = 2, the finite sum may be evaluated in closed form in terms of a generalized hypergeometric function. For special lattice points, that representation simplifies considerably, while on the other hand the finite-difference stencil may be used to derive single-lattice-point second-order recurrence schemes for generating 2D discrete space-time GF time sequences on the fly. For arbitrary symbolic lattice points, Zeilberger's algorithm produces a third-order recurrence operator with polynomial coefficients of the sixth degree. The corresponding recurrence scheme constitutes the most efficient numerical method for the majority of lattice points, in spite of the fact that for explicit numeric lattice points the associated third-order recurrence operator is not the minimum recurrence operator. As regards the asymptotic bounds for the possible solutions to the recurrence scheme, Perron's theorem precludes factorial or exponential growth. Along horizontal lattices directions, rapid initial growth does occur, but poses no problems in augmented dynamic-range fixed precision arithmetic. By analysing long-distance wave propagation along a horizontal lattice direction, we have concluded that the chirp-up oscillations of the discrete space-time GF are the root cause of grid dispersion anisotropy. With each factor of ten increase in the lattice distance, one would have to roughly double
The application of the phase space time evolution method to electron shielding
NASA Technical Reports Server (NTRS)
Cordaro, M. C.; Zucker, M. S.
1972-01-01
A computer technique for treating the motion of charged and neutral particles, called the phase space time evolution (PSTE) method, was developed. This technique employs the computer's bookkeeping capacity to keep track of the time development of a phase space distribution of particles. The method was applied to a study of the penetration of electrons: a 1 MeV beam of electrons normally incident on a semi-infinite slab of aluminum was used. Results of the calculation were compared with Monte Carlo calculations and experimental results. Time-dependent PSTE electron penetration results for the same problem are presented.
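The bookkeeping idea behind a phase-space time-evolution method can be illustrated with a toy one-dimensional free-streaming step on a position-velocity grid; this is purely illustrative and not the authors' code:

```python
def pste_step(f, velocities, dx, dt):
    """Advance a discretized phase-space distribution f[ix][iv] by one
    time step of free streaming: the weight in velocity bin iv moves a
    whole number of position cells, round(v * dt / dx)."""
    nx, nv = len(f), len(velocities)
    g = [[0.0] * nv for _ in range(nx)]
    for ix in range(nx):
        for iv, v in enumerate(velocities):
            jx = ix + round(velocities[iv] * dt / dx)
            if 0 <= jx < nx:            # particles leaving the slab are lost
                g[jx][iv] += f[ix][iv]
    return g
```

A real transport code would interleave such streaming steps with scattering and energy-loss updates of the same tabulated distribution; the point here is only the "keep track of the distribution, step by step" bookkeeping.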
The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data
NASA Astrophysics Data System (ADS)
Ilic, Radovan D.; Spasic-Jokic, Vesna; Belicev, Petar; Dragovic, Milos
2005-03-01
This paper describes the application of the SRNA Monte Carlo package for proton transport simulations in complex geometry and different material compositions. The SRNA package was developed for 3D dose distribution calculation in proton therapy and dosimetry and it was based on the theory of multiple scattering. The decay of proton induced compound nuclei was simulated by the Russian MSDM model and our own using ICRU 63 data. The developed package consists of two codes: the SRNA-2KG, which simulates proton transport in combinatorial geometry and the SRNA-VOX, which uses the voxelized geometry using the CT data and conversion of the Hounsfield's data to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of the proton beam characterization by multi-layer Faraday cup, spatial distribution of positron emitters obtained by the SRNA-2KG code and intercomparison of computational codes in radiation dosimetry, indicate immediate application of the Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, as well as the results of the numerical experiments with proton beams to obtain 3D dose distribution in the eye and breast tumour.
A space-time discretization procedure for wave propagation problems
NASA Technical Reports Server (NTRS)
Davis, Sanford
1989-01-01
Higher order compact algorithms are developed for the numerical simulation of wave propagation by using the concept of a discrete dispersion relation. The dispersion relation is the imprint of any linear operator in space-time. The discrete dispersion relation is derived from the continuous dispersion relation by examining the process by which locally plane waves propagate through a chosen grid. The exponential structure of the discrete dispersion relation suggests an efficient splitting of convective and diffusive terms for dissipative waves. Fourth- and eighth-order convection schemes are examined that involve only three or five spatial grid points. These algorithms are subject to the same restrictions that govern the use of dispersion relations in the constructions of asymptotic expansions to nonlinear evolution equations. A new eighth-order scheme is developed that is exact for Courant numbers of 1, 2, 3, and 4. Examples are given of a pulse and step wave with a small amount of physical diffusion.
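The eighth-order scheme itself is not reproduced in the abstract, but the notion of a scheme being exact at integer Courant numbers can be shown in the simplest case: first-order upwind advection, whose discrete dispersion relation coincides with the exact one at Courant number 1, so any profile is translated one cell per step without error. An illustrative sketch, not the paper's algorithm:

```python
def upwind_step(u, courant):
    """One step of first-order upwind advection (positive wave speed)
    on a periodic grid: u_i^{n+1} = u_i - C * (u_i - u_{i-1})."""
    n = len(u)
    return [u[i] - courant * (u[i] - u[(i - 1) % n]) for i in range(n)]
```

At C = 1 the update reduces to u_new[i] = u[i-1], a pure shift; for other Courant numbers the mismatch between discrete and continuous dispersion relations appears as numerical diffusion and dispersion, which is exactly what the discrete-dispersion-relation construction is designed to control.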
Clustering space-time interest points for action representation
NASA Astrophysics Data System (ADS)
Jin, Sou-Young; Choi, Ho-Jin
2013-12-01
This paper presents a novel approach to represent human actions in a video. Our approach deals with the limitation of local representation, i.e. space-time interest points, which cannot adequately represent actions in a video due to lack of global information about geometric relationships among interest points. It adds the geometric relationships to interest points by clustering interest points using squared Euclidean distances, followed by using a minimum hexahedron to represent each cluster. Within each video, we build a multi-dimensional histogram based on the characteristics of hexahedrons in the video for recognition. The experimental results show that the proposed representation is powerful to include the global information on top of local interest points and it successfully increases the accuracy of action recognition.
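A minimal sketch of the pipeline described above: cluster (x, y, t) interest points, then enclose each cluster in an axis-aligned box as a stand-in for the "minimum hexahedron". The clustering details (farthest-point initialization, plain Lloyd iterations) are illustrative assumptions, not the authors' method:

```python
def kmeans(points, k, iters=10):
    """Plain k-means over (x, y, t) space-time interest points, with
    greedy farthest-point initialization for determinism."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: d2(p, centers[j]))].append(p)
        centers = [tuple(sum(xs) / len(g) for xs in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return groups

def bounding_hexahedron(cluster):
    """Axis-aligned box enclosing one cluster: per-axis (mins, maxs)."""
    mins = tuple(min(p[d] for p in cluster) for d in range(3))
    maxs = tuple(max(p[d] for p in cluster) for d in range(3))
    return mins, maxs
```

Per-video descriptors would then be built by histogramming characteristics of these boxes (extents, positions, point counts), which is how the global geometric information gets layered on top of the local interest points.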
Emergent space-time and the supersymmetric index
NASA Astrophysics Data System (ADS)
Benjamin, Nathan; Kachru, Shamit; Keller, Christoph A.; Paquette, Natalie M.
2016-05-01
It is of interest to find criteria on a 2d CFT which indicate that it gives rise to emergent gravity in a macroscopic 3d AdS space via holography. Symmetric orbifolds in the large N limit have partition functions which are consistent with an emergent space-time string theory with L_string ∼ L_AdS. For supersymmetric CFTs, the elliptic genus can serve as a sensitive probe of whether the SCFT admits a large radius gravity description with L_string ≪ L_AdS after one deforms away from the symmetric orbifold point in moduli space. We discuss several classes of constructions whose elliptic genera strongly hint that gravity with L_Planck ≪ L_string ≪ L_AdS can emerge at suitable points in moduli space.
The quantum space-time of c = -2 gravity
NASA Astrophysics Data System (ADS)
Ambjørn, J.; Anagnostopoulos, K.; Ichihara, T.; Jensen, L.; Kawamoto, N.; Watabiki, Y.; Yotsuji, K.
1998-02-01
We study the fractal structure of space-time of two-dimensional quantum gravity coupled to c = -2 conformal matter by means of computer simulations. We find that the intrinsic Hausdorff dimension d_H = 3.58 ± 0.04. This result supports the conjecture d_H = -2α_1/α_{-1}, where α_n is the gravitational dressing exponent of a spinless primary field of conformal weight (n + 1, n + 1), and it disfavours the alternative prediction d_H = 2/|γ|. On the other hand, ⟨l^n⟩ ∼ r^{2n} for n > 1 with good accuracy, i.e. the boundary length l has an anomalous dimension relative to the area of the surface.
Euclidean space-time diffeomorphisms and their Fueter subgroups
Guersey, F.; Jiang, W.
1992-02-01
Holomorphic Fueter functions of the position quaternion form a subgroup of Euclidean space-time diffeomorphisms. An O(4) covariant treatment of such mappings is presented, with the quaternionic argument x being replaced by either p̄x or xp̄, involving self-dual and anti-self-dual structures, and p denoting an arbitrary Euclidean time direction. An infinite group (the quasiconformal group) is exhibited that admits the conformal group SO(5,1) as a subgroup, in analogy to the two-dimensional case in which the Moebius group SO(3,1) is a subgroup of the infinite Virasoro group. The ensuing (3+1) covariant decomposition of diffeomorphisms suggests covariant gauges that throw the metric and the stress tensors into standard forms suitable for canonical quantization, leading to "improved" energy-momentum tensors. Other possible applications to current algebra and gravity are mentioned.
Space-time resolved wave turbulence in a vibrating plate.
Cobelli, Pablo; Petitjeans, Philippe; Maurel, Agnès; Pagneux, Vincent; Mordant, Nicolas
2009-11-13
Wave turbulence in a thin elastic plate is experimentally investigated. By using a Fourier transform profilometry technique, the deformation field of the plate surface is measured simultaneously in time and space. This enables us to compute the wave-vector-frequency (k, ω) Fourier spectrum of the full space-time deformation velocity. In the 3D (k, ω) space, we show that the energy of the motion is concentrated on a 2D surface that represents a nonlinear dispersion relation. This nonlinear dispersion relation is close to the linear dispersion relation. This validates the usual wave-number-frequency change of variables used in many experimental studies of wave turbulence. The deviation from the linear dispersion, which increases with the input power of the forcing, is attributed to weak nonlinear effects. Our technique opens the way for many new extensive quantitative comparisons between theory and experiments of wave turbulence. PMID:20365984
Entropic force, holography and thermodynamics for static space-times
NASA Astrophysics Data System (ADS)
Konoplya, R. A.
2010-10-01
Recently Verlinde has suggested a new approach to gravity which interprets gravitational interaction as a kind of entropic force. The new approach uses the holographic principle by stating that the information is kept on the holographic screens which coincide with equipotential surfaces. Motivated by this new interpretation of gravity (but not being limited by it) we study equipotential surfaces, the Unruh-Verlinde temperature, energy and acceleration for various static space-times: generic spherically symmetric solutions, axially symmetric black holes immersed in a magnetic field, traversable spherically symmetric wormholes of an arbitrary shape function, system of two and more extremely charged black holes in equilibrium. In particular, we have shown that the Unruh-Verlinde temperature of the holographic screen reaches absolute zero on the wormhole throat independently of the particular form of the wormhole solution.
Representations of space, time, and number in neonates.
de Hevia, Maria Dolores; Izard, Véronique; Coubart, Aurélie; Spelke, Elizabeth S; Streri, Arlette
2014-04-01
A rich concept of magnitude (in its numerical, spatial, and temporal forms) is a central foundation of mathematics, science, and technology, but the origins and developmental relations among the abstract concepts of number, space, and time are debated. Are the representations of these dimensions and their links tuned by extensive experience, or are they readily available from birth? Here, we show that, at the beginning of postnatal life, 0- to 3-day-old neonates reacted to a simultaneous increase (or decrease) in spatial extent and in duration or numerical quantity, but they did not react when the magnitudes varied in opposite directions. The findings provide evidence that representations of space, time, and number are systematically interrelated at the start of postnatal life, before acquisition of language and cultural metaphors, and before extensive experience with the natural correlations between these dimensions. PMID:24639511
Entanglement distribution over quantum code-division multiple-access networks
NASA Astrophysics Data System (ADS)
Zhu, Chang-long; Yang, Nan; Liu, Yu-xi; Nori, Franco; Zhang, Jing
2015-10-01
We present a method for quantum entanglement distribution over a so-called code-division multiple-access network, in which two pairs of users share the same quantum channel to transmit information. The main idea of this method is to use different broadband chaotic phase shifts, generated by electro-optic modulators and chaotic Colpitts circuits, to encode the information-bearing quantum signals coming from different users and then recover the masked quantum signals at the receiver side by imposing opposite chaotic phase shifts. The chaotic phase shifts given to different pairs of users are almost uncorrelated due to the randomness of chaos, and thus the quantum signals from different pairs of users can be distinguished even when they are sent via the same quantum channel. It is shown that two maximally entangled states can be generated between two pairs of users by our method mediated by bright coherent lights, which are more easily implemented in experiments than single-photon sources. Our method is robust against channel noise as long as the channel-induced decay rates of the information-bearing fields are not too high. Our study opens up new perspectives for addressing and transmitting quantum information in future quantum networks.
The spatial distribution of fixed mutations within genes coding for proteins
NASA Technical Reports Server (NTRS)
Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.
1983-01-01
An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results came from a Poisson process was a minuscule 10^-44. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
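The kind of comparison described above can be sketched with a toy likelihood calculation: on an overdispersed sample of per-site substitution counts (many invariant sites plus a hot tail), a mean-matched geometric density fits far better than a mean-matched Poisson. The counts below are fabricated for illustration, not the hemoglobin data.

```python
import math

# Toy counts of base substitutions per site: deliberately overdispersed
# (many invariant sites plus a few hypervariable ones); illustrative only.
counts = [0] * 60 + [1] * 15 + [2] * 8 + [3] * 6 + [4] * 4 + [6] * 3 + [9] * 2 + [14] * 2
mean = sum(counts) / len(counts)

def loglik_poisson(data, lam):
    """Log-likelihood under Poisson(lam): P(k) = lam**k * exp(-lam) / k!."""
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in data)

def loglik_geometric(data, mean):
    """Log-likelihood under the geometric density on 0, 1, 2, ...:
    P(k) = (1 - p) * p**k, with p chosen so the mean is p / (1 - p)."""
    p = mean / (1.0 + mean)
    return sum(math.log(1 - p) + k * math.log(p) for k in data)

llp = loglik_poisson(counts, mean)
llg = loglik_geometric(counts, mean)
print(f"Poisson logL = {llp:.1f}, geometric logL = {llg:.1f}")
```

The heavy tail is what ruins the Poisson fit: its thin tail assigns astronomically small probability to the hypervariable sites, while the geometric (a limiting form of the negative binomial) absorbs them.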
On distributed memory MPI-based parallelization of SPH codes in massive HPC context
NASA Astrophysics Data System (ADS)
Oger, G.; Le Touzé, D.; Guibert, D.; de Leffe, M.; Biddiscombe, J.; Soumagne, J.; Piccinali, J.-G.
2016-03-01
Most particle methods share the problem of high computational cost, and in order to satisfy the demands of solvers, currently available hardware technologies must be fully exploited. Two complementary technologies are now accessible. On the one hand, CPUs can be structured into a multi-node framework, allowing massive data exchanges through a high-speed network; in this case, each node usually comprises several cores available for multithreaded computations. On the other hand, GPUs, derived from graphics computing technologies, are able to perform highly multithreaded calculations with hundreds of independent threads connected together through a common shared memory. This paper is primarily dedicated to the distributed memory parallelization of particle methods, targeting several thousands of CPU cores. The experience gained clearly shows that parallelizing a particle-based code on moderate numbers of cores can easily lead to an acceptable scalability, whilst a scalable speedup on thousands of cores is much more difficult to obtain. The discussion revolves around speeding up particle methods as a whole, in a massive HPC context, by making use of the MPI library. We focus on one particular particle method, Smoothed Particle Hydrodynamics (SPH), one of the most widespread today in the literature as well as in engineering.
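The central data-distribution task in such an MPI parallelization can be sketched without MPI itself: partition particles into spatial slabs owned by ranks, then identify the halo (ghost) particles within one kernel-support radius of each slab boundary, since those are what neighbouring ranks must exchange every step. The 1D decomposition, particle count, and cutoff radius are toy assumptions, not the paper's setup.

```python
import random

random.seed(0)
n_ranks, r_cut = 4, 0.05      # number of ranks and SPH kernel support radius (toy values)
xs = [random.random() for _ in range(1000)]   # particle positions in [0, 1)

# 1D slab decomposition: rank r owns the interval [r/n_ranks, (r+1)/n_ranks).
def owner(x):
    return min(int(x * n_ranks), n_ranks - 1)

owned = {r: [x for x in xs if owner(x) == r] for r in range(n_ranks)}

# Halo (ghost) particles a rank must import from its neighbours: everything
# within r_cut of its slab boundaries.  In a real MPI code these lists are
# what gets communicated each step (e.g. via point-to-point send/receive).
def halo(r):
    lo, hi = r / n_ranks, (r + 1) / n_ranks
    return [x for x in xs if (lo - r_cut <= x < lo) or (hi <= x < hi + r_cut)]

for r in range(n_ranks):
    print(f"rank {r}: owns {len(owned[r])} particles, imports {len(halo(r))} ghosts")
```

Keeping the halo thin relative to the slab width is what makes communication cost scale acceptably on moderate core counts; on thousands of cores the slabs shrink until the halos dominate, which is the scalability difficulty the paper discusses.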
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
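A performance model of the general kind described can be sketched as Amdahl-style scaling with an explicit overhead term: the parallel fraction f, baseline time, and the linear-in-p overhead model below are illustrative assumptions, not the paper's measured NAS-benchmark parameters.

```python
# Hedged sketch of a directive-parallelization performance model:
# serial fraction + parallel fraction / p + an overhead term that grows
# with the CPU count (standing in for data-locality and synchronization
# costs on a ccNUMA machine).  All parameter values are illustrative.

def predicted_speedup(p, f=0.95, t1=100.0, overhead_per_cpu=0.2):
    """Predicted speedup on p CPUs for a code with parallelizable fraction f,
    baseline time t1, and overhead growing linearly in p."""
    t_parallel = t1 * (1 - f) + t1 * f / p + overhead_per_cpu * p
    return t1 / t_parallel

for p in (1, 4, 16, 64):
    print(p, round(predicted_speedup(p), 2))
```

The model captures the paper's qualitative point: gains track the ideal curve at moderate CPU counts, but once the architecture-specific overhead term dominates, adding CPUs reduces the speedup.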
Detecting Climate Signals Using Space-Time EOFs.
NASA Astrophysics Data System (ADS)
North, Gerald R.; Wu, Qigang
2001-04-01
Estimates of the amplitudes of the forced responses of the surface temperature field over the last century are provided by a signal processing scheme utilizing space-time empirical orthogonal functions for several combinations of station sites and record intervals taken from the last century. These century-long signal fingerprints come mainly from energy balance model calculations, which are shown to be very close to smoothed ensemble average runs from a coupled ocean-atmosphere model (Hadley Centre Model). The space-time lagged covariance matrices of natural variability come from 100-yr control runs from several well-known coupled ocean-atmosphere models as well as a 10,000-yr run from the stochastic energy balance climate model (EBCM). Evidence is found for robust, but weaker than expected, signals from the greenhouse [amplitude 65% of that expected for a rather insensitive model (EBCM: ΔT2×CO2 = 2.3°C)], volcanic (also about 65% of expected amplitude), and even the 11-yr component of the solar signal (a most probable value of about 2.0 times that expected). In the analysis the anthropogenic aerosol signal is weak, and the null hypothesis for this signal can only be rejected in a few sampling configurations involving the last 50 yr of the record. During the last 50 yr the full strength value (1.0) also lies within the 90% confidence interval. Some amplitude estimation results based upon the (temporally smoothed) Hadley fingerprints are included, and the results are indistinguishable from those based on the EBCM. In addition, a geometrical derivation of the multiple regression formula from the filter point of view is provided, which shows how the signals 'not of interest' are removed from the data stream in the estimation process. The criteria for truncating the EOF sequence are somewhat different from earlier analyses in that the amount of the signal variance accounted for at a given level of truncation is explicitly taken into account.
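The multiple-regression amplitude estimation can be sketched in a truncated EOF basis, where the natural-variability covariance is diagonal with the EOF eigenvalues: the generalized least squares estimate is a_hat = (S^T C^-1 S)^-1 S^T C^-1 d for fingerprint matrix S and data vector d. The eigenvalues, fingerprints, and amplitudes below are fabricated for illustration (the 0.65 amplitudes merely echo the abstract's headline numbers); this is the estimator's skeleton, not the paper's data.

```python
# Optimal-fingerprint multiple regression in a truncated EOF basis.
# In EOF coordinates the noise covariance C is diag(lam), so the GLS
# normal equations for two signal amplitudes reduce to a 2x2 solve.
# All numbers below are illustrative assumptions.

lam = [4.0, 2.0, 1.0, 0.5, 0.25]       # EOF eigenvalues (noise variances)
s_ghg = [0.9, 0.5, 0.3, 0.1, 0.05]     # greenhouse fingerprint, EOF coords
s_vol = [0.2, -0.6, 0.4, 0.3, -0.1]    # volcanic fingerprint, EOF coords

true_a = (0.65, 0.65)                  # amplitudes used to synthesize the data
d = [true_a[0] * g + true_a[1] * v for g, v in zip(s_ghg, s_vol)]  # noise-free

def gls_two_signal(d, s1, s2, lam):
    """Solve a_hat = (S^T C^-1 S)^-1 S^T C^-1 d for two fingerprints."""
    w = [1.0 / l for l in lam]         # inverse-covariance weights
    a11 = sum(wi * a * a for wi, a in zip(w, s1))
    a22 = sum(wi * b * b for wi, b in zip(w, s2))
    a12 = sum(wi * a * b for wi, a, b in zip(w, s1, s2))
    b1 = sum(wi * a * x for wi, a, x in zip(w, s1, d))
    b2 = sum(wi * b * x for wi, b, x in zip(w, s2, d))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

a_hat = gls_two_signal(d, s_ghg, s_vol, lam)
print(a_hat)   # recovers the synthesized amplitudes on noise-free data
```

The inverse-eigenvalue weights are what make the estimate "optimal": directions where natural variability is large count for less, and projecting out the second fingerprint is exactly the removal of the signal 'not of interest' that the geometrical derivation describes.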