Generalized Deterministic Traffic Rules
NASA Astrophysics Data System (ADS)
Fuks, Henryk; Boccara, Nino
We study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parameterized by the speed limit m and another parameter k that represents a "degree of aggressiveness" in driving, closely related to the distance between two consecutive cars. We compare two driving strategies with identical maximum throughput: "conservative" driving with a high speed limit and "aggressive" driving with a low speed limit. These two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of the generalized traffic rules and examine limitations on the maximum achievable throughput. Possible modifications of the model are considered.
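The baseline of this family, ordinary rule 184, has a one-line update: a car advances one cell per step if and only if the cell ahead is empty. The sketch below illustrates that baseline rule on a circular road; it is not the generalized (m, k) dynamics studied in the abstract.

```python
def step_rule184(road):
    """One synchronous update of CA rule 184 on a circular road.

    road is a list of 0/1 cells; a car (1) moves one cell to the
    right iff the cell ahead is empty, so a cell is occupied after
    the step if a car arrives from the left or a blocked car stays.
    """
    n = len(road)
    return [
        1 if (road[i] == 0 and road[i - 1] == 1)       # car arrives from the left
        or (road[i] == 1 and road[(i + 1) % n] == 1)   # car blocked, stays put
        else 0
        for i in range(n)
    ]

road = [1, 1, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    road = step_rule184(road)
```

Note that the update conserves the number of cars, which is what makes the density-flow (fundamental) diagram of the rule well defined.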
Deterministic models for traffic jams
NASA Astrophysics Data System (ADS)
Nagel, Kai; Herrmann, Hans J.
1993-10-01
We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality, driven by the slowest car.
Traffic chaotic dynamics modeling and analysis of deterministic network
NASA Astrophysics Data System (ADS)
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and direct factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have contributed substantially to the analysis of nondeterministic networks. Previous research held that chaotic behavior was caused by random factors, and that deterministic networks, lacking such factors, would not exhibit it. In this paper, we first apply chaos theory to traffic data collected from a testbed of a typical deterministic network, avionics full-duplex switched Ethernet (AFDX), and find that chaotic dynamics also exists in deterministic networks. To explore the mechanism generating this chaos, we then apply mean-field theory to construct a traffic dynamics equation (TDE) that models deterministic network traffic without any random factors. Studying the derived TDE, we propose that chaotic dynamics is an intrinsic property of network traffic and can be viewed as the combined effect of the TDE control parameters. A network simulation verifies that congestion produces the chaotic dynamics in a deterministic network, consistent with the prediction of the TDE. Our results should help in analyzing the complicated dynamical behavior of deterministic network traffic and contribute to network reliability design and analysis.
Traffic-light boundary in the deterministic Nagel-Schreckenberg model
NASA Astrophysics Data System (ADS)
Jia, Ning; Ma, Shoufeng
2011-06-01
The characteristics of the deterministic Nagel-Schreckenberg model with traffic-light boundary conditions are investigated and elucidated in a mostly theoretical way. First, precise analytical results for the outflow are obtained for cases in which the duration of the red phase is longer than one step. Then, further results are derived and studied for cases in which the red phase equals one step. The main findings include the following. The maximum outflow is "road-length related" if the inflow is saturated; otherwise, if the inbound cars are generated stochastically, multiple theoretical outflow volumes may exist. The findings indicate that although the traffic-light boundary can be implemented in a simple and deterministic manner, the deterministic Nagel-Schreckenberg model with such a boundary has some unique and interesting behaviors.
Classification and unification of the microscopic deterministic traffic models
NASA Astrophysics Data System (ADS)
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
Aging in Subdiffusion Generated by a Deterministic Dynamical System
NASA Astrophysics Data System (ADS)
Barkai, Eli
2003-03-01
We investigate aging behavior in a simple dynamical system: a nonlinear map which generates subdiffusion deterministically. Asymptotic behaviors of the diffusion process are described using aging continuous time random walks. We show how these processes are described by an aging diffusion equation which is of fractional order. Our work demonstrates that aging behavior can be found in deterministic low dimensional dynamical systems.
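The "nonlinear map which generates subdiffusion deterministically" in this abstract is commonly realized by intermittent maps of Pomeau-Manneville type. The sketch below illustrates that general mechanism only; the map form, coupling to the walk, and parameters are illustrative, not the paper's exact model.

```python
def deterministic_walk(n_steps, z=2.0, x0=0.32):
    """Deterministic anomalous-diffusion sketch.

    Iterate an intermittent Pomeau-Manneville-type map on [0, 1),
    x -> x + x**z (mod 1), and move a walker one step left or right
    depending on which half of the interval x lands in.  Long laminar
    episodes near x = 0 produce long runs in one direction, the
    mechanism behind deterministically generated subdiffusion.
    """
    x, pos, path = x0, 0, [0]
    for _ in range(n_steps):
        x = (x + x ** z) % 1.0
        pos += 1 if x < 0.5 else -1
        path.append(pos)
    return path
```

No random number generator appears anywhere: the irregularity of the trajectory comes entirely from the chaotic map, which is the point of the deterministic construction.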
Deterministic generation of multiparticle entanglement by quantum Zeno dynamics.
Barontini, Giovanni; Hohmann, Leander; Haas, Florian; Estève, Jérôme; Reichel, Jakob
2015-09-18
Multiparticle entangled quantum states, a key resource in quantum-enhanced metrology and computing, are usually generated by coherent operations exclusively. However, unusual forms of quantum dynamics can be obtained when environment coupling is used as part of the state generation. In this work, we used quantum Zeno dynamics (QZD), based on nondestructive measurement with an optical microcavity, to deterministically generate different multiparticle entangled states in an ensemble of 36 qubit atoms in less than 5 microseconds. We characterized the resulting states by performing quantum tomography, yielding a time-resolved account of the entanglement generation. In addition, we studied the dependence of quantum states on measurement strength and quantified the depth of entanglement. Our results show that QZD is a versatile tool for fast and deterministic entanglement generation in quantum engineering applications. Copyright © 2015, American Association for the Advancement of Science.
Theory of Deterministic Entanglement Generation between Remote Superconducting Atoms
NASA Astrophysics Data System (ADS)
Koshino, K.; Inomata, K.; Lin, Z. R.; Tokunaga, Y.; Yamamoto, T.; Nakamura, Y.
2017-06-01
Entangling remote qubits is an essential technological element in distributed quantum information processing. Here, we propose a deterministic scheme to generate maximal entanglement between remote superconducting atoms, using a propagating microwave photon as a flying qubit. The building block of this scheme is an atom-photon two-qubit gate, in which the photon qubit is encoded in its carrier frequencies. The gate operation completes deterministically upon reflection of a photon, and the gate type can be varied continuously (including the SWAP, √SWAP, and identity gates) through in situ control of the drive field. Applying such atom-photon gates sequentially, we can perform various gate operations between remote superconducting atoms.
Deterministic generation of remote entanglement with active quantum feedback
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-10
We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Deterministic entanglement generation from driving through quantum phase transitions
NASA Astrophysics Data System (ADS)
Luo, Xin-Yu; Zou, Yi-Quan; Wu, Ling-Na; Liu, Qi; Han, Ming-Fei; Tey, Meng Khoon; You, Li
2017-02-01
Many-body entanglement is often created through the system evolution, aided by nonlinear interactions between the constituting particles. These very dynamics, however, can also lead to fluctuations and degradation of the entanglement if the interactions cannot be controlled. Here, we demonstrate near-deterministic generation of an entangled twin-Fock condensate of ~11,000 atoms by driving a rubidium-87 Bose-Einstein condensate undergoing spin mixing through two consecutive quantum phase transitions (QPTs). We directly observe number squeezing of 10.7 ± 0.6 decibels and normalized collective spin length of 0.99 ± 0.01. Together, these observations allow us to infer an entanglement-enhanced phase sensitivity of ~6 decibels beyond the standard quantum limit and an entanglement breadth of ~910 atoms. Our work highlights the power of generating large-scale useful entanglement by taking advantage of the different entanglement landscapes separated by QPTs.
Deterministic generation of a cluster state of entangled photons
NASA Astrophysics Data System (ADS)
Schwartz, I.; Cogan, D.; Schmidgall, E. R.; Don, Y.; Gantz, L.; Kenneth, O.; Lindner, N. H.; Gershoni, D.
2016-10-01
Photonic cluster states are a resource for quantum computation based solely on single-photon measurements. We use semiconductor quantum dots to deterministically generate long strings of polarization-entangled photons in a cluster state by periodic timed excitation of a precessing matter qubit. In each period, an entangled photon is added to the cluster state formed by the matter qubit and the previously emitted photons. In our prototype device, the qubit is the confined dark exciton, and it produces strings of hundreds of photons in which the entanglement persists over five sequential photons. The measured process map characterizing the device has a fidelity of 0.81 with that of an ideal device. Further feasible improvements of this device may reduce the resources needed for optical quantum information processing.
Deterministic generation of remote entanglement with active quantum feedback
NASA Astrophysics Data System (ADS)
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-01
We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Finally, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Traffic Generator (TrafficGen) Version 1.4.2: Users Guide
2016-06-01
ARL-TR-7711, June 2016, US Army Research Laboratory. Traffic Generator (TrafficGen) Version 1.4.2: User's Guide, by Chien Hsieh. Final report, covering 10/2014–09/2015.
Goreac, Dan Kobylanski, Magdalena Martinez, Miguel
2016-10-15
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending to networks Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron's preconization for the (unique) candidate to viscosity solution.
Deterministic photonic cluster state generation from quantum dot molecules
NASA Astrophysics Data System (ADS)
Economou, Sophia; Gimeno-Segovia, Mercedes; Rudolph, Terry
2014-03-01
Currently, the most promising approach for photon-based quantum information processing is measurement-based, or one-way, quantum computing. In this scheme, a large entangled state of photons is prepared upfront and the computation is implemented with single-qubit measurements alone. Available approaches to generating the cluster state are probabilistic, which makes scalability challenging. We propose to generate the cluster state using a quantum dot molecule with one electron spin per quantum dot. The two spins are coupled by exchange interaction and are periodically pulsed to produce photons. We show that the entanglement created by free evolution between the spins is transferred to the emitted photons, and thus a 2D photonic ladder can be created. Our scheme only utilizes single-spin gates and measurement, and is thus fully consistent with available technology.
Traffic scenario generation technique for piloted simulation studies
NASA Technical Reports Server (NTRS)
Williams, David H.; Wells, Douglas C.
1985-01-01
Piloted simulation studies of cockpit traffic display concepts require the development of representative traffic scenarios. With the exception of specific aircraft interaction issues, most research questions can be addressed using traffic scenarios consisting of prerecorded aircraft movements merged together to form a desired traffic pattern. Prerecorded traffic scenarios have distinct research advantages, allowing control of traffic encounters with repeatability of scenarios between different test subjects. A technique is described for generation of prerecorded jet transport traffic scenarios suitable for use in piloted simulation studies. Individual flight profiles for the aircraft in the scenario are created interactively with a computer program designed specifically for this purpose. The profiles are then time-correlated and merged into a complete scenario. This technique was used to create traffic scenarios for the Denver, Colorado area with operations centered at Stapleton International Airport. Traffic scenarios for other areas may also be created using this technique, with appropriate modifications made to the navigation fix locations contained in the flight profile generation program.
Mesh generation and energy group condensation studies for the jaguar deterministic transport code
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J.
2012-07-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined as well as thermal flux solution comparisons with the Monte Carlo code MC21. (authors)
MAGNeT : Monitor for Application-Generated Network Traffic /
Feng, W. C.; Hay, J. R.; Gardner, M. K.
2001-01-01
Over the last decade, network practitioners have focused on monitoring, measuring, and characterizing traffic in the network to gain insight into building critical network components (from the protocol stack to routers and switches to network interface cards). Recent research shows that additional insight can be obtained by monitoring traffic at the application level (i.e., before application-sent traffic is modulated by the protocol stack) rather than in the network (i.e., after it is modulated by the protocol stack). Consequently, this paper describes a Monitor for Application-Generated Network Traffic (MAGNeT) that captures traffic generated by the application rather than traffic in the network. MAGNeT consists of application programs as well as modifications to the standard Linux kernel. Together, these tools provide the capability of monitoring an application's network behavior and protocol state information in production systems. The use of MAGNeT will enable the research community to construct a library of real traces of application-generated traffic from which researchers can more realistically test network protocol designs and theory. MAGNeT can also be used to verify the correct operation of protocol enhancements and to troubleshoot and tune protocol implementations.
All-electrical deterministic single domain wall generation for on-chip applications
Guite, Chinkhanlun; Kerk, I. S.; Sekhar, M. Chandra; Ramu, M.; Goolaup, S.; Lew, W. S.
2014-01-01
Controlling domain wall (DW) generation and dynamic behaviour in ferromagnetic nanowires is critical to the engineering of DW-based non-volatile logic and magnetic memory devices. Previous research showed that DW generation suffers from a random or stochastic nature, which makes the realization of DW-based devices a challenging task. Conventionally, stabilizing a Néel DW requires a long pulsed current and the assistance of an external magnetic field. Here, we demonstrate a method to deterministically produce a single DW without having to compromise the pulse duration. No external field is required to stabilize the DW. This is achieved by controlling the stray-field magnetostatic interaction between a DW generated by a current-carrying strip line and the edge of the nanowire. The edge-field-assisted DW generation process was found to be twice as fast as conventional methods and to require a lower current density. Such a deterministic DW generation method could bring DW device technology a step closer to on-chip application. PMID:25500734
Koay, Cheng Guan; Hurley, Samuel A.; Meyerand, M. Elizabeth
2011-01-01
Purpose: Diffusion MRI measurements are typically acquired sequentially with unit gradient directions that are distributed uniformly on the unit sphere. The ordering of the gradient directions has a significant effect on the quality of dMRI-derived quantities. Even though several methods have been proposed to generate optimal orderings of gradient directions, these methods are not widely used in clinical studies because of two major problems. The first problem is that the existing methods for generating highly uniform and antipodally symmetric gradient directions are inefficient. The second problem is that the existing methods for generating optimal orderings of gradient directions are also highly inefficient. In this work, the authors propose two extremely efficient and deterministic methods to solve these two problems. Methods: The method for generating a nearly uniform point set on the unit sphere (with antipodal symmetry) is based upon the notion that the spacing between two consecutive points on the same latitude should be equal to the spacing between two consecutive latitudes. The method for generating an optimal ordering of diffusion gradient directions is based on the idea that each subset of incremental sample size, derived from the prescribed and full set of gradient directions, must be as uniform as possible in terms of the modified electrostatic energy designed for antipodally symmetric point sets. Results: The proposed method outperformed the state-of-the-art method in terms of computational efficiency by about six orders of magnitude. Conclusions: Two extremely efficient and deterministic methods have been developed for solving the problem of optimal ordering of diffusion gradient directions. The proposed strategy is also applicable to optimal view-ordering in three-dimensional radial MRI. PMID:21928652
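The latitude-spacing idea in this abstract — along-ring spacing equal to the spacing between rings — resembles a standard equal-area ring construction, sketched below. This is an illustration of the general principle only, not the authors' algorithm, which additionally enforces antipodal symmetry and computes an optimal ordering.

```python
import math

def latitude_rings(n_points):
    """Deterministic near-uniform points on the unit sphere.

    Assign each point an equal patch of area a = 4*pi/n_points, take
    the target spacing d = sqrt(a), split [0, pi] into latitude rings
    d apart, and place points along each ring roughly d apart, so the
    along-ring spacing matches the between-ring spacing.
    """
    pts = []
    a = 4 * math.pi / n_points      # average area per point
    d = math.sqrt(a)                # target spacing between neighbors
    m_theta = round(math.pi / d)    # number of latitude rings
    d_theta = math.pi / m_theta
    d_phi = a / d_theta
    for i in range(m_theta):
        theta = math.pi * (i + 0.5) / m_theta
        m_phi = max(1, round(2 * math.pi * math.sin(theta) / d_phi))
        for j in range(m_phi):
            phi = 2 * math.pi * j / m_phi
            pts.append((math.sin(theta) * math.cos(phi),
                        math.sin(theta) * math.sin(phi),
                        math.cos(theta)))
    return pts
```

Because ring counts are rounded, the construction returns approximately, not exactly, n_points directions; a dMRI variant would also keep only one hemisphere to respect the antipodal symmetry of diffusion encoding.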
NASA Astrophysics Data System (ADS)
Qian, Jing; Zhang, Weiping
2017-03-01
We develop a scheme for deterministic generation of an entangled state between two atoms in different Rydberg states via a chirped adiabatic passage, which directly connects the initial ground and target entangled states and does not require the normally needed blockade effect. The occupancy of intermediate states is strongly reduced via two pulses with proper time-dependent detunings and the electromagnetically induced transparency condition. By solving the analytical expressions for the eigenvalues and eigenstates of the two-atom system, we investigate the optimal parameters for guaranteeing the adiabatic condition. We present a detailed study of the effect of pulse duration, chirp rate, and different Rydberg interactions on the fidelity of the prepared entangled state with experimentally feasible parameters, which reveals good agreement between the analytic and full numerical results.
NASA Astrophysics Data System (ADS)
Liang, X. San
2011-09-01
How uncertainties are generated in deterministic geophysical fluid flows is an important but mostly overlooked subject in atmospheric and oceanic research. In this study, it is shown that the generating mechanisms include local entropy generation (LEG) and cumulant information transfer, both of which are explicitly expressed with the aid of a theorem established herein. The former is intrinsic to a system, representing the evolutionary trend of a marginal entropy and connecting the two physical notions of uncertainty and instability. The latter results from the interaction between different locations through dynamic event synchronization, and appears only in the course of state evolution. Although in practice it is a notoriously difficult task to estimate entropy and entropy-related quantities for atmospheric and oceanic systems, which are in general of large dimensionality, the LEG can be estimated accurately with ensembles of limited size. If, furthermore, the processes of the system under consideration are quasi-ergodic and quasi-stationary, its LEG can be estimated fairly satisfactorily even without appealing to ensemble predictions. These assertions are illustrated and verified in an application with two simulated quasi-geostrophic jet streams with compact chaotic attractors, one global over the whole domain and the other highly localized. The LEG study provides an objective way of rapid assessment for predictions, which is important in practical fields such as adaptive sampling and adaptive modeling.
Direct generation of linearly polarized single photons with a deterministic axis in quantum dots
NASA Astrophysics Data System (ADS)
Wang, Tong; Puchtler, Tim J.; Patra, Saroj K.; Zhu, Tongtong; Ali, Muhammad; Badcock, Tom J.; Ding, Tao; Oliver, Rachel A.; Schulz, Stefan; Taylor, Robert A.
2017-07-01
We report the direct generation of linearly polarized single photons with a deterministic polarization axis in self-assembled quantum dots (QDs), achieved by the use of non-polar InGaN without complex device geometry engineering. Here, we present a comprehensive investigation of the polarization properties of these QDs and their origin with statistically significant experimental data and rigorous k·p modeling. The experimental study of 180 individual QDs allows us to compute an average polarization degree of 0.90, with a standard deviation of only 0.08. When coupled with theoretical insights, we show that these QDs are highly insensitive to size differences, shape anisotropies, and material content variations. Furthermore, 91% of the studied QDs exhibit a polarization axis along the crystal [1-100] axis, with the other 9% polarized orthogonal to this direction. These features give non-polar InGaN QDs unique advantages in polarization control over other materials, such as conventional polar nitride, InAs, or CdSe QDs. Hence, the ability to generate single photons with polarization control makes non-polar InGaN QDs highly attractive for quantum cryptography protocols.
Transforming the NAS: The Next Generation Air Traffic Control System
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2004-01-01
The next-generation air traffic control system must be designed to safely and efficiently accommodate the large growth of traffic expected in the near future. It should be sufficiently scalable to contend with the factor of 2 or more increase in demand expected by the year 2020. Analysis has shown that the current method of controlling air traffic cannot be scaled up to provide such levels of capacity. Therefore, to achieve a large increase in capacity while also giving pilots increased freedom to optimize their flight trajectories requires a fundamental change in the way air traffic is controlled. The key to achieving a factor of 2 or more increase in airspace capacity is to automate separation monitoring and control and to use an air-ground data link to send trajectories and clearances directly between ground-based and airborne systems. In addition to increasing capacity and offering greater flexibility in the selection of trajectories, this approach also has the potential to increase safety by reducing controller and pilot errors that occur in routine monitoring and voice communication tasks.
NASA Astrophysics Data System (ADS)
Wang, Xun; Liu, Zhirong; Huang, Kelin; Sun, Jingbo
2017-03-01
Within the first-order Born approximation, analytical expressions are derived for Gaussian Schell-model array (GSMA) beams scattered by a deterministic medium in the far zone. Using the analytical formulas obtained, shifts of the scattered spectrum of a GSMA beam are numerically investigated. Results show that the scattering directions sx and sy, the effective radius σ of the scattering medium, the initial beam transverse width σ0, the correlation widths δx and δy of the source, and the linewidth Γ0 of the incident spectrum strongly influence the distribution of the normalized scattered spectrum in the far zone. These features of the scattered spectrum of a GSMA beam could be used to obtain information about the structure of a deterministic medium.
Network Traffic Generator for Low-rate Small Network Equipment Software
Lanzisera, Steven
2013-05-28
Application that uses the Python low-level socket interface to pass network traffic between devices on the local side of a NAT router and the WAN side of the NAT router. This application is designed to generate traffic that complies with the Energy Star Small Network Equipment Test Method.
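The record above describes an application built on the Python low-level socket interface that passes traffic between the LAN and WAN sides of a NAT router. As a rough illustration only (not the Energy Star tool itself: the function name, UDP transport, and fixed-interval pacing are all assumptions), a minimal low-rate sender might look like:

```python
import socket
import time

def send_probe(host: str, port: int, payload: bytes,
               interval_s: float, count: int) -> int:
    """Send `count` UDP datagrams to (host, port) at a fixed interval.

    Returns the total number of payload bytes handed to the socket.
    """
    sent = 0
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for _ in range(count):
            sent += sock.sendto(payload, (host, port))
            time.sleep(interval_s)
    return sent
```

A real test-method implementation would additionally control packet sizes and rates to match the prescribed traffic profile; this sketch only shows the socket-level mechanics.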
Generation of deterministic tsunami hazard maps in the Bay of Cadiz, south-west Spain
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, J. A.; Otero, L.; Olabarrieta, M.; González, M.; Carreño, E.; Baptista, M. A.; Miranda, J. M.; Medina, R.; Lima, V.
2009-04-01
free surface elevation, maximum water depth, maximum current speed, maximum Froude number and maximum impact forces (hydrostatic and dynamic forces). The fault rupture and sea bottom displacement have been computed by means of the Okada equations. As a result, a set of more than 100 deterministic thematic maps has been created in a GIS environment incorporating geographical data and high-resolution orthorectified satellite images. These thematic maps form an atlas of inundation maps that will be distributed to different government authorities and civil protection and emergency agencies. The authors gratefully acknowledge the financial support provided by the EU under the frame of the European Project TRANSFER (Tsunami Risk And Strategies For the European Region), 6th Framework Programme.
MMPP Traffic Generator for the Testing of the SCAR 2 Fast Packet Switch
NASA Technical Reports Server (NTRS)
Chren, William A., Jr.
1995-01-01
A prototype MMPP Traffic Generator (TG) has been designed for testing of the COMSAT-supplied SCAR II Fast Packet Switch. By generating packets distributed according to a Markov-Modulated Poisson Process (MMPP) model, it allows the assessment of switch performance under traffic conditions that are more realistic than could be generated using the COMSAT-supplied Traffic Generator Module. The MMPP model is widely believed to accurately model real-world superimposed voice and data communications traffic. The TG was designed to be, as far as possible, a "drop-in" replacement for the COMSAT Traffic Generator Module. The latter fit on two Altera EPM7256EGC 192-pin CPLDs and produced traffic for one switch input port. No board changes are necessary because the TG has been partitioned to use the existing board traces. The TG, consisting of parts "TGDATPROC" and "TGRAMCTL", must merely be reprogrammed into the Altera devices of the same name. However, the 040 controller software must be modified to provide TG initialization data. This data will be given in Section II.
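The arrival model named above can be sketched in a few lines: a two-state MMPP is a Poisson process whose rate is switched by a continuous-time Markov chain. The simulation below is an illustration of the model only, not the SCAR II hardware design; the function name and parameterization are assumptions. It exploits the memorylessness of the exponential distribution by repeatedly racing the next arrival against the next state switch.

```python
import random

def mmpp_arrivals(rates, trans, t_end, seed=0):
    """Simulate arrival times of a 2-state MMPP on [0, t_end).

    rates[i]: Poisson arrival rate while in state i.
    trans[i]: rate of leaving state i (exponential sojourn).
    """
    rng = random.Random(seed)
    t, state, arrivals = 0.0, 0, []
    while t < t_end:
        # Competing exponentials: next state switch vs. next arrival.
        dt_switch = rng.expovariate(trans[state])
        dt_arrival = rng.expovariate(rates[state])
        if dt_arrival < dt_switch:
            t += dt_arrival
            if t < t_end:
                arrivals.append(t)
        else:
            t += dt_switch
            state = 1 - state  # toggle between the two states
    return arrivals
```

Burstiness comes from alternating between a low-rate and a high-rate state, which is what makes MMPP traffic more realistic than a plain Poisson stream.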
NASA Astrophysics Data System (ADS)
Argollo de Menezes, Marcio; Brigatti, Edgardo; Schwämmle, Veit
2013-08-01
Microbiological systems evolve to fulfil their tasks with maximal efficiency. The immune system is a remarkable example, where the distinction between self and non-self is made by means of molecular interaction between self-proteins and antigens, triggering affinity-dependent systemic actions. Specificity of this binding and the infinitude of potential antigenic patterns call for novel mechanisms to generate antibody diversity. Inspired by this problem, we develop a genetic algorithm where agents evolve their strings in the presence of random antigenic strings and reproduce with affinity-dependent rates. We ask what is the best strategy to generate diversity if agents can rearrange their strings a finite number of times. We find that endowing each agent with an inheritable cellular automaton rule for performing rearrangements makes the system more efficient in pattern-matching than if transformations are totally random. In the former implementation, the population evolves to a stationary state where agents with different automata rules coexist.
Deterministic generation of a three-dimensional entangled state via quantum Zeno dynamics
Li Wenan; Huang Guangyao
2011-02-15
A scheme is proposed for the generation of a three-dimensional entangled state for two atoms trapped in a cavity via quantum Zeno dynamics. Because the scheme is based on the resonant interaction, the time required to produce entanglement is very short compared with the dispersive protocols. We show that the resulting effective dynamics allows for the creation of robust qutrit-qutrit entanglement. The influence of various decoherence processes such as spontaneous emission and photon loss on the fidelity of the entangled state is investigated. Numerical results show that the scheme is robust against the cavity decay since the evolution of the system is restricted to a subspace with null-excitation cavity fields. Furthermore, the present scheme has been generalized to realize N-dimensional entanglement for two atoms.
Methodology for Generating Conflict Scenarios by Time Shifting Recorded Traffic Data
NASA Technical Reports Server (NTRS)
Paglione, Mike; Oaks, Robert; Bilimoria, Karl D.
2003-01-01
A methodology is presented for generating conflict scenarios that can be used as test cases to estimate the operational performance of a conflict probe. Recorded air traffic data is time shifted to create traffic scenarios featuring conflicts with characteristic properties similar to those encountered in typical air traffic operations. First, a reference set of conflicts is obtained from trajectories that are computed using birth points and nominal flight plans extracted from recorded traffic data. Distributions are obtained for several primary properties (e.g., encounter angle) that are most likely to affect the performance of a conflict probe. A genetic algorithm is then utilized to determine the values of time shifts for the recorded track data so that the primary properties of conflicts generated by the time shifted data match those of the reference set. This methodology is successfully demonstrated using recorded traffic data for the Memphis Air Route Traffic Control Center; a key result is that the required time shifts are less than 5 min for 99% of the tracks. It is also observed that close matching of the primary properties used in this study additionally provides a good match for some other secondary properties.
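The genetic-algorithm search described above can be caricatured with a toy example. The sketch below is illustrative only, not the study's implementation: it optimizes a single scalar property (mean gap between shifted track start times) instead of full conflict-property distributions, and the population size, mutation scale, and function names are all assumptions.

```python
import random

def evolve_time_shifts(tracks, target_mean_gap, pop=30, gens=60,
                       max_shift=300.0, seed=0):
    """Toy GA: find per-track time shifts (seconds) so that the mean
    gap between consecutive shifted start times matches a target."""
    rng = random.Random(seed)
    n = len(tracks)

    def fitness(shifts):
        starts = sorted(t + s for t, s in zip(tracks, shifts))
        gaps = [b - a for a, b in zip(starts, starts[1:])]
        mean_gap = sum(gaps) / len(gaps)
        return -abs(mean_gap - target_mean_gap)  # higher is better

    population = [[rng.uniform(-max_shift, max_shift) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # elitist selection
        children = []
        for _ in range(pop - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)               # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] += rng.gauss(0, 10)  # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```

In the actual methodology the fitness would compare distributions of several conflict properties (encounter angle, etc.) against the reference set rather than a single mean gap.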
Studies of Next Generation Air Traffic Control Specialists: Why Be an Air Traffic Controller?
2011-08-01
Yang, W. S.; Lee, C. H.
2008-05-16
Under the fast reactor simulation program launched in April 2007, development of an advanced multigroup cross section generation code was initiated in July 2007, in conjunction with the development of the high-fidelity deterministic neutron transport code UNIC. The general objectives are to simplify the existing multi-step schemes and to improve the resolved and unresolved resonance treatments. Based on the review results of current methods and the fact that they have been applied successfully to fast critical experiment analyses and fast reactor designs for the last three decades, the methodologies of the ETOE-2/MC²-2/SDX code system were selected as the starting set of methodologies for multigroup cross section generation for fast reactor analysis. As the first step for coupling with the UNIC code and use in a parallel computing environment, the MC²-2 code was updated by modernizing the memory structure and replacing old data management package subroutines and functions with FORTRAN 90 based routines. Various modifications were also made in the ETOE-2 and MC²-2 codes to process the ENDF/B-VII.0 data properly. Using the updated ETOE-2/MC²-2 code system, the ENDF/B-VII.0 data was successfully processed for major heavy and intermediate nuclides employed in sodium-cooled fast reactors. Initial verification tests of the MC²-2 libraries generated from ENDF/B-VII.0 data were performed by inter-comparison of twenty-one-group infinite dilute total cross sections obtained from MC²-2, VIM, and NJOY. For almost all nuclides considered, MC²-2 cross sections agreed very well with those from VIM and NJOY. Preliminary validation tests of the ENDF/B-VII.0 libraries of MC²-2 were also performed using a set of sixteen fast critical benchmark problems. The deterministic results based on MC²-2/TWODANT calculations were in good agreement with MCNP solutions within ~0.25% Δρ, except for a few small LANL fast assemblies
Lehr, D.; Dietrich, K.; Siefke, T.; Kley, E.-B.; Alaee, R.; Filter, R.; Lederer, F.; Rockstuhl, C.; Tünnermann, A.
2014-10-06
A double-patterning process for scalable, efficient, and deterministic nanoring array fabrication is presented. It enables gaps and features below a size of 20 nm. A writing time of 3 min/cm² makes this process extremely appealing for scientific and industrial applications. Numerical simulations are in agreement with experimentally measured optical spectra. Therefore, a platform and a design tool for upcoming next generation plasmonic devices like hybrid plasmonic quantum systems are delivered.
NASA Astrophysics Data System (ADS)
Kerner, Boris S.; Rehborn, Hubert; Schäfer, Ralf-Peter; Klenov, Sergey L.; Palmer, Jochen; Lorkowski, Stefan; Witte, Nikolaus
2013-01-01
Empirical and theoretical analyses of the spatiotemporal dynamics of traffic flow reconstructed from randomly distributed probe vehicle data are presented. For the empirical analysis, probe vehicle data generated by TomTom navigation devices in the commercial TomTom HD-traffic service, as well as road detector data measured at the same road section, are used. A stochastic microscopic (car-following) three-phase model is further developed for simulations of real empirical complex spatiotemporal traffic dynamics measured over a three-lane long road stretch with several different bottlenecks. Physical features and limitations of simulations of real spatiotemporal traffic dynamics are revealed. Phase transition points between free flow (F), synchronized flow (S), and wide moving jam (J) are identified along trajectories of empirical and simulated probe vehicles randomly distributed in traffic flow. As predicted by three-phase theory, the empirical probe vehicle data shows that traffic breakdown is an F→S transition and wide moving jams emerge only in synchronized flow, i.e., due to S→J transitions. Through the use of the simulations, it has been found that probe data from only about 2% of vehicles allows traffic dynamics to be reconstructed in space and time with an accuracy that is high enough for most applications, such as the generation of jam warning messages studied in the article.
Barellini, A; Bogi, L; Licitra, G; Silvi, A M; Zari, A
2009-12-01
Air traffic control (ATC) primary radars are 'classical' radars that use echoes of radiofrequency (RF) pulses from aircraft to determine their position. High-power RF pulses radiated from radar antennas may produce high electromagnetic field levels in the surrounding area. Measurement of electromagnetic fields produced by RF-pulsed radar by means of a swept-tuned spectrum analyser are investigated here. Measurements have been carried out both in the laboratory and in situ on signals generated by an ATC primary radar.
Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei
2014-11-12
Single photons are attractive candidates of quantum bits (qubits) for quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuation of driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.
Trajectory Assessment and Modification Tools for Next Generation Air Traffic Management Operations
NASA Technical Reports Server (NTRS)
Brasil, Connie; Lee, Paul; Mainini, Matthew; Lee, Homola; Lee, Hwasoo; Prevot, Thomas; Smith, Nancy
2011-01-01
This paper reviews three Next Generation Air Transportation System (NextGen) based high fidelity air traffic control human-in-the-loop (HITL) simulations, with a focus on the expected requirement of enhanced automated trajectory assessment and modification tools to support future air traffic flow management (ATFM) planning positions. The simulations were conducted at the National Aeronautics and Space Administration (NASA) Ames Research Center's Airspace Operations Laboratory (AOL) in 2009 and 2010. The test airspace for all three simulations assumed the mid-term NextGen En-Route high altitude environment utilizing high altitude sectors from the Kansas City and Memphis Air Route Traffic Control Centers. Trajectory assessment, modification and coordination decision support tools were developed at the AOL in order to perform future ATFM tasks. Overall tool usage results and user acceptability ratings were collected across three areas of NextGen operations to evaluate the tools. In addition to the usefulness and usability feedback, feasibility issues, benefits, and future requirements were also addressed. Overall, the tool sets were rated very useful and usable, and many elements of the tools received high scores and were used frequently and successfully. Tool utilization results in all three HITLs showed both user and system benefits including better airspace throughput, reduced controller workload, and highly effective communication protocols in both full Data Comm and mixed-equipage environments.
Fast and optimized methodology to generate road traffic emission inventories and their uncertainties
NASA Astrophysics Data System (ADS)
Blond, N.; Ho, B. Q.; Clappier, A.
2012-04-01
Road traffic emissions are one of the main sources of air pollution in cities. They are also the main source of uncertainty in the air quality numerical models used to forecast and define abatement strategies. Until now, the available models for generating road traffic emissions have required considerable effort, money, and time. This inhibits decisions to preserve air quality, especially in developing countries where road traffic emissions are changing very fast. In this research, we developed a new model designed to produce road traffic emission inventories quickly. This model, called EMISENS, combines the well-known top-down and bottom-up approaches and forces them to be coherent. A Monte Carlo methodology is included for computing emission uncertainties and the uncertainty contributed by each input parameter. This paper presents the EMISENS model and a demonstration of its capabilities through an application over the Strasbourg region (Alsace), France. The same input data as collected for the Circul'air model (using the bottom-up approach), which has been applied for many years to forecast and study air pollution by the Alsatian air quality agency, ASPA, are used to evaluate the impact of several simplifications that a user could make. These experiments make it possible to review older methodologies and to evaluate EMISENS results when few input data are available to produce emission inventories, as in developing countries, and assumptions need to be made. We show that the same average fraction of mileage driven with a cold engine can be used for all the cells of the study domain and that one emission factor could replace both cold and hot emission factors.
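The Monte Carlo uncertainty step can be illustrated with a minimal sketch. Assuming, purely for illustration, that an emission is the product of an emission factor and mileage and that both inputs carry normally distributed uncertainty (the function name and distributions are my assumptions, not EMISENS internals), the uncertainty propagates as:

```python
import random
import statistics

def mc_emission_uncertainty(ef_mean, ef_sd, km_mean, km_sd,
                            n=10000, seed=1):
    """Monte Carlo propagation for emission = EF * mileage.

    Both inputs are perturbed with Gaussian noise (truncated at zero);
    returns the sample mean and standard deviation of the emission.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        ef = max(rng.gauss(ef_mean, ef_sd), 0.0)   # g/km
        km = max(rng.gauss(km_mean, km_sd), 0.0)   # vehicle-km
        samples.append(ef * km)                    # grams emitted
    return statistics.mean(samples), statistics.stdev(samples)
```

Repeating this over every input parameter, one at a time, gives the per-parameter uncertainty contributions mentioned in the abstract.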
NASA Technical Reports Server (NTRS)
Khambatta, Cyrus F.
2007-01-01
A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from data files from the McTMA system, and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.
Scripted drives: A robust protocol for generating exposures to traffic-related air pollution
NASA Astrophysics Data System (ADS)
Patton, Allison P.; Laumbach, Robert; Ohman-Strickland, Pamela; Black, Kathy; Alimokhtari, Shahnaz; Lioy, Paul J.; Kipen, Howard M.
2016-10-01
Commuting in automobiles can contribute substantially to total traffic-related air pollution (TRAP) exposure, yet measuring commuting exposures for studies of health outcomes remains challenging. To estimate real-world TRAP exposures, we developed and evaluated the robustness of a scripted drive protocol on the NJ Turnpike and local roads between April 2007 and October 2014. Study participants were driven in a car with closed windows and open vents during morning rush hours on 190 days. Real-time measurements of PM2.5, PNC, CO, and BC, and integrated samples of NO2, were made in the car cabin. Exposure measures included in-vehicle concentrations on the NJ Turnpike and local roads and the differences and ratios of these concentrations. Median in-cabin concentrations were 11 μg/m³ PM2.5, 40 000 particles/cm³, 0.3 ppm CO, 4 μg/m³ BC, and 20.6 ppb NO2. In-cabin concentrations on the NJ Turnpike were higher than in-cabin concentrations on local roads by a factor of 1.4 for PM2.5, 3.5 for PNC, 1.0 for CO, and 4 for BC. Median concentrations of NO2 for full rides were 2.4 times higher than ambient concentrations. Results were generally robust relative to season, traffic congestion, ventilation setting, and study year, except for PNC and PM2.5, which had secular and seasonal trends. Ratios of concentrations were more stable than differences or absolute concentrations. Scripted drives can be used to generate reasonably consistent in-cabin increments of exposure to traffic-related air pollution.
Williams, K.A.; Delene, J.G.; Fuller, L.C.; Bowers, H.I.
1987-06-01
The total busbar electric generating costs were estimated for locations in ten regions of the United States for base-load nuclear and coal-fired power plants with a startup date of January 2000. For the Midwest region a complete data set that specifies each parameter used to obtain the comparative results is supplied. When based on the reference set of input variables, the comparison of power generation costs is found to favor nuclear in most regions of the country. Nuclear power is most favored in the northeast and western regions where coal must be transported over long distances; however, coal-fired generation is most competitive in the north central region where large reserves of cheaply mineable coal exist. In several regions small changes in the reference variables could cause either option to be preferred. The reference data set reflects the better of recent electric utility construction cost experience (BE) for nuclear plants. This study assumes as its reference case a stable regulatory environment and improved planning and construction practices, resulting in nuclear plants typically built at the present BE costs. Today's BE nuclear-plant capital investment cost model is then being used as a surrogate for projected costs for the next generation of light-water reactor plants. An alternative analysis based on today's median experience (ME) nuclear-plant construction cost experience is also included. In this case, coal is favored in all ten regions, implying that typical nuclear capital investment costs must improve for nuclear to be competitive.
The spatial relationship between traffic-generated air pollution and noise in 2 US cities.
Allen, Ryan W; Davies, Hugh; Cohen, Martin A; Mallach, Gary; Kaufman, Joel D; Adar, Sara D
2009-04-01
Traffic-generated air pollution and noise have both been linked to cardiovascular morbidity. Since traffic is a shared source, there is potential for correlated exposures that may lead to confounding in epidemiologic studies. As part of the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air), 2-week NO and NO(2) concentrations were measured at up to 105 locations, selected primarily to characterize gradients near major roads, in each of 9 US communities. We measured 5-min A-weighted equivalent continuous sound pressure levels (L(eq)) and ultrafine particle (UFP) counts at a subset of these NO/NO(2) monitoring locations in Chicago, IL (N=69 in December 2006; N=36 in April 2007) and Riverside County, CA (N=46 in April 2007). L(eq) and UFP were measured during non-"rush hour" periods (10:00-16:00) to maximize comparability between measurements. We evaluated roadway proximity exposure surrogates in relation to the measured levels, estimated noise-air pollution correlation coefficients, and evaluated the impact of regional-scale pollution gradients, wind direction, and roadway proximity on the correlations. Five-minute L(eq) measurements in December 2006 and April 2007 were highly correlated (r=0.84), and measurements made at different times of day were similar (coefficients of variation: 0.5-13%), indicating that 5-min measurements are representative of long-term L(eq). Binary and continuous roadway proximity metrics characterized L(eq) as well or better than NO or NO(2). We found strong regional-scale gradients in NO and NO(2), particularly in Chicago, but only weak regional-scale gradients in L(eq) and UFP. L(eq) was most consistently correlated with NO, but the correlations were moderate (0.20-0.60). After removing the influence of regional-scale gradients the correlations generally increased (L(eq)-NO: r=0.49-0.62), and correlations downwind of major roads (L(eq)-NO: r=0.53-0.74) were consistently higher than those upwind (0.35-0.65). There was not a
Studies of uncontrolled air traffic patterns, phase 1
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.; Scharf, L. L.; Ruedger, W. H.; Modi, J. A.; Wheelock, S. L.; Davis, C. M.
1975-01-01
The general aviation air traffic flow patterns at uncontrolled airports are investigated and analyzed and traffic pattern concepts are developed to minimize the midair collision hazard in uncontrolled airspace. An analytical approach to evaluate midair collision hazard probability as a function of traffic densities is established which is basically independent of path structure. Two methods of generating space-time interrelationships between terminal area aircraft are presented; one is a deterministic model to generate pseudorandom aircraft tracks, the other is a statistical model in preliminary form. Some hazard measures are presented for selected traffic densities. It is concluded that the probability of encountering a hazard should be minimized independently of any other considerations and that the number of encounters involving visible-avoidable aircraft should be maximized at the expense of encounters in other categories.
NASA Astrophysics Data System (ADS)
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods improved generally as compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble member. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
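A hedged sketch of the weighted-averaging idea behind methods like WEA_RAC: weight each ensemble member by its skill over a training period, then average. The inverse-RMSE weighting and function name below are assumptions for illustration; the paper's scheme also incorporates correlation, and WEA_Tay uses the Taylor score instead.

```python
import math

def weighted_ensemble(members, truth_train):
    """Weighted ensemble average of several model series.

    members: list of equal-length series (one per ensemble member).
    truth_train: observed values covering the first len(truth_train)
    steps; weights are inverse RMSE over that training window.
    """
    def rmse(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

    ntrain = len(truth_train)
    w = [1.0 / max(rmse(m[:ntrain], truth_train), 1e-12) for m in members]
    total = sum(w)
    w = [x / total for x in w]  # normalize weights to sum to 1
    nsteps = len(members[0])
    return [sum(w[i] * members[i][t] for i in range(len(members)))
            for t in range(nsteps)]
```

A member that matches the training truth closely receives a large weight, which is why such schemes outperform equal weighting when members have systematic biases of different sizes.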
Characterization of highway traffic noise generated by rigid pavement contraction joints
NASA Astrophysics Data System (ADS)
Ellis, Lawrin T.; Niezrecki, Christopher; Bloomquist, David
2003-04-01
Contraction joints in rigid (concrete) pavements are required to permit expansion of each monolithic section of roadway. At higher speeds, the major source of highway noise is attributed to vehicle tire/roadway interaction. Current concerns about limiting the impact of highway traffic noise have forced transportation agencies to consider strategies to control noise generated by tire/roadway interaction. Within this work, a comparison of the noise generated by 1/4- vs 3/8-in. joint widths is conducted. The study focuses on passenger vehicles including a sedan and a light duty van/truck. Both vehicle in-cabin and roadside noise levels are measured for vehicle speeds of 50, 60, and 70 miles per hour. For the sedan, the minimum and maximum observed in-cabin differences were determined to be 1.08 and 1.82 dB(A), respectively. Minimum and maximum observed roadside differences are 1.19 and 2.58 dB(A), respectively. Van tests resulted in minimum and maximum observed in-cabin differences of 0.60 and 1.09 dB(A) and minimum and maximum observed roadside differences of 1.05 and 3.18 dB(A), respectively. This paper contains details of reference standards, test methods, and the results obtained.
Deterministic Walks with Choice
Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.
2014-01-10
This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
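One standard deterministic analogue of a random walk, in the spirit of the deterministic movement studied above, is the rotor-router walk: each node remembers a rotor that cycles through the outgoing directions, and the token always follows the rotor's current setting. The sketch below is an illustration on a toroidal grid; the N, E, S, W rotor order and the function name are arbitrary choices, not taken from the paper.

```python
def rotor_walk(width, height, steps, start=(0, 0)):
    """Rotor-router walk on a width x height torus.

    Each node cycles deterministically through the four directions;
    returns the final position and the set of visited nodes.
    """
    dirs = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # N, E, S, W
    rotor = {}  # node -> index of the next direction to use
    x, y = start
    visited = {start}
    for _ in range(steps):
        i = rotor.get((x, y), 0)
        rotor[(x, y)] = (i + 1) % 4   # advance this node's rotor
        dx, dy = dirs[i]
        x, y = (x + dx) % width, (y + dy) % height  # wrap around torus
        visited.add((x, y))
    return (x, y), visited
```

The rotor state is exactly the "local information and bounded memory" of the abstract: the walk is fully deterministic, yet over long horizons it covers the grid much like a random walk would.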
Wang, Ling; Abdel-Aty, Mohamed; Lee, Jaeyoung; Shi, Qi
2017-07-07
There have been numerous studies on real-time crash prediction seeking to link real-time crash likelihood with traffic and environmental predictors. Nevertheless, none has explored the impact of socio-demographic and trip generation parameters on real-time crash risk. This study analyzed the real-time crash risk for expressway ramps using traffic, geometric, socio-demographic, and trip generation predictors. Two Bayesian logistic regression models were utilized to identify crash precursors and their impact on ramp crash risk. Meanwhile, four Support Vector Machines (SVM) were applied to predict crash occurrence. Bayesian logistic regression models and SVMs commonly showed that the models with the socio-demographic and trip generation variables outperform their counterparts without those parameters. This indicates that the socio-demographic and trip generation parameters have significant impact on the real-time crash risk. The Bayesian logistic regression model results showed that the logarithm of vehicle count, speed, and percentage of home-based-work production had positive impact on crash risk. Meanwhile, off-ramps or non-diamond-ramps experienced higher crash potential than on-ramps or diamond-ramps, respectively. Though the SVMs provided good model performance, the SVM model with all variables (i.e., all traffic, geometric, socio-demographic, and trip generation variables) had an overfitting problem. Therefore, it is recommended to build SVM models based on significant variables identified by other models, such as logistic regression.
Deterministic scale-free networks
NASA Astrophysics Data System (ADS)
Barabási, Albert-László; Ravasz, Erzsébet; Vicsek, Tamás
2001-10-01
Scale-free networks are abundant in nature and society, describing such diverse systems as the world wide web, the web of human sexual contacts, or the chemical network of a cell. All models used to generate a scale-free topology are stochastic, that is they create networks in which the nodes appear to be randomly connected to each other. Here we propose a simple model that generates scale-free networks in a deterministic fashion. We solve exactly the model, showing that the tail of the degree distribution follows a power law.
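A minimal sketch of a deterministic hierarchical construction in this spirit (the Barabási-Ravasz-Vicsek bookkeeping of "root" and "bottom" nodes is simplified here, so treat the details as assumptions) shows hub degrees growing deterministically with the replication step:

```python
# Deterministic hierarchical network: at each iteration, make two copies of
# the current graph and wire every "bottom" node of the copies to the root.
# After t iterations there are 3**t nodes and the root degree is 2**(t+1) - 2.

def hierarchical_net(iterations):
    nodes = [0]
    edges = set()
    root, bottom = 0, [0]
    for _ in range(iterations):
        n = len(nodes)
        new_nodes = list(nodes)
        new_edges = set(edges)
        new_bottom = []
        for copy in (1, 2):
            off = copy * n                       # relabel the copy's nodes
            new_nodes += [v + off for v in nodes]
            new_edges |= {(u + off, v + off) for u, v in edges}
            new_bottom += [v + off for v in bottom]
        for v in new_bottom:                     # wire copies' bottoms to the root
            new_edges.add((root, v))
        nodes, edges, bottom = new_nodes, new_edges, new_bottom
    return nodes, edges

nodes, edges = hierarchical_net(4)
deg = {v: 0 for v in nodes}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
```

The degree hierarchy (root 30, copy roots 14, and so on, for 81 nodes) is what produces the power-law tail in the full construction.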
NASA Astrophysics Data System (ADS)
Hazelhoff, Lykele; Creusen, Ivo M.; Woudsma, Thomas; de With, Peter H. N.
2015-11-01
Combined databases of road markings and traffic signs provide a complete and full description of the present traffic legislation and instructions. Such databases contribute to efficient signage maintenance, improve navigation, and benefit autonomous driving vehicles. A system is presented for the automated creation of such combined databases, which additionally investigates the benefit of this combination for automated contextual placement analysis. This analysis involves verification of the co-occurrence of traffic signs and road markings to retrieve a list of potentially incorrectly signaled (and thus potentially unsafe) road situations. This co-occurrence verification is specifically explored for both pedestrian crossings and yield situations. Evaluations on 420 km of road have shown that individual detection of traffic signs and road markings denoting these road situations can be performed with accuracies of 98% and 85%, respectively. Combining both approaches shows that over 95% of the pedestrian crossings and give-way situations can be identified. An exploration toward additional co-occurrence analysis of signs and markings shows that inconsistently signaled situations can successfully be extracted, such that specific safety actions can be directed toward cases lacking signs or markings, while most consistently signaled situations can be omitted from this analysis.
A queuing model for road traffic simulation
Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.
2015-03-10
We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme.
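The deterministic Godunov scheme that the queuing model mirrors can be sketched with demand/supply (sending/receiving) functions for a triangular density-flow fundamental diagram; all parameter values below are illustrative:

```python
# Demand/supply (Godunov) update for the LWR traffic model with a triangular
# density-flow fundamental diagram; parameter values are illustrative.

RHO_C, RHO_MAX, Q_MAX = 0.2, 1.0, 0.2   # critical density, jam density, capacity

def demand(rho):
    # Sending flow of a cell: increasing branch of the diagram, capped at capacity.
    return Q_MAX if rho >= RHO_C else Q_MAX * rho / RHO_C

def supply(rho):
    # Receiving flow of a cell: decreasing branch of the diagram.
    return Q_MAX if rho <= RHO_C else Q_MAX * (RHO_MAX - rho) / (RHO_MAX - RHO_C)

def godunov_step(rho, dt_over_dx, inflow_demand, outflow_supply):
    # Interface flow = min(upstream demand, downstream supply).
    dem = [inflow_demand] + [demand(r) for r in rho]
    sup = [supply(r) for r in rho] + [outflow_supply]
    q = [min(d, s) for d, s in zip(dem, sup)]
    return [r + dt_over_dx * (q[i] - q[i + 1]) for i, r in enumerate(rho)]

# A free-flowing road in equilibrium stays in equilibrium.
rho = [0.1] * 10
for _ in range(50):
    rho = godunov_step(rho, 0.5, demand(0.1), Q_MAX)
```

The upstream-demand/downstream-supply boundary treatment is exactly the extension the abstract describes for the stochastic variant.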
Deterministic transport in ratchets
NASA Astrophysics Data System (ADS)
Sarmiento, Antonio; Larralde, Hernán
1999-05-01
We present the deterministic transport properties of driven overdamped particles in a simple piecewise-linear ratchet potential. We consider the effects on the stationary current of local spatial asymmetry and of time asymmetry in the driving force, and we include the possibility of a global spatial asymmetry. We present an extremely simple scheme for evaluating the current that is established on the ratchet within an "adiabatic" approximation, and compare the results with exact numerical integration of the process.
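The "exact numerical integration" side of such a comparison can be sketched by Euler-integrating the overdamped equation of motion dx/dt = f(x) + F(t) in a piecewise-linear ratchet under a square-wave drive; the barrier height, asymmetry, and drive parameters below are illustrative assumptions:

```python
# Overdamped particle in a piecewise-linear ratchet under a square-wave drive.
# The potential rises over the fraction A of each spatial period and falls
# over the rest, giving an asymmetric deterministic force.

V0, A = 1.0, 0.2          # barrier height; rising segment occupies [0, A)
F_DRIVE, T_DRIVE = 6.0, 2.0

def ratchet_force(x):
    xp = x % 1.0
    return -V0 / A if xp < A else V0 / (1.0 - A)

def mean_velocity(steps=200000, dt=1e-4):
    x, t = 0.0, 0.0
    for _ in range(steps):
        drive = F_DRIVE if (t % T_DRIVE) < T_DRIVE / 2 else -F_DRIVE
        x += dt * (ratchet_force(x) + drive)   # Euler step of dx/dt = f(x) + F(t)
        t += dt
    return x / t

v = mean_velocity()
```

With these values the left- and right-going traversal times differ, so a net current survives even though the zero-mean drive is unbiased.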
Deterministic Entangled Nanosource
2008-08-01
Final report, September 2005 - September 2008. Contract number FA9550-05-1-0455. Author: Khitrova, Galina.
Creutz, M.
1986-03-01
A deterministic cellular automaton rule is presented which simulates the Ising model. Each cell carries, in addition to an Ising spin, a space-time parity bit and a variable playing the role of a momentum conjugate to the spin. The procedure permits study of nonequilibrium phenomena, heat flow, mixing, and time correlations. The algorithm can make full use of multispin coding, thus permitting fast programs involving parallel processing on serial machines.
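A minimal sketch of a Creutz-style microcanonical ("demon") update conveys the idea, with an alternating checkerboard sweep standing in for the space-time parity bit; the lattice size, coupling J = 1, and initial demon energies are illustrative choices, not values from the paper:

```python
# Creutz-style deterministic Ising update: each site's demon pays (or absorbs)
# the exact energy cost of a spin flip, so total energy is conserved exactly.

def total_energy(spins, demons, n):
    # Demon energy plus bond energy E = -sum_<ij> s_i s_j (periodic lattice).
    e = sum(demons[i][j] for i in range(n) for j in range(n))
    for i in range(n):
        for j in range(n):
            e -= spins[i][j] * (spins[(i + 1) % n][j] + spins[i][(j + 1) % n])
    return e

def sweep(spins, demons, n, parity):
    # Deterministic rule: flip a spin whenever its demon can pay the cost,
    # keeping the demon energy non-negative.
    for i in range(n):
        for j in range(n):
            if (i + j) % 2 != parity:
                continue
            nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                  + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            dE = 2 * spins[i][j] * nb
            if demons[i][j] >= dE:
                spins[i][j] = -spins[i][j]
                demons[i][j] -= dE

n = 8
spins = [[1] * n for _ in range(n)]
demons = [[8 if (i + j) % 2 == 0 else 0 for j in range(n)] for i in range(n)]
e0 = total_energy(spins, demons, n)
for t in range(100):
    sweep(spins, demons, n, t % 2)
assert total_energy(spins, demons, n) == e0   # total energy is exactly conserved
```

Updating one checkerboard sublattice at a time means all neighbours are frozen during a sweep, which is what makes the simultaneous deterministic update consistent.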
Pratt, Gregory C; Parson, Kris; Shinoda, Naomi; Lindgren, Paula; Dunlap, Sara; Yawn, Barbara; Wollan, Peter; Johnson, Jean
2014-01-01
Living near traffic adversely affects health outcomes. Traffic exposure metrics include distance to high-traffic roads, traffic volume on nearby roads, traffic within buffer distances, measured pollutant concentrations, land-use regression estimates of pollution concentrations, and others. We used Geographic Information System software to explore a new approach using traffic count data and a kernel density calculation to generate a traffic density surface with a resolution of 50 m. The density value in each cell reflects all the traffic on all the roads within the distance specified in the kernel density algorithm. The effect of a given roadway on the raster cell value depends on the amount of traffic on the road segment, its distance from the raster cell, and the form of the algorithm. We used a Gaussian algorithm in which traffic influence became insignificant beyond 300 m. This metric integrates the deleterious effects of traffic rather than focusing on one pollutant. The density surface can be used to impute exposure at any point, and it can be used to quantify integrated exposure along a global positioning system route. The traffic density calculation compares favorably with other metrics for assessing traffic exposure and can be used in a variety of applications.
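The kernel density calculation can be sketched directly: each 50 m raster cell accumulates Gaussian-weighted contributions from traffic counts on nearby road points. The 300 m cutoff comes from the text; the bandwidth and sample data are assumptions:

```python
import math

# Kernel-density traffic surface on a 50 m raster: each cell sums
# Gaussian-weighted contributions from traffic counts on nearby road points.

CELL = 50.0          # raster resolution, metres
CUTOFF = 300.0       # influence radius beyond which a road is ignored
SIGMA = 100.0        # Gaussian bandwidth (assumed)

def traffic_density(road_points, ncells):
    """road_points: list of (x_m, y_m, daily_traffic_count)."""
    grid = [[0.0] * ncells for _ in range(ncells)]
    for gy in range(ncells):
        for gx in range(ncells):
            cx, cy = (gx + 0.5) * CELL, (gy + 0.5) * CELL  # cell centre
            for x, y, count in road_points:
                d = math.hypot(cx - x, cy - y)
                if d <= CUTOFF:
                    grid[gy][gx] += count * math.exp(-0.5 * (d / SIGMA) ** 2)
    return grid

# A single busy road point in the middle of a 1 km x 1 km tile.
grid = traffic_density([(500.0, 500.0, 20000.0)], ncells=20)
```

As in the text, a cell's value reflects all traffic within the cutoff, weighted by both volume and distance.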
2013-03-01
Author: Broach, D. Performing organization: FAA Civil Aerospace Medical Institute. Excerpt: "... 36 departures might be expected between 1225 and 1310. The arrivals might be a mix of hub-to-hub traffic ..." (Figure 4: CTRD in ORD tower cab.)
A Deterministic Microfluidic Ratchet
NASA Astrophysics Data System (ADS)
Loutherback, Kevin; Puchalla, Jason; Austin, Robert; Sturm, James
2009-03-01
We present a deterministic microfluidic ratchet where the trajectory of particles in a certain size range is not reversed when the sign of the driving force is reversed. This ratcheting effect is produced by employing triangular rather than the conventional circular posts in a post array that selectively displaces particles transported through the array. The underlying mechanism is shown to be an asymmetric fluid velocity distribution through the gap between triangular posts, which results in different critical particle sizes depending on the direction of the flow.
Chengjiang Mao
1996-12-31
In typical AI systems we employ so-called non-deterministic reasoning (NDR), which resorts to systematic search with backtracking in the search spaces defined by knowledge bases (KBs). A salient property of NDR is that it facilitates programming, especially for difficult AI problems such as natural language processing, where it is hard to find algorithms that tell computers what to do at every step. However, the poor efficiency of NDR is still an open problem. Our work aims at overcoming this efficiency problem.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
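The iterative DIB algorithm alternates a hard (argmax) encoder update with re-estimation of the cluster marginal and predictive distributions. A toy sketch, with an assumed joint distribution, contiguous initialization, and illustrative parameter values:

```python
import math

# Sketch of the deterministic information bottleneck (DIB) iteration: a hard
# assignment f(x) = argmax_t [log q(t) - beta * KL(p(y|x) || q(y|t))],
# alternated with re-estimation of q(t) and q(y|t).

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dib(p_x, p_y_given_x, n_clusters, beta, iters=30):
    nx, ny = len(p_x), len(p_y_given_x[0])
    f = [(x * n_clusters) // nx for x in range(nx)]   # contiguous initial assignment
    for _ in range(iters):
        # Re-estimate cluster marginals q(t) and predictive distributions q(y|t).
        q_t = [1e-12] * n_clusters
        q_y_t = [[1e-12] * ny for _ in range(n_clusters)]
        for x in range(nx):
            q_t[f[x]] += p_x[x]
            for y in range(ny):
                q_y_t[f[x]][y] += p_x[x] * p_y_given_x[x][y]
        for t in range(n_clusters):
            s = sum(q_y_t[t])
            q_y_t[t] = [v / s for v in q_y_t[t]]
        # Deterministic (hard) encoder update.
        f = [max(range(n_clusters),
                 key=lambda t: math.log(q_t[t]) - beta * kl(p_y_given_x[x], q_y_t[t]))
             for x in range(nx)]
    return f

# Four inputs with two distinct predictive profiles collapse into two clusters.
p_x = [0.25, 0.25, 0.25, 0.25]
p_y_given_x = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
f = dib(p_x, p_y_given_x, n_clusters=2, beta=10.0)
```

The hard argmax is the "dramatic effect" the abstract highlights: the encoder is a deterministic function of x, not a conditional distribution.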
Basic model for traffic interweave
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2015-09-01
We propose a three-parameter traffic model. The system consists of a loop with two junctions. The three parameters control the inflow, the outflow (from the junctions), and the interweave (in the loop). The dynamics is deterministic; the boundary conditions are stochastic. We present preliminary results for a complete phase diagram and all possible phase transitions. We observe four distinct traffic phases: free flow, congestion, bottleneck, and gridlock. The proposed model presents economically a clear perspective on these four phases. Free flow and congestion are caused by the traffic conditions in the junctions; both bottleneck and gridlock are caused by the traffic interweave in the loop. Rather than being directly related to conventional congestion, gridlock can be taken as an extreme limit of bottleneck. This model can be useful to clarify the characteristics of traffic phases, and can also be extended for practical applications.
Valavanidis, Athanasios; Loridas, Spyridon; Vlahogianni, Thomi; Fiotakis, Konstantinos
2009-03-15
Epidemiologic studies suggest that ozone (O(3)) and airborne particulate matter (PM) can interact causing acute respiratory inflammation and other respiratory diseases. Recent studies investigated the hypothesis that the effects of air pollution caused by O(3) and PM are larger than the effect of these two pollutants individually. We investigated the hypothesis that ozone and traffic-related PM (PM(10) and PM(2.5), diesel and gasoline exhaust particles) interact synergistically to produce increasing amounts of highly reactive hydroxyl radicals (HO) in a heterogeneous aqueous mixture at physiological pH. Electron paramagnetic resonance (EPR) and spin trapping were used for the measurements. Results showed that HO radicals are generated by the catalytic action of PM surface area with ozone and that EPR peak intensities are two to three times higher compared to PM samples without ozone. Incubation of the nucleoside 2'-deoxyguanosine (dG) in aqueous mixtures of ozone and PM at pH 7.4 resulted in the hydroxylation at C(8) position of dG. The formation of 8-hydroxy-2'-deoxyguanosine (8-OHdG) showed a 2-2.5-fold increase over control (PM without O(3)). These results suggest that PM and O(3) act synergistically generating a sustained production of reactive HO radicals. Partitioning of O(3) into the particle phase depends on the concentration, hygroscopicity and particle size.
Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3
2015-12-01
protocols. We identified limitations and implemented a system that could utilize some of these tools to extract the vocabulary and grammar. We collected 3... sniffer or by specifying an existing capture, network flow, or other accepted formats. • Protocol inference modules: the vocabulary and grammar inference... communication flows. • Simulation module: Netzob utilizes the vocabulary and grammar models previously inferred to understand and generate communication
Maritime Dynamic Traffic Generator. Volume III: Density Data on World Maps
1975-06-01
AD-A012 498. Maritime Dynamic Traffic Generator. Volume III: Density Data on World Maps. Franklin D. MacKenzie, Transportation Systems Center, Kendall Square, Cambridge MA 02142. Excerpt: "... Transportation Systems Center to define and analyze requirements for navigation and communication services through a satellite for commercial ..."
Wu, Jun; Ren, Cizao; Delfino, Ralph J; Chung, Judith; Wilhelm, Michelle; Ritz, Beate
2009-11-01
Preeclampsia is a major complication of pregnancy that can lead to substantial maternal and perinatal morbidity, mortality, and preterm birth. Increasing evidence suggests that air pollution adversely affects pregnancy outcomes. Yet few studies have examined how local traffic-generated emissions affect preeclampsia in addition to preterm birth. We examined effects of residential exposure to local traffic-generated air pollution on preeclampsia and preterm delivery (PTD). We identified 81,186 singleton birth records from four hospitals (1997-2006) in Los Angeles and Orange Counties, California (USA). We used a line-source dispersion model (CALINE4) to estimate individual exposure to local traffic-generated nitrogen oxides (NO(x)) and particulate matter < 2.5 mum in aerodynamic diameter (PM(2.5)) across the entire pregnancy. We used logistic regression to estimate effects of air pollution exposures on preeclampsia, PTD (gestational age < 37 weeks), moderate PTD (MPTD; gestational age < 35 weeks), and very PTD (VPTD; gestational age < 30 weeks). We observed elevated risks for preeclampsia and preterm birth from maternal exposure to local traffic-generated NO(x) and PM(2.5). The risk of preeclampsia increased 33% [odds ratio (OR) = 1.33; 95% confidence interval (CI), 1.18-1.49] and 42% (OR = 1.42; 95% CI, 1.26-1.59) for the highest NO(x) and PM(2.5) exposure quartiles, respectively. The risk of VPTD increased 128% (OR = 2.28; 95% CI, 2.15-2.42) and 81% (OR = 1.81; 95% CI, 1.71-1.92) for women in the highest NO(x) and PM(2.5) exposure quartiles, respectively. Exposure to local traffic-generated air pollution during pregnancy increases the risk of preeclampsia and preterm birth in Southern California women. These results provide further evidence that air pollution is associated with adverse reproductive outcomes.
NASA Technical Reports Server (NTRS)
Roske-Hofstrand, Renate J.
1990-01-01
The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.
Near real-time traffic routing
NASA Technical Reports Server (NTRS)
Yang, Chaowei (Inventor); Cao, Ying (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor)
2012-01-01
A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections; each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data include static network characteristics, an origin-destination data table, dynamic traffic information data, and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.
Deterministic patterns in cell motility
NASA Astrophysics Data System (ADS)
Lavi, Ido; Piel, Matthieu; Lennon-Duménil, Ana-Maria; Voituriez, Raphaël; Gov, Nir S.
2016-12-01
Cell migration paths are generally described as random walks, associated with both intrinsic and extrinsic noise. However, complex cell locomotion is not merely related to such fluctuations, but is often determined by the underlying machinery. Cell motility is driven mechanically by actin and myosin, two molecular components that generate contractile forces. Other cell functions make use of the same components and, therefore, will compete with the migratory apparatus. Here, we propose a physical model of such a competitive system, namely dendritic cells whose antigen capture function and migratory ability are coupled by myosin II. The model predicts that this coupling gives rise to a dynamic instability, whereby cells switch from persistent migration to unidirectional self-oscillation, through a Hopf bifurcation. Cells can then switch to periodic polarity reversals through a homoclinic bifurcation. These predicted dynamic regimes are characterized by robust features that we identify through in vitro trajectories of dendritic cells over long timescales and distances. We expect that competition for limited resources in other migrating cell types can lead to similar deterministic migration modes.
Deterministic nanoparticle assemblies: from substrate to solution
NASA Astrophysics Data System (ADS)
Barcelo, Steven J.; Kim, Ansoon; Gibson, Gary A.; Norris, Kate J.; Yamakawa, Mineo; Li, Zhiyong
2014-04-01
The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process.
Master equation analysis of deterministic chemical chaos
NASA Astrophysics Data System (ADS)
Wang, Hongli; Li, Qianshu
1998-05-01
The underlying microscopic dynamics of deterministic chemical chaos is investigated in this paper. We analyzed the master equation for the Willamowski-Rössler model by direct stochastic simulation as well as in the generating function representation. Simulation within an ensemble revealed that in the chaotic regime the deterministic mass-action kinetics is related neither to the ensemble mean nor to the most probable value within the ensemble. Cumulant expansion analysis of the master equation also showed that the molecular fluctuations do not remain bounded but grow linearly in time, indicating that the chaotic trajectories predicted by the phenomenological equations are not physically meaningful. These results suggest that the macroscopic description is no longer useful in the chaotic regime and that a more microscopic description is necessary in this circumstance.
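The Willamowski-Rössler network is too large to reproduce here, but "direct stochastic simulation" of a master equation can be illustrated with a Gillespie simulation of a toy birth-death process, whose ensemble mean can then be compared against the deterministic fixed point (all rates and sizes are illustrative assumptions):

```python
import random

# Gillespie (stochastic simulation) algorithm for the birth-death master
# equation: X -> X+1 at rate k1, X -> X-1 at rate k2*X. The deterministic
# rate equation has the fixed point x* = k1/k2.

def gillespie(k1, k2, x0, t_end, rng):
    t, x = 0.0, x0
    while True:
        a1, a2 = k1, k2 * x          # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)     # exponentially distributed waiting time
        if t > t_end:
            return x
        x += 1 if rng.random() < a1 / a0 else -1

rng = random.Random(0)
samples = [gillespie(k1=50.0, k2=1.0, x0=0, t_end=10.0, rng=rng)
           for _ in range(200)]
mean = sum(samples) / len(samples)   # approaches the fixed point k1/k2 = 50
```

For this linear network the ensemble mean does track the deterministic kinetics; the abstract's point is that in the chaotic regime of nonlinear networks this correspondence breaks down.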
Self-stabilizing Deterministic Gathering
NASA Astrophysics Data System (ADS)
Dieudonné, Yoann; Petit, Franck
In this paper, we investigate the possibility to deterministically solve the gathering problem (GP) with weak robots (anonymous, autonomous, disoriented, oblivious, deaf, and dumb). We introduce strong multiplicity detection as the ability for the robots to detect the exact number of robots located at a given position. We show that with strong multiplicity detection, there exists a deterministic self-stabilizing algorithm solving GP for n robots if, and only if, n is odd.
The Traffic Management Advisor
NASA Technical Reports Server (NTRS)
Nedell, William; Erzberger, Heinz; Neuman, Frank
1990-01-01
The traffic management advisor (TMA) is comprised of algorithms, a graphical interface, and interactive tools for controlling the flow of air traffic into the terminal area. The primary algorithm incorporated in it is a real-time scheduler which generates efficient landing sequences and landing times for arrivals within about 200 n.m. from touchdown. A unique feature of the TMA is its graphical interface that allows the traffic manager to modify the computer-generated schedules for specific aircraft while allowing the automatic scheduler to continue generating schedules for all other aircraft. The graphical interface also provides convenient methods for monitoring the traffic flow and changing scheduling parameters during real-time operation.
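The real TMA scheduler handles separation matrices, freeze horizons, and controller edits; a first-come-first-served toy version with a single required time separation conveys the core idea of generating landing times from estimated arrivals (the separation value and ETAs are illustrative):

```python
# Minimal first-come-first-served landing scheduler with a required time
# separation at the runway; a toy stand-in for the TMA's real-time scheduler.

def schedule_landings(etas, separation):
    """etas: estimated touchdown times (seconds); returns assigned times."""
    assigned = []
    for eta in sorted(etas):
        earliest = assigned[-1] + separation if assigned else 0.0
        assigned.append(max(eta, earliest))   # never earlier than ETA or separation
    return assigned

slots = schedule_landings([100.0, 90.0, 95.0, 300.0], separation=60.0)
# Delays only the bunched arrivals: 90 -> 90, 95 -> 150, 100 -> 210, 300 -> 300
```

A manual override of one aircraft's slot, as the TMA interface allows, would amount to fixing one entry and rescheduling the rest around it.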
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
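The paper's exact gap equation is not reproduced here; the sketch below illustrates the general recipe, deterministic gaps that widen across the Nyquist grid, mimicking the average behaviour of Poisson-gap sampling. The sine-shaped gap is an assumed stand-in for the published equation:

```python
import math

# Deterministic gap sampling on a 1D Nyquist grid: successive sample indices
# are separated by gaps from a deterministic gap equation, so the schedule is
# fully reproducible (no random deviates).

def gap_sample(grid_size, density):
    """Return sorted grid indices; `density` in (0, 1] is the target fraction."""
    avg_gap = 1.0 / density
    points, x = [], 0.0
    while x < grid_size:
        points.append(int(x))
        # Gap grows toward the end of the grid (sine weighting), sampling
        # early evolution points more densely, as Poisson-gap schedules do.
        frac = x / grid_size
        gap = 1.0 + (avg_gap - 1.0) * 2.0 * math.sin(0.5 * math.pi * frac)
        x += gap
    return sorted(set(points))

idx = gap_sample(grid_size=128, density=0.25)
```

Calling `gap_sample` twice with the same arguments gives the identical schedule, the "completely deterministic behavior" the abstract emphasizes.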
Multimedia traffic monitoring system
NASA Astrophysics Data System (ADS)
Al-Sayegh, Osamah A.; Dashti, Ali E.
2000-10-01
Increasing congestion on roads and highways, and the problems associated with conventional traffic monitoring systems, have generated an interest in new traffic surveillance systems such as video image processing. These systems are expected to be more effective and more economical than conventional surveillance systems. In this paper, we describe the design of a traffic surveillance system called the Multimedia Traffic Monitoring System. The system is based on a client/server model with the following main modules: 1) a video image capture module (VICM), 2) a video image processing module (VIPM), and 3) a database module (DBM). The VICM captures the live feed from a digital camera. Depending on the mode of operation, the VICM either 1) sends the video images directly to the VIPM (on the same processing node), or 2) compresses the video images and sends them to the VIPM and/or the DBM on separate processing node(s). The main contribution of this paper is the design of a traffic monitoring system that uses image processing (the VIPM) to estimate traffic flow. In the current implementation, the VIPM estimates the number of vehicles per kilometer using nine-image sequences (at a rate of 4 frames per second). The VIPM algorithm generates a virtual grid and superimposes it on a part of the traffic scene. Motion and vehicle detection operators are applied within each cell of the grid, and the vehicle count is derived from the nine images of a sequence. The system was tested against a manual count of more than 40 image sequences (a total of more than 365 traffic images) of various traffic situations. The results show that the system is able to determine the traffic flow with a precision of 1.5 vehicles per kilometer.
Visualization of Traffic Accidents
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong; Khattak, Asad
2010-01-01
Traffic accidents have tremendous impact on society. Annually approximately 6.4 million vehicle accidents are reported by police in the US and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, has the capabilities to display events associated with a road network, such as accident locations, and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper aims to address this problem and proposes an improved method that utilizes all relevant information of traffic accidents, namely, route number, direction, and mile post, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shape file for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
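The core of the proposed correction is linear referencing: interpolating a milepost measure along the correct route polyline (with direction taken into account). A minimal sketch with a hypothetical polyline; field names and coordinates are invented for illustration:

```python
import math

# Linear referencing sketch: place an event at (route, milepost) by
# interpolating along the route's polyline, measured in route miles.

def locate_milepost(polyline, milepost_miles):
    """polyline: [(x, y), ...] in miles; returns the interpolated (x, y)."""
    remaining = milepost_miles
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if remaining <= seg:
            f = remaining / seg
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        remaining -= seg
    return polyline[-1]    # milepost beyond the digitized end of the route

route = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)]   # 7 miles of route
pt = locate_milepost(route, 5.0)                # 2 miles up the second leg
```

Using the wrong route record, or ignoring direction, shifts `pt` onto the wrong segment, which is exactly the class of visualization error the paper addresses.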
Mixed deterministic and probabilistic networks
Dechter, Rina
2010-01-01
The paper introduces mixed networks, a new graphical model framework for expressing and reasoning with probabilistic and deterministic information. The motivation to develop mixed networks stems from the desire to fully exploit the deterministic information (constraints) that is often present in graphical models. Several concepts and algorithms specific to belief networks and constraint networks are combined, achieving computational efficiency, semantic coherence and user-interface convenience. We define the semantics and graphical representation of mixed networks, and discuss the two main types of algorithms for processing them: inference-based and search-based. A preliminary experimental evaluation shows the benefits of the new model. PMID:20981243
Fluid turbulence - Deterministic or statistical
NASA Astrophysics Data System (ADS)
Cheng, Sin-I.
The deterministic view of turbulence suggests that the classical theory of fluid turbulence may be treating the wrong entity. The paper explores the physical implications of such an abstract mathematical result, and provides a constructive computational demonstration of the deterministic and wave nature of fluid turbulence. The associated pressure disturbance for restoring a solenoidal velocity is the primary agent, and its reflection from solid surfaces is the dominant mechanism of turbulence production. Statistical properties and their modeling must address the statistics of the uncertainties in the initial and boundary data of the ensemble.
Semiautomated Management Of Arriving Air Traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1992-01-01
System of computers, graphical workstations, and computer programs developed for semiautomated management of approach and arrival of numerous aircraft at airport. System comprises three subsystems: traffic-management advisor, used for controlling traffic into terminal area; descent advisor generates information integrated into plan-view display of traffic on monitor; and final-approach-spacing tool used to merge traffic converging on final approach path while making sure aircraft are properly spaced. Not intended to restrict decisions of air-traffic controllers.
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-01-01
We show the fractionation of whole blood components and isolation of blood plasma, with no dilution, by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use the deterministic array technology we developed, which separates white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. PMID:17001005
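Separation in such arrays is governed by a critical hydrodynamic size. A widely used empirical fit for deterministic lateral displacement arrays (an approximation associated with this line of work, not a value quoted in the abstract) relates it to the post gap and the row-shift fraction:

```python
# Critical-size estimate for a deterministic lateral displacement (DLD) array,
# using the empirical fit D_c = 1.4 * g * eps**0.48 (gap g, row-shift fraction
# eps). Particles larger than D_c are displaced ("bumped") by the posts;
# smaller ones follow the fluid streamlines.

def critical_diameter_um(gap_um, row_shift_fraction):
    return 1.4 * gap_um * row_shift_fraction ** 0.48

# A 10 um gap with a 1/10 row shift separates near the ~4.6 um scale,
# of the order of red-cell size.
dc = critical_diameter_um(10.0, 0.1)
```

Choosing gap and shift therefore tunes which blood components are bumped out of the plasma stream, as in the two designs the abstract describes.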
Analysis of FBC deterministic chaos
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines is affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes is expected to provide important competitive advantages for U.S. industry.
Deterministic Laws and Epistemic Chances
NASA Astrophysics Data System (ADS)
Myrvold, Wayne C.
In this paper, a concept of chance is introduced that is compatible with deterministic physical laws, yet does justice to our use of chance-talk in connection with typical games of chance, and in classical statistical mechanics. We take our cue from what Poincaré called "the method of arbitrary functions," and elaborate upon a suggestion made by Savage in connection with this. Comparison is made between this notion of chance, and David Lewis' conception.
NASA Astrophysics Data System (ADS)
Nuckelt, J.; Schack, M.; Kürner, T.
2011-08-01
This paper presents a physical (PHY) layer simulator of the IEEE 802.11p standard for Wireless Access in Vehicular Environments (WAVE). This simulator allows the emulation of data transmission via different radio channels as well as the analysis of the resulting system behavior. The PHY layer simulator is part of an integrated simulation platform including a traffic model to generate realistic mobility of vehicles and a 3D ray-optical model to calculate the multipath propagation channel between transmitter and receiver. Besides deterministic channel modeling by means of ray-optical methods, the simulator can also be used with stochastic channel models of typical vehicular scenarios. With the aid of this PHY layer simulator and the integrated channel models, the resulting performance of the system in terms of bit and packet error rates of different receiver designs can be analyzed in order to achieve robust data transmission.
TrafficGen Architecture Document
2016-01-01
distribution unlimited. Fig. 1: TrafficGen user interface. The TrafficGen user's guide details specific features and their use. TrafficGen is available... The model is the foundation for the timeline-based user interface and for generating commands for integration with external applications. User Interface: The user interface of this application is comprised of several sets of MVC classes and other support classes. Working together, they present
Díaz, Julio; Martínez-Martín, Pablo; Rodríguez-Blázquez, Carmen; Vázquez, Blanca; Forjaz, Maria João; Ortiz, Cristina; Carmona, Rocío; Linares, Cristina
2017-03-23
To analyse whether there is a short-term association between road traffic noise in the city of Madrid and Parkinson's disease (PD)-related demand for healthcare. Time-series analysis (2008-2009) using variables of analysis linked to emergency and daily PD-related demand for healthcare (ICD-10: G20-G21), namely, PD-hospital admissions (HAs), PD-outpatient visits (OVs) and PD-emergency medical calls in Madrid. The noise pollution measurements used were Leqd, equivalent sound level for the daytime hours (from 8 a.m. to 10 p.m.), and Leqn, equivalent sound level for night-time hours (from 10 p.m. to 8 a.m.) in dB(A). We controlled for temperature, pollution, trends and seasons, and used the Poisson regression model to calculate relative risk (RR). The association between Leqd and HAs was found to be linear. Leqd and Leqn at lag 0.1 and temperature at lags 1 and 5 were the only environmental variables associated with increased PD-related healthcare demand. The RR (lag 0) for Leqd and HA was 1.07 (1.04-1.09), the RR (lag 0) for Leqd and OV was 1.28 (1.12-1.45), and the RR (lags 0.1) for Leqn and emergency medical calls was 1.46 (1.06-2.01). The above results indicate that road traffic noise is a risk factor for PD exacerbation. Measures to reduce noise-exposure levels could result in a lower PD-related healthcare demand. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
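The relative risks quoted above come from fitting a Poisson regression and exponentiating the exposure coefficient. The sketch below shows just that arithmetic; the function name, arguments and numbers are illustrative assumptions, not values from the study.

```python
import math

def relative_risk(beta, delta=1.0, se=None, z=1.96):
    """Relative risk implied by a Poisson-regression coefficient.

    beta:  fitted log-rate coefficient for the exposure (e.g. per dB(A))
    delta: size of the exposure increase being evaluated
    se:    optional standard error of beta, for an approximate 95% CI
    """
    rr = math.exp(beta * delta)
    if se is None:
        return rr
    lo = math.exp((beta - z * se) * delta)
    hi = math.exp((beta + z * se) * delta)
    return rr, lo, hi
```

For instance, a hypothetical coefficient of 0.07 per dB(A) would give RR = exp(0.07) ≈ 1.07 per 1 dB(A) increase, the same order of magnitude as the hospital-admission RR reported above.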
Treiber, Martin; Kesting, Arne; Helbing, Dirk
2006-07-01
We investigate the adaptation of the time headways in car-following models as a function of the local velocity variance, which is a measure of the inhomogeneity of traffic flow. We apply this mechanism to several car-following models and simulate traffic breakdowns in open systems with an on-ramp as bottleneck and in a closed ring road. Single-vehicle data and one-minute aggregated data generated by several virtual detectors show a semiquantitative agreement with microscopic and flow-density data from the Dutch freeway A9. This includes the observed distributions of the net time headways for free and congested traffic, the velocity variance as a function of density, and the fundamental diagram. The modal value of the time headway distribution is shifted by a factor of about 2 under congested conditions. Macroscopically, this corresponds to the capacity drop at the transition from free to congested traffic. The simulated fundamental diagram shows free, synchronized, and jammed traffic, and a wide scattering in the congested traffic regime. We explain this by a self-organized variance-driven process that leads to the spontaneous formation and decay of long-lived platoons even for a deterministic dynamics on a single lane.
Deterministic chaos in entangled eigenstates
NASA Astrophysics Data System (ADS)
Schlegel, K. G.; Förster, S.
2008-05-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two-particle system in a harmonic oscillator potential that, in a case of entanglement and three energy eigenvalues, the maximum Lyapunov parameters of a representative ensemble of trajectories develop for large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present in short results from two time-dependent systems, the anisotropic and the Rabi oscillator.
NASA Technical Reports Server (NTRS)
1992-01-01
Mestech's X-15 "Eye in the Sky," a traffic monitoring system, incorporates NASA imaging and robotic vision technology. A camera or "sensor box" is mounted in a housing. The sensor detects vehicles approaching an intersection and sends the information to a computer, which controls the traffic light according to the traffic rate. Jet Propulsion Laboratory technical support packages aided in the company's development of the system. The X-15's "smart highway" can also be used to count vehicles on a highway and compute the number in each lane and their speeds, important information for freeway control engineers. Additional applications are in airport and railroad operations. The system is intended to replace loop-type traffic detectors.
Criticality in dynamic arrest: Correspondence between glasses and traffic
NASA Astrophysics Data System (ADS)
Miedema, Daniel; de Wijn, Astrid; Nienhuis, Bernard; Schall, Peter
2013-03-01
Dynamic arrest is a general phenomenon across a wide range of dynamic systems including glasses, traffic flow, and dynamics in cells, but the universality of dynamic arrest phenomena remains unclear. We connect the emergence of traffic jams in traffic flow to the dynamic slowdown in glasses. A direct correspondence is established by identifying a simple traffic model as a kinetically constrained model. In kinetically constrained models, the formation of glass becomes a (singular) phase transition in the zero-temperature limit. Similarly, using the Nagel-Schreckenberg model to simulate traffic flow, we show that the emergence of jammed traffic acquires the signature of a sharp transition in the deterministic limit, corresponding to overcautious driving. We identify a true dynamical critical point marking the onset of coexistence between free flowing and jammed traffic, and demonstrate its analogy to the kinetically constrained glass models.
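The Nagel-Schreckenberg update rule referred to above is compact enough to state in code. A minimal sketch on a ring road with parallel update (variable names are ours): p is the randomization probability; at p = 1, the deterministic overcautious limit, a fleet starting at rest cancels every unit of speed it gains and never moves, while p = 0 gives deterministic free flow.

```python
import random

def nasch_step(pos, vel, L, vmax, p, rng):
    """One parallel update of the Nagel-Schreckenberg model on a ring of
    length L. pos: sorted cell indices of the cars; vel: their speeds."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % L   # empty cells ahead
        v = min(vel[i] + 1, vmax)                   # 1. accelerate
        v = min(v, gap)                             # 2. brake: no collision
        if v > 0 and rng.random() < p:              # 3. random slowdown
            v -= 1
        new_vel.append(v)
    new_pos = [(x + v) % L for x, v in zip(pos, new_vel)]  # 4. move
    order = sorted(range(n), key=lambda i: new_pos[i])     # keep ring order
    return [new_pos[i] for i in order], [new_vel[i] for i in order]
```

Sweeping p toward 1 in such a simulation is one way to observe the sharp jamming transition the abstract describes.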
Causse, Mickaël; Alonso, Roland; Vachon, François; Parise, Robert; Orliaguet, Jean-Pierre; Tremblay, Sébastien; Terrier, Patrice
2014-01-01
This study aims to determine whether indirect touch device can be used to interact with graphical objects displayed on another screen in an air traffic control (ATC) context. The introduction of such a device likely requires an adaptation of the sensory-motor system. The operator has to simultaneously perform movements on the horizontal plane while assessing them on the vertical plane. Thirty-six right-handed participants performed movement training with either constant or variable practice and with or without visual feedback of the displacement of their actions. Participants then performed a test phase without visual feedback. Performance improved in both practice conditions, but accuracy was higher with visual feedback. During the test phase, movement time was longer for those who had practiced with feedback, suggesting an element of dependency. However, this 'cost' of feedback did not extend to movement accuracy. Finally, participants who had received variable training performed better in the test phase, but accuracy was still unsatisfactory. We conclude that continuous visual feedback on the stylus position is necessary if tablets are to be introduced in ATC.
Intelligent Traffic Quantification System
NASA Astrophysics Data System (ADS)
Mohanty, Anita; Bhanja, Urmila; Mahapatra, Sudipta
2017-08-01
Currently, monitoring and controlling city traffic is a major issue in almost all cities worldwide. The vehicular ad-hoc network (VANET) technique is an efficient tool for minimizing this problem. Usually, different types of on-board sensors are installed in vehicles to generate messages characterized by different vehicle parameters. In this work, an intelligent system based on a fuzzy clustering technique is developed to reduce the number of individual messages by extracting important features from the messages of a vehicle. The proposed fuzzy clustering technique therefore reduces the traffic load of the network. The technique also reduces and quantifies congestion.
State Traffic Data: Traffic Safety Facts, 2001.
ERIC Educational Resources Information Center
National Center for Statistics and Analysis (NHTSA), Washington, DC.
This brief provides statistical information on U.S. traffic accidents delineated by state. A map details the 2001 traffic fatalities by state and the percent change from 2000. Data tables include: (1) traffic fatalities and fatality rates, 2001; (2) traffic fatalities and percent change, 1975-2001; (3) alcohol involvement in fatal traffic crashes,…
Survivability of Deterministic Dynamical Systems
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-01-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states? We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
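The survivability measure defined above is straightforward to estimate numerically: sample random initial conditions, iterate the deterministic dynamics, and count the trajectories whose entire transient stays inside the desirable region. A hedged Monte Carlo sketch (the interfaces are ours, not the paper's):

```python
import random

def survivability(step, inside, sample_x0, n_samples=500, n_steps=200, rng=None):
    """Monte Carlo estimate of survivability: the fraction of random initial
    conditions whose whole transient stays inside the desirable region."""
    rng = rng or random.Random(0)
    survived = 0
    for _ in range(n_samples):
        x = sample_x0(rng)
        if not inside(x):
            continue
        for _ in range(n_steps):
            x = step(x)
            if not inside(x):
                break            # transient left the desirable region
        else:
            survived += 1        # loop completed without leaving
    return survived / n_samples
```

As a toy check, a contracting linear map keeps every trajectory inside a box around the origin (survivability 1), while an expanding one ejects almost all of them.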
Deterministic Bragg Coherent Diffraction Imaging.
Pavlov, Konstantin M; Punegov, Vasily I; Morgan, Kaye S; Schmalz, Gerd; Paganin, David M
2017-04-25
A deterministic variant of Bragg Coherent Diffraction Imaging is introduced in its kinematical approximation, for X-ray scattering from an imperfect crystal whose imperfections span no more than half of the volume of the crystal. This approach provides a unique analytical reconstruction of the object's structure factor and displacement fields from the 3D diffracted intensity distribution centred around any particular reciprocal lattice vector. The simple closed-form reconstruction algorithm, which requires only one multiplication and one Fourier transformation, is not restricted by assumptions of smallness of the displacement field. The algorithm performs well in simulations incorporating a variety of conditions, including both realistic levels of noise and departures from ideality in the reference (i.e. imperfection-free) part of the crystal.
Traffic camera system development
NASA Astrophysics Data System (ADS)
Hori, Toshi
1997-04-01
The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec. to capture highway traffic both day and night. Consequently camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully, to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.
Deterministic weak localization in periodic structures.
Tian, C; Larkin, A
2005-12-09
In some perfect periodic structures classical motion exhibits deterministic diffusion. For such systems we present the weak localization theory. As a manifestation, a universal power-law decay of the velocity autocorrelation function is predicted to appear at four Ehrenfest times. This deterministic weak localization is robust against weak quenched disorder, which may be confirmed by coherent backscattering measurements of periodic photonic crystals.
Deterministic Tripartite Controlled Remote State Preparation
NASA Astrophysics Data System (ADS)
Sang, Ming-huang; Nie, Yi-you
2017-07-01
We demonstrate that a seven-qubit entangled state can be used to realize the deterministic tripartite controlled remote state preparation by performing only Pauli operations and single-qubit measurements. In our scheme, three distant senders can simultaneously and deterministically exchange their quantum state with the other senders under the control of the supervisor.
Simulating synchronized traffic flow and wide moving jam based on the brake light rule
NASA Astrophysics Data System (ADS)
Xiang, Zheng-Tao; Li, Yu-Jin; Chen, Yu-Feng; Xiong, Li
2013-11-01
A new cellular automaton (CA) model based on brake light rules is proposed, which considers the influence of deterministic deceleration on the randomization probability and the extent of deceleration. To describe the synchronized flow phase of Kerner’s three-phase theory in accordance with empirical data, we have changed some rules of vehicle motion with the aim of improving the speed and acceleration behavior of vehicles in synchronized flow simulated with earlier cellular automaton models with brake lights. The fundamental diagrams and spatial-temporal diagrams are analyzed, as well as the complexity of traffic evolution and the emergence process of wide moving jams. Simulation results show that our new model can reproduce the three traffic phases: free flow, synchronized flow and wide moving jam. In addition, our new model can describe the complexity of traffic evolution well: (1) with an initial homogeneous distribution and large densities, the traffic evolves into multiple steady states, in which the numbers of wide moving jams are not invariable. (2) With an initial homogeneous distribution and a middle range of density, wide moving jams emerge stochastically. (3) With an initial mega-jam distribution and a density close to a low value, the initial mega-jam disappears stochastically. (4) For cases with multiple wide moving jams, the analysis covers the generation of narrow moving jams due to the “pinch effect”, which leads to the emergence of wide moving jams.
Criticality in Dynamic Arrest: Correspondence between Glasses and Traffic
NASA Astrophysics Data System (ADS)
de Wijn, A. S.; Miedema, D. M.; Nienhuis, B.; Schall, P.
2012-11-01
Dynamic arrest is a general phenomenon across a wide range of dynamic systems including glasses, traffic flow, and dynamics in cells, but the universality of dynamic arrest phenomena remains unclear. We connect the emergence of traffic jams in a simple traffic flow model directly to the dynamic slowing down in kinetically constrained models for glasses. In kinetically constrained models, the formation of glass becomes a true (singular) phase transition in the limit T→0. Similarly, using the Nagel-Schreckenberg model to simulate traffic flow, we show that the emergence of jammed traffic acquires the signature of a sharp transition in the deterministic limit p→1, corresponding to overcautious driving. We identify a true dynamic critical point marking the onset of coexistence between free flowing and jammed traffic, and demonstrate its analogy to the kinetically constrained glass models. We find diverging correlations analogous to those at a critical point of thermodynamic phase transitions.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment and education to reduce person-to-person contact.
Deterministic quantum teleportation with atoms.
Riebe, M; Häffner, H; Roos, C F; Hänsel, W; Benhelm, J; Lancaster, G P T; Körber, T W; Becher, C; Schmidt-Kaler, F; James, D F V; Blatt, R
2004-06-17
Teleportation of a quantum state encompasses the complete transfer of information from one particle to another. The complete specification of the quantum state of a system generally requires an infinite amount of information, even for simple two-level systems (qubits). Moreover, the principles of quantum mechanics dictate that any measurement on a system immediately alters its state, while yielding at most one bit of information. The transfer of a state from one system to another (by performing measurements on the first and operations on the second) might therefore appear impossible. However, it has been shown that the entangling properties of quantum mechanics, in combination with classical communication, allow quantum-state teleportation to be performed. Teleportation using pairs of entangled photons has been demonstrated, but such techniques are probabilistic, requiring post-selection of measured photons. Here, we report deterministic quantum-state teleportation between a pair of trapped calcium ions. Following closely the original proposal, we create a highly entangled pair of ions and perform a complete Bell-state measurement involving one ion from this pair and a third source ion. State reconstruction conditioned on this measurement is then performed on the other half of the entangled pair. The measured fidelity is 75%, demonstrating unequivocally the quantum nature of the process.
Classification of Automated Search Traffic
NASA Astrophysics Data System (ADS)
Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.
As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third party systems interact with search engines for a variety of reasons, such as monitoring a web site’s rank, augmenting online games, or possibly to maliciously alter click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: either an interpretation of the physical model of human interactions, or behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
Connecting deterministic and stochastic metapopulation models.
Barbour, A D; McVinish, R; Pollett, P K
2015-12-01
In this paper, we study the relationship between certain stochastic and deterministic versions of Hanski's incidence function model and the spatially realistic Levins model. We show that the stochastic version can be well approximated in a certain sense by the deterministic version when the number of habitat patches is large, provided that the presence or absence of individuals in a given patch is influenced by a large number of other patches. Explicit bounds on the deviation between the stochastic and deterministic models are given.
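Levins-type dynamics make the deterministic/stochastic comparison concrete: the deterministic model is the one-dimensional ODE dp/dt = c·p·(1 − p) − e·p for the occupied fraction p, while the stochastic version tracks colonisation and extinction patch by patch. The sketch below (Euler step; parameter names and the discretisation are our assumptions, not the paper's construction) shows the two agreeing near the equilibrium p* = 1 − e/c when the number of patches is large.

```python
import random

def levins_deterministic(p0, c, e, dt, steps):
    """Euler integration of the deterministic Levins model
    dp/dt = c*p*(1-p) - e*p."""
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

def levins_stochastic(n_patches, p0, c, e, dt, steps, rng):
    """Patch-level stochastic counterpart: per small time step dt, an empty
    patch is colonised with probability c*p*dt and an occupied patch goes
    extinct with probability e*dt, where p is the current occupied fraction."""
    occ = [rng.random() < p0 for _ in range(n_patches)]
    for _ in range(steps):
        p = sum(occ) / n_patches
        occ = [(o and rng.random() >= e * dt) or
               ((not o) and rng.random() < c * p * dt)
               for o in occ]
    return sum(occ) / n_patches
```

With many patches the stochastic occupied fraction stays close to the deterministic equilibrium, which is the flavour of approximation the paper makes rigorous with explicit bounds.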
Universality classes for deterministic surface growth
NASA Technical Reports Server (NTRS)
Krug, J.; Spohn, H.
1988-01-01
A scaling theory for the generalized deterministic Kardar-Parisi-Zhang (1986) equation with beta greater than 1, is developed to study the growth of a surface through deterministic local rules. A one-dimensional surface model corresponding to beta = 1 is presented and solved exactly. The model can be studied as a limiting case of ballistic deposition, or as the deterministic limit of the Eden (1961) model. The scaling exponents, the correlation functions, and the skewness of the surface are determined. The results are compared with those of Burgers' (1974) equation for the case of beta = 2.
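Deterministic parallel growth rules of the kind discussed above are easy to write down. The toy rule below, in which every column either grows by one unit or sticks to the height of a taller neighbour, is a plausible deterministic ballistic-deposition-like update in the spirit of the paper, not its exact beta = 1 model:

```python
def grow(h):
    """One parallel step of a deterministic ballistic-deposition-like rule
    on a ring: each column takes the maximum of its own height plus one
    and the heights of its two neighbours."""
    n = len(h)
    return [max(h[(i - 1) % n], h[i] + 1, h[(i + 1) % n]) for i in range(n)]
```

A flat surface grows uniformly, while an isolated bump spreads laterally one cell per step, the kind of deterministic lateral growth that underlies the scaling exponents studied in the paper.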
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.
NASA Technical Reports Server (NTRS)
Bollman, W. E.; Chadwick, C.
1982-01-01
A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
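The Monte Carlo technique described is easy to reproduce in outline: draw the random Delta v components around the deterministic bias vector and tabulate magnitude statistics. A sketch (function and parameter names are ours; the paper's parameterization may differ):

```python
import math
import random

def dv_magnitude_stats(bias, sigma, n=20000, seed=1):
    """Monte Carlo statistics of the Delta v magnitude for a maneuver with a
    deterministic component `bias` (a 3-vector) plus an isotropic Gaussian
    random component with per-axis standard deviation `sigma`."""
    rng = random.Random(seed)
    mags = sorted(
        math.sqrt(sum((b + rng.gauss(0.0, sigma)) ** 2 for b in bias))
        for _ in range(n)
    )
    mean = sum(mags) / n
    median = mags[n // 2]
    return mean, median
```

With zero bias the magnitude follows a Maxwell distribution (mean sigma·sqrt(8/pi) ≈ 1.60·sigma); with a bias much larger than sigma the mean approaches the bias magnitude, which is the qualitative behavior the parametric plots capture.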
Chaotic Ising-like dynamics in traffic signals
Suzuki, Hideyuki; Imura, Jun-ichi; Aihara, Kazuyuki
2013-01-01
The green and red lights of a traffic signal can be viewed as the up and down states of an Ising spin. Moreover, traffic signals in a city interact with each other, if they are controlled in a decentralised way. In this paper, a simple model of such interacting signals on a finite-size two-dimensional lattice is shown to have Ising-like dynamics that undergoes a ferromagnetic phase transition. Probabilistic behaviour of the model is realised by chaotic billiard dynamics that arises from coupled non-chaotic elements. This purely deterministic model is expected to serve as a starting point for considering statistical mechanics of traffic signals. PMID:23350034
Towards a quasi-deterministic single-photon source
NASA Astrophysics Data System (ADS)
Peters, N. A.; Arnold, K. J.; VanDevender, A. P.; Jeffrey, E. R.; Rangarajan, R.; Hosten, O.; Barreiro, J. T.; Altepeter, J. B.; Kwiat, P. G.
2006-08-01
A source of single photons allows secure quantum key distribution, in addition to being a critical resource for linear optics quantum computing. We describe our progress on deterministically creating single photons from spontaneous parametric downconversion, an extension of the Pittman, Jacobs and Franson scheme [Phys. Rev. A 66, 042303 (2002)]. Their idea was to conditionally prepare single photons by measuring one member of a spontaneously emitted photon pair and storing the remaining conditionally prepared photon until a predetermined time, when it would be "deterministically" released from storage. Our approach attempts to improve upon this by recycling the pump pulse in order to decrease the possibility of multiple-pair generation, while maintaining a high probability of producing a single pair. Many of the challenges we discuss are central to other quantum information technologies, including the need for low-loss optical storage, switching and detection, and fast feed-forward control.
Deterministic algorithm with agglomerative heuristic for location problems
NASA Astrophysics Data System (ADS)
Kazakovtsev, L.; Stupina, A.
2015-10-01
The authors consider the clustering problem solved with the k-means method, and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are among the most popular models of location theory. They are used for solving clustering problems and many practically important logistic problems, such as the optimal location of factories, warehouses, oil or gas wells, offshore oil drilling sites, and steam generators in heavy oil fields. The authors propose a new deterministic heuristic algorithm based on ideas of Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are given in comparison with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while achieving comparable precision.
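For context, plain Lloyd's k-means, the special case of the p-median problem mentioned above, fits in a few lines. This is only the baseline model the abstract refers to; the paper's agglomerative greedy heuristic is not reproduced here, and the deterministic first-k-points initialization is our simplification.

```python
def kmeans(points, k, iters=100):
    """Lloyd's k-means with squared Euclidean distance.
    Deterministic: initial centers are the first k points."""
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # update step: move each center to its cluster's centroid
        new = []
        for j in range(k):
            if clusters[j]:
                d = len(clusters[j][0])
                new.append([sum(p[i] for p in clusters[j]) / len(clusters[j])
                            for i in range(d)])
            else:
                new.append(centers[j])   # keep an empty cluster's center
        if new == centers:
            break                        # converged
        centers = new
    return centers, clusters
```

The p-median problem replaces the centroid update with a choice of center restricted to the data points themselves, which is what makes greedy and agglomerative heuristics like the paper's attractive.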
Deterministic noiseless amplification of coherent states
NASA Astrophysics Data System (ADS)
Hu, Meng-Jun; Zhang, Yong-Sheng
2015-08-01
A universal deterministic noiseless quantum amplifier has been shown to be impossible. However, probabilistic noiseless amplification of a certain set of states is physically permissible. Regarding quantum state amplification as quantum state transformation, we show that deterministic noiseless amplification of coherent states chosen from a proper set is attainable, and we derive the relation between the input coherent states and the gain of amplification. Furthermore, we extend our result to a more general situation and show that deterministic noiseless amplification of Gaussian states is also possible. As an example of application, we find that our amplification model can obtain better performance in homodyne detection for measuring the phase of a state selected from a certain set. Other possible applications are also discussed.
Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.
Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh
2011-01-01
We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input.
Deterministic Execution of Ptides Programs
2013-05-15
are developed in Ptolemy, a design and simulation environment for heterogeneous systems. This framework also contains a code generation framework which is leveraged to...generation is implemented in Ptolemy II [4], an academic tool for designing and experimenting with heterogeneous system models. The first section of
A deterministic approach to simulate and downscale hydrological records
NASA Astrophysics Data System (ADS)
Maskey, M.; Puente, C. E.; Sivakumar, B.
2016-12-01
Application of a deterministic geometric approach for the simulation and downscaling of hydrologic data, daily rainfall and daily streamflow over a year, is presented. Specifically, it is shown that adaptations of the fractal-multifractal (FM) method, relying on only eight geometric parameters, may do both tasks accurately. The capability of the FM approach in producing plausible synthetic and disaggregated sets is illustrated using rain sets gathered in Laikakota, Bolivia and Tinkham, Washington, USA, and streamflow sets measured at the Sacramento River, USA. It is shown that suitable deterministic synthetic sets, maintaining the texture of the original records, may readily be found that faithfully preserve, for rainfall, the entire records' histogram, entropy and distribution of zeroes, and, for streamflow, the entire data's autocorrelation, histogram and entropy. It is then shown that the FM method readily generates daily series of rainfall and streamflow over a year based on weekly, biweekly and monthly accumulated information, which, while closely preserving the time evolution of the daily records, reasonably captures a variety of key statistical attributes. It is argued that the parsimonious FM deterministic simulations and downscalings may enhance and/or supplement stochastic simulation and disaggregation methods.
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-08-01
Distribution networks today face the challenge of meeting increased load demands from the industrial, commercial, and residential sectors. The load pattern is highly dependent on consumer behavior and on temporal factors such as season of the year, day of the week, or time of day. In deterministic radial distribution load flow studies, the load is taken as constant; in reality, load varies continually and with a high degree of uncertainty, so there is a need to model a probable, realistic load. Monte Carlo simulation is used to model this load by generating random values of active and reactive power from the mean and standard deviation of the load, and by solving a deterministic radial load flow for each set of values. The probabilistic solution is then reconstructed from the deterministic data obtained in each simulation. The main contribution of the work is:
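The probabilistic load-flow procedure described above (sample loads from their statistics, solve a deterministic flow per draw, aggregate) can be sketched as follows. The single-feeder linearized voltage drop is a toy stand-in for a real radial load-flow solver, and all parameter values are illustrative:

```python
import random
import statistics

def deterministic_flow(p_load, q_load, v_source=1.0, r=0.01, x=0.02):
    """Toy one-section radial feeder in per-unit: linearized voltage drop
    dV ~ (r*P + x*Q)/V. Stands in for a backward/forward sweep solver."""
    return v_source - (r * p_load + x * q_load) / v_source

def probabilistic_flow(p_mean, p_sd, q_mean, q_sd, n=5000, seed=1):
    """Monte Carlo wrapper: sample active/reactive loads from normal
    distributions, run the deterministic flow for each draw, and summarize
    the resulting voltage distribution."""
    rng = random.Random(seed)
    volts = [deterministic_flow(rng.gauss(p_mean, p_sd), rng.gauss(q_mean, q_sd))
             for _ in range(n)]
    return statistics.mean(volts), statistics.stdev(volts)
```

For a mean load of 1.0 + j0.5 p.u., the mean simulated voltage stays near the deterministic solution while the spread quantifies the load uncertainty.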
Will higher traffic flow lead to more traffic conflicts? A crash surrogate metric based analysis
Kuang, Yan; Qu, Xiaobo; Yan, Yadan
2017-01-01
In this paper, we aim to examine the relationship between traffic flow and potential conflict risks by using crash surrogate metrics. It has been widely recognized that one traffic flow corresponds to two distinct traffic states with different speeds and densities. In view of this, instead of simply aggregating traffic conditions with the same traffic volume, we represent potential conflict risks at a traffic flow fundamental diagram. Two crash surrogate metrics, namely, Aggregated Crash Index and Time to Collision, are used in this study to represent the potential conflict risks with respect to different traffic conditions. Furthermore, Beijing North Ring III and Next Generation SIMulation Interstate 80 datasets are utilized to carry out case studies. By using the proposed procedure, both datasets generate similar trends, which demonstrate the applicability of the proposed methodology and the transferability of our conclusions. PMID:28787022
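Time to Collision, one of the two surrogate metrics used in the study, has a standard definition that is easy to sketch; the threshold value below is illustrative, not taken from the paper:

```python
def time_to_collision(gap_m, v_follow, v_lead):
    """Time to Collision for a car-following pair: the spacing divided by
    the closing speed. Defined only when the follower is faster than the
    leader; returns None otherwise (no collision course)."""
    closing = v_follow - v_lead
    return gap_m / closing if closing > 0 else None

def conflict_fraction(pairs, ttc_threshold=3.0):
    """Share of (gap, v_follow, v_lead) pairs whose TTC falls below a
    severity threshold in seconds; an illustrative aggregate risk measure."""
    ttcs = [time_to_collision(*p) for p in pairs]
    risky = [t for t in ttcs if t is not None and t < ttc_threshold]
    return len(risky) / len(pairs)
```

Binning such fractions by flow and density would place the risk estimates on the fundamental diagram, as the abstract describes.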
Deterministic phase retrieval employing spherical illumination
NASA Astrophysics Data System (ADS)
Martínez-Carranza, J.; Falaggis, K.; Kozacki, T.
2015-05-01
Deterministic Phase Retrieval techniques (DPRTs) employ a series of paraxial beam intensities in order to recover the phase of a complex field. These paraxial intensities are usually generated in systems that employ plane-wave illumination, which allows the captured intensities to be processed directly with DPRTs to recover the phase. It has also been shown that intensities for DPRTs can be acquired from systems that use spherical illumination. However, this type of illumination presents a major setback for DPRTs: the captured intensities change size for each position of the detector along the propagation axis. In order to apply the DPRTs, the captured intensities must be rescaled, a step that can increase the error sensitivity of the final phase result if it is not carried out properly. In this work, we introduce a novel system based on a Phase Light Modulator (PLM) for capturing the intensities when employing spherical illumination. The proposed optical system enables us to capture the diffraction pattern of under-, in-, and over-focus intensities. The PLM allows capturing the corresponding intensities without displacing the detector, and it allows the magnification of the captured intensities to be controlled accurately. Thus, the stack of captured intensities can be used in DPRTs, overcoming the problems related to resizing the images. To support these claims, numerical experiments are carried out showing that the phases retrieved with spherical illumination are accurate and comparable with those obtained using plane-wave illumination. We demonstrate that, with the PLM, the proposed optical system has several advantages: it is compact, the beam size on the detector plane is controlled accurately, and errors arising from mechanical motion can be easily suppressed.
Optimal partial deterministic quantum teleportation of qubits
Mista, Ladislav Jr.; Filip, Radim
2005-02-01
We propose a protocol implementing optimal partial deterministic quantum teleportation for qubits. This is a teleportation scheme deterministically realizing an optimal 1 → 2 asymmetric universal cloning, in which one imperfect copy of the input state emerges at the sender's station while the other emerges at the receiver's possibly distant station. Optimality means that the fidelities of the copies saturate the asymmetric cloning inequality. The performance of the protocol relies on a partial deterministic nondemolition Bell measurement that allows us to continuously control the flow of information among the outgoing qubits. We also demonstrate that this measurement is an optimal two-qubit operation in the sense of the trade-off between state disturbance and information gain.
Effect of Uncertainty on Deterministic Runway Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoff and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler against a first-come-first-serve (FCFS) scheme. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison covers both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
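The FCFS baseline used for comparison can be sketched as follows, assuming (as a simplification) a single runway and one uniform separation value rather than the paper's pairwise separation criteria:

```python
def fcfs_schedule(ready_times, min_sep=60):
    """First-come-first-serve runway schedule: keep the order given by the
    ready times, then delay each aircraft just enough to honor a uniform
    minimum separation (seconds). Returns {aircraft index: scheduled time}."""
    order = sorted(range(len(ready_times)), key=lambda i: ready_times[i])
    sched, t_prev = {}, None
    for i in order:
        t = ready_times[i] if t_prev is None else max(ready_times[i], t_prev + min_sep)
        sched[i] = t
        t_prev = t
    return sched
```

A deterministic scheduler would instead search over sequences; the paper's "frozen sequence" variant keeps that optimized order but recomputes times the same way as the loop above.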
Nearly deterministic linear optical controlled-NOT gate.
Nemoto, Kae; Munro, W J
2004-12-17
We show how to construct a near-deterministic CNOT gate using several single-photon sources, linear optics, photon-number-resolving quantum nondemolition detectors, and feed-forward. This gate does not require the use of the massively entangled states common to other implementations and is very efficient on resources, with only one ancilla photon required. The key element of this gate is nondemolition detectors that use a weak cross-Kerr nonlinearity to conditionally generate a phase shift on a coherent probe if a photon is present in the signal mode. These potential phase shifts can then be measured using highly efficient homodyne detection.
Real-Time Surface Traffic Adviser
NASA Technical Reports Server (NTRS)
Glass, Brian J. (Inventor); Spirkovska, Liljana (Inventor); McDermott, William J. (Inventor); Reisman, Ronald J. (Inventor); Gibson, James (Inventor); Iverson, David L. (Inventor)
2001-01-01
A real-time data management system that uses data generated at different rates by multiple heterogeneous, incompatible data sources is presented. In one embodiment, the invention is an airport surface traffic data management system (traffic adviser) that electronically interconnects air traffic control, airline, and airport operations user communities to facilitate information sharing and improve taxi queuing. The system uses an expert system to fuse data from a variety of airline, airport operations, ramp control, and air traffic control sources in order to establish, predict, and update reference data values for every aircraft surface operation.
Jamitons: Phantom Traffic Jams
ERIC Educational Resources Information Center
Kowszun, Jorj
2013-01-01
Traffic on motorways can slow down for no apparent reason. Sudden changes in speed by one or two drivers can create a chain reaction that causes a traffic jam for the vehicles that are following. This kind of phantom traffic jam is called a "jamiton" and the article discusses some of the ways in which traffic engineers produce…
George, L.L.
1988-09-16
The Federal Aviation Administration plans to consolidate several hundred air traffic control centers and TRACONs into area control facilities while maintaining air traffic coverage. This paper defines air traffic coverage, a performance measure of the air traffic control system. Air traffic coverage measures performance without the controversy surrounding delay and collision probabilities and costs. Coverage measures help evaluate alternative facility architectures and help schedule consolidation. They also help evaluate protocols for handing one facility's air traffic over to another facility in case of facility failure, and help evaluate radar, communications, and other air traffic control systems and procedures. 4 refs., 2 figs.
Deterministic geologic processes and stochastic modeling
Rautman, C.A.; Flint, A.L.
1991-12-31
Recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial distribution of measured values and geostatistical measures of spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. These deterministic features have their origin in the complex, yet logical, interplay of a number of deterministic geologic processes, including magmatic evolution; volcanic eruption, transport, and emplacement; post-emplacement cooling and alteration; and late-stage (diagenetic) alteration. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling. It is unlikely that any single representation of physical properties at the site will be suitable for all modeling purposes. Instead, the same underlying physical reality will need to be described many times, each in a manner conducive to assessing specific performance issues.
A deterministic discrete ordinates transport proxy application
2014-06-03
Kripke is a simple 3D deterministic discrete ordinates (Sn) particle transport code that maintains the computational load and communications pattern of a real transport code. It is intended to be a research tool to explore different data layouts, new programming paradigms and computer architectures.
Deterministic Quantization by Dynamical Boundary Conditions
Dolce, Donatello
2010-06-15
We propose an unexplored quantization method. It is based on the assumption of dynamical space-time intrinsic periodicities for relativistic fields, which in turn can be regarded as dual to extra-dimensional fields. As a consequence we obtain a unified and consistent interpretation of Special Relativity and Quantum Mechanics in terms of Deterministic Geometrodynamics.
Supports of invariant measures for piecewise deterministic Markov processes
NASA Astrophysics Data System (ADS)
Benaïm, M.; Colonius, F.; Lettau, R.
2017-09-01
For a class of piecewise deterministic Markov processes, the supports of the invariant measures are characterized. This is based on the analysis of controllability properties of an associated deterministic control system. Its invariant control sets determine the supports.
Deterministic nanoassembly: Neutral or plasma route?
NASA Astrophysics Data System (ADS)
Levchenko, I.; Ostrikov, K.; Keidar, M.; Xu, S.
2006-07-01
It is shown that, owing to selective delivery of ionic and neutral building blocks directly from the ionized gas phase and via surface migration, plasma environments offer a greater degree of deterministic synthesis of ordered nanoassemblies than thermal chemical vapor deposition. The results of hybrid Monte Carlo (gas phase) and adatom self-organization (surface) simulations suggest that higher aspect ratios and better size and pattern uniformity of carbon nanotip microemitters can be achieved via the plasma route.
Ada programming guidelines for deterministic storage management
NASA Technical Reports Server (NTRS)
Auty, David
1988-01-01
Previous reports have established that a program can be written in the Ada language such that the program's storage management requirements are determinable prior to its execution. Specific guidelines for ensuring such deterministic usage of Ada dynamic storage requirements are described. Because requirements may vary from one application to another, guidelines are presented in a most-restrictive to least-restrictive fashion to allow the reader to match appropriate restrictions to the particular application area under investigation.
Deterministic linear optical quantum Toffoli gate
NASA Astrophysics Data System (ADS)
Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang
2017-09-01
Quantum Toffoli gate is a crucial part of many quantum information processing schemes. We design a deterministic linear optical quantum Toffoli gate using three degrees of freedom of a single photon. The proposed setup does not require any ancilla photons and is experimentally feasible with current technology. Moreover, we show that our setup can be directly used to demonstrate that hypergraph states violate local realism in an extreme manner.
Nonlinear neural network for deterministic scheduling
Gulati, S.; Iyengar, S.S.; Toomarian, N.; Protopopescu, V.; Barhen, J.
1988-01-01
This paper addresses the NP-complete deterministic scheduling problem for a single-server system. Given a set of n tasks along with the precedence constraints among them, their timing requirements, setup costs, and completion deadlines, a neuromorphic model is used to construct a non-preemptive optimal processing schedule that minimizes the total completion time, the total tardiness, and the number of tardy jobs. This model exhibits faster convergence than techniques based on gradient projection methods.
Working Memory and Its Relation to Deterministic Sequence Learning
Martini, Markus; Furtner, Marco R.; Sachse, Pierre
2013-01-01
Is there a relation between working memory (WM) and incidental sequence learning? Nearly all earlier investigations into the role of WM capacity (WMC) in sequence learning suggest no correlations under incidental learning conditions. However, the theoretical view of WM and the operationalization of WMC have made strong progress in recent years. The current study related performance in a coordination and transformation task to sequence knowledge in a four-choice incidental deterministic serial reaction time (SRT) task and a subsequent free generation task. The response-to-stimulus interval (RSI) was varied between 0 ms and 300 ms. Our results show correlations between WMC and error rates in the RSI 0 ms condition. For the RSI 300 ms condition we found relations between WMC and sequence knowledge in the SRT task as well as between WMC and generation task performance. Theoretical implications of these findings for ongoing processes during sequence learning and retrieval of sequence knowledge are discussed. PMID:23409148
Entanglement and deterministic quantum computing with one qubit
NASA Astrophysics Data System (ADS)
Boyer, Michel; Brodutch, Aharon; Mor, Tal
2017-02-01
The role of entanglement and quantum correlations in complex physical systems and quantum information processing devices has become a topic of intense study in the past two decades. In this work we present tools for learning about entanglement and quantum correlations in dynamical systems where the quantum states are mixed and the eigenvalue spectrum is highly degenerate. We apply these results to the deterministic quantum computing with one qubit (DQC1) computation model and show that the states generated in a DQC1 circuit have an eigenvalue structure that makes them difficult to entangle, even when they are relatively far from the completely mixed state. Our results strengthen the conjecture that it may be possible to find quantum algorithms that do not generate entanglement and yet still have an exponential advantage over their classical counterparts.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d < 2κ. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. Lastly, this is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
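For orientation, the standard stochastic EnKF analysis step that the DMFEnKF is compared against can be sketched for a scalar, directly observed state; this is a minimal baseline sketch, not the paper's PDE-based method:

```python
import random

def enkf_update(ensemble, y_obs, obs_sd, seed=0):
    """Standard (stochastic) EnKF analysis step for a scalar state with a
    direct observation (H = 1). The DMFEnKF of the abstract replaces this
    sampling-based update with a deterministic PDE solve plus quadrature."""
    rng = random.Random(seed)
    n = len(ensemble)
    m = sum(ensemble) / n
    var = sum((x - m) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    k = var / (var + obs_sd ** 2)                         # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    return [x + k * (y_obs + rng.gauss(0, obs_sd) - x) for x in ensemble]
```

With a precise observation far from the forecast ensemble, the analysis members move almost all the way to the observed value, as the large gain predicts.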
When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.
Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko
2015-08-01
When unique identifiers are unavailable, successful record linkage depends greatly on data quality and on the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, implementation of the linkage methodology, and validation method. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically introduced a range of discriminative power, rates of missingness and error, and file sizes to vary linkage patterns and difficulty. We assessed the performance difference between linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage, as the former generated linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with low rates of missingness and error in the data, deterministic linkage did not perform significantly worse. The implementation of deterministic linkage in SAS took less than 1 min, and probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rates of missingness and error in the linkage variables are key to choosing between linkage methods. In general, probabilistic linkage was the better choice, but for exceptionally good quality data (<5% error), deterministic linkage was the more resource-efficient choice.
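The contrast between the two linkage strategies can be sketched minimally: deterministic linkage requires exact agreement on every identifier, while a Fellegi-Sunter-style probabilistic score tolerates a missing or erroneous field. The field names and weights below are illustrative, not estimated from data as a real implementation would require:

```python
def deterministic_link(a, b, keys=("last", "dob", "zip")):
    """Link two records only if every identifier matches exactly."""
    return all(a.get(k) == b.get(k) for k in keys)

def probabilistic_link(a, b, weights=None, threshold=2.0):
    """Simplified probabilistic linkage: sum per-field agreement weights
    and link if the total clears a threshold, so one missing or erroneous
    field need not break the match. Weights/threshold are illustrative."""
    weights = weights or {"last": 1.5, "dob": 1.5, "zip": 1.0}
    score = sum(w for k, w in weights.items() if a.get(k) and a.get(k) == b.get(k))
    return score >= threshold
```

A record pair that agrees on name and date of birth but is missing the ZIP code fails the deterministic rule yet still links probabilistically, which is the sensitivity advantage the study measures.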
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution, because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through, and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive ground truth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
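Frequency-domain deterministic deconvolution with a measured source wavelet can be sketched as follows; the water-level stabilizer guards against division by near-zero spectral values (a standard safeguard, not a detail taken from the paper), and the hand-rolled DFT keeps the sketch self-contained:

```python
import cmath

def _dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform, sufficient for a sketch."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def deterministic_deconvolve(trace, wavelet, water=1e-3):
    """Divide the trace spectrum by the measured source-wavelet spectrum,
    stabilized by a water level, to compress the wavelet and recover an
    estimate of the reflectivity series."""
    n = len(trace)
    w = list(wavelet) + [0.0] * (n - len(wavelet))  # zero-pad to trace length
    T, W = _dft(trace), _dft(w)
    wmax = max(abs(v) for v in W)
    R = [t * v.conjugate() / max(abs(v) ** 2, (water * wmax) ** 2)
         for t, v in zip(T, W)]
    return [v.real for v in _dft(R, inverse=True)]
```

Convolving a spike series with a short wavelet and deconvolving with that same wavelet recovers the spikes, which is the essence of using an air-acquired wavelet as the operator.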
Using MCBEND for neutron or gamma-ray deterministic calculations
NASA Astrophysics Data System (ADS)
Dobson, Geoff; Bird, Adam; Tollit, Brendan; Smith, Paul
2017-09-01
MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. It supports a number of acceleration techniques, for example the use of an importance map in conjunction with splitting/Russian roulette. MCBEND has a well-established automated tool to generate this importance map, commonly referred to as the MAGIC module, which uses a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for using the forward calculation as a productivity tool. In this paper, we illustrate the use of this combination of the old and the new to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.
Deterministic Clone of an Unknown N-Qubit Entangled State with Assistance
NASA Astrophysics Data System (ADS)
Ma, Song-Ya; Chen, Xiu-Bo; Guan, Xiao-Wei; Niu, Xin-Xin; Yang, Yi-Xian
2010-11-01
We propose a deterministic scheme to realize assisted cloning of an unknown N (≥ 3)-qubit entangled state. The first stage of the protocol requires teleportation via maximal entanglement as the quantum channel. In the second stage, a novel set of mutually orthogonal basis vectors is constructed; with the assistance of the preparer through an N-particle projective measurement in this basis, a perfect copy of the original state can be produced. Compared with previous protocols, which produce the unknown state and its orthogonal complement state at the sender's site, our scheme generates the unknown state deterministically.
McQuinn, Ian H; Lesage, Véronique; Carrier, Dominic; Larrivée, Geneviève; Samson, Yves; Chartrand, Sylvain; Michaud, Robert; Theriault, James
2011-12-01
The threatened resident beluga population of the St. Lawrence Estuary shares the Saguenay-St. Lawrence Marine Park with significant anthropogenic noise sources, including marine commercial traffic and a well-established, vessel-based whale-watching industry. Frequency-dependent (FD) weighting was used to approximate beluga hearing sensitivity to determine how noise exposure varied in time and space at six sites of high beluga summer residency. The relative contribution of each source to acoustic habitat degradation was estimated by measuring noise levels throughout the summer and noise signatures of typical vessel classes with respect to traffic volume and sound propagation characteristics. Rigid-hulled inflatable boats were the dominant noise source with respect to estimated beluga hearing sensitivity in the studied habitats due to their high occurrence and proximity, high correlation with site-specific FD-weighted sound levels, and the dominance of mid-frequencies (0.3-23 kHz) in their noise signatures. Median C-weighted sound pressure level (SPL(RMS)) had a range of 19 dB re 1 μPa between the noisiest and quietest sites. Broadband SPL(RMS) exceeded 120 dB re 1 μPa 8-32% of the time depending on the site. Impacts of these noise levels on St. Lawrence beluga will depend on exposure recurrence and individual responsiveness.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology
Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.
2016-01-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
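The coupling idea, integrating the high-copy components deterministically while firing discrete stochastic events for the low-copy ones, can be illustrated with a non-spatial toy model. This is only a sketch of the operator-splitting principle, not the Smoldyn/Virtual Cell solver described above; the species names, rate constants, and the tau-leaping stochastic step are all illustrative assumptions.

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson variate (Knuth's method; adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def hybrid_step(a, b, k1, k2, k3, dt, rng):
    """One operator-split step: the abundant species A follows dA/dt = -k1*A
    exactly (deterministic half), while the low-copy species B gains a Poisson
    number of production events at rate k2*A and loses each copy with
    probability 1 - exp(-k3*dt) (stochastic half)."""
    a_new = a * math.exp(-k1 * dt)                    # deterministic ODE update
    n_prod = poisson(k2 * a * dt, rng)                # discrete production events
    p_deg = 1.0 - math.exp(-k3 * dt)
    n_deg = sum(1 for _ in range(b) if rng.random() < p_deg)
    return a_new, b + n_prod - n_deg

rng = random.Random(0)
a, b = 1000.0, 0
for _ in range(2000):                                 # integrate to t = 20
    a, b = hybrid_step(a, b, k1=0.01, k2=0.05, k3=0.5, dt=0.01, rng=rng)
```

A continues smoothly along its exponential decay, while B fluctuates around the quasi-steady value k2*A/k3; the point of a hybrid method is that B's copy number stays a genuine integer-valued jump process.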
NASA Astrophysics Data System (ADS)
Samson, E. C.; Wilson, K. E.; Newman, Z. L.; Anderson, B. P.
2016-02-01
We experimentally and numerically demonstrate deterministic creation and manipulation of a pair of oppositely charged singly quantized vortices in a highly oblate Bose-Einstein condensate (BEC). Two identical blue-detuned, focused Gaussian laser beams that pierce the BEC serve as repulsive obstacles for the superfluid atomic gas; by controlling the positions of the beams within the plane of the BEC, superfluid flow is deterministically established around each beam such that two vortices of opposite circulation are generated by the motion of the beams, with each vortex pinned to the in situ position of a laser beam. We study the vortex creation process, and show that the vortices can be moved about within the BEC by translating the positions of the laser beams. This technique can serve as a building block in future experimental techniques to create, on-demand, deterministic arrangements of few or many vortices within a BEC for precise studies of vortex dynamics and vortex interactions.
Effects of changing orders in the update rules on traffic flow.
Xue, Yu; Dong, Li-Yun; Li, Lei; Dai, Shi-Qiang
2005-02-01
Based on the Nagel-Schreckenberg (NaSch) model of traffic flow, we study the effects of the order of the update rules on traffic flow. Simulations show that the cellular automaton (CA) traffic model depends sensitively on the order of the update rules. Changing the order of the update steps in the NaSch model results in two modified models, called the SDNaSch model and the noise-first model, with different fundamental diagrams and jamming states. We analyze the mechanisms of these two traffic models and the corresponding traffic behaviors in detail and compare the two modified models with the NaSch model. It is concluded that the order of the stochastic delay and deterministic deceleration steps indeed has remarkable effects on traffic flow.
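The sensitivity to update order is easy to probe numerically. The sketch below is a minimal NaSch-style ring-road simulation, not the authors' code; the `noise_first` flag, lattice size, and parameters are illustrative choices. It applies the stochastic-delay step either after deterministic deceleration (standard NaSch) or before it (a noise-first variant), so the resulting average flows can be compared.

```python
import random

def nasch_flow(density, vmax=5, p=0.3, length=300, steps=1000,
               noise_first=False, seed=1):
    """Average flow (cars per cell per step) on a ring road; `noise_first`
    swaps the stochastic-delay and deterministic-deceleration steps."""
    rng = random.Random(seed)
    n = max(1, int(density * length))
    cells = sorted(rng.sample(range(length), n))   # cyclic order of cars
    vels = [0] * n
    moved = 0
    for t in range(steps):
        new_cells = []
        for i in range(n):
            gap = (cells[(i + 1) % n] - cells[i] - 1) % length
            v = min(vels[i] + 1, vmax)             # 1. accelerate
            if noise_first:
                if rng.random() < p:               # 2'. stochastic delay first
                    v = max(v - 1, 0)
                v = min(v, gap)                    # 3'. then brake to the gap
            else:
                v = min(v, gap)                    # 2. brake to the gap
                if rng.random() < p:               # 3. stochastic delay
                    v = max(v - 1, 0)
            vels[i] = v
            new_cells.append((cells[i] + v) % length)
            if t >= steps // 2:                    # measure after transient
                moved += v
        cells = new_cells                          # parallel update
    return moved / (length * (steps - steps // 2))
```

Calling `nasch_flow(rho)` and `nasch_flow(rho, noise_first=True)` across a range of densities traces out the two different fundamental diagrams the abstract refers to.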
CHAOS AND STOCHASTICITY IN DETERMINISTICALLY GENERATED MULTIFRACTAL MEASURES. (R824780)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Deterministic convergence in iterative phase shifting
Luna, Esteban; Salas, Luis; Sohn, Erika; Ruiz, Elfego; Nunez, Juan M.; Herrera, Joel
2009-03-10
Previous implementations of the iterative phase shifting method, in which the phase of a test object is computed from measurements using a phase shifting interferometer with unknown positions of the reference, do not provide an accurate way of knowing when convergence has been attained. We present a new approach to this method that allows us to deterministically identify convergence. The method is tested with a home-built Fizeau interferometer that measures optical surfaces polished to λ/100 using the Hydra tool. The intrinsic quality of the measurements is better than 0.5 nm. Other possible applications for this technique include fringe projection or any problem where phase shifting is involved.
Deterministic Folding in Stiff Elastic Membranes
NASA Astrophysics Data System (ADS)
Tallinen, T.; Åström, J. A.; Timonen, J.
2008-09-01
Crumpled membranes have been found to be characterized by complex patterns of seemingly random facets separated by narrow ridges of high elastic energy. We demonstrate by numerical simulations that compression of stiff elastic membranes with small randomness in their initial configurations leads either to random ridge configurations (high entropy) or to nearly deterministic folds (low elastic energy). Folding with symmetric ridge configurations appears in part of the crumpling processes only if the crumpling rate is slow enough. Folding stops when the thickness of the folded structure becomes important, and crumpling continues thereafter as a random process.
Diffusion in Deterministic Interacting Lattice Systems
NASA Astrophysics Data System (ADS)
Medenjak, Marko; Klobas, Katja; Prosen, Tomaž
2017-09-01
We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining an exact expression for the current time-autocorrelation function we are able to calculate the linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.
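The inhomogeneous-quench protocol mentioned above can be illustrated with a small simulation. The paper's dynamics are deterministic and reversible; the sketch below instead uses a stochastic symmetric exclusion process, a generic hard-core lattice gas, purely to show how a step (domain-wall) initial charge profile spreads into a diffusive front. All parameters are illustrative.

```python
import random

def mean_profile(length=60, t_sweeps=100, runs=100, seed=3):
    """Ensemble-averaged occupation profile of a hard-core (symmetric
    exclusion) lattice gas started from a domain wall: left half filled,
    right half empty; reflecting boundaries."""
    rng = random.Random(seed)
    acc = [0.0] * length
    for _ in range(runs):
        occ = [1] * (length // 2) + [0] * (length - length // 2)
        for _ in range(t_sweeps * length):
            i = rng.randrange(length - 1)          # pick a random bond
            if occ[i] != occ[i + 1]:               # hard core: swap only if one is occupied
                occ[i], occ[i + 1] = occ[i + 1], occ[i]
        for j, o in enumerate(occ):
            acc[j] += o
    return [a / runs for a in acc]

profile = mean_profile()
```

The averaged profile interpolates between 1 on the left and 0 on the right through an erf-like front whose width grows like the square root of time, which is the diffusive signature; particle number is conserved exactly.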
Phase Space Transition States for Deterministic Thermostats
NASA Astrophysics Data System (ADS)
Ezra, Gregory; Wiggins, Stephen
2009-03-01
We describe the relation between the phase space structure of Hamiltonian and non-Hamiltonian deterministic thermostats. We show that phase space structures governing reaction dynamics in Hamiltonian systems, such as the transition state, map to the same type of phase space structures for the non-Hamiltonian isokinetic equations of motion for the thermostatted Hamiltonian. Our results establish a general theoretical framework for analyzing thermostat dynamics using concepts and methods developed in reaction rate theory. Numerical results are presented for the isokinetic thermostat.
Deterministic quantum computation with one photonic qubit
NASA Astrophysics Data System (ADS)
Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.
2015-07-01
We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimensional matrix, limited only by the resolution of the modulator panel and technical imperfections. To illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which is able to represent a system of approximately 21 qubits.
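The quantity DQC1 estimates can be checked in a direct density-matrix simulation. The sketch below is a small numerical check of the circuit, not the optical implementation above: the control qubit is prepared in |+>, the target register is maximally mixed, controlled-U is applied, and the normalized trace tr(U)/d is read off the control qubit's coherence. The use of a QR-generated random unitary is an illustrative choice.

```python
import numpy as np

def dqc1_normalized_trace(u):
    """Density-matrix simulation of the DQC1 circuit: after controlled-U,
    the control qubit's off-diagonal element <1|rho_c|0> equals tr(U)/(2d),
    so twice that element is the normalized trace tr(U)/d."""
    d = u.shape[0]
    plus = 0.5 * np.ones((2, 2))                     # |+><+| on the control
    rho = np.kron(plus, np.eye(d) / d)               # control (x) maximally mixed target
    cu = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), u]])           # controlled-U
    rho = cu @ rho @ cu.conj().T
    # Partial trace over the target register leaves the 2x2 control state.
    rho_c = np.trace(rho.reshape(2, d, 2, d), axis1=1, axis2=3)
    return 2.0 * rho_c[1, 0]                         # = tr(U)/d

# Random unitary on 3 qubits via QR decomposition of a complex Gaussian matrix.
rng = np.random.default_rng(0)
dim = 8
m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(m)
est = dqc1_normalized_trace(u)
```

In the experiment the real and imaginary parts of this coherence are obtained as Pauli-X and Pauli-Y expectation values of the control qubit.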
Additivity Principle in High-Dimensional Deterministic Systems
NASA Astrophysics Data System (ADS)
Saito, Keiji; Dhar, Abhishek
2011-12-01
The additivity principle (AP), conjectured by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)], is discussed for the case of heat conduction in three-dimensional disordered harmonic lattices to consider the effects of deterministic dynamics, higher dimensionality, and different transport regimes, i.e., ballistic, diffusive, and anomalous transport. The cumulant generating function (CGF) for heat transfer is accurately calculated and compared with the one given by the AP. In the diffusive regime, we find a clear agreement with the conjecture even if the system is high dimensional. Surprisingly, even in the anomalous regime the CGF is also well fitted by the AP. Lower-dimensional systems are also studied and the importance of three dimensionality for the validity is stressed.
More on exact state reconstruction in deterministic digital control systems
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1988-01-01
Presented is a special form of the Ideal State Reconstructor for deterministic digital control systems which is simpler to implement than the most general form. The Ideal State Reconstructor is so named because, if the plant parameters are known exactly, its output will exactly equal, not just approximate, the true state of the plant and accomplish this without any knowledge of the plant's initial state. Besides this, it adds no new states or eigenvalues to the system. Nor does it affect the plant equation for the system in any way; it affects the measurement equation only. It is characterized by the fact that discrete measurements are generated every T/N seconds and input into a multi-input/multi-output moving-average (MA) process. The output of this process is sampled every T seconds and utilized in reconstructing the state of the system.
Integrating Clonal Selection and Deterministic Sampling for Efficient Associative Classification
Elsayed, Samir A. Mohamed; Rajasekaran, Sanguthevar; Ammar, Reda A.
2013-01-01
Traditional Associative Classification (AC) algorithms typically search for all possible association rules to find a representative subset of those rules. Since the search space of such rules may grow exponentially as the support threshold decreases, the rule-discovery process can be computationally expensive. One effective way to tackle this problem is to directly find a set of high-stakes association rules that potentially builds a highly accurate classifier. This paper introduces AC-CS, an AC algorithm that integrates the clonal selection of the immune system along with deterministic data sampling. Upon picking a representative sample of the original data, it proceeds in an evolutionary fashion to populate only rules that are likely to yield good classification accuracy. Empirical results on several real datasets show that the approach generates dramatically fewer rules than traditional AC algorithms. In addition, the proposed approach is significantly more efficient than traditional AC algorithms while achieving a competitive accuracy. PMID:24500504
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
Deterministic prediction of surface wind speed variations
NASA Astrophysics Data System (ADS)
Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distributions of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
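A deterministic short-term forecast of this kind can be sketched with delay-coordinate embedding plus nearest-neighbour prediction (Lorenz's method of analogues). The wind records are not reproduced here, so the sketch drives the predictor with a synthetic chaotic series from the logistic map; the embedding dimension, delay, and horizon are illustrative choices, not the paper's settings.

```python
import math

def nn_forecast(series, m=3, tau=1, horizon=1):
    """Predict `horizon` steps ahead: embed the series in m-dimensional delay
    vectors, find the past vector nearest to the current one, and return that
    neighbour's observed future value."""
    n = len(series)

    def vec(i):  # delay vector ending at index i
        return [series[i - k * tau] for k in range(m)]

    target = vec(n - 1)
    best_i, best_d = None, float("inf")
    for i in range((m - 1) * tau, n - 1 - horizon):
        d = sum((a - b) ** 2 for a, b in zip(vec(i), target))
        if d < best_d:
            best_i, best_d = i, d
    return series[best_i + horizon]

# Synthetic chaotic data from the logistic map (a stand-in for wind records).
x, data = 0.4, []
for _ in range(2000):
    x = 3.9 * x * (1 - x)
    data.append(x)

train, truth = data[:-1], data[-1]
pred = nn_forecast(train, m=3, tau=1, horizon=1)
```

Because the underlying dynamics are deterministic, the nearest analogue's continuation tracks the true next value closely; on a purely stochastic series the same construction would carry no such short-time predictability.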
Deterministic forward scatter from surface gravity waves.
Deane, Grant B; Preisig, James C; Tindle, Chris T; Lavery, Andone; Stokes, M Dale
2012-12-01
Deterministic structures in sound reflected by gravity waves, such as focused arrivals and Doppler shifts, have implications for underwater acoustics and sonar, and the performance of underwater acoustic communications systems. A stationary phase analysis of the Helmholtz-Kirchhoff scattering integral yields the trajectory of focused arrivals and their relationship to the curvature of the surface wave field. Deterministic effects along paths up to 70 water depths long are observed in shallow water measurements of surface-scattered sound at the Martha's Vineyard Coastal Observatory. The arrival time and amplitude of surface-scattered pulses are reconciled with model calculations using measurements of surface waves made with an upward-looking sonar mounted mid-way along the propagation path. The root mean square difference between the modeled and observed pulse arrival amplitude and delay, respectively, normalized by the maximum range of amplitudes and delays, is found to be 0.2 or less for the observation periods analyzed. Cross-correlation coefficients for modeled and observed pulse arrival delays varied from 0.83 to 0.16 depending on surface conditions. Cross-correlation coefficients for normalized pulse energy for the same conditions were small and varied from 0.16 to 0.06. In contrast, the modeled and observed pulse arrival delay and amplitude statistics were in good agreement.
Deterministic Creation of Macroscopic Cat States
Lombardo, Daniel; Twamley, Jason
2015-01-01
Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
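The claim that geometric distributions approximate exponential firing times arbitrarily well is easy to verify numerically. In the sketch below (an illustration of that one claim, not the DDSPN solver), an exponential(lam) delay is discretized into clock ticks of length delta with per-tick firing probability p = 1 - exp(-lam*delta); the geometric survival function then matches the exponential one exactly at the tick grid and converges everywhere as delta shrinks.

```python
import math

def geometric_survival(lam, delta, t):
    """P(T > t) when an exponential(lam) firing time is replaced by a geometric
    number of ticks of length delta, with per-tick firing probability
    p = 1 - exp(-lam * delta)."""
    p = 1.0 - math.exp(-lam * delta)
    ticks = int(t / delta)              # completed ticks before time t
    return (1.0 - p) ** ticks           # equals exp(-lam * delta * ticks)

exact = math.exp(-2.0 * 0.55)           # exponential survival at t = 0.55, lam = 2
coarse = geometric_survival(2.0, 0.1, 0.55)
fine = geometric_survival(2.0, 0.001, 0.55)
```

The only error is the rounding of t to the tick grid, so refining delta by two orders of magnitude shrinks the discrepancy accordingly.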
An efficient method to detect periodic behavior in botnet traffic by analyzing control plane traffic
AsSadhan, Basil; Moura, José M.F.
2013-01-01
Botnets are large networks of bots (compromised machines) that are under the control of a small number of bot masters. They pose a significant threat to the Internet's communications and applications. A botnet relies on command and control (C2) communications traffic between its members for its attack execution. C2 traffic occurs prior to any attack; hence, detecting a botnet's C2 traffic enables the detection of members of the botnet before any real harm happens. We analyze C2 traffic and find that it exhibits a periodic behavior. This is due to the pre-programmed behavior of bots that check for updates to download them every T seconds. We exploit this periodic behavior to detect C2 traffic. The detection involves evaluating the periodogram of the monitored traffic and then applying Walker's large sample test to the periodogram's maximum ordinate to determine whether it is due to a periodic component. If the periodogram of the monitored traffic contains a periodic component, then it is highly likely to be due to a bot's C2 traffic. The test looks only at aggregate control plane traffic behavior, which makes it more scalable than techniques that involve deep packet inspection (DPI) or tracking the communication flows of different hosts. We apply the test to two types of botnets, tinyP2P and IRC, generated by SLINGbot. We verify the periodic behavior of their C2 traffic and compare it to the results we get on real traffic obtained from a secured enterprise network. We further study the characteristics of the test in the presence of injected HTTP background traffic and the effect of the duty cycle on the periodic behavior. PMID:25685512
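The detection pipeline described above, a periodogram followed by Walker's large sample test on its maximum ordinate, can be sketched with the standard library alone. The O(n^2) DFT, the 5% significance level, and the toy "C2-like" signal below are illustrative choices, not the authors' implementation.

```python
import cmath
import math
import random

def periodogram(x):
    """Raw periodogram I(k) = |DFT(x)_k|^2 / n at Fourier frequencies k/n,
    mean removed (an O(n^2) DFT keeps the sketch dependency-free)."""
    n = len(x)
    mu = sum(x) / n
    xc = [v - mu for v in x]
    return [abs(sum(xc[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(1, (n - 1) // 2 + 1)]

def walker_test(x, alpha=0.05):
    """Walker's large sample test: standardize the maximum periodogram
    ordinate by the mean ordinate and compare with the white-noise threshold
    z_alpha = -ln(1 - (1 - alpha)^(1/m))."""
    pg = periodogram(x)
    m = len(pg)
    z = max(pg) / (sum(pg) / m)
    z_alpha = -math.log(1.0 - (1.0 - alpha) ** (1.0 / m))
    return z > z_alpha, pg.index(max(pg)) + 1   # (periodic?, frequency index k)

# Toy "C2-like" traffic: a clean period-(n/8) component buried in noise.
rng = random.Random(4)
n = 256
c2_like = [math.sin(2 * math.pi * 8 * t / n) + 0.5 * rng.gauss(0, 1)
           for t in range(n)]
periodic, k = walker_test(c2_like)
```

On this signal the test flags periodicity and localizes it at the correct Fourier index; on white noise the null is rejected only at the chosen alpha rate.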
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
Operational Global Deterministic and Ensemble Wave Prediction Systems at Environment Canada
NASA Astrophysics Data System (ADS)
Bernier, Natacha; Peel, Syd; Bélanger, Jean-Marc; Roch, Michel; Lépine, Mario; Pellerin, Pierre; Henrique Alves, José; Tolman, Hendrik
2015-04-01
Canada's new global deterministic and ensemble wave prediction systems are presented together with an evaluation of their performance over a 5-month hindcast. Particular attention is paid to the Arctic Ocean, where accurate forecasts are crucial for maintaining safe activities such as drilling and vessel operation. The wave prediction systems are based on WAVEWATCH III and are operated at grid spacings of 1/4° (deterministic) and 1/2° (ensemble). Both systems are run twice daily, with lead times of 120 h (5 days) for the deterministic system and 240 h (10 days) for the ensemble system. The wave prediction systems are shown to have skill in forecasting significant wave height and peak period over the next several days. Beyond lead times of 120 h, deterministic forecasts are extended using ensembles of wave forecasts to generate probabilistic forecasts for long-range events. New displays are used to summarize the wealth of information generated by ensembles into depictions that could help support early warning systems.
Corver, Sifra Christina; Unger, Dana; Grote, Gudela
2016-06-01
Our study investigates whether trajectory uncertainty moderates the relationship between traffic conflict and workload. Furthermore, we examine if the indirect effect of traffic density on workload through traffic conflict is conditional on the presence of trajectory uncertainty. Although it is widely accepted that uncertainty related to the future trajectory of an aircraft impacts air traffic controller decision making, little is known about how the presence of trajectory uncertainty impacts controller workload. A better understanding of the impact on controller workload can improve workload prediction models for en route air traffic control. We collected data in a live operation environment, including workload ratings based on over-the-shoulder observations and real-time sector data. Hierarchical linear modeling was used to analyze the data. Trajectory uncertainty interacts with traffic conflict in such a way that the positive relationship between traffic conflict and workload is strongest in the presence of trajectory uncertainty. Furthermore, we found that the mediating effect of traffic density through traffic conflict is conditional on the presence of trajectory uncertainty. Our results indicate that workload prediction tools that do not incorporate trajectory uncertainty may underestimate workload under conditions of trajectory uncertainty, leading to possible overload situations of air traffic controllers. Sources that generate trajectory uncertainty, as well as their interaction effects with dynamic complexity metrics, should be acknowledged in workload prediction models to increase the predictive power of these models. Implications for future air traffic management operations as envisioned by SESAR and NextGen are discussed. © 2016, Human Factors and Ergonomics Society.
Deterministic and stochastic responses of nonlinear systems
NASA Astrophysics Data System (ADS)
Abou-Rayan, Ashraf Mohamed
The responses of nonlinear systems to both deterministic and stochastic excitations are discussed. For a single degree of freedom system, the response of a simply supported buckled beam to parametric excitations is investigated. Two types of excitations are examined: deterministic and random. For the nonlinear response to a harmonic axial load, the method of multiple scales is used to determine to second order the amplitude and phase modulation equations. Floquet theory is used to analyze the stability of periodic responses. The perturbation results are verified by integrating the governing equation using both digital and analog computers. For small excitation amplitudes, the analytical results are in good agreement with the numerical solutions. The large amplitude responses are investigated by using simulations on a digital computer and are compared with results obtained using an analog computer. For the stochastic response to a wide-band random excitation, the Gaussian and non-Gaussian closure schemes are used to determine the response statistics. The results are compared with those obtained from real-time analysis (analog-computer simulation). The normality assumption is examined. A comparison between the responses to deterministic and random excitation is presented. For two degree of freedom systems, two methods are used to study the response under the action of broad-band random excitations. The first method is applicable to systems having cubic nonlinearities. It involves an averaging approach to reduce the number of moment equations for the non-Gaussian closure scheme from 69 to 14 equations. The results are compared with those obtained from numerical integrations of the moment equations and the exact stationary solution of the Fokker-Planck-Komologorov equation. The second method is applicable to systems having quadratic and cubic nonlinearities. Stationary solutions of the moment equations are determined and their stability is ascertained by examining the
Deterministic approaches to coherent diffractive imaging
NASA Astrophysics Data System (ADS)
Allen, L. J.; D'Alfonso, A. J.; Martin, A. V.; Morgan, A. J.; Quiney, H. M.
2016-01-01
In this review we will consider the retrieval of the wave at the exit surface of an object illuminated by a coherent probe from one or more measured diffraction patterns. These patterns may be taken in the near-field (often referred to as images) or in the far field (the Fraunhofer diffraction pattern, where the wave is the Fourier transform of that at the exit surface). The retrieval of the exit surface wave from such data is an inverse scattering problem. This inverse problem has historically been solved using nonlinear iterative methods, which suffer from convergence and uniqueness issues. Here we review deterministic approaches to obtaining the exit surface wave which ameliorate those problems.
Deterministic phase slips in mesoscopic superconducting rings
Petković, Ivana; Lollo, A.; Glazman, L. I.; Harris, J. G. E.
2016-11-24
The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter’s free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg–Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. Furthermore, we also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.
Deterministic polishing from theory to practice
NASA Astrophysics Data System (ADS)
Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary
2015-10-01
Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework, the share of production that does not meet specification in the first pass through the polishing equipment. Rework substantially adds to a part's processing and labor costs and creates bottlenecks in production lines and frustration for managers, operators and customers. The polishing process involves several interacting variables, including glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Bringing every variable under control while operating in a robust space not only provides a deterministic polishing process that improves profitability but also produces a higher quality optic.
Deterministic multi-zone ice accretion modeling
NASA Technical Reports Server (NTRS)
Yamaguchi, K.; Hansman, R. John, Jr.; Kazmierczak, Michael
1991-01-01
The focus here is on a deterministic model of the surface roughness transition behavior of glaze ice. The initial smooth/rough transition location, bead formation, and the propagation of the transition location are analyzed. Based on the hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.
Deterministic remote preparation via the Brown state
NASA Astrophysics Data System (ADS)
Ma, Song-Ya; Gao, Cong; Zhang, Pei; Qu, Zhi-Guo
2017-04-01
We propose two deterministic remote state preparation (DRSP) schemes by using the Brown state as the entangled channel. Firstly, the remote preparation of an arbitrary two-qubit state is considered. It is worth mentioning that the construction of measurement bases plays a key role in our scheme. Then, the remote preparation of an arbitrary three-qubit state is investigated. The proposed schemes can be extended to controlled remote state preparation (CRSP) with unit success probabilities. At variance with the existing CRSP schemes via the Brown state, the derived schemes have no restriction on the coefficients, while the success probabilities can reach 100%. It means the success probabilities are greatly improved. Moreover, we pay attention to the DRSP in noisy environments under two important decoherence models, the amplitude-damping noise and phase-damping noise.
Block variables for deterministic aperiodic sequences
NASA Astrophysics Data System (ADS)
Hörnquist, Michael
1997-10-01
We use the concept of block variables to obtain a measure of order/disorder for some one-dimensional deterministic aperiodic sequences. For the Thue-Morse sequence, the Rudin-Shapiro sequence and the period-doubling sequence it is possible to obtain analytical expressions in the limit of infinite sequences. For the Fibonacci sequence, we present some analytical results which can be supported by numerical arguments. It turns out that the block variables show a wide range of behaviour, some of it indicating that some of the considered sequences are more `random' than others. However, the method does not give a definite answer to the question of which sequence is more disordered than another and, in this sense, the results obtained are negative. We compare this with some other ways of measuring the amount of order/disorder in such systems, and there seems to be no direct correspondence between the measures.
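As a rough illustration of the block-variable idea (a minimal sketch, not the authors' estimator; the function names and the choice of the +/-1 alphabet are my own), one can compare the variance of block sums for the Thue-Morse sequence against what a random sequence would give:

```python
def thue_morse(n):
    """First n terms of the Thue-Morse sequence on the +/-1 alphabet."""
    return [1 - 2 * (bin(i).count("1") % 2) for i in range(n)]

def block_variance(seq, block_len):
    """Variance of the sums over consecutive non-overlapping blocks."""
    sums = [sum(seq[i:i + block_len])
            for i in range(0, len(seq) - block_len + 1, block_len)]
    mean = sum(sums) / len(sums)
    return sum((s - mean) ** 2 for s in sums) / len(sums)

# Length-2 blocks of Thue-Morse are always (+1, -1) or (-1, +1), so every block
# sum vanishes; a random +/-1 sequence would give a block-sum variance near 2.
print(block_variance(thue_morse(1024), 2))  # -> 0.0
```

The vanishing variance at this block length is one concrete sense in which the Thue-Morse sequence is "less random" than a coin-flip sequence, which is the kind of comparison the block-variable measure formalizes.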
Deterministic-random separation in nonstationary regime
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
NASA Technical Reports Server (NTRS)
1973-01-01
The traffic analyses and system requirements data generated in the study resulted in the development of two traffic models; the baseline traffic model and the new traffic model. The baseline traffic model provides traceability between the numbers and types of geosynchronous missions considered in the study and the entire spectrum of missions foreseen in the total national space program. The information presented pertaining to the baseline traffic model includes: (1) definition of the baseline traffic model, including identification of specific geosynchronous missions and their payload delivery schedules through 1990; (2) satellite location criteria, including the resulting distribution of the satellite population; (3) geosynchronous orbit saturation analyses, including the effects of satellite physical proximity and potential electromagnetic interference; and (4) platform system requirements analyses, including satellite and mission equipment descriptions, the options and limitations in grouping satellites, and on-orbit servicing criteria (both remotely controlled and man-attended).
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, probabilistic and deterministic. While probabilistic methods are typically used in pulse-height distribution simulation by recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: the collided components of the scalar flux algorithm, which is applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from the Monte Carlo based codes MCNPX and FLUKA.
Deterministic-statistical analysis of a structural-acoustic system
NASA Astrophysics Data System (ADS)
Wang, Xu
2011-09-01
The purpose of this paper is to develop an efficient approach for vibro-acoustic analysis. Being simple and representative, an excited plate-acoustic system is selected as a validation case, as the system presents a two-dimensional statistical component (a modally dense structural panel, the plate) connected to a deterministic component (an acoustic volume, the cavity) through an area junction over a surface domain, rather than at a line boundary. Potential industrial applications of the system vibro-acoustic analysis would be in acoustic modelling of vehicle body panels, such as the cabin roof panel and door panels, for boom noise analysis. A new deterministic-statistical analysis approach is proposed as a hybrid of deterministic analysis and statistical energy analysis (SEA). The general theory of the new deterministic-statistical analysis approach is introduced. Its main advantage is that it can be used in place of time-consuming Monte Carlo simulation. In order to illustrate and validate the new approach, the three approaches (deterministic analysis, statistical energy analysis and the new deterministic-statistical analysis) are applied to model the plate-acoustic system, and their results are compared. The vibro-acoustic energy coupling characteristics of the plate-acoustic system are studied. The most suitable frequency range for the new approach is identified in consideration of computational accuracy, information and speed.
Non-Deterministic Context and Aspect Choice in Russian.
ERIC Educational Resources Information Center
Koubourlis, Demetrius J.
In any given context, a Russian verb form may be either perfective or imperfective. Perfective aspect signals the completion or result of an action, whereas imperfective does not. Aspect choice is a function of context, and two types of context are distinguished: deterministic and non-deterministic. This paper is part of a larger study whose aim…
Use of deterministic models in sports and exercise biomechanics research.
Chow, John W; Knudson, Duane V
2011-09-01
A deterministic model is a modeling paradigm that determines the relationships between a movement outcome measure and the biomechanical factors that produce such a measure. This review provides an overview of the use of deterministic models in biomechanics research, a historical summary of this research, and an analysis of the advantages and disadvantages of using deterministic models. The deterministic model approach has been utilized in technique analysis over the last three decades, especially in swimming, athletics field events, and gymnastics. In addition to their applications in sports and exercise biomechanics, deterministic models have been applied successfully in research on selected motor skills. The advantage of the deterministic model approach is that it helps to avoid selecting performance or injury variables arbitrarily and to provide the necessary theoretical basis for examining the relative importance of various factors that influence the outcome of a movement task. Several disadvantages of deterministic models, such as the use of subjective measures for the performance outcome, were discussed. It is recommended that exercise and sports biomechanics scholars should consider using deterministic models to help identify meaningful dependent variables in their studies.
Trafficability and workability of soils
USDA-ARS?s Scientific Manuscript database
Trafficability and workability are soil capabilities supporting operations of agricultural machinery. Trafficability is a soil's capability to support agricultural traffic without degrading soils and ecosystems. Workability is a soil capability supporting tillage. Agriculture is associated with mech...
ERIC Educational Resources Information Center
Roman, Harry T.
2014-01-01
Traffic lights are an important part of the transportation infrastructure, regulating traffic flow and maintaining safety when crossing busy streets. When they go awry or become nonfunctional, a great deal of havoc and danger can be present. During power outages, the street lights go out all over the affected area. It would be good to be able to…
Optimal Deterministic Ring Exploration with Oblivious Asynchronous Robots
NASA Astrophysics Data System (ADS)
Lamani, Anissa; Potop-Butucaru, Maria Gradinariu; Tixeuil, Sébastien
We consider the problem of exploring an anonymous unoriented ring of size n by k identical, oblivious, asynchronous mobile robots, that are unable to communicate, yet have the ability to sense their environment and take decisions based on their local view. Previous works in this weak scenario prove that k must not divide n for a deterministic solution to exist. Also, it is known that the minimum number of robots (either deterministic or probabilistic) to explore a ring of size n is 4. An upper bound of 17 robots holds in the deterministic case while 4 probabilistic robots are sufficient. In this paper, we close the complexity gap in the deterministic setting, by proving that no deterministic exploration is feasible with less than five robots, and that five robots are sufficient for any n that is coprime with five. Our protocol completes exploration in O(n) robot moves, which is also optimal.
Traffic model by braking capability and response time
NASA Astrophysics Data System (ADS)
Lee, Hyun Keun; Kim, Jeenu; Kim, Youngho; Lee, Choong-Ki
2015-06-01
We propose a microscopic traffic model where the updated velocity is determined by the deceleration capacity and the response time. It is found that there is a class of collisions that cannot be distinguished by simply comparing the stop positions. The model generates safe, comfortable, and efficient traffic flow in numerical simulations with reasonable values of the parameters, and this is supported analytically. Our approach provides a new perspective on modeling traffic-flow safety and worrisome situations such as lane changing.
Deterministic schedules for robust and reproducible non-uniform sampling in multidimensional NMR.
Eddy, Matthew T; Ruben, David; Griffin, Robert G; Herzfeld, Judith
2012-01-01
We show that a simple, general, and easily reproducible method for generating non-uniform sampling (NUS) schedules preserves the benefits of random sampling, including inherently reduced sampling artifacts, while removing the pitfalls associated with choosing an arbitrary seed. Sampling schedules are generated from a discrete cumulative distribution function (CDF) that closely fits the continuous CDF of the desired probability density function. We compare random and deterministic sampling using a Gaussian probability density function applied to 2D HSQC spectra. Data are processed using the previously published method of Spectroscopy by Integration of Frequency and Time domain data (SIFT). NUS spectra from deterministic sampling schedules were found to be at least as good as those from random schedules at the SIFT critical sampling density, and significantly better at half that sampling density. The method can be applied to any probability density function and generalized to greater than two dimensions. Copyright © 2011 Elsevier Inc. All rights reserved.
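A minimal sketch of the schedule-generation idea described above, assuming a 1D grid and a Gaussian density; the function names and parameter values are illustrative, not those of the published method:

```python
import math

def deterministic_schedule(n_points, n_samples, sigma):
    """Deterministically pick n_samples of n_points grid indices so that the
    discrete CDF of the schedule tracks the CDF of a half-Gaussian density."""
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(n_points)]
    total = sum(w)
    cdf, acc = [], 0.0
    for x in w:
        acc += x
        cdf.append(acc / total)
    schedule, i = [], 0
    for j in range(n_samples):
        target = (j + 0.5) / n_samples          # mid-quantile targets
        while i < n_points - 1 and cdf[i] < target:
            i += 1
        schedule.append(min(i, n_points - 1))   # clamp at the grid edge
        i += 1                                  # keep indices unique and increasing
    return schedule

# 16 of 64 increments: indices cluster at low values, where the density is largest.
sched = deterministic_schedule(64, 16, 24.0)
```

Because the schedule is a pure function of the density, rerunning it reproduces the same indices every time, which is the reproducibility advantage over seeding a random number generator that the abstract emphasizes.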
Applications of the 3-D Deterministic Transport Code Attila for Core Safety Analysis
D. S. Lucas
2004-10-01
An LDRD (Laboratory Directed Research and Development) project is ongoing at the Idaho National Engineering and Environmental Laboratory (INEEL) for applying the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the model development, capabilities of Attila, generation of the cross-section libraries, and comparisons to an ATR MCNP model and future work.
Applications of the 3-D Deterministic Transport Code Attila® for Core Safety Analysis
Lucas, D.S.; Gougar, D.; Roth, P.A.; Wareing, T.; Failla, G.; McGhee, J.; Barnett, A.
2004-10-06
An LDRD (Laboratory Directed Research and Development) project is ongoing at the Idaho National Engineering and Environmental Laboratory (INEEL) for applying the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the model development, capabilities of Attila, generation of the cross-section libraries, and comparisons to an ATR MCNP model and future work.
NASA Astrophysics Data System (ADS)
Chen, Geng; Xu, Jin-Shi; Li, Chuan-Feng; Gong, Ming; Chen, Lei; Guo, Guang-Can
2009-08-01
According to Nielsen's theorem [Phys. Rev. Lett. 83 (1999) 436] and as a proof of principle, we demonstrate the deterministic transformation from a maximum entangled state to an arbitrary nonmaximum entangled pure state with local operation and classical communication in an optical system. The output states are verified with a quantum tomography process. We further test the violation of Bell-like inequality to demonstrate the quantum nonlocality of the state we generated. Our results may be useful in quantum information processing.
On Transform Domain Communication Systems under Spectrum Sensing Mismatch: A Deterministic Analysis.
Jin, Chuanxue; Hu, Su; Huang, Yixuan; Luo, Qu; Huang, Dan; Li, Yi; Gao, Yuan; Cheng, Shaochi
2017-07-08
Towards the era of the mobile Internet and the Internet of Things (IoT), numerous sensors and devices are being introduced and interconnected. To support such an amount of data traffic, traditional wireless communication technologies are facing challenges both in terms of the increasing shortage of spectrum resources and massive multiple access. The transform-domain communication system (TDCS) is considered as an alternative multiple access system, with 5G and the mobile IoT as the main focus. However, previous studies of TDCS assume that the transceiver has global spectrum information, without consideration of spectrum sensing mismatch (SSM). In this paper, we present a deterministic analysis of TDCS systems under arbitrary given spectrum sensing scenarios, especially the influence of the SSM pattern on the signal-to-noise ratio (SNR) performance. Simulation results show that an arbitrary SSM pattern can lead to inferior bit error rate (BER) performance.
Deterministic transfer function for transionospheric propagation
NASA Astrophysics Data System (ADS)
Roussel-Dupre, R.; Argo, P.
Recent interest in ground-to-satellite propagation of broadband signals has prompted investigation into the development of a transfer function for the ionosphere that includes effects such as dispersion, refraction, changes in polarization, reflection, absorption, and scattering. Depending on the application (e.g. geolocation), it may be necessary to incorporate all of these processes in order to extract the information of interest from the measured transionospheric signal. A transfer function for midlatitudes at VHF from 25-175 MHz is one of the goals of the BLACKBEARD program in characterizing propagation distortion. In support of this program we discuss in this paper an analytic model for the deterministic transfer function of the ionosphere that includes the effects of dispersion, refraction, and changes in polarization to second order in the parameter X = ω_pe²/ω², where X is assumed to be small compared to one, ω_pe is the peak plasma frequency of the ionosphere, and ω is the wave frequency. Analytic expressions for the total phase change, group delay, and polarization change in a spherical geometry assuming a radial electron density profile are presented. A computer code, ITF (Ionospheric Transfer Function), that makes use of the ICED (Ionospheric Conductivity and Electron Density) model to generate electron density profiles was developed to calculate the ionospheric transfer function along a specified transmitter-to-receiver path. Details of this code are presented, as well as comparisons between ITF analytic results and ray-tracing calculations.
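For orientation, the small parameter in the abstract is X = ω_pe²/ω². To first order in X, the standard textbook expansion (not taken from the paper itself) gives the phase refractive index and the excess group delay that underlie such a transfer function:

```latex
% First-order (in X) ionospheric dispersion, X = \omega_{pe}^2/\omega^2 \ll 1
n_\phi \simeq 1 - \frac{X}{2} = 1 - \frac{\omega_{pe}^2}{2\omega^2},
\qquad
\Delta\tau_g \simeq \frac{1}{2c}\int X \,\mathrm{d}s
  = \frac{e^2}{2\,\varepsilon_0 m_e c\,\omega^2}\int N_e \,\mathrm{d}s
  \;\propto\; \frac{\mathrm{TEC}}{f^2},
```

where TEC is the total electron content along the path; the paper's second-order treatment adds corrections in X² beyond this leading dispersion term.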
Deterministically Driven Avalanche Models of Solar Flares
NASA Astrophysics Data System (ADS)
Strugarek, Antoine; Charbonneau, Paul; Joseph, Richard; Pirot, Dorian
2014-08-01
We develop and discuss the properties of a new class of lattice-based avalanche models of solar flares. These models are readily amenable to a relatively unambiguous physical interpretation in terms of slow twisting of a coronal loop. They share similarities with other avalanche models, such as the classical stick-slip self-organized critical model of earthquakes, in that they are driven globally by a fully deterministic energy-loading process. The model design leads to a systematic deficit of small-scale avalanches. In some portions of model space, mid-size and large avalanching behavior is scale-free, being characterized by event size distributions that have the form of power-laws with index values, which, in some parameter regimes, compare favorably to those inferred from solar EUV and X-ray flare data. For models using conservative or near-conservative redistribution rules, a population of large, quasiperiodic avalanches can also appear. Although without direct counterparts in the observational global statistics of flare energy release, this latter behavior may be relevant to recurrent flaring in individual coronal loops. This class of models could provide a basis for the prediction of large solar flares.
Quality control in a deterministic manufacturing environment
Barkman, W.E.; Babelay, E.F.; De Mint, P.D.; Lewis, J.C.; Woodard, L.M.
1985-01-24
An approach for establishing quality control in processes which exhibit undesired continual or intermittent excursions in key process parameters is discussed. The method is called deterministic manufacturing, and it is designed to employ automatic monitoring of the key process variables for process certification, but utilizes only sample certification of the process output to verify the validity of the measurement process. The system utilizes a local minicomputer to sample the appropriate process parameters that describe the condition of the machine tool, the cutting process, and the computer numerical control system. Sampled data are pre-processed by the minicomputer and then sent to a host computer that maintains a permanent data base describing the manufacturing conditions for each work piece. Parts are accepted if the various parameters remain within the required limits during the machining cycle. The need for additional actions is flagged if limits are exceeded. With this system it is possible to retrospectively examine the process status just prior to the occurrence of a problem. (LEW)
Analysis of pinching in deterministic particle separation
NASA Astrophysics Data System (ADS)
Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German
2011-11-01
We investigate the problem of spherical particles settling under gravity parallel to the Y-axis through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately; (2) computationally, using the lattice Boltzmann method for particulate systems; and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle center and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence does. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.
Electromagnetic field enhancement and light localization in deterministic aperiodic nanostructures
NASA Astrophysics Data System (ADS)
Gopinath, Ashwin
The control of light-matter interaction in periodic and random media has been investigated in depth during the last few decades, yet structures with a controlled degree of disorder, such as Deterministic Aperiodic Nano Structures (DANS), have been relatively unexplored. DANS are characterized by non-periodic yet long-range correlated (deterministic) morphologies and can be generated by the mathematical rules of symbolic dynamics and number theory. In this thesis, I have experimentally investigated the unique light transport and localization properties of planar dielectric and metallic (plasmonic) DANS. In particular, I have focused on the design, nanofabrication and optical characterization of DANS formed by arranging metal/dielectric nanoparticles in an aperiodic lattice. This effort is directed towards the development of on-chip nanophotonic applications, with emphasis on label-free bio-sensing and enhanced light emission. The DANS designed as Surface Enhanced Raman Scattering (SERS) substrates are composed of multi-scale aperiodic nanoparticle arrays fabricated by e-beam lithography and are capable of reproducibly demonstrating enhancement factors as high as ~10^7. Further improvement of SERS efficiency is achieved by combining DANS formed by a top-down approach with bottom-up reduction of gold nanoparticles to fabricate novel nanostructures called plasmonic "nano-galaxies", which increase the SERS enhancement factors by 2-3 orders of magnitude while preserving the reproducibility. In this thesis, along with presenting details of the fabrication and SERS characterization of these "rationally designed" SERS substrates, I also present results on using these substrates for the detection of DNA nucleobases, as well as reproducible label-free detection of pathogenic bacteria with species specificity. In addition to biochemical detection, the combination of broadband light scattering behavior and the ability to generate reproducible high fields in DANS make these
Software for Simulating Air Traffic
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Bilimoria, Karl; Grabbe, Shon; Chatterji, Gano; Sheth, Kapil; Mulfinger, Daniel
2006-01-01
Future Air Traffic Management Concepts Evaluation Tool (FACET) is a system of software for performing computational simulations to evaluate advanced concepts of air-traffic management. FACET includes a program that generates a graphical user interface, plus programs and databases that implement computational models of weather, airspace, airports, navigation aids, aircraft performance, and aircraft trajectories. Examples of concepts studied by use of FACET include aircraft self-separation for free flight; prediction of air-traffic-controller workload; decision support for direct routing; integration of spacecraft-launch operations into the U.S. national airspace system; and traffic-flow management using rerouting, metering, and ground delays. Aircraft can be modeled as flying along either flight-plan routes or great-circle routes as they climb, cruise, and descend according to their individual performance models. The FACET software is modular and is written in the Java and C programming languages. The architecture of FACET strikes a balance between flexibility and fidelity; as a consequence, FACET can be used to model systemwide airspace operations over the contiguous U.S., involving as many as 10,000 aircraft, all on a single desktop or laptop computer running any of a variety of operating systems. Two notable applications of FACET include: (1) reroute-conformance-monitoring algorithms that have been implemented in one of the Federal Aviation Administration's nationally deployed, real-time, operational systems; and (2) the licensing and integration of FACET with the commercially available Flight Explorer, which is an Internet-based, real-time flight-tracking system.
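The abstract notes that aircraft can be modeled as flying great-circle routes. As a rough illustration of that geometry (FACET itself is written in Java and C; the function name, mean Earth radius, and coordinates below are my own assumptions, not FACET code), the haversine formula gives the great-circle distance between two waypoints:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two points (degrees in, km out)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Example: approximate San Francisco to New York coordinates,
# giving a distance on the order of 4000 km.
d = great_circle_km(37.62, -122.38, 40.64, -73.78)
```

A trajectory model would sample intermediate points along this arc while applying the aircraft's climb, cruise, and descent performance.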
Stochastic and Deterministic Assembly Processes in Subsurface Microbial Communities
Stegen, James C.; Lin, Xueju; Konopka, Allan; Fredrickson, Jim K.
2012-03-29
A major goal of microbial community ecology is to understand the forces that structure community composition. Deterministic selection by specific environmental factors is sometimes important, but in other cases stochastic or ecologically neutral processes dominate. What is lacking is a unified conceptual framework for understanding why deterministic processes dominate in some contexts but not others. Here we work towards such a framework. By testing predictions derived from general ecological theory, we aim to uncover factors that govern the relative influences of deterministic and stochastic processes. We couple spatiotemporal data on subsurface microbial communities and environmental parameters with metrics and null models of within- and between-community phylogenetic composition. Testing for phylogenetic signal in organismal niches showed that more closely related taxa have more similar habitat associations. Community phylogenetic analyses further showed that ecologically similar taxa coexist to a greater degree than expected by chance. Environmental filtering thus deterministically governs subsurface microbial community composition. More importantly, the influence of deterministic environmental filtering relative to stochastic factors was maximized at both ends of an environmental variation gradient. A stronger role of stochastic factors was, however, supported through analyses of phylogenetic temporal turnover. While phylogenetic turnover was on average faster than expected, most pairwise comparisons were not themselves significantly non-random. The relative influence of deterministic environmental filtering over community dynamics was elevated, however, in the most temporally and spatially variable environments. Our results point to general rules governing the relative influences of stochastic and deterministic processes across micro- and macro-organisms.
Al-Shargabi, Mohammed A; Shaikh, Asadullah; Ismail, Abdulsamad S
2016-01-01
Optical burst switching (OBS) networks have been attracting much consideration as a promising approach to build the next-generation optical Internet. A solution for enhancing the Quality of Service (QoS) of high-priority real-time traffic over OBS, with fairness among the traffic types, is absent from current OBS QoS schemes. In this paper we present a novel Real Time Quality of Service with Fairness Ratio (RT-QoSFR) scheme that adapts the burst assembly parameters according to the traffic QoS needs, in order to meet the QoS requirements of real-time traffic while ensuring fairness for other traffic. The results show that the RT-QoSFR scheme is able to fulfill the real-time traffic requirements (end-to-end delay and loss rate) while ensuring fairness for other traffic types under various conditions, such as the type of real-time traffic and the traffic load. RT-QoSFR can guarantee that the delay of real-time traffic packets does not exceed the maximum packet transfer delay value. Furthermore, it can reduce real-time traffic packet loss and at the same time guarantee fairness for non-real-time traffic packets by setting the ratio of real-time traffic inside the burst to 50-60%, 30-40%, and 10-20% for high, normal, and low traffic loads, respectively.
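The load-dependent real-time share of a burst reported above can be sketched as a simple lookup. Only the percentage ranges come from the abstract; the function names and the midpoint rule are illustrative assumptions, not the paper's algorithm:

```python
def realtime_ratio_bounds(load):
    """Real-time share of a burst by traffic-load class, as reported
    in the abstract (the ranges are the paper's, the function is mine)."""
    bounds = {"high": (0.50, 0.60), "normal": (0.30, 0.40), "low": (0.10, 0.20)}
    return bounds[load]

def realtime_packets_in_burst(burst_size, load):
    """Target number of real-time packets in a burst, taking the
    midpoint of the allowed range as the target share."""
    lo, hi = realtime_ratio_bounds(load)
    return round(burst_size * (lo + hi) / 2)
```

For a 100-packet burst this yields 55, 35, and 15 real-time packets at high, normal, and low load, the midpoints of the reported ranges.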
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V. [Oak Ridge, TN]; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY]; Merkulov, Vladimir I. [Knoxville, TN]; Doktycz, Mitchel J. [Knoxville, TN]; Lowndes, Douglas H. [Knoxville, TN]; Simpson, Michael L. [Knoxville, TN]
2011-05-17
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.
Surface plasmon field enhancements in deterministic aperiodic structures.
Shugayev, Roman
2010-11-22
In this paper we analyze the optical properties and plasmonic field enhancements of large aperiodic nanostructures. We introduce an extension of the Generalized Ohm's Law approach to estimate the electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate, and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to those of random morphologies while offering better understanding of field localizations and improved substrate design controllability. Generalized Ohm's Law results for deterministic aperiodic structures are in good agreement with simulations obtained using the discrete dipole method.
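Two of the deterministic aperiodic sequences named above are easy to generate from their standard defining rules; a minimal sketch (the function names are mine, the substitution and sign rules are the textbook definitions):

```python
def fibonacci_word(iterations):
    """Fibonacci word via the substitution A -> AB, B -> A;
    its length after n iterations is a Fibonacci number."""
    w = "A"
    for _ in range(iterations):
        w = "".join("AB" if c == "A" else "A" for c in w)
    return w

def rudin_shapiro(n):
    """Rudin-Shapiro sign of n: (-1)**(number of adjacent '11'
    pairs, counted with overlap, in the binary expansion of n)."""
    b = bin(n)[2:]
    pairs = sum(1 for i in range(len(b) - 1) if b[i] == b[i + 1] == "1")
    return (-1) ** pairs
```

Mapping each symbol (or sign) to the presence or absence of a nanoparticle site yields a one-dimensional aperiodic lattice of the kind studied in the paper.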
A Statistical Deterministic Approach to Hurricane Risk Assessment.
NASA Astrophysics Data System (ADS)
Emanuel, Kerry; Ravela, Sai; Vivant, Emmanuel; Risi, Camille
2006-03-01
Hurricanes are lethal and costly phenomena, and it is therefore of great importance to assess the long-term risk they pose to society. Among the greatest threats are those associated with high winds and related phenomena, such as storm surges. Here we assess the probability that hurricane winds will affect any given point in space by combining an estimate of the probability that a hurricane will pass within some given radius of the point in question with an estimate of the spatial probability density of storm winds. To assess the probability that storms will pass close enough to a point of interest to affect it, we apply two largely independent techniques for generating large numbers of synthetic hurricane tracks. The first treats each track as a Markov chain, using statistics derived from observed hurricane-track data. The second technique begins by generating a large class of synthetic, time-varying wind fields at 850 and 250 hPa whose variance, covariance, and monthly means match NCEP-NCAR reanalysis data and whose kinetic energy follows an ω^-3 geostrophic turbulence spectral frequency distribution. Hurricanes are assumed to move with a weighted mean of the 850- and 250-hPa flow plus a “beta drift” correction, after originating at points determined from historical genesis data. The statistical characteristics of tracks generated by these two means are compared. For a given point in space, many (10^4) synthetic tracks are generated that pass within a specified distance of a point of interest, using both track-generation methods. For each of these tracks, a deterministic, coupled, numerical simulation of the storm's intensity is carried out, using monthly mean upper-ocean and potential intensity climatologies, together with time-varying vertical wind shear generated from the synthetic time series of 850- and 250-hPa winds, as described above. For the case in which the tracks are generated using the synthetic environmental flow, the tracks and the shear are
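The first track-generation method treats each track as a Markov chain with statistics derived from observed tracks. A toy sketch of that idea (the drift and noise parameters here are invented placeholders, standing in for the observationally derived statistics the paper uses):

```python
import random

def synthetic_track(start, steps, drift, sigma, seed=None):
    """Toy Markov-chain track: each displacement depends only on the
    previous displacement plus Gaussian noise (persistence). 'drift'
    and 'sigma' stand in for statistics that would be estimated from
    observed hurricane-track data; the values are illustrative."""
    rng = random.Random(seed)
    lat, lon = start
    track = [(lat, lon)]
    dlat, dlon = drift
    for _ in range(steps):
        # persistence: new displacement = previous displacement + perturbation
        dlat += rng.gauss(0.0, sigma)
        dlon += rng.gauss(0.0, sigma)
        lat += dlat
        lon += dlon
        track.append((lat, lon))
    return track
```

With sigma set to zero the track degenerates to a straight drift, which makes the persistence structure easy to check; realistic use would fit drift and noise from historical best-track data.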
NASA Technical Reports Server (NTRS)
1997-01-01
The high-level requirement of the Air Traffic Network (ATN) project is to provide a mechanism for evaluating the impact of router-scheduling modifications on a network's efficiency, without implementing the modifications in the live network.
Reproducible and deterministic production of aspheres
NASA Astrophysics Data System (ADS)
Leitz, Ernst Michael; Stroh, Carsten; Schwalb, Fabian
2015-10-01
Aspheric lenses are ground in a single-point cutting mode. Subsequently, different iterative polishing methods are applied, followed by aberration measurements on external metrology instruments. For economical production, metrology and correction steps need to be reduced; more deterministic grinding and polishing is mandatory. Single-point grinding is a path-controlled process. The quality of a ground asphere is mainly influenced by the accuracy of the machine. Machine improvements must therefore focus on path accuracy and thermal expansion. Optimized design, materials and thermal management reduce thermal expansion. The path accuracy can be improved using ISO 230-2 standardized measurements: repeated interferometric measurements over the total travel of all CNC axes in both directions are recorded, and position deviations evaluated in correction tables improve the path accuracy and with it that of the ground surface. Aspheric polishing using a sub-aperture flexible polishing tool is a dwell-time-controlled process. For plano and spherical polishing, the amount of material removed during polishing is proportional to pressure, relative velocity and time (Preston). For the use of flexible tools on aspheres or freeform surfaces, additional non-linear components are necessary. Satisloh ADAPT calculates a predicted removal function from lens geometry, tool geometry and process parameters with FEM. Additionally, the tool's local removal characteristic is determined in a simple test: by oscillating the tool on a plano or spherical sample of the same lens material, a trench is created, and its 3-D profile is measured to calibrate the removal simulation. Remaining aberrations of the desired lens shape can thus be predicted, reducing iteration and metrology steps.
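The Preston relation invoked above says removal depth is proportional to pressure, relative velocity, and dwell time. A minimal sketch, with a placeholder Preston coefficient (the real value is process- and material-specific and would be calibrated from the trench test described in the abstract):

```python
def preston_removal_nm(pressure_pa, rel_velocity_mps, dwell_time_s, k_p=1e-13):
    """Material removal depth via Preston's relation dh = k_p * p * v * t.
    k_p (Preston coefficient, m^2/N) is process-specific; the default here
    is a placeholder, not a measured value. Returns removal in nanometres."""
    dh_m = k_p * pressure_pa * rel_velocity_mps * dwell_time_s
    return dh_m * 1e9

# Linearity in dwell time is what makes the process dwell-time controlled:
# doubling the dwell time doubles the removal.
```

In a dwell-time-controlled correction run, this relation is inverted: the desired local removal map is divided by k_p · p · v to obtain the dwell-time map for the tool path.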
Deterministic transfer function for transionospheric propagation
Roussel-Dupre, R.; Argo, P.
1992-01-01
Recent interest in ground-to-satellite propagation of broadband signals has prompted investigation into the development of a transfer function for the ionosphere that includes effects such as dispersion, refraction, changes in polarization, reflection, absorption, and scattering. Depending on the application (e.g. geolocation), it may be necessary to incorporate all of these processes in order to extract the information of interest from the measured transionospheric signal. A transfer function for midlatitudes at VHF from 25--175 MHz is one of the goals of the BLACKBEARD program in characterizing propagation distortion. In support of this program, we discuss in this paper an analytic model for the deterministic transfer function of the ionosphere that includes the effects of dispersion, refraction, and changes in polarization to second order in the parameter X = ω_pe^2/ω^2, where X is assumed to be small compared to one, ω_pe is the peak plasma frequency of the ionosphere, and ω is the wave frequency. Analytic expressions for the total phase change, group delay, and polarization change in a spherical geometry assuming a radial electron density profile are presented. A computer code, ITF (Ionospheric Transfer Function), that makes use of the ICED (Ionospheric Conductivity and Electron Density) model to generate electron density profiles was developed to calculate the ionospheric transfer function along a specified transmitter-to-receiver path. Details of this code will be presented, as well as comparisons between ITF analytic results and ray-tracing calculations.
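The leading dispersive effect in the small-X regime is an excess group delay proportional to the total electron content (TEC) and inversely proportional to frequency squared. The sketch below uses the standard first-order formula (the paper itself keeps terms to second order in X; this stopping point and the function name are my simplifications):

```python
def ionospheric_group_delay_ns(tec_el_per_m2, freq_hz):
    """First-order ionospheric excess group delay:
    dt = 40.3 * TEC / (c * f^2), returned in nanoseconds.
    (The paper retains second-order terms in X = w_pe^2/w^2;
    this is only the usual leading-order approximation.)"""
    c = 299792458.0  # speed of light, m/s
    return 40.3 * tec_el_per_m2 / (c * freq_hz ** 2) * 1e9
```

The 1/f^2 scaling means halving the frequency quadruples the delay, which is why dispersion dominates distortion at the low end of the 25-175 MHz band.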
Understanding Vertical Jump Potentiation: A Deterministic Model.
Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L
2016-06-01
This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. A number of factors must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should seek SPPCs that are practical regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take the athlete's training time restrictions into account, as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that would take away from valuable training time. Practitioners may decrease the time needed to realize potentiation by improving their subjects' relative strength.
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is an increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure-probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed not to be feasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to provide a much better fit of the data. Moreover, it delivers a lower threshold value, that is, a minimum value for breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations taken from the theory of fracture mechanics, proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable including fatigue due to stress corrosion in a straightforward way. With the formulae derived, either lifetime can be calculated from given stress, or allowable stress from minimum required lifetime. The data, distributions, and design-strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution.
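The key feature of the three-parameter Weibull distribution described above is the threshold stress below which the failure probability is exactly zero. A minimal sketch of the CDF (the parameter values in the comments and tests are placeholders, not measured ZERODUR data):

```python
import math

def weibull3_failure_probability(stress, threshold, scale, shape):
    """Three-parameter Weibull CDF: F = 1 - exp(-((s - s0)/eta)^m)
    for s > s0, and F = 0 for s <= s0. The hard zero below the
    threshold s0 is what turns strength design deterministic.
    All parameters here are illustrative placeholders."""
    if stress <= threshold:
        return 0.0
    return 1.0 - math.exp(-(((stress - threshold) / scale) ** shape))
```

With a two-parameter fit (threshold = 0), extrapolating to 0.1% failure probability drives the design strength toward zero; the threshold term removes that pathology, since any stress held below it carries zero predicted failure probability.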
Deterministic versus stochastic trends: Detection and challenges
NASA Astrophysics Data System (ADS)
Fatichi, S.; Barbosa, S. M.; Caporali, E.; Silva, M. E.
2009-09-01
The detection of a trend in a time series and the evaluation of its magnitude and statistical significance is an important task in geophysical research. This importance is amplified in climate change contexts, since trends are often used to characterize long-term climate variability and to quantify the magnitude and the statistical significance of changes in climate time series, both at global and local scales. Recent studies have demonstrated that the stochastic behavior of a time series can change the statistical significance of a trend, especially if the time series exhibits long-range dependence. The present study examines the trends in time series of daily average temperature recorded at 26 stations in the Tuscany region (Italy). In this study a new framework for trend detection is proposed. First, two parametric statistical tests, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, are applied in order to test for trend-stationary and difference-stationary behavior in the temperature time series. Then long-range dependence is assessed using different approaches, including wavelet analysis, heuristic methods, and the fitting of fractionally integrated autoregressive moving average models. The trend-detection results are further compared with the results obtained using nonparametric methods: the Mann-Kendall, Cox-Stuart and Spearman's ρ tests. This study confirms an increase in uncertainty when pronounced stochastic behaviors are present in the data. Nevertheless, for approximately one third of the analyzed records, the stochastic behavior itself cannot explain the long-term features of the time series, and a deterministic positive trend is the most likely explanation.
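Of the nonparametric tests named above, the Mann-Kendall statistic is the simplest to state: count the increasing later-minus-earlier pairs and subtract the decreasing ones. A minimal sketch of the S statistic only (a full test would also need the variance of S and a normal approximation for significance, omitted here):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: number of pairs (i < j) with
    series[j] > series[i], minus the number with series[j] < series[i].
    Large positive S suggests an upward trend; significance testing
    (variance of S, ties correction) is deliberately omitted."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s
```

For a series of length n the extreme values are ±n(n-1)/2, attained by strictly monotone series; under long-range dependence the usual variance formula for S understates uncertainty, which is the paper's central caution.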
Benke, G.; Brandt, J.; Chen, H.; Dastangoo, S.; Miller, G.J.
1996-05-01
Recent empirical studies of traffic measurements of packet-switched networks have demonstrated that actual network traffic is self-similar, or long-range dependent, in nature. That is, the measured traffic is bursty over a wide range of time intervals. Furthermore, the emergence of high-speed network backbones demands the study of accurate models of aggregated traffic to assess network performance. This paper provides a method for generation of self-similar traffic, which can be used to drive network simulation models. The authors present the results of a simulation study of a two-node ATM network configuration that supports the ATM Forum's Available Bit Rate (ABR) service. In this study, the authors compare the state of the queue at the source router at the edge of the ATM network under both Poisson and self-similar traffic loading. These findings indicate an order-of-magnitude increase in queue length for self-similar traffic loading as compared to Poisson loading. Moreover, when background VBR traffic is present, self-similar ABR traffic causes more congestion at the ATM switches than does Poisson traffic.
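A standard way to synthesize self-similar traffic, which may or may not match the paper's generator, is to aggregate many ON/OFF sources whose ON and OFF durations are heavy-tailed (Pareto with 1 < α < 2); a hedged sketch:

```python
import random

def pareto_on_off_traffic(n_sources, n_slots, alpha=1.4, seed=0):
    """Aggregate ON/OFF sources whose ON and OFF durations are Pareto
    distributed (infinite variance for 1 < alpha < 2) -- a standard
    construction for long-range-dependent traffic, used here as an
    illustration rather than the paper's exact method. Returns the
    per-slot count of sources that are ON (e.g. cells per slot)."""
    rng = random.Random(seed)
    load = [0] * n_slots
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5  # random initial phase
        while t < n_slots:
            dur = max(1, int(rng.paretovariate(alpha)))  # heavy-tailed duration
            if on:
                for s in range(t, min(t + dur, n_slots)):
                    load[s] += 1
            t += dur
            on = not on
    return load
```

Feeding such a trace into a queue model, instead of a Poisson arrival stream, reproduces the much longer queue lengths the study reports.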
A superstatistical model of vehicular traffic flow
NASA Astrophysics Data System (ADS)
Kosun, Caglar; Ozdemir, Serhan
2016-02-01
In the analysis of vehicular traffic flow, a myriad of techniques have been implemented. In this study, superstatistics is used in modeling the traffic flow on a highway segment. Traffic variables such as vehicular speeds, volume, and headway were collected for three days. For the superstatistical approach, at least two distinct time scales must exist, so that the assumption of a superposition of nonequilibrium systems can hold. When the slow dynamics of the vehicle speeds exhibit a Gaussian distribution in between the fluctuations of the system at large, one speaks of a relaxation to a local equilibrium. These Gaussian distributions are found with corresponding standard deviations 1/√β. This translates into a series of fluctuating beta values, hence the statistics of statistics: superstatistics. The traffic flow model has generated an inverse-temperature parameter (beta) distribution as well as the speed distribution. This beta distribution has shown that the fluctuations in beta follow a chi-square distribution. It must be mentioned that two distinct Tsallis q values are specified: one is time-dependent and the other is independent. A ramification of these q values is that the highway segment and the traffic flow exhibit separate characteristics. The highway segment in question is not only nonadditive in nature but also a nonequilibrium driven system, with frequent relaxations to a Gaussian.
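The superstatistical picture above, locally Gaussian speeds whose inverse temperature β itself fluctuates with a chi-square distribution, can be sampled directly; a sketch under stated assumptions (the degrees of freedom, mean β, and sample size are illustrative, not fitted to the paper's traffic data):

```python
import random

def superstatistical_samples(n_samples, dof=4, beta0=1.0, seed=0):
    """Superstatistics sketch: draw beta as a scaled chi-square variate
    (sum of 'dof' squared standard Gaussians, scaled so its mean is
    beta0), then draw a locally Gaussian value with standard deviation
    1/sqrt(beta). The marginal over many draws is heavier-tailed than
    a Gaussian (a Tsallis q-Gaussian for chi-square-distributed beta)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        beta = beta0 / dof * sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(dof))
        out.append(rng.gauss(0.0, 1.0 / beta ** 0.5))
    return out
```

Comparing a histogram of such samples against a single Gaussian shows the fat tails that motivate the two q values reported for the highway data.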
Entrepreneurs, chance, and the deterministic concentration of wealth.
Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels.
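The mechanism described above, chance plus compounding returns, is easy to reproduce in a toy simulation; this sketch is my own simplification of the paper's individual-based model, and all parameter values are illustrative:

```python
import random

def simulate_wealth_shares(n_agents=100, n_years=200, mu=0.05, sigma=0.3, seed=0):
    """Toy version of the compounding-returns mechanism: every agent
    starts with equal wealth and compounds an i.i.d. random return each
    year (floored so wealth cannot go negative). Returns the share of
    total wealth held by the single richest agent at the end."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(n_years):
        wealth = [w * max(0.0, 1.0 + rng.gauss(mu, sigma)) for w in wealth]
    total = sum(wealth)
    return max(wealth) / total
```

With sigma = 0 every agent stays equal and the top share remains 1/n; with any return variance, the log-wealth spread grows with time and the top share drifts toward 1, the concentration effect the paper analyzes.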
A deterministic method for transient, three-dimensional neutron transport
NASA Astrophysics Data System (ADS)
Goluoglu, Sedat
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable is the improved quasi-static (IQS) method. The position, energy, and angle variables of the neutron flux are computed using the three-dimensional (3-D) discrete ordinates code TORT. The resulting time-dependent, 3-D code is called TDTORT. The flux shape calculated by TORT is used to compute the point kinetics parameters (e.g., reactivity, generation time, etc.). The amplitude function is calculated by solving the point kinetics equations using LSODE (Livermore Solver for Ordinary Differential Equations). Several transient 1-D, 2-D, and 3-D benchmark problems are used to verify TDTORT. The results show that the methodology and code developed in this work have sufficient accuracy and speed to serve as a benchmarking tool for other, less accurate models and codes. More importantly, a new computational tool based on transport theory now exists for analyzing the dynamic behavior of complex neutronic systems.
Deterministic quantum nonlinear optics with single atoms and virtual photons
NASA Astrophysics Data System (ADS)
Kockum, Anton Frisk; Miranowicz, Adam; Macrì, Vincenzo; Savasta, Salvatore; Nori, Franco
2017-06-01
We show how analogs of a large number of well-known nonlinear-optics phenomena can be realized with one or more two-level atoms coupled to one or more resonator modes. Through higher-order processes, where virtual photons are created and annihilated, an effective deterministic coupling between two states of such a system can be created. In this way, analogs of three-wave mixing, four-wave mixing, higher-harmonic and -subharmonic generation (i.e., up- and down-conversion), multiphoton absorption, parametric amplification, Raman and hyper-Raman scattering, the Kerr effect, and other nonlinear processes can be realized. In contrast to most conventional implementations of nonlinear optics, these analogs can reach unit efficiency, only use a minimal number of photons (they do not require any strong external drive), and do not require more than two atomic levels. The strength of the effective coupling in our proposed setups becomes weaker the more intermediate transition steps are needed. However, given the recent experimental progress in ultrastrong light-matter coupling and improvement of coherence times for engineered quantum systems, especially in the field of circuit quantum electrodynamics, we estimate that many of these nonlinear-optics analogs can be realized with currently available technology.
Rapid detection of small oscillation faults via deterministic learning.
Wang, Cong; Chen, Tianrui
2011-08-01
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing the set of estimators with the monitored system under test, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest-residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated, and this knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
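The smallest-residual principle itself is simple to sketch. In the real scheme each mode's dynamics are stored in a constant RBF network; here, as a stand-in assumption, each mode is just a precomputed reference trace, and the mode names and data are invented:

```python
def classify_by_smallest_residual(signal, mode_bank):
    """Smallest-residual principle: compare the monitored signal against
    a bank of stored mode predictions and return the mode with the
    smallest average L1 residual, together with that residual.
    (The paper's estimators are constant RBF networks approximating the
    system dynamics; the reference traces here are a simplification.)"""
    best_mode, best_residual = None, float("inf")
    for name, reference in mode_bank.items():
        residual = sum(abs(s - r) for s, r in zip(signal, reference)) / len(signal)
        if residual < best_residual:
            best_mode, best_residual = name, residual
    return best_mode, best_residual

# Hypothetical bank: a "normal" trace and one small-fault trace.
bank = {"normal": [0.0, 0.1, 0.0, -0.1], "fault_1": [0.0, 0.5, 0.0, -0.5]}
```

A monitored signal closer to the fault trace than to the normal trace is flagged as that fault, which is how the bank of estimators yields rapid detection.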
Estimating interdependences in networks of weakly coupled deterministic systems
NASA Astrophysics Data System (ADS)
de Feo, Oscar; Carmeli, Cristian
2008-02-01
The extraction of information from measured data about the interactions taking place in a network of systems is a key topic in modern applied sciences. This topic has been traditionally addressed by considering bivariate time series, providing methods which are sometimes difficult to extend to multivariate data, the limiting factor being the computational complexity. Here, we present a computationally viable method based on black-box modeling which, while theoretically applicable only when a deterministic hypothesis about the processes behind the recordings is plausible, proves to work even when this assumption is severely violated. Conceptually, the method is very simple and is composed of three independent steps: in the first step, a state-space reconstruction is performed separately on each measured signal; in the second step, a local model, i.e., a nonlinear dynamical system, is fitted separately on each (reconstructed) measured signal; in the third step, a linear model of the dynamical interactions is obtained by cross-relating the (reconstructed) measured variables to the dynamics unexplained by the local models. The method is successfully validated on numerically generated data. An assessment of its sensitivity to data length and modeling and measurement noise intensity, and of its applicability to large-scale systems, is also provided.
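The first step, state-space reconstruction, is the standard delay-coordinate embedding; a minimal sketch (in practice dim and tau would be chosen by standard criteria such as false nearest neighbours and mutual information):

```python
def delay_embed(x, dim, tau):
    """Delay-coordinate reconstruction of a scalar signal x:
    each state vector is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return [[x[i + j * tau] for j in range(dim)] for i in range(n)]

# Example: embedding a short ramp signal in 3 dimensions with lag 2.
states = delay_embed(list(range(10)), dim=3, tau=2)
```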
NASA Astrophysics Data System (ADS)
He, Chong; Chiam, Keng-Hwee; Chew, Lock Yue
2016-10-01
Ultradian cycles are frequently observed in biological systems. They serve important roles in regulating, for example, cell fate and the development of the organism. Many mathematical models have been developed to analyze their behavior. Generally, these models can be classified into two classes: deterministic models that generate oscillatory behavior by incorporating time delays or Hopf bifurcations, and stochastic models that generate oscillatory behavior by noise-driven resonance. However, it is still unclear which of these two mechanisms applies to cellular oscillations. In this paper, we show through theoretical analysis and numerical simulation that we can distinguish which of these two mechanisms governs cellular oscillations by measuring statistics of oscillation amplitudes for cells of different sizes. We found that, for oscillations driven deterministically, the normalized average amplitude is constant with respect to cell size, while the coefficient of variation of the amplitude scales with cell size with an exponent of -0.5. On the other hand, for oscillations driven stochastically, the coefficient of variation of the amplitude is constant with respect to cell size, while the normalized average amplitude scales with cell size with an exponent of -0.5. Our results provide a theoretical basis to discern whether a particular oscillatory behavior is governed by a deterministic or stochastic mechanism.
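The proposed diagnostic reduces to two amplitude statistics and a log-log slope. The sketch below is ours: simulated Gaussian amplitudes whose fluctuations shrink like 1/sqrt(cell size) stand in for measured ones, and the fitted slope recovers the -0.5 exponent predicted for the deterministic case:

```python
import math
import random
import statistics

def cv(samples):
    """Coefficient of variation of a set of oscillation amplitudes."""
    return statistics.stdev(samples) / statistics.mean(samples)

def loglog_slope(sizes, values):
    """Least-squares slope of log(values) against log(sizes), used to
    read off the scaling exponent."""
    lx, ly = [math.log(s) for s in sizes], [math.log(v) for v in values]
    mx, my = statistics.mean(lx), statistics.mean(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

# Stand-in data for the deterministic case: amplitude fluctuations shrink
# like size**-0.5, so the CV of the amplitude scales as size**-0.5.
rng = random.Random(42)
sizes = [100, 400, 1600, 6400]
cvs = [cv([rng.gauss(1.0, size ** -0.5) for _ in range(2000)]) for size in sizes]
slope = loglog_slope(sizes, cvs)
```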
Chang, T; Schiff, S J; Sauer, T; Gossard, J P; Burke, R E
1994-08-01
Long time series of monosynaptic Ia-afferent to alpha-motoneuron reflexes were recorded in the L7 or S1 ventral roots in the cat. Time series were collected before and after spinalization at T13 during constant amplitude stimulations of group Ia muscle afferents in the triceps surae muscle nerves. Autocorrelation analysis of the linear correlation in the time series demonstrated oscillations in the decerebrate state (4/4) that were eliminated after spinalization (5/5). Three tests for determinism were applied to these series: 1) local flow, 2) local dispersion, and 3) nonlinear prediction. These algorithms were validated with time series generated from known deterministic equations. For each experimental and theoretical time series used, matched time series of stochastic surrogate data were generated to serve as mathematical and statistical controls. Two of the time series collected in the decerebrate state (2/4) demonstrated evidence for deterministic structure. This structure could not be accounted for by the autocorrelation in the data, and was abolished following spinalization. None of the time series collected in the spinalized state (0/5) demonstrated evidence of determinism. Although monosynaptic reflex variability is generally stochastic in the spinalized state, this simple driven system may display deterministic behavior in the decerebrate state.
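The surrogate-control logic can be illustrated with a random-permutation surrogate and a crude nearest-neighbour prediction test (simpler stand-ins for the three algorithms used in the paper; all names and parameters are ours):

```python
import random

def shuffle_surrogate(x, rng):
    """Permutation surrogate: same amplitude distribution, but any
    temporal (possibly deterministic) structure is destroyed."""
    s = list(x)
    rng.shuffle(s)
    return s

def nn_prediction_error(x, dim=2):
    """Crude nonlinear-prediction test: predict the next value from the
    nearest neighbour in delay space; deterministic series predict better."""
    pts = [tuple(x[i:i + dim]) for i in range(len(x) - dim)]
    err, n = 0.0, len(pts)
    for i in range(n):
        # nearest neighbour in delay space, excluding the point itself
        j = min((k for k in range(n) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(pts[k], pts[i])))
        err += abs(x[j + dim] - x[i + dim])
    return err / n

# Deterministic series (logistic map) vs. its shuffled surrogate control.
series = [0.4]
for _ in range(199):
    series.append(4 * series[-1] * (1 - series[-1]))
rng = random.Random(7)
det_err = nn_prediction_error(series)
sur_err = nn_prediction_error(shuffle_surrogate(series, rng))
```

A deterministic series is predicted far better than its surrogate; comparable errors would indicate a stochastic process.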
The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...
Deterministic assisted clone of an unknown four-particle entangled cluster-type state
NASA Astrophysics Data System (ADS)
Ma, Song-Ya; Luo, Ming-Xing; Sun, Ying
2013-11-01
Two deterministic schemes are proposed to realize the assisted clone of an unknown four-particle entangled cluster-type state. The schemes include two stages. The first stage requires teleportation via maximal entanglement as the quantum channel. In the second stage of the protocols, two novel sets of mutually orthogonal basis vectors are constructed. With the assistance of the preparer through a four-particle or two-step two-particle projective measurement under these bases, a perfect copy of the original state can be produced. Compared with previous protocols, which produce the unknown state and its orthogonal complement state at the site of the sender, the proposed schemes generate the unknown state deterministically.
Deterministic coupling of delta-doped nitrogen vacancy centers to a nanobeam photonic crystal cavity
Lee, Jonathan C.; Cui, Shanying; Zhang, Xingyu; Russell, Kasey J.; Magyar, Andrew P.; Hu, Evelyn L.; Bracher, David O.; Ohno, Kenichi; McLellan, Claire A.; Alemán, Benjamin; Bleszynski Jayich, Ania; Andrich, Paolo; Awschalom, David; Aharonovich, Igor
2014-12-29
The negatively charged nitrogen vacancy center (NV) in diamond has generated significant interest as a platform for quantum information processing and sensing in the solid state. For most applications, high quality optical cavities are required to enhance the NV zero-phonon line (ZPL) emission. An outstanding challenge in maximizing the degree of NV-cavity coupling is the deterministic placement of NVs within the cavity. Here, we report photonic crystal nanobeam cavities coupled to NVs incorporated by a delta-doping technique that allows nanometer-scale vertical positioning of the emitters. We demonstrate cavities with Q up to ∼24 000 and mode volume V ∼ 0.47(λ/n)³ as well as resonant enhancement of the ZPL of an NV ensemble with a Purcell factor of ∼20. Our fabrication technique provides a first step towards deterministic NV-cavity coupling using spatial control of the emitters.
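For orientation, the ideal resonant Purcell factor implied by these figures of merit is F = (3/4π²)·(Q/V), with V in units of (λ/n)³. The large gap to the measured ∼20 enhancement is expected for an ensemble and for the small zero-phonon fraction of NV emission; this back-of-envelope reading is ours, not the paper's:

```python
import math

Q = 24_000   # measured quality factor
V = 0.47     # mode volume in units of (lambda/n)**3

# Ideal Purcell factor for a resonant, optimally placed and oriented emitter.
F_ideal = (3 / (4 * math.pi ** 2)) * Q / V
```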
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2011-08-23
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.
Structural deterministic safety factors selection criteria and verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
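The probabilistic concept behind a safety index can be illustrated for normally distributed resistive and applied stresses; the formula is the standard first-order reliability expression, and the numbers are hypothetical:

```python
import math

def safety_index(mu_r, sd_r, mu_s, sd_s):
    """Reliability (safety) index for normal resistive stress R and
    applied stress S: beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)."""
    return (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)

def failure_probability(beta):
    """P(R < S) = Phi(-beta) under the normal model."""
    return 0.5 * (1 + math.erf(-beta / math.sqrt(2)))

# Hypothetical stress distributions: resistive 100 +/- 8, applied 60 +/- 6.
beta = safety_index(100, 8, 60, 6)
p_fail = failure_probability(beta)
```

A safety index of 4 corresponds to a failure probability of about 3e-5, which is the kind of reliability statement the deterministic factors alone cannot make.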
Theory and Simulation for Traffic Characteristics on the Highway with a Slowdown Section
Xu, Dejie; Mao, Baohua; Rong, Yaping; Wei, Wei
2015-01-01
We study the traffic characteristics on a single-lane highway with a slowdown section using the deterministic cellular automaton (CA) model. Based on the theoretical analysis, the relationships among local mean densities, velocities, traffic fluxes, and global densities are derived. The results show that two critical densities exist in the evolutionary process of the traffic state, and they are significant demarcation points for the traffic phase transition. Furthermore, the changing laws of the two critical densities with different lengths of the slowdown section are also investigated. It is shown that only one critical density appears if the highway has no slowdown section; as the length of the slowdown section grows, this critical density separates into two; and if the entire highway is a slowdown section, they finally merge into one again. The contrastive analysis proves that the analytical results are consistent with the numerical ones. PMID:26089864
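A model of this kind can be sketched as a deterministic CA on a ring in which a locally reduced speed limit plays the role of the slowdown section (parameters below are illustrative, not those of the paper):

```python
def ca_step(pos, L, vmax_of):
    """One parallel update of a deterministic CA: each car's speed is the
    minimum of the local speed limit and the gap to the car ahead."""
    pos = sorted(pos)
    n = len(pos)
    new, moved = [], 0
    for i, x in enumerate(pos):
        gap = (pos[(i + 1) % n] - x - 1) % L   # empty cells ahead
        v = min(vmax_of(x), gap)
        new.append((x + v) % L)
        moved += v
    return new, moved

# Ring of 100 cells; cells 40-59 are the slowdown section (vmax 1 vs 3).
L = 100
vmax_of = lambda x: 1 if 40 <= x < 60 else 3
cars = list(range(0, L, 5))            # global density 0.2
for _ in range(200):                   # relax toward the steady state
    cars, _ = ca_step(cars, L, vmax_of)
total = 0
for _ in range(100):                   # measure the flux q = density * mean speed
    cars, moved = ca_step(cars, L, vmax_of)
    total += moved
flux = total / (100 * L)
```

Because the demand (0.2 x 3) exceeds the slowdown section's capacity of about 1/2, a queue forms upstream and the measured flux settles near that bottleneck capacity.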
The recursive deterministic perceptron neural network.
Tajine, Mohamed; Elizondo, David
1998-12-01
We introduce a feedforward multilayer neural network which is a generalization of the single layer perceptron topology (SLPT), called the recursive deterministic perceptron (RDP). This new model is capable of solving any two-class classification problem, as opposed to the single layer perceptron, which can only solve classification problems dealing with linearly separable (LS) sets (two subsets X and Y of R(d) are said to be linearly separable if there exists a hyperplane such that the elements of X and Y lie on the two opposite sides of R(d) delimited by this hyperplane). We propose several growing methods for constructing an RDP. These growing methods build an RDP by successively adding intermediate neurons (INs) to the topology (an IN corresponds to an SLPT). Thus, as a result, we obtain a multilayer perceptron topology, which, together with the weights, is determined automatically by the constructing algorithms. Each IN augments the affine dimension of the set of input vectors. This augmentation is done by adding the output of each of these INs, as a new component, to every input vector. A new IN is constructed by selecting a subset of the set of augmented input vectors which is LS from the rest of this set. This process ends with LS classes in at most n-1 steps, where n is the number of input vectors. For this construction, if we assume that the selected LS subsets are of maximum cardinality, the problem is proven to be NP-complete. We also introduce a generalization of the RDP model for the classification of m classes (m>2), which allows m classes to always be separated. This generalization is based on a new notion of linear separability for m classes, and it follows naturally from the RDP. This new model can be used to compute functions with a finite domain, and thus to approximate continuous functions. We have also compared, over several classification problems, the percentage of test data correctly classified and the topology of the 2- and m-class RDPs with that of
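The growing idea can be made concrete on the classic XOR problem; the IN weights below are hand-picked for illustration, whereas the paper's growing methods select them automatically:

```python
# XOR is not linearly separable, so a single-layer perceptron fails on it.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def intermediate_neuron(x, y):
    """An IN (itself an SLPT) separating the linearly separable subset
    {(1, 1)} from the remaining input vectors via x + y > 1.5."""
    return 1 if x + y > 1.5 else 0

def rdp_output(x, y):
    """Final SLPT on inputs augmented with the IN's output: in the
    augmented 3-D space the two XOR classes become linearly separable."""
    z = intermediate_neuron(x, y)
    return 1 if x + y - 3 * z - 0.5 > 0 else 0
```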
Single Ion Implantation and Deterministic Doping
Schenkel, Thomas
2010-06-11
The presence of single atoms, e.g. dopant atoms, in sub-100 nm scale electronic devices can affect the device characteristics, such as the threshold voltage of transistors, or the sub-threshold currents. Fluctuations of the number of dopant atoms thus pose a complication for transistor scaling. In a complementary view, new opportunities emerge when novel functionality can be implemented in devices deterministically doped with single atoms. The grand prize of the latter might be a large-scale quantum computer, where quantum bits (qubits) are encoded e.g. in the spin states of electrons and nuclei of single dopant atoms in silicon, or in color centers in diamond. Both the possible detrimental effects of dopant fluctuations and single atom device ideas motivate the development of reliable single atom doping techniques, which are the subject of this chapter. Single atom doping can be approached with top down and bottom up techniques. Top down refers to the placement of dopant atoms into a more or less structured matrix environment, like a transistor in silicon. Bottom up refers to approaches to introduce single dopant atoms during the growth of the host matrix, e.g. by directed self-assembly and scanning probe assisted lithography. Bottom up approaches are discussed in Chapter XYZ. Since the late 1960's, ion implantation has been a widely used technique to introduce dopant atoms into silicon and other materials in order to modify their electronic properties. It works particularly well in silicon since the damage to the crystal lattice that is induced by ion implantation can be repaired by thermal annealing. In addition, the introduced dopant atoms can be incorporated with high efficiency into lattice positions in the silicon host crystal, which makes them electrically active. This is not the case for e.g. diamond, which makes ion implantation doping to engineer the electrical properties of diamond, especially for n-type doping, much harder than for silicon. Ion
NASA Astrophysics Data System (ADS)
Schreckenberg, Michael
2002-03-01
In the past decade the investigation of the complex behaviour of traffic dynamics became an active field of (interdisciplinary) research. On the one hand this is due to a rapidly growing availability of 'experimental' data from measurements with various kinds of sensors, on the other hand due to an enormous improvement of the modelling techniques from statistical physics. This has led to the identification of several new phases of traffic flow and the characterization of the corresponding phase transitions between them. Nowadays many of the occurring dynamical phenomena are understood quite well, although a complete understanding, especially of the interrelation between the models on the different scales (micro-, meso-, macroscopic), is still missing. Whereas earlier attempts tried to describe traffic flow in a hydrodynamical formulation, the current microscopic models are able to take into account not only the physically correct motion of single cars but also certain aspects of the driver's behaviour. It turns out that simple car-following theories cannot explain the complex structures found, e.g., in synchronized traffic, a new state found only recently. Here a more detailed analysis is necessary, which goes far beyond the pure modelling of the motion of the cars in analogy to granular media (grains of sand, pills, corn, etc.). Detailed knowledge of traffic dynamics is not of purely scientific interest but also absolutely necessary for practical applications. With the help of online data from measurements of flows and speeds it is possible to construct a complete picture of the actual traffic state with real-time simulations. As a very efficient model ansatz, cellular automata have been shown to be a reasonable compromise between simulation speed and description accuracy. Beyond reproducing the actual state, a reliable traffic forecast should be possible, although the drivers' reaction to the forecast still remains unclear.
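The cellular-automaton ansatz mentioned here is typified by the Nagel-Schreckenberg model; a minimal generic implementation on a ring with parallel update (a textbook version, not tied to any specific study above):

```python
import random

def nasch_step(cars, L, vmax, p, rng):
    """One parallel Nagel-Schreckenberg update on a ring of L cells:
    (1) accelerate, (2) brake to the gap ahead, (3) random slowdown with
    probability p, (4) move. `cars` is a sorted list of (position, velocity)."""
    n = len(cars)
    out = []
    for k, (x, v) in enumerate(cars):
        gap = (cars[(k + 1) % n][0] - x - 1) % L   # empty cells ahead
        v = min(v + 1, vmax, gap)                  # accelerate, then brake
        if v > 0 and rng.random() < p:
            v -= 1                                 # random driver slowdown
        out.append(((x + v) % L, v))
    return sorted(out)

# 20 cars on a 100-cell ring, vmax = 5, slowdown probability 0.3.
rng = random.Random(1)
L, vmax, p = 100, 5, 0.3
cars = sorted((x, 0) for x in range(0, L, 5))
for _ in range(100):
    cars = nasch_step(cars, L, vmax, p, rng)
```

Braking to the gap guarantees a collision-free parallel update, so the number of cars is conserved exactly.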
Graphics development of DCOR: Deterministic combat model of Oak Ridge
Hunt, G.; Azmy, Y.Y.
1992-10-01
DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
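The first placement algorithm can be sketched directly; the assumption that each placed unit consumes one unit of the gridpoint's remaining density is ours:

```python
def distribute_units(density, total_units):
    """DCOR-style placement: hand out `total_units` objects one at a time,
    each to the gridpoint with the largest remaining (fractional) density."""
    remaining = list(density)
    counts = [0] * len(density)
    for _ in range(total_units):
        i = max(range(len(remaining)), key=remaining.__getitem__)
        counts[i] += 1
        remaining[i] -= 1   # assumed: a placed unit consumes one unit of density
    return counts

# Example: a density field of 2.6, 1.2, 0.2 units across three gridpoints.
counts = distribute_units([2.6, 1.2, 0.2], total_units=4)
```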
Virginia's traffic management system
Morris, J.; Marber, S.
1992-07-01
This paper reports that Northern Virginia, like most other urban areas, faces the challenge of moving more and more vehicles on roads that are already overloaded. Traffic in Northern Virginia is continually increasing, but the development surrounding Interstates 395, 495, and 66 makes little room available for roadway expansion. Even if land were unlimited, the strict requirements of the Clean Air Act make building roads difficult. Ensuring the most efficient use of the interstate highways is the goal of the Virginia Department of Transportation's (VDOT's) traffic management system (TMS). TMS is a computerized highway surveillance and control system that monitors 30 interstate miles on I-395, I-495, and I-66. The system helps squeeze the most use from these interstates by detecting and helping clear accidents or disabled vehicles and by smoothing traffic flow. TMS spots and helps clear an average of two incidents a day and prevents accidents caused by erratic traffic flow from ramps onto the main line. For motorists, these TMS functions translate into decreased travel time, vehicle operating costs, and air pollution. VDOT's TMS is the foundation for the intelligent vehicle-highway systems of tomorrow. It employs several elements that work together to improve traffic flow.
Modeling sulphur dioxide due to vehicular traffic using artificial neural network.
Singh, B K; Singh, A K; Prasad, S C
2009-10-01
The dispersion characteristics of vehicular exhaust are highly non-linear. The deterministic as well as numerical models are unable to predict these air pollutants precisely. Artificial neural network (ANN), having the capability to recognize the non-linearity present in the noisy data, has been used in the present work to model the emission concentration of sulphur dioxide from vehicular source in an urban area. ANN model is developed with different combinations of traffic and meteorological parameters. The model prediction reveals that the artificial neural network trained with both traffic and meteorological parameters together shows better performance in predicting SO2 concentration.
Chambers, David W
2005-01-01
Groups naturally promote their strengths and prefer values and rules that give them an identity and an advantage. This shows up as generational tensions across cohorts who share common experiences, including common elders. Dramatic cultural events in America since 1925 can help create an understanding of the differing value structures of the Silents, the Boomers, Gen Xers, and the Millennials. Differences in how these generations see motivation and values, fundamental reality, relations with others, and work are presented, as are some applications of these differences to the dental profession.
Nagel, K.; Paczuski, M.
1995-04-01
We study a single-lane traffic model that is based on human driving behavior. The outflow from a traffic jam self-organizes to a critical state of maximum throughput. Small perturbations of the outflow far downstream create emergent traffic jams with a power-law distribution P(t) ∼ t^(-3/2) of lifetimes t. On varying the vehicle density in a closed system, this critical state separates lamellar and jammed regimes and exhibits 1/f noise in the power spectrum. Using random walk arguments, in conjunction with a cascade equation, we develop a phenomenological theory that predicts the critical exponents for this transition and explains the self-organizing behavior. These predictions are consistent with all of our numerical results.
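The random-walk argument can be probed numerically: first-return times of an unbiased walk follow the same t^(-3/2) law quoted for jam lifetimes, and, in particular, about half of all returns occur at t = 2 (this sketch is ours):

```python
import random

def first_return_time(rng, tmax=5000):
    """Steps until an unbiased +/-1 random walk first returns to the
    origin; these first-return times are distributed ~ t^(-3/2)."""
    x = 0
    for t in range(1, tmax + 1):
        x += rng.choice((-1, 1))
        if x == 0:
            return t
    return None   # censored: no return within tmax

rng = random.Random(0)
times = [first_return_time(rng) for _ in range(5000)]
returned = [t for t in times if t is not None]
frac_t2 = sum(1 for t in returned if t == 2) / len(returned)
```

Returns can only happen at even times, and the heavy t^(-3/2) tail means a small but persistent fraction of walks do not return within any finite cutoff.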
Bagieński, Zbigniew
2015-02-01
Vehicle emissions are responsible for a considerable share of urban air pollution concentrations. The traffic air quality index (TAQI) is proposed as a useful tool for evaluating air quality near roadways. The TAQI associates air quality with the equivalent emission from traffic sources and with street structure (roadway structure) as anthropogenic factors. The paper presents a method of determining the TAQI and defines the degrees of harmfulness of emitted pollution. It proposes a classification specifying a potential threat to human health based on the TAQI value and shows an example of calculating the TAQI value for real urban streets. It also considers the role that car traffic plays in creating a local UHI.
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall.
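The core operation, spectral division by the measured air-wave wavelet, can be sketched with a naive DFT; the 'water level' stabilisation is a standard add-on and not necessarily the paper's exact scheme:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def deconvolve(trace, wavelet, water=1e-3):
    """Spectral division by the known source wavelet; the 'water level'
    floors |W|^2 so the division stays stable near spectral zeros."""
    T, W = dft(trace), dft(wavelet)
    floor = (water * max(abs(w) for w in W)) ** 2
    return idft([t * w.conjugate() / max(abs(w) ** 2, floor)
                 for t, w in zip(T, W)])

# Synthetic check: a single reflector at sample 5, circularly convolved
# with a short wavelet, is recovered by the deconvolution.
N = 16
wavelet = [0.0] * N
wavelet[1:4] = [1.0, -0.5, 0.2]
trace = [wavelet[(n - 5) % N] for n in range(N)]
reflectivity = deconvolve(trace, wavelet)
```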
A deterministic, gigabit serial timing, synchronization and data link for the RHIC LLRF
Hayes, T.; Smith, K.S.; Severino, F.
2011-03-28
A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.
Deterministic Modeling of the High Temperature Test Reactor
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
Modeling traffic through a sequence of traffic lights.
Toledo, B A; Muñoz, V; Rogan, J; Tenreiro, C; Valdivia, J A
2004-01-01
We introduce a microscopic traffic model, based on kinematic behavior, which consists of a single vehicle traveling through a sequence of traffic lights that turn on and off with a specific frequency. The function that maps the state of the vehicle from light to light displays complex behavior under certain conditions. This chaotic behavior, which arises from the discontinuous nature of the map, is an essential ingredient in traffic patterns and could be of relevance in studying traffic situations.
Parkinson's disease classification using gait analysis via deterministic learning.
Zeng, Wei; Liu, Fenglin; Wang, Qinghui; Wang, Ying; Ma, Limin; Zhang, Yu
2016-10-28
Gait analysis plays an important role in maintaining the well-being of human mobility and health care, and is a valuable tool for obtaining quantitative information on motor deficits in Parkinson's disease (PD). In this paper, we propose a method to classify (diagnose) patients with PD and healthy control subjects using gait analysis via deterministic learning theory. The classification approach consists of two phases: a training phase and a classification phase. In the training phase, gait characteristics represented by the gait dynamics are derived from the vertical ground reaction forces under the usual, self-selected paces of the subjects. The gait dynamics underlying the gait patterns of healthy controls and PD patients are locally accurately approximated by radial basis function (RBF) neural networks, and the obtained knowledge of the approximated gait dynamics is stored in constant RBF networks. The gait patterns of healthy controls and PD patients constitute a training set. In the classification phase, a bank of dynamical estimators is constructed for all the training gait patterns, with prior knowledge of the gait dynamics represented by the constant RBF networks embedded in the estimators. By comparing the set of estimators with a test gait pattern of a certain PD patient to be classified (diagnosed), a set of classification errors is generated. The average L1 norms of the errors are taken as the classification measure between the dynamics of the training gait patterns and the dynamics of the test gait pattern, according to the smallest-error principle. When the gait patterns of 93 PD patients and 73 healthy controls are classified with the five-fold cross-validation method, the accuracy, sensitivity and specificity of the results are 96.39%, 96.77% and 95.89%, respectively. Based on these results, it may be claimed that the features and classifiers used in the present study can effectively separate the gait patterns of PD patients and healthy controls.
[The prevention of traffic traumatism].
Borovkov, V N
2009-01-01
The study reveals that traffic traumatism continues to be a serious problem for the entire international community. The causes of traffic traumatism and the ways to prevent it are analyzed. The results of a sociological analysis of injured persons' opinions about measures for preventing traffic traumatism are discussed. A system of social, medical, legal, and educational measures directed at all participants in traffic accidents is proposed.
Expanding Regional Airport Usage to Accommodate Increased Air Traffic Demand
NASA Technical Reports Server (NTRS)
Russell, Carl R.
2009-01-01
Small regional airports present an underutilized source of capacity in the national air transportation system. This study sought to determine whether a 50 percent increase in national operations could be achieved by limiting demand growth at large hub airports and instead growing traffic levels at the surrounding regional airports. This demand scenario for future air traffic in the United States was generated and used as input to a 24-hour simulation of the national airspace system. Results of the demand generation process and metrics predicting the simulation results are presented, in addition to the actual simulation results. The demand generation process showed that sufficient runway capacity exists at regional airports to offload a significant portion of traffic from hub airports. Predictive metrics forecast a large reduction of delays at most major airports when demand is shifted. The simulation results then show that offloading hub traffic can significantly reduce nationwide delays.
2014-11-01
… proxies in more detail since they are the most challenging ones to identify in network traffic logs. Squid is a caching proxy server.
ERIC Educational Resources Information Center
Dickman, Frances Baker, Ed.
1988-01-01
Seven papers discuss current issues and applied social research concerning alcohol traffic safety. Prevention, policy input, methodology, planning strategies, anti-drinking/driving programs, social-programmatic orientations of Mothers Against Drunk Driving, Kansas Driving Under the Influence Law, New Jersey Driving While Impaired Programs,…
Surface Traffic Management Research
NASA Technical Reports Server (NTRS)
Jung, Yoo Chul
2012-01-01
This presentation discusses an overview of the surface traffic management research conducted by NASA Ames. The concept and human-in-the-loop simulation of the Spot and Runway Departure Advisor (SARDA), an integrated decision support tool for the tower controllers and airline ramp operators, is also discussed.
2003-08-13
An Air Traffic Control radar has been constructed at Shiloh for the NASA control tower at the Shuttle Landing Facility. It will be used by NASA and the Eastern Range for surveillance of controlled air space in Kennedy Space Center and Cape Canaveral Air Force Station restricted areas. Shiloh is on the northern end of Merritt Island.
2003-08-13
An Air Traffic Control radar is being constructed at Shiloh for the NASA control tower at the Shuttle Landing Facility. It will be used by NASA and the Eastern Range for surveillance of controlled air space in Kennedy Space Center and Cape Canaveral Air Force Station restricted areas. Shiloh is on the northern end of Merritt Island.
ERIC Educational Resources Information Center
Edwards, Arthur W.
1977-01-01
The importance of energy conservation is developed in this simulation. Children draw an automobile and then are asked to drive it through the classroom roadways. When a traffic jam results, students offer ways to eliminate it. The importance of mass transportation and car pools is stressed by the teacher. (MA)
Air Traffic Control Technology
NASA Technical Reports Server (NTRS)
Denery, Dallas; Lebacqz, Victor (Technical Monitor)
1997-01-01
The steady growth in air travel will lead to heightened demands for additional airspace capacity, particularly in the vicinity of terminals. New computer-aided processing of aircraft, together with the navigational capability exemplified by the Global Positioning System, will allow more sophisticated management of air traffic. Additional altitudes and over-flight areas may be necessary to use the newer technology effectively.
Highway traffic noise prediction based on GIS
NASA Astrophysics Data System (ADS)
Zhao, Jianghua; Qin, Qiming
2014-05-01
Before building a new road, we need to predict the traffic noise generated by vehicles. Traditional traffic noise prediction methods are based on certain locations; they are not only time-consuming and costly, but also cannot be visualized. A Geographical Information System (GIS) can not only solve the problem of manual data processing, but also provide noise values at any point. The paper selected a road segment from Wenxi to Heyang. According to the geographical overview of the study area and a comparison between several models, we combine the JTG B03-2006 model and the HJ2.4-2009 model to predict the traffic noise depending on the circumstances. Finally, we interpolate the noise values at each prediction point and then generate noise contours. By overlaying the village data on the noise contour layer, we can produce thematic maps. The use of GIS for road traffic noise prediction greatly facilitates decision-making because of GIS's spatial analysis and visualization capabilities. We can clearly see the districts where noise is excessive, and thus it becomes convenient to optimize the road alignment and take noise-reduction measures such as installing sound barriers and relocating villages.
Onset of traffic congestion in complex networks.
Zhao, Liang; Lai, Ying-Cheng; Park, Kwangho; Ye, Nong
2005-02-01
Free traffic flow on a complex network is key to its normal and efficient functioning. Recent works indicate that many realistic networks possess connecting topologies with a scale-free feature: the probability distribution of the number of links at nodes, or the degree distribution, contains a power-law component. A natural question is then how the topology influences the dynamics of traffic flow on a complex network. Here we present two models to address this question, taking into account the network topology, the information-generating rate, and the information-processing capacity of individual nodes. For each model, we study four kinds of networks: scale-free, random, and regular networks and Cayley trees. In the first model, the capacity of packet delivery of each node is proportional to its number of links, while in the second model, it is proportional to the number of shortest paths passing through the node. We find, in both models, that there is a critical rate of information generation, below which the network traffic is free but above which traffic congestion occurs. Theoretical estimates are given for the critical point. For the first model, scale-free networks and random networks are found to be more tolerant to congestion. For the second model, the congestion condition is independent of network size and topology, suggesting that this model may be practically useful for designing communication protocols.
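The congestion onset described above can be illustrated with a toy simulation (a sketch under stated assumptions, not the paper's exact model: a ring network, shortest-path routing, per-node forwarding capacity equal to node degree, and Bernoulli packet generation at rate λ per node). Below a critical rate the total queued load stays bounded; above it the load grows without limit:

```python
import random
from collections import deque, defaultdict

def ring(n):
    """Regular ring lattice: node i links to i-1 and i+1 (mod n)."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def next_hop_table(adj):
    """BFS from every destination yields shortest-path next hops."""
    table = {}
    for dst in adj:
        prev = {dst: dst}
        queue = deque([dst])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in prev:
                    prev[v] = u          # from v, step to u toward dst
                    queue.append(v)
        for v, u in prev.items():
            table[(v, dst)] = u
    return table

def simulate(adj, lam, steps=200, seed=2):
    """Return total queued packets per step; node capacity = its degree."""
    rng = random.Random(seed)
    hop = next_hop_table(adj)
    queues = {i: deque() for i in adj}
    nodes = sorted(adj)
    load = []
    for _ in range(steps):
        for i in nodes:                   # packet generation
            if rng.random() < lam:
                dst = rng.choice(nodes)
                if dst != i:
                    queues[i].append(dst)
        arrivals = defaultdict(list)      # synchronous forwarding
        for i in nodes:
            for _ in range(min(len(adj[i]), len(queues[i]))):
                dst = queues[i].popleft()
                nxt = hop[(i, dst)]
                if nxt != dst:            # delivered packets leave the net
                    arrivals[nxt].append(dst)
        for i, pkts in arrivals.items():
            queues[i].extend(pkts)
        load.append(sum(len(q) for q in queues.values()))
    return load
```

With n = 20 the ring's mean path length is roughly n/4 = 5 hops, so the forwarding work needed per step is about 20·λ·5 against a total capacity of 40: λ = 0.1 stays free-flowing while λ = 0.8 congests.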
Identifying MMORPG Bots: A Traffic Analysis Approach
NASA Astrophysics Data System (ADS)
Chen, Kuan-Ta; Jiang, Jhih-Wei; Huang, Polly; Chu, Hao-Hua; Lei, Chin-Laung; Chen, Wen-Chin
2008-12-01
Massively multiplayer online role playing games (MMORPGs) have become extremely popular among network gamers. Despite their success, one of MMORPG's greatest challenges is the increasing use of game bots, that is, autoplaying game clients. The use of game bots is considered unsportsmanlike and is therefore forbidden. To keep games in order, game police, played by actual human players, often patrol game zones and question suspicious players. This practice, however, is labor-intensive and ineffective. To address this problem, we analyze the traffic generated by human players versus game bots and propose general solutions to identify game bots. Taking Ragnarok Online as our subject, we study the traffic generated by human players and game bots. We find that their traffic is distinguishable by 1) the regularity in the release time of client commands, 2) the trend and magnitude of traffic burstiness in multiple time scales, and 3) the sensitivity to different network conditions. Based on these findings, we propose four strategies and two ensemble schemes to identify bots. Finally, we discuss the robustness of the proposed methods against countermeasures of bot developers, and consider a number of possible ways to manage the increasingly serious bot problem.
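The first of the three traffic features above — regularity in the release time of client commands — can be sketched as a single statistic (a toy illustration, not the authors' detector; the 0.2 cutoff and both traces are made up):

```python
import random
import statistics

def timing_regularity(timestamps):
    """Coefficient of variation of inter-command gaps; near 0 = machine-like."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) / statistics.fmean(gaps)

def classify(timestamps, threshold=0.2):
    # hypothetical cutoff; a real detector would calibrate it on labeled traces
    return "bot" if timing_regularity(timestamps) < threshold else "human"

# synthetic traces: a bot fires every 0.5 s; a human pauses irregularly
bot_trace = [0.5 * i for i in range(50)]
rng = random.Random(0)
human_trace, t = [], 0.0
for _ in range(50):
    t += rng.expovariate(2.0)
    human_trace.append(t)
```

Exponentially distributed human pauses have a coefficient of variation near 1, while a fixed-period bot scores near 0, which is why this single feature already separates the two synthetic traces.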
Traffic Safety Facts, 2001: Pedalcyclists.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on traffic accidents involving U.S. bicyclists. Data include: (1) trends in pedalcyclist and total traffic fatalities, 1991-2001; (2) non-occupant traffic fatalities, 1991-2001; (3) pedalcyclists killed and injured, and fatality and injury rates, by age and sex, 2000 [2001 population data by age group…
Traffic Safety Facts, 2001: Pedestrians.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on U.S. traffic accidents involving pedestrians. Data tables include: (1) trends in pedestrian and total traffic fatalities, 1991-2001; (2) pedestrians killed and injured, by age group, 2001; (3) non-occupant traffic fatalities, 1991-2001; (4) pedestrian fatalities, by time of day and day of week,…
Pedalcyclists. Traffic Safety Facts, 2000.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on traffic accidents involving U.S. bicyclists. Data include: (1) trends in pedalcyclist and total traffic fatalities, 1990-2000; (2) non-occupant traffic fatalities, 1990-2000; (3) pedalcyclists killed and injured, and fatality and injury rates, by age and sex, 2000; and (4) pedalcyclist traffic…
Pedestrians. Traffic Safety Facts, 2000.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on U.S. traffic accidents involving pedestrians. Data tables include: (1) trends in pedestrian and total traffic fatalities, 1990-2000; (2) pedestrians killed and injured, by age group, 2000; (3) non-occupant traffic fatalities, 1990-2000; (4) pedestrian fatalities, by time of day and day of week,…
Deterministic dynamics of neural activity during absence seizures in rats
NASA Astrophysics Data System (ADS)
Ouyang, Gaoxiang; Li, Xiaoli; Dang, Chuangyin; Richards, Douglas A.
2009-04-01
The study of brain electrical activities in terms of deterministic nonlinear dynamics has recently received much attention. Forbidden ordinal patterns (FOP) is a recently proposed method to investigate the determinism of a dynamical system through the analysis of intrinsic ordinal properties of a nonstationary time series. The advantages of this method in comparison to others include simplicity and low complexity in computation without further model assumptions. In this paper, the FOP of the EEG series of genetic absence epilepsy rats from Strasbourg was examined to demonstrate evidence of deterministic dynamics during epileptic states. Experiments showed that the number of FOP of the EEG series grew significantly from an interictal to an ictal state via a preictal state. These findings indicated that the deterministic dynamics of neural networks increased significantly in the transition from the interictal to the ictal states and also suggested that the FOP measures of the EEG series could be considered as a predictor of absence seizures.
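The FOP method itself can be sketched in a few lines (a generic illustration of forbidden ordinal patterns, not the authors' EEG pipeline): map each length-m window to its rank-order pattern and count which of the m! patterns never occur. For the fully chaotic logistic map, for example, strictly decreasing triples are known never to occur, while a random series of the same length typically exhibits all six order-3 patterns:

```python
import random
from itertools import permutations

def ordinal_pattern(window):
    """Rank-order pattern, e.g. (0.1, 0.9, 0.4) -> (0, 2, 1)."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def forbidden_pattern_count(series, m=3):
    """Number of the m! order-m patterns that never occur in the series."""
    seen = {ordinal_pattern(series[i:i + m]) for i in range(len(series) - m + 1)}
    return sum(1 for p in permutations(range(m)) if p not in seen)

# deterministic series: logistic map x -> 4x(1-x)
x, logistic = 0.3, []
for _ in range(2000):
    logistic.append(x)
    x = 4.0 * x * (1.0 - x)

# stochastic series of the same length
rng = random.Random(1)
noise = [rng.random() for _ in range(2000)]
```

The count of forbidden patterns thus acts as a determinism indicator: positive for the deterministic map, zero for sufficiently long random data.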
Estimating the epidemic threshold on networks by deterministic connections
Li, Kezan; Zhu, Guanghu; Fu, Xinchu; Small, Michael
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random, with different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from the deterministic connections alone. Moreover, in these models generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic ones, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
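The underlying intuition can be sketched with the standard SIS spectral condition (a generic illustration, not the paper's specific inequalities): the epidemic threshold scales as 1/λ_max of the adjacency matrix, and adding stochastic connections on top of a deterministic backbone can only raise λ_max, so the backbone alone already bounds the threshold from above:

```python
import random

def spectral_radius(A, iters=200):
    """Power iteration (max-norm) for a nonnegative adjacency matrix."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

n = 12
# deterministic backbone: a ring (spectral radius exactly 2)
backbone = [[1 if abs(i - j) in (1, n - 1) else 0 for j in range(n)]
            for i in range(n)]

# full network: backbone plus random extra connections
full = [row[:] for row in backbone]
full[0][2] = full[2][0] = 1        # odd cycle, so power iteration converges
rng = random.Random(3)
for i in range(n):
    for j in range(i + 1, n):
        if full[i][j] == 0 and rng.random() < 0.2:
            full[i][j] = full[j][i] = 1

lam_det = spectral_radius(backbone)
lam_full = spectral_radius(full)
```

Since λ_max(backbone) ≤ λ_max(full), the threshold estimated from deterministic connections, 1/λ_max(backbone), upper-bounds the true threshold 1/λ_max(full).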
Deterministic sensing matrices in compressive sensing: a survey.
Nguyen, Thu L N; Shin, Yoan
2013-01-01
Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. Such matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
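As a concrete instance of the kind of construction such surveys cover, the sketch below builds the classical p × p² chirp sensing matrix for a prime p (an illustrative example of a deterministic sensing matrix; it is not claimed to be one from this particular survey). Its columns are unit-norm chirps, and Gauss-sum bounds give a mutual coherence of exactly 1/√p:

```python
import cmath
from math import sqrt, pi

def chirp_matrix(p):
    """Columns indexed by (r, m): a[t] = exp(2*pi*i*(r*t^2 + m*t)/p) / sqrt(p)."""
    cols = []
    for r in range(p):
        for m in range(p):
            cols.append([cmath.exp(2j * pi * (r * t * t + m * t) / p) / sqrt(p)
                         for t in range(p)])
    return cols

def mutual_coherence(cols):
    """Largest |<a_i, a_j>| over distinct column pairs."""
    mu = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            ip = abs(sum(a * b.conjugate() for a, b in zip(cols[i], cols[j])))
            mu = max(mu, ip)
    return mu
```

Because the matrix is generated by a closed-form rule rather than stored entry by entry, it illustrates the structural advantage of deterministic constructions noted in the abstract.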
Deterministic teleportation of electrons in a quantum dot nanostructure.
de Visser, R L; Blaauboer, M
2006-06-23
We present a proposal for deterministic quantum teleportation of electrons in a semiconductor nanostructure consisting of a single and a double quantum dot. The central issue addressed in this Letter is how to design and implement the most efficient--in terms of the required number of single and two-qubit operations--deterministic teleportation protocol for this system. Using a group-theoretical analysis, we show that deterministic teleportation requires a minimum of three single-qubit rotations and two entangling (square root SWAP) operations. These can be implemented for spin qubits in quantum dots using electron-spin resonance (for single-spin rotations) and exchange interaction (for square root SWAP operations).
Nonsignaling Deterministic Models for Nonlocal Correlations have to be Uncomputable.
Bendersky, Ariel; Senno, Gabriel; de la Torre, Gonzalo; Figueira, Santiago; Acín, Antonio
2017-03-31
Quantum mechanics postulates random outcomes. However, a model making the same output predictions but in a deterministic manner would be, in principle, experimentally indistinguishable from quantum theory. In this work we consider such models in the context of nonlocality on a device-independent scenario. That is, we study pairs of nonlocal boxes that produce their outputs deterministically. It is known that, for these boxes to be nonlocal, at least one of the boxes' outputs has to depend on the other party's input via some kind of hidden signaling. We prove that, if the deterministic mechanism is also algorithmic, there is a protocol that, with the sole knowledge of any upper bound on the time complexity of such an algorithm, extracts that hidden signaling and uses it for the communication of information.
The seismic traffic footprint: Tracking trains, aircraft, and cars seismically
NASA Astrophysics Data System (ADS)
Riahi, Nima; Gerstoft, Peter
2015-04-01
Although naturally occurring vibrations have proven useful to probe the subsurface, the vibrations caused by traffic have not been explored much. Such data, however, are less sensitive to weather and low visibility compared to some common out-of-road traffic sensing systems. We study traffic-generated seismic noise measured by an array of 5200 geophones that covered a 7 × 10 km area in Long Beach (California, USA) with a receiver spacing of 100 m. This allows us to look into urban vibrations below the resolution of a typical city block. The spatiotemporal structure of the anthropogenic seismic noise intensity reveals the Blue Line Metro train activity, departing and landing aircraft in Long Beach Airport and their acceleration, and gives clues about traffic movement along the I-405 highway at night. As low-cost, stand-alone seismic sensors are becoming more common, these findings indicate that seismic data may be useful for traffic monitoring.
Traffic congestion and the lifetime of networks with moving nodes
NASA Astrophysics Data System (ADS)
Yang, Xianxia; Li, Jie; Pu, Cunlai; Yan, Meichen; Sharafat, Rajput Ramiz; Yang, Jian; Gakis, Konstantinos; Pardalos, Panos M.
2017-01-01
For many power-limited networks, such as wireless sensor networks and mobile ad hoc networks, maximizing the network lifetime is the primary concern in design and maintenance. We study the network lifetime from the perspective of network science. In our model, nodes moving in a square area are initially assigned a fixed amount of energy and consume it when delivering packets. We obtain four different traffic regimes: no, slow, fast, and absolute congestion, which depend largely on the packet generation rate. We derive the network lifetime by considering the specific regime of the traffic flow. We find that traffic congestion inversely affects network lifetime, in the sense that high traffic congestion results in short network lifetime. We also discuss the impacts of factors such as communication radius, node moving speed, and routing strategy on network lifetime and traffic congestion.
Informal parental traffic training and children's traffic accidents.
Drott, Peder; Johansson, Bo S; Aström, Bo
2008-01-01
The aims of the present study were (a) to assess the relationship between informal traffic training by parents and their children's involvement in traffic accidents and (b) to identify factors contributing to this relationship. The first two studies involved questionnaires on informal parental traffic education, the child's exposure to traffic, and traffic-related accidents. Both studies showed that the rate of accidents increased with training, particularly for outdoor training. An accident analysis indicated that most accidents involved the use of a bicycle, resulted largely in light injuries, and occurred while the child was practicing maneuvering the bicycle. An interview study with 10 preschool teachers identified two quite disparate traffic education goals: emphasis on cautiousness versus emphasis on independence. The major implications of the study are that traffic training efforts should give more emphasis to bicycle use and should be planned and carried out in cooperation with the parents.
Deterministic and efficient quantum cryptography based on Bell's theorem
Chen, Zengbing; Pan, Jianwei; Zhang, Qiang; Bao, Xiaohui; Schmiedmayer, Joerg
2006-05-15
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs entangled both in polarization and in the time degree of freedom; each measurement in which both communicating parties register a photon establishes one and only one perfect correlation, and thus deterministically creates a key bit. Eavesdropping can be detected by a violation of local realism. A variation of the protocol shows higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under current technology.
Deterministic extinction by mixing in cyclically competing species
NASA Astrophysics Data System (ADS)
Feldager, Cilie W.; Mitarai, Namiko; Ohta, Hiroki
2017-03-01
We consider a cyclically competing species model on a ring with global mixing at finite rate, which corresponds to the well-known Lotka-Volterra equation in the limit of infinite mixing rate. Within a perturbation analysis of the model from the infinite mixing rate, we provide analytical evidence that extinction occurs deterministically at sufficiently large but finite values of the mixing rate for any species number N ≥ 3. Further, by focusing on the cases of rather small species numbers, we discuss numerical results concerning the trajectories toward such deterministic extinction, including global bifurcations caused by changing the mixing rate.
Complexity of Monte Carlo and deterministic dose-calculation methods.
Börgers, C
1998-03-01
Grid-based deterministic dose-calculation methods for radiotherapy planning require the use of six-dimensional phase space grids. Because of the large number of phase space dimensions, a growing number of medical physicists appear to believe that grid-based deterministic dose-calculation methods are not competitive with Monte Carlo methods. We argue that this conclusion may be premature. Our results do suggest, however, that finite difference or finite element schemes with orders of accuracy greater than one will probably be needed if such methods are to compete well with Monte Carlo methods for dose calculations.
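The trade-off can be made concrete in one dimension (a toy comparison, not a dose calculation): a second-order grid method's error falls like h², while plain Monte Carlo falls like N^(-1/2) regardless of dimension; the catch the abstract turns on is that a grid method must pay its resolution cost in each of the six phase-space dimensions.

```python
import random

def trapezoid(f, a, b, n):
    """Composite trapezoid rule: error O(h^2) for smooth f."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def monte_carlo(f, a, b, n, seed=0):
    """Plain Monte Carlo quadrature: error O(n**-0.5) in any dimension."""
    rng = random.Random(seed)
    return (b - a) * sum(f(a + (b - a) * rng.random()) for _ in range(n)) / n

f = lambda x: x * x          # integral of x^2 over [0, 1] is exactly 1/3
exact = 1.0 / 3.0
```

For f = x² the trapezoid error is exactly h²/6, so refining from 16 to 256 points cuts the error by a factor of 256, while Monte Carlo with 256 samples remains far less accurate; this is the order-of-accuracy argument behind the abstract's conclusion that schemes of order greater than one are probably needed.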
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because means and variations of the tolerance-limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniformly safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and on its improvement and transition to absolute reliability.
Seismic hazard assessment of Western Coastal Province of Saudi Arabia: deterministic approach
NASA Astrophysics Data System (ADS)
Rehman, Faisal; El-Hady, Sherif M.; Atef, Ali H.; Harbi, Hussein M.
2016-10-01
Seismic hazard assessment is carried out using a deterministic approach to evaluate the maximum expected earthquake ground motions along the Western Coastal Province of Saudi Arabia. The analysis is accomplished by incorporating a seismotectonic source model, determination of the maximum earthquake magnitude (Mmax), a set of appropriate ground motion predictive equations (GMPE), and a logic tree sequence. The logic tree sequence is built to assign weights to the ground motion scaling relationships. Contour maps of ground acceleration are generated at different spectral periods. These maps show that the largest ground motion values emerge in the northern and southern regions of the western coastal province of Saudi Arabia, in comparison with the central region.
NASA Astrophysics Data System (ADS)
Yan, Zhihui; Jia, Xiaojun
2017-06-01
A quantum mechanical model of non-measurement-based coherent feedback control (CFC) is applied to deterministic atom-light entanglement with imperfect retrieval efficiency, generated via a Raman process. We investigate the influence of different experimental parameters on the entanglement property of the CFC Raman system. By tailoring the transmissivity of the coherent feedback controller, it is possible to manipulate the atom-light entanglement. In particular, we show that CFC allows atom-light entanglement enhancement under appropriate operating conditions. Our work can provide a high-quality entanglement source between an atomic ensemble and light for high-fidelity quantum networks and quantum computation based on atomic ensembles.
Elliptical quantum dots as on-demand single photons sources with deterministic polarization states
Teng, Chu-Hsiang; Demory, Brandon; Ku, Pei-Cheng; Zhang, Lei; Hill, Tyler A.; Deng, Hui
2015-11-09
In quantum information, control of the single photon's polarization is essential. Here, we demonstrate single photon generation in a pre-programmed and deterministic polarization state, on a chip-scale platform, utilizing site-controlled elliptical quantum dots (QDs) synthesized by a top-down approach. The polarization from the QD emission is found to be linear with a high degree of linear polarization and parallel to the long axis of the ellipse. Single photon emission with orthogonal polarizations is achieved, and the dependence of the degree of linear polarization on the QD geometry is analyzed.
Investigating the Use of 3-D Deterministic Transport for Core Safety Analysis
H. D. Gougar; D. Scott
2004-04-01
An LDRD (Laboratory Directed Research and Development) project is underway at the Idaho National Laboratory (INL) to demonstrate the feasibility of using a three-dimensional multi-group deterministic neutron transport code (Attila®) to perform global (core-wide) criticality, flux and depletion calculations for safety analysis of the Advanced Test Reactor (ATR). This paper discusses the ATR, model development, capabilities of Attila, generation of the cross-section libraries, comparisons to experimental results for Advanced Fuel Cycle (AFC) concepts, and future work planned with Attila.
Risk estimates for deterministic health effects of inhaled weapons grade plutonium.
Scott, Bobby R; Peterson, Vern L
2003-09-01
Risk estimates for deterministic effects of inhaled weapons-grade plutonium (WG Pu) are needed to evaluate potential serious harm to (1) U.S. Department of Energy nuclear workers from accidental or other work-place releases of WG Pu; and (2) the public from terrorist actions resulting in the release of WG Pu to the environment. Deterministic health effects (the most serious radiobiological consequences to humans) can arise when large amounts of WG Pu are taken into the body. Inhalation is considered the most likely route of intake during work-place accidents or during a nuclear terrorism incident releasing WG Pu to the environment. Our current knowledge about radiation-related harm is insufficient for generating precise estimates of risk for a given WG Pu exposure scenario, largely because of uncertainties associated with currently available risk and dosimetry models. Thus, rather than generating point estimates of risk, distributions that account for variability and uncertainty are needed to properly characterize potential harm to humans from a given WG Pu exposure scenario. In this manuscript, we generate and summarize risk distributions for deterministic radiation effects in the lungs of nuclear workers from inhaled WG Pu particles (standard isotopic mix). These distributions were developed using NUREG/CR-4214 risk models and time-dependent dose conversion factor data based on Publication 30 of the International Commission on Radiological Protection (ICRP). Dose conversion factors based on ICRP Publication 30 are more relevant to deterministic effects than those based on ICRP Publication 66, which relate to targets for stochastic effects. Risk distributions that account for NUREG/CR-4214 parameter and model uncertainties were generated using the Monte Carlo method. Risks were evaluated for both lethality (from radiation pneumonitis) and morbidity (due to radiation-induced respiratory dysfunction) and were found to depend strongly on absorbed dose.
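The Monte Carlo propagation step can be sketched generically (a hedged illustration: the Weibull-type hazard form below is of the kind used in NUREG/CR-4214, but the parameter values and their distributions here are hypothetical, not the report's): sample the uncertain model parameters, evaluate the risk model for each draw, and summarize the resulting distribution rather than a single point estimate.

```python
import math
import random
import statistics

def hazard_risk(dose, d50, shape):
    """Weibull-type hazard model: risk = 1 - exp(-ln 2 * (dose/d50)**shape)."""
    return 1.0 - math.exp(-math.log(2.0) * (dose / d50) ** shape)

def risk_distribution(dose, draws=10000, seed=42):
    """Propagate parameter uncertainty by Monte Carlo (hypothetical priors)."""
    rng = random.Random(seed)
    risks = []
    for _ in range(draws):
        d50 = rng.lognormvariate(math.log(10.0), 0.3)   # median-effect dose, Gy (made up)
        shape = max(1.0, rng.gauss(5.0, 1.0))           # dose-response steepness (made up)
        risks.append(hazard_risk(dose, d50, shape))
    return risks
```

Percentiles of the returned sample then characterize variability and uncertainty, e.g. `statistics.quantiles(risk_distribution(8.0), n=20)` yields a 5th-95th percentile band for the risk at a given absorbed dose.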
Traffic and emission simulation in China based on statistical methodology
NASA Astrophysics Data System (ADS)
Liu, Huan; He, Kebin; Barth, Matthew
2011-02-01
To better understand how traffic control can affect vehicle emissions, a novel TRaffic And Vehicle Emission Linkage (TRAVEL) approach was developed based on local traffic activity and emission data. This approach consists of a two-stage mapping, from general traffic information to traffic flow patterns, and then to aggregated emission rates. Thirty-nine traffic flow patterns and corresponding emission rates for light-duty and heavy-duty vehicles, classified by emission standard, are generated. As a case study, vehicle activity and emissions during the Beijing Olympics were simulated and compared to a business-as-usual (BAU) scenario. Approximately 42-65% of the gaseous pollutants and 24% of the particulate pollutants from cars, taxis and buses were reduced. These results are validated by traffic and air quality monitoring data collected during the Olympics, as well as by other emission inventory studies. This approach improves the ability to rapidly predict emission changes resulting from traffic control measures in several typical Chinese cities. Comments on the application of this approach, covering both its advantages and limitations, are included.
Deterministic ratchet from stationary light fields.
Zapata, I; Albaladejo, S; Parrondo, J M R; Sáenz, J J; Sols, F
2009-09-25
Ratchets are dynamic systems where particle transport is induced by zero-average forces due to the interplay between nonlinearity and asymmetry. Generally, they rely on the effect of a strong external driving. We show that stationary optical lattices can be designed to generate particle flow in one direction while requiring neither noise nor driving. Such optical fields must be arranged to yield a combination of conservative (dipole) and nonconservative (radiation pressure) forces. Under strong friction all paths converge to a discrete set of limit periodic trajectories flowing in the same direction.
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is
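The tabu-search loop underlying the methods above can be sketched in a few lines. This is a generic illustration, not the thesis's satellite-servicing implementation: the bit-string objective, tenure length, and aspiration rule are all assumptions made for the example.

```python
import random

def tabu_search(cost, n_bits, iters=200, tenure=5, seed=0):
    """Minimal tabu search over bit strings; neighbors differ by one flipped bit."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = current[:], cost(current)
    tabu = {}  # bit index -> iteration until which flipping it again is forbidden
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            neighbor = current[:]
            neighbor[i] ^= 1
            c = cost(neighbor)
            # Aspiration criterion: a tabu move is allowed if it beats the best known.
            if tabu.get(i, -1) < it or c < best_cost:
                candidates.append((c, i, neighbor))
        c, i, current = min(candidates)   # best admissible neighbor, even if worse
        tabu[i] = it + tenure             # forbid reversing this move for a while
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost

# Toy objective: Hamming distance to a hidden target pattern (global optimum 0).
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda b: sum(x != t for x, t in zip(b, target))
sol, c = tabu_search(cost, len(target))
```

The tabu list is what lets the search escape local optima: it keeps accepting the best admissible move even when every neighbor is worse, while the tenure prevents immediate backtracking.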
NASA Astrophysics Data System (ADS)
Beckenbauer, Thomas
Road traffic is the most interfering noise source in developed countries. According to a publication of the European Union (EU) at the end of the twentieth century [1], about 40% of the population in 15 EU member states is exposed to road traffic noise at mean levels exceeding 55 dB(A). Nearly 80 million people, 20% of the population, are exposed to levels exceeding 65 dB(A) during daytime and more than 30% of the population is exposed to levels exceeding 55 dB(A) during night time. Such high noise levels cause health risks and social disorders (aggressiveness, protest, and helplessness), interference of communication and disturbance of sleep; the long- and short-term consequences cause adverse cardiovascular effects, detrimental hormonal responses (stress hormones), and possible disturbance of the human metabolism (nutrition) and the immune system. Even performance at work and school could be impaired.
A deterministic method for estimating attitude from magnetometer data only
NASA Technical Reports Server (NTRS)
Natanson, G. A.
1992-01-01
A new deterministic algorithm which estimates spacecraft attitude utilizing magnetometer data only is presented. This algorithm exploits the dynamic equations of motion to propagate attitude and thus requires knowledge of both internal and external torques, except in the special case of a spacecraft rotating with constant angular velocity. Preliminary results obtained for the uncontrolled Relay Mirror Experiment satellite utilizing real telemetry data are reported.
A Unit on Deterministic Chaos for Student Teachers
ERIC Educational Resources Information Center
Stavrou, D.; Assimopoulos, S.; Skordoulis, C.
2013-01-01
A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…
Deterministic retrieval of complex Green's functions using hard X rays.
Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E
2009-01-30
A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory.
Deterministic dense coding and faithful teleportation with multipartite graph states
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-05-15
We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
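Since adjacency matrices of graphs are binary, the invertibility condition above is naturally checked with arithmetic over GF(2). A minimal sketch of such a check by Gaussian elimination mod 2 (the example matrices are hypothetical, not taken from the paper):

```python
def invertible_gf2(matrix):
    """Check invertibility of a 0/1 matrix over GF(2) by Gaussian elimination."""
    m = [row[:] for row in matrix]
    n = len(m)
    if any(len(row) != n for row in m):
        return False  # only square matrices can be invertible
    for col in range(n):
        # Find a pivot row with a 1 in this column.
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return False  # rank deficient over GF(2)
        m[col], m[pivot] = m[pivot], m[col]
        # Clear the column everywhere else; XOR is addition mod 2.
        for r in range(n):
            if r != col and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[col])]
    return True

# Hypothetical reduced adjacency matrices between senders and receivers.
ok = invertible_gf2([[1, 1], [0, 1]])   # full rank mod 2
bad = invertible_gf2([[1, 1], [1, 1]])  # rows coincide mod 2
```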
A Deterministic Annealing Approach to Clustering AIRS Data
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called Deterministic Annealing.
Risk-based versus deterministic explosives safety criteria
Wright, R.E.
1996-12-01
The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions are couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, "DoD Ammunition and Explosives Safety Standard." Risk-based criteria selected for comparison are those used by the government of Switzerland, "Technical Requirements for the Storage of Ammunition (TLM 75)." The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, either of which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 was used as the deterministic code, while MCNP was used as the Monte Carlo code. Point and cylindrical sources with slab shields were defined, allowing a comparison of the capability of both Monte Carlo and deterministic methods in day-to-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions.
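The deterministic point-kernel calculation this comparison rests on combines inverse-square geometric spreading, exponential attenuation in the shield, and a multiplicative build-up factor for scattered photons. A minimal sketch; all numbers are hypothetical, and a real calculation would take mu and B from tabulated data:

```python
import math

def shielded_flux(S, r_cm, mu, x_cm, buildup=1.0):
    """Point-kernel flux behind a slab shield.

    S        source strength (photons/s)
    r_cm     source-to-detector distance (cm)
    mu       linear attenuation coefficient of the shield (1/cm)
    x_cm     slab thickness along the ray (cm)
    buildup  build-up factor B >= 1 accounting for scattered photons
    """
    geometric = S / (4.0 * math.pi * r_cm**2)        # inverse-square spreading
    return geometric * buildup * math.exp(-mu * x_cm)

# Hypothetical case: 1e9 photons/s, detector 100 cm away, 5 cm shield, mu = 0.5/cm.
uncollided = shielded_flux(1e9, 100.0, 0.5, 5.0)
with_buildup = shielded_flux(1e9, 100.0, 0.5, 5.0, buildup=2.5)
```

The build-up factor is exactly where the abstract locates the error source: extrapolating B outside the energies and geometries it was tabulated for scales the whole answer by a wrong constant.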
NASA Astrophysics Data System (ADS)
Davis, L. C.
2015-03-01
The Texas A&M Transportation Institute estimated that traffic congestion cost the United States $121 billion in 2011 (the latest data available). The cost is due to wasted time and fuel. In addition to accidents and road construction, factors contributing to congestion include large demand, instability of high-density free flow and selfish behavior of drivers, which produces self-organized traffic bottlenecks. Extensive data collected on instrumented highways in various countries have led to a better understanding of traffic dynamics. From these measurements, Boris Kerner and colleagues developed a new theory called three-phase theory. They identified three major phases of flow observed in the data: free flow, synchronous flow and wide moving jams. The intermediate phase is called synchronous because vehicles in different lanes tend to have similar velocities. This congested phase, characterized by lower velocities yet modestly high throughput, frequently occurs near on-ramps and lane reductions. At present there are only two widely used methods of congestion mitigation: ramp metering and the display of current travel-time information to drivers. To find more effective methods to reduce congestion, researchers perform large-scale simulations using models based on the new theories. An algorithm has been proposed to realize Wardrop equilibria with real-time route information. Such equilibria have equal travel time on alternative routes between a given origin and destination. An active area of current research is the dynamics of connected vehicles, which communicate wirelessly with other vehicles and the surrounding infrastructure. These systems show great promise for improving traffic flow and safety.
Linear-Time Recognizable Classes of Tree Languages by Deterministic Linear Pushdown Tree Automata
NASA Astrophysics Data System (ADS)
Fujiyoshi, Akio
In this paper, we study deterministic linear pushdown tree automata (deterministic L-PDTAs) and some variations. Since recognition of an input tree by a deterministic L-PDTA can be done in linear time, deterministic L-PDTAs are suitable for many kinds of applications. A strict hierarchy will be shown among the classes of tree languages defined by a variety of deterministic L-PDTAs. It will also be shown that deterministic L-PDTAs are weakly equivalent to nondeterministic L-PDTAs.
Empirical and deterministic accuracies of across-population genomic prediction.
Wientjes, Yvonne C J; Veerkamp, Roel F; Bijma, Piter; Bovenhuis, Henk; Schrooten, Chris; Calus, Mario P L
2015-02-06
Differences in linkage disequilibrium and in allele substitution effects of QTL (quantitative trait loci) may hinder genomic prediction across populations. Our objective was to develop a deterministic formula to estimate the accuracy of across-population genomic prediction, for which reference individuals and selection candidates are from different populations, and to investigate the impact of differences in allele substitution effects across populations and of the number of QTL underlying a trait on the accuracy. A deterministic formula to estimate the accuracy of across-population genomic prediction was derived based on selection index theory. Moreover, accuracies were deterministically predicted using a formula based on population parameters and empirically calculated using simulated phenotypes and a GBLUP (genomic best linear unbiased prediction) model. Phenotypes of 1033 Holstein-Friesian, 105 Groninger White Headed and 147 Meuse-Rhine-Yssel cows were simulated by sampling 3000, 300, 30 or 3 QTL from the available high-density SNP (single nucleotide polymorphism) information of three chromosomes, assuming a correlation of 1.0, 0.8, 0.6, 0.4, or 0.2 between allele substitution effects across breeds. The simulated heritability was set to 0.95 to resemble the heritability of deregressed proofs of bulls. Accuracies estimated with the deterministic formula based on selection index theory were similar to empirical accuracies for all scenarios, while accuracies predicted with the formula based on population parameters overestimated empirical accuracies by ~25 to 30%. When the between-breed genetic correlation differed from 1, i.e. allele substitution effects differed across breeds, empirical and deterministic accuracies decreased in proportion to the genetic correlation. Using a multi-trait model, it was possible to accurately estimate the genetic correlation between the breeds based on phenotypes and high-density genotypes. The number of QTL underlying the simulated
Constructing stochastic models from deterministic process equations by propensity adjustment
2011-01-01
Background Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy in stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of the CME are almost always complicated, and successes have been limited to relatively simple cases. Results We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations: where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions The construction of a stochastic model for a biochemical
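For reference, the elementary SSA that the propensity-adjustment strategy targets can be sketched in a few lines: draw an exponential waiting time from the total propensity, pick a reaction in proportion to its propensity, and apply its stoichiometry. The reversible isomerization example and rate constants are illustrative, not from the paper.

```python
import math
import random

def gillespie(x0, stoich, propensities, t_end, seed=1):
    """Minimal Gillespie SSA: state vector, stoichiometry rows, propensity callables."""
    rng = random.Random(seed)
    x, t = list(x0), 0.0
    while t < t_end:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:
            break                                 # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        # Choose reaction j with probability a_j / a0.
        r, acc, j = rng.random() * a0, 0.0, 0
        for j, aj in enumerate(a):
            acc += aj
            if acc >= r:
                break
        x = [xi + s for xi, s in zip(x, stoich[j])]
    return x

# Reversible isomerization A <-> B with elementary mass-action propensities.
k1, k2 = 1.0, 0.5
stoich = [(-1, +1), (+1, -1)]
props = [lambda x: k1 * x[0], lambda x: k2 * x[1]]
final = gillespie([100, 0], stoich, props, t_end=50.0)
```

The paper's question is precisely what to put in `props` when a reaction is not one of the three elementary kinds; the adjusted propensity replaces the mass-action callable while the simulation loop stays unchanged.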
Rosenbloom, Tova; Pereg, Avihu; Perlman, Amotz
2014-01-01
The policy of a public organization, such as the police, may shape the norms and behavior of citizens. In line with this, police officers are expected by the public to comply with traffic laws and serve as an example for the citizenry. This study used on-site observations of civilian and police drivers, comparing police officers' compliance with traffic laws to that of civilians. We compared driver compliance with traffic laws for drivers in 3 groups of vehicles: traffic police cars, non-traffic police cars, and civilian cars. Four hundred sixty-six vehicles were observed and compared by vehicle type and whether a uniform was worn by the driver. We observed safety belt usage, signaling before turning, cellular phone usage, and giving way to traffic (measured by merging time). We found evidence that drivers in police cars generally use seat belts while driving more than drivers in civilian cars do. In particular, more traffic police car drivers used seat belts than non-traffic police car drivers did. In addition, drivers in civilian cars and non-traffic police cars waited longer before merging right into traffic compared to traffic police car drivers. Our findings supported the notion that on-duty police officers, and traffic police officers in particular, adhere more closely to traffic laws than civilian drivers do. As general public compliance with traffic laws is affected by the perceived legitimacy of the police, the publication of these results can both boost public cooperation with the police and encourage police officers to continue providing positive role models to the public.
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computation experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means our proposed model reduces the computational complexity. PMID:26180842
Traffic Games: Modeling Freeway Traffic with Game Theory
Cortés-Berrueco, Luis E.; Gershenson, Carlos; Stephens, Christopher R.
2016-01-01
We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers’ interactions. PMID:27855176
Observations on traffic flow patterns and traffic engineering practice
NASA Astrophysics Data System (ADS)
Wang, Feng; Gao, Lixin
2002-07-01
Border Gateway Protocol (BGP) allows ASs to apply diverse routing policies for selecting routes and propagating reachability information to other ASs. This enables network operators to configure routing policies so as to control traffic flows between ASs. However, BGP is not designed for inter-AS traffic engineering. This makes it difficult to implement effective routing policies to address network performance and utilization problems. Network operators usually tweak routing policies to influence the inter-domain traffic among the available links. This can lead to undesirable traffic flow patterns across the Internet and degrade Internet traffic performance. In this paper, we present several observations on Internet traffic flow patterns and derive the routing policies that give rise to them. Our results show that an AS can reach as much as 20% of the prefixes via a peer link even though there is a path via a customer link. In addition, an AS can reach as much as 80% of the prefixes via a provider link even though there is a path via a peer link. Second, we analyze the cause of the prevalence of these traffic patterns. Our analysis shows that an AS typically does not receive the potential route from its customers or peers. Third, we find that alternate routes have lower propagation delay than the chosen routes for some prefixes. This shows that some traffic engineering practices might adversely affect Internet performance.
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods
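A weight-window update of the kind described above can be sketched as follows: particles above the window are split, particles below it play Russian roulette, and in-window particles pass through untouched, so the expected total weight (and hence the estimator) is unchanged. The window bounds, survival weight, and splitting rule here are simplified assumptions for illustration, not the thesis's exact scheme.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Split or roulette one particle so surviving weights land inside [w_low, w_high].

    Returns a list of surviving particle weights (possibly empty). Expected total
    weight is preserved, which keeps the Monte Carlo estimator unbiased.
    """
    if weight > w_high:
        # Splitting: replace one heavy particle with n lighter copies.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive.
        w_survive = (w_low + w_high) / 2.0
        return [w_survive] if rng.random() < weight / w_survive else []
    return [weight]

rng = random.Random(42)
# Splitting conserves weight exactly; roulette conserves it only in expectation.
total_in = total_out = 0.0
for w in [5.0, 0.01, 0.4, 1.0]:
    total_in += w
    total_out += sum(apply_weight_window(w, 0.5, 2.0, rng))
```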
Vardoulakis, Sotiris; Chalabi, Zaid; Fletcher, Tony; Grundy, Chris; Leonardi, Giovanni S
2008-05-15
In urban areas, road traffic is a major source of carcinogenic polycyclic aromatic hydrocarbons (PAH), thus any changes in traffic patterns are expected to affect PAH concentrations in ambient air. Exposure to PAH and other traffic-related air pollutants has often been quantified in a deterministic manner that disregards the various sources of uncertainty in the modelling systems used. In this study, we developed a generic method for handling uncertainty in population exposure models. The method was applied to quantify the uncertainty in population exposure to benzo[a]pyrene (BaP) before and after the implementation of a traffic management intervention. This intervention would affect the movement of vehicles in the studied area and consequently alter traffic emissions, pollutant concentrations and population exposure. Several models, including an emission calculator, a dispersion model and a Geographic Information System were used to quantify the impact of the traffic management intervention. We established four exposure zones defined by distance of residence postcode centroids from major road or intersection. A stochastic method was used to quantify the uncertainty in the population exposure model. The method characterises uncertainty using probability measures and propagates it applying Monte Carlo analysis. The overall model predicted that the traffic management scheme would lead to a minor reduction in mean population exposure to BaP in the studied area. However, the uncertainty associated with the exposure estimates was much larger than this reduction. The proposed method is generic and provides realistic estimates of population exposure to traffic-related pollutants, as well as characterises the uncertainty in these estimates. This method can be used within a decision support tool to evaluate the impact of alternative traffic management policies.
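The Monte Carlo propagation step described above can be sketched generically: sample each uncertain input from its probability distribution, push the samples through the exposure model, and summarize the output as a distribution rather than a point estimate. The toy model and all distribution choices below are illustrative assumptions, not the study's values.

```python
import random
import statistics

def exposure_distribution(n=20000, seed=7):
    """Monte Carlo propagation of input uncertainty through a toy exposure model.

    exposure = emission_rate * dispersion_factor * occupancy
    Every input distribution here is a hypothetical placeholder.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        emission = rng.lognormvariate(0.0, 0.4)   # skewed emission uncertainty
        dispersion = rng.uniform(0.8, 1.2)        # unitless dispersion-model factor
        occupancy = rng.betavariate(8, 2)         # fraction of time at residence
        samples.append(emission * dispersion * occupancy)
    samples.sort()
    return {
        "mean": statistics.fmean(samples),
        "p05": samples[int(0.05 * n)],
        "p95": samples[int(0.95 * n)],
    }

summary = exposure_distribution()
```

Reporting the 5th-95th percentile band alongside the mean is what lets a study like this one conclude that a predicted change in mean exposure is smaller than the uncertainty surrounding it.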
Deterministically Polarized Fluorescence from Single Dye Molecules Aligned in Liquid Crystal Host
Lukishova, S.G.; Schmid, A.W.; Knox, R.; Freivald, P.; Boyd, R. W.; Stroud, Jr., C. R.; Marshall, K.L.
2005-09-30
We demonstrated, for the first time to our knowledge, deterministically polarized fluorescence from single dye molecules. Planar-aligned nematic liquid crystal hosts provide deterministic alignment of single dye molecules in a preferred direction.
NASA Astrophysics Data System (ADS)
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
One of the most hazardous physical polluting agents, considering its effects on human health, is acoustical noise. Airports are a strong source of acoustical noise, due to airplane turbines, the aerodynamic noise of transits, acceleration and braking during the take-off and landing phases of aircraft, road traffic around the airport, etc. Monitoring and predicting the acoustical level emitted by airports can be very useful in assessing the impact on human health and activities. In the airport noise scenario, thanks to flight scheduling, the predominant sources may have a periodic behaviour. Thus, a Time Series Analysis approach can be adopted, considering that a general trend and a seasonal behaviour can be highlighted and used to build a predictive model. In this paper, two different approaches are adopted, and two predictive models are constructed and tested. The first model is based on deterministic decomposition and is built by composing the trend, that is, the long-term behaviour; the seasonality, that is, the periodic component; and the random variations. The second model is based on a seasonal autoregressive moving average, and belongs to the stochastic class of models. The two models are fitted to an acoustical level dataset collected close to the Nice (France) international airport. Results are encouraging and show good prediction performance for both of the adopted strategies. A residual analysis is performed in order to quantify the features of the forecasting error.
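A deterministic decomposition model of the kind described above can be sketched as a least-squares linear trend plus a periodic seasonal average of the detrended values, extrapolated past the end of the series. The toy series below is an assumption for illustration, not the Nice airport dataset.

```python
import statistics

def decompose_forecast(series, period, horizon):
    """Deterministic decomposition forecast: linear trend + additive seasonality."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = statistics.fmean(series)
    # Ordinary least-squares slope and intercept for the long-term trend.
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    intercept = y_mean - slope * t_mean
    # Seasonal component: average detrended value at each position in the period.
    detrended = [y - (intercept + slope * t) for t, y in enumerate(series)]
    seasonal = [statistics.fmean(detrended[i::period]) for i in range(period)]
    # Extrapolate trend + seasonality beyond the observed series.
    return [intercept + slope * t + seasonal[t % period]
            for t in range(n, n + horizon)]

# Toy noise-free series: trend 0.5/step plus a zero-mean period-4 seasonal pattern.
season = [1.0, -3.0, 3.0, -1.0]
series = [0.5 * t + season[t % 4] for t in range(16)]
forecast = decompose_forecast(series, period=4, horizon=4)
```

On this noise-free input the decomposition recovers trend and seasonality exactly; on real noise-level data the residual left over after removing both components is what the paper's residual analysis quantifies.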
Economical Video Monitoring of Traffic
NASA Technical Reports Server (NTRS)
Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.
1986-01-01
Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. Lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.
Computerized methods for trafficability analysis
NASA Technical Reports Server (NTRS)
Lewandowski, G. M.; Mc Adams, H. T.; Reese, P. A.
1971-01-01
Computer program produces trafficability maps displaying terrain characteristics in digital form for computer analysis. Maps serve as aid to vehicular operation and highway planning based on maneuverability parameters.
Realistic Data-Driven Traffic Flow Animation Using Texture Synthesis.
Chao, Qianwen; Deng, Zhigang; Ren, Jiaping; Ye, Qianqian; Jin, Xiaogang
2017-01-11
We present a novel data-driven approach to populate virtual road networks with realistic traffic flows. Specifically, given a limited set of vehicle trajectories as the input samples, our approach first synthesizes a large set of vehicle trajectories. By taking the spatio-temporal information of traffic flows as a 2D texture, the generation of new traffic flows can be formulated as a texture synthesis process, which is solved by minimizing a newly developed traffic texture energy. The synthesized output captures the spatio-temporal dynamics of the input traffic flows, and the vehicle interactions in it strictly follow traffic rules. After that, we position the synthesized vehicle trajectory data to virtual road networks using a cage-based registration scheme, where a few traffic-specific constraints are enforced to maintain each vehicle's original spatial location and synchronize its motion in concert with its neighboring vehicles. Our approach is intuitive to control and scalable to the complexity of virtual road networks. We validated our approach through many experiments and paired comparison user studies.
An intelligent traffic controller
Kagolanu, K.; Fink, R.; Smartt, H.; Powell, R.; Larsen, E.
1995-12-01
A controller with advanced control logic can significantly improve traffic flows at intersections. In this vein, this paper explores fuzzy rules and algorithms to improve the intersection operation by rationalizing phase changes and green times. The fuzzy logic for control is enhanced by the exploration of neural networks for families of membership functions and for ideal cost functions. The concepts of fuzzy logic control are carried forth into the controller architecture. Finally, the architecture and the modules are discussed. In essence, the control logic and architecture of an intelligent controller are explored.
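The fuzzy phase-change logic described above can be sketched with a minimal Mamdani-style controller; the membership-function breakpoints, rule base, and crisp consequent values below are illustrative assumptions, not the paper's calibrated design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def green_extension(queue, arrivals):
    """Green-time extension (s) from queue length (veh) and arrival rate (veh/min)."""
    # Fuzzify inputs (assumed universes: queue 0-20 veh, arrivals 0-10 veh/min).
    q_low, q_high = tri(queue, -1, 0, 10), tri(queue, 5, 20, 21)
    a_low, a_high = tri(arrivals, -1, 0, 5), tri(arrivals, 2, 10, 11)
    # Rule base (Mamdani min), consequents as crisp extension values.
    rules = [
        (min(q_low, a_low), 0.0),     # light traffic: no extension
        (min(q_low, a_high), 5.0),    # platoon arriving: short extension
        (min(q_high, a_low), 10.0),   # long queue: longer extension
        (min(q_high, a_high), 15.0),  # saturated: maximum extension
    ]
    # Defuzzify by weighted average (centroid of singletons).
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

An empty intersection yields no extension, while a saturated one yields the maximum; intermediate states blend the rules smoothly.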
Zhang, Jie; Draxl, Caroline; Hopson, Thomas; Monache, Luca Delle; Vanvyve, Emilie; Hodge, Bri-Mathias
2015-10-01
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
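The analog ensemble idea, searching past coarse-model forecasts for the closest matches to the current forecast and reusing their paired observations, can be sketched as follows; the Euclidean distance metric and the value of `k` are assumptions for illustration, not the method's full predictor weighting.

```python
def analog_ensemble(target, past_forecasts, past_obs, k=3):
    """Predict with the observations paired to the k past coarse-model
    forecasts most similar to the target forecast (Euclidean distance
    over the predictor tuple)."""
    dists = [
        (sum((t - p) ** 2 for t, p in zip(target, f)) ** 0.5, i)
        for i, f in enumerate(past_forecasts)
    ]
    members = [past_obs[i] for _, i in sorted(dists)[:k]]
    mean = sum(members) / len(members)   # deterministic estimate
    return mean, members                 # member spread gives the probabilistic part
```

The ensemble mean serves as the deterministic prediction, while the spread of the members quantifies uncertainty, all without any additional model runs.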
Deterministic ion beam material adding technology for high-precision optical surfaces.
Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin
2013-02-20
Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.
Enhancing traffic performance in hierarchical DHT system by exploiting network proximity
NASA Astrophysics Data System (ADS)
Zhong, Haifeng; Wu, Wei; Pei, Canhao; Zhang, Chengfeng
2009-08-01
Nowadays P2P systems have become increasingly popular for object distribution and file sharing, and the majority of Internet traffic is generated by P2P file-sharing applications. However, these applications usually ignore the underlying proximity of physical nodes and the regionalization of file access. As a result, they generate a large amount of unnecessary inter-domain transit traffic and increase response latency. In this paper, we propose a new traffic control approach to enhance P2P traffic locality and reduce cross-group transfer. Through analysis, we show that the method substantially improves node transfer efficiency and significantly reduces file access latency compared with native P2P applications.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement.
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-10
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its capacity for teleporting a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the error rejection of the spatial entanglement during the transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication.
On the secure obfuscation of deterministic finite automata.
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
Deterministic remote two-qubit state preparation in dissipative environments
NASA Astrophysics Data System (ADS)
Li, Jin-Fang; Liu, Jin-Ming; Feng, Xun-Li; Oh, C. H.
2016-05-01
We propose a new scheme for efficient remote preparation of an arbitrary two-qubit state, introducing two auxiliary qubits and using two Einstein-Podolsky-Rosen (EPR) states as the quantum channel in a non-recursive way. At variance with all existing schemes, our scheme accomplishes deterministic remote state preparation (RSP) with only one sender and the simplest entangled resource (say, EPR pairs). We construct the corresponding quantum logic circuit using a unitary matrix decomposition procedure and analytically obtain the average fidelity of the deterministic RSP process for dissipative environments. Our studies show that, while the average fidelity gradually decreases to a stable value without any revival in the Markovian regime, it decreases to the same stable value with a dampened revival amplitude in the non-Markovian regime. We also find that the average fidelity's approximate maximal value can be preserved for a long time if the non-Markovian and the detuning conditions are satisfied simultaneously.
Deterministic control of ferroelastic switching in multiferroic materials.
Balke, N; Choudhury, S; Jesse, S; Huijben, M; Chu, Y H; Baddorf, A P; Chen, L Q; Ramesh, R; Kalinin, S V
2009-12-01
Multiferroic materials showing coupled electric, magnetic and elastic orderings provide a platform to explore complexity and new paradigms for memory and logic devices. Until now, the deterministic control of non-ferroelectric order parameters in multiferroics has been elusive. Here, we demonstrate deterministic ferroelastic switching in rhombohedral BiFeO(3) by domain nucleation with a scanning probe. We are able to select among final states that have the same electrostatic energy, but differ dramatically in elastic or magnetic order, by applying voltage to the probe while it is in lateral motion. We also demonstrate the controlled creation of a ferrotoroidal order parameter. The ability to control local elastic, magnetic and toroidal order parameters with an electric field will make it possible to probe local strain and magnetic ordering, and engineer various magnetoelectric, domain-wall-based and strain-coupled devices.
Approaches to implementing deterministic models in a probabilistic framework
Talbott, D.V.
1995-04-01
The increasing use of results from probabilistic risk assessments in the decision-making process makes it ever more important to eliminate simplifications in probabilistic models that might lead to conservative results. One area in which conservative simplifications are often made is modeling the physical interactions that occur during the progression of an accident sequence. This paper demonstrates and compares different approaches for incorporating deterministic models of physical parameters into probabilistic models: parameter range binning, response curves, and integral deterministic models. An example that combines all three approaches in a probabilistic model for the handling of an energetic material (i.e. high explosive, rocket propellant,...) is then presented using a directed graph model.
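Of the three approaches, parameter-range binning is the simplest to illustrate: a deterministic model is run over sampled inputs and its outputs are collected into a discrete probability distribution. This toy sketch assumes equal-width bins and a scalar model output.

```python
def range_bins(model, samples, n_bins=4):
    """Parameter-range binning: run a deterministic model over sampled inputs
    and histogram the outputs into equal-width probability bins."""
    outs = sorted(model(s) for s in samples)
    lo, hi = outs[0], outs[-1]
    width = (hi - lo) / n_bins or 1.0    # guard against a constant model
    probs = [0.0] * n_bins
    for y in outs:
        idx = min(int((y - lo) / width), n_bins - 1)
        probs[idx] += 1.0 / len(outs)
    return probs
```

The resulting bin probabilities can then feed branch probabilities in a probabilistic (e.g. directed-graph) accident model in place of a single conservative point value.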
Autocatalytic genetic networks modeled by piecewise-deterministic Markov processes.
Zeiser, Stefan; Franz, Uwe; Liebscher, Volkmar
2010-02-01
In the present work we propose an alternative approach to model autocatalytic networks, based on piecewise-deterministic Markov processes, originally introduced by Davis in 1984. Such a model allows for random transitions between the active and inactive state of a gene, whereas the subsequent transcription and translation processes are modeled in a deterministic manner. We consider three types of autoregulated networks, each based on a positive feedback loop. It is shown that if the densities of the stationary distributions exist, they are the solutions of a system of equations for a one-dimensional correlated random walk. These stationary distributions are determined analytically. Further, the distributions are analyzed for different simulation periods and different initial concentration values by numerical means. We show that, depending on the network structure, besides a binary response a graded response is also observable.
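A piecewise-deterministic Markov process of this kind can be sketched as a hybrid simulation: the gene switches on and off at random, with an on-rate that grows with the protein level (the positive feedback), while the protein concentration drifts deterministically between jumps. All rate constants below are illustrative assumptions.

```python
import random

def simulate_pdmp(t_end=100.0, dt=0.01, seed=1):
    """Piecewise-deterministic sketch of a self-activating gene: random
    on/off switching with an x-dependent on-rate, deterministic protein
    dynamics dx/dt = synth*[gene on] - deg*x in between."""
    rng = random.Random(seed)
    k_on, k_off, synth, deg = 0.1, 0.5, 2.0, 0.2   # assumed rate constants
    x, on, t, xs = 0.0, False, 0.0, []
    while t < t_end:
        # Deterministic part between jumps.
        x += (synth * (1.0 if on else 0.0) - deg * x) * dt
        # Stochastic part: state-dependent switching, Euler-discretized.
        rate = k_off if on else k_on * (1.0 + x)
        if rng.random() < rate * dt:
            on = not on
        xs.append(x)
        t += dt
    return xs
```

Histogramming many such trajectories approximates the stationary distribution that the paper derives analytically via the correlated random walk equations.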
Bayesian theory of probabilistic forecasting via deterministic hydrologic model
NASA Astrophysics Data System (ADS)
Krzysztofowicz, Roman
1999-09-01
Rational decision making (for flood warning, navigation, or reservoir systems) requires that the total uncertainty about a hydrologic predictand (such as river stage, discharge, or runoff volume) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Hydrologic knowledge is typically embodied in a deterministic catchment model. Fundamentals are presented of a Bayesian forecasting system (BFS) for producing a probabilistic forecast of a hydrologic predictand via any deterministic catchment model. The BFS decomposes the total uncertainty into input uncertainty and hydrologic uncertainty, which are quantified independently and then integrated into a predictive (Bayes) distribution. This distribution results from a revision of a prior (climatic) distribution, is well calibrated, and has a nonnegative ex ante economic value. The BFS is compared with Monte Carlo simulation and "ensemble forecasting" technique, none of which can alone produce a probabilistic forecast that meets requirements of rational decision making, but each can serve as a component of the BFS.
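The decomposition idea can be illustrated with a toy Gaussian version: a climatic prior distribution is revised by a deterministic model forecast carrying Gaussian hydrologic uncertainty. This is a sketch of Bayesian revision in general, assuming Gaussian forms throughout, not the BFS's actual machinery.

```python
def bayes_revise(prior_mean, prior_var, model_forecast, model_var):
    """Revise a climatic (prior) Gaussian with a deterministic model forecast
    whose hydrologic uncertainty is Gaussian; returns the predictive moments."""
    w = prior_var / (prior_var + model_var)            # weight given to the model
    post_mean = prior_mean + w * (model_forecast - prior_mean)
    post_var = prior_var * model_var / (prior_var + model_var)
    return post_mean, post_var
```

The predictive variance is always smaller than either input variance, and a sharper climatic prior pulls the forecast toward climatology, mirroring how the BFS tempers an uncertain model output.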
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines, and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture, and the nanoconduit material defines a nanoconduit that is (i) contiguous with the aperture and (ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
The deterministic SIS epidemic model in a Markovian random environment.
Economou, Antonis; Lopez-Herrero, Maria Jesus
2016-07-01
We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population.
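A minimal sketch of this framework: deterministic SIS dynamics whose infection rate is switched by a two-state continuous-time Markov environment (e.g. seasons). The rates and switching intensity below are assumed values for illustration.

```python
import random

def sis_random_env(i0=0.01, t_end=50.0, dt=0.001, seed=0):
    """Deterministic SIS dynamics di/dt = beta*i*(1-i) - gamma*i, with beta
    switched by a two-state Markov environment (all rates assumed)."""
    rng = random.Random(seed)
    envs = [(0.8, 0.3), (0.2, 0.3)]    # (beta, gamma) in the two environments
    q = 0.1                            # environment switching rate
    env, i, t = 0, i0, 0.0
    while t < t_end:
        beta, gamma = envs[env]
        i += (beta * i * (1.0 - i) - gamma * i) * dt
        if rng.random() < q * dt:      # Euler-discretized Markov switching
            env = 1 - env
        t += dt
    return i
```

In one environment the epidemic is supercritical (beta > gamma) and in the other subcritical, so the fraction of infectives rises and falls as the background state flips.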
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Uniform Deterministic Discrete Method for three dimensional systems
NASA Astrophysics Data System (ADS)
Li, Ben-Wen; Tao, Wen-Quan; Nie, Yu-Hong
1997-06-01
For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical-surface dividing method for a sending area element and the regular icosahedron for a sending volume element can handle the direct-exchange-area computation for any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicate that the UDDM can give very high numerical accuracy.
Glass-ceramics: deterministic microgrinding, lapping, and polishing
NASA Astrophysics Data System (ADS)
Lambropoulos, John C.; Gillman, Birgit E.; Zhou, Yiyang; Jacobs, Stephen D.; Stevens, Harrie J.
1997-10-01
Glass-ceramics are composites consisting of glass and crystalline phases. We report a series of microgrinding and polishing experiments: our first goal is to correlate material mechanical properties with the quality of the resulting surface, determined by surface microroughness and surface grinding-induced residual stresses. Our second goal is to compare deterministic microgrinding and loose abrasive microgrinding in terms of material removal rates and resulting surface quality.
Pathological tremors: Deterministic chaos or nonlinear stochastic oscillators?
NASA Astrophysics Data System (ADS)
Timmer, Jens; Häußler, Siegfried; Lauk, Michael; Lücking, Carl
2000-02-01
Pathological tremors exhibit a nonlinear oscillation that is not strictly periodic. We investigate whether the deviation from periodicity is due to nonlinear deterministic chaotic dynamics or to nonlinear stochastic dynamics. To do so, we apply methods from linear and nonlinear time series analysis to tremor time series. The results of the different methods suggest that the considered types of pathological tremors represent nonlinear stochastic second-order processes.
Deterministic chaos control in neural networks on various topologies
NASA Astrophysics Data System (ADS)
Neto, A. J. F.; Lima, F. W. S.
2017-01-01
Using numerical simulations, we study the control of deterministic chaos in neural networks on various topologies like Voronoi-Delaunay, Barabási-Albert, Small-World networks and Erdös-Rényi random graphs by "pinning" the state of a "special" neuron. We show that the chaotic activity of the networks or graphs, when control is on, can become constant or periodic.
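The pinning idea can be sketched on a ring of coupled logistic maps, holding one "special" node at a constant value while the rest evolve; the ring topology, diffusive coupling, and parameters are toy assumptions rather than the Voronoi-Delaunay or scale-free networks studied in the paper.

```python
def coupled_maps(n=20, steps=500, eps=0.2, r=4.0, pin=None):
    """Ring of diffusively coupled logistic maps; optionally hold ('pin')
    node 0 at a constant value, as in chaos control by pinning."""
    xs = [(0.3 + 0.017 * i) % 1.0 for i in range(n)]   # assumed initial state
    for _ in range(steps):
        f = [r * x * (1.0 - x) for x in xs]            # local chaotic map
        xs = [
            (1 - eps) * f[i] + eps * 0.5 * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)
        ]
        if pin is not None:
            xs[0] = pin                                # override the special node
    return xs
```

Comparing the pinned and free runs (e.g. via the variance of node states over time) shows how clamping a single node can drive the network toward constant or periodic activity.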
Efficient deterministic secure quantum communication protocols using multipartite entangled states
NASA Astrophysics Data System (ADS)
Joy, Dintomon; Surendran, Supin P.; Sabir, M.
2017-06-01
We propose two deterministic secure quantum communication protocols employing three-qubit GHZ-like states and five-qubit Brown states as quantum channels for secure transmission of information in units of two bits and three bits using multipartite teleportation schemes developed here. In these schemes, the sender's capability in selecting quantum channels and the measuring bases leads to improved qubit efficiency of the protocols.
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
NASA Astrophysics Data System (ADS)
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code drafting, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach is that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at a site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation, and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
Large scale traffic simulations
Nagel, K.; Barrett, C.L. |; Rickert, M. |
1997-04-01
Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
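The single-bit cellular-automaton microsimulations mentioned above are typified by rule-184 dynamics, the deterministic traffic rule in which a car advances iff the cell ahead is free. A minimal synchronous update on a ring road:

```python
def step_ca184(road):
    """One synchronous update of the rule-184 traffic CA on a ring:
    a car (1) advances iff the cell ahead is empty (0)."""
    n = len(road)
    return [
        1 if (road[i] == 1 and road[(i + 1) % n] == 1)   # car stays (blocked)
        or (road[i] == 0 and road[(i - 1) % n] == 1)     # car moves in from behind
        else 0
        for i in range(n)
    ]
```

Iterating this map conserves the number of cars and reproduces the free-flow and jammed phases of the fundamental diagram; packing many such cells into machine words is what makes the single-bit supercomputer implementations so fast.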
Automatic Data Traffic Control on DSM Architecture
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry; Kwak, Dochan (Technical Monitor)
2000-01-01
We study data traffic on distributed shared memory machines and conclude that data placement and grouping improve the performance of scientific codes. We present several methods which users can employ to improve data traffic in their codes. We report on the implementation of a tool which detects the code fragments causing data congestion and advises the user on improvements to data routing in these fragments. The capabilities of the tool include deduction of data alignment and affinity from the source code; detection of code constructs having abnormally high cache or TLB misses; and generation of data placement constructs. We demonstrate the capabilities of the tool in experiments with the NAS Parallel Benchmarks and with a simple computational fluid dynamics application, ARC3D.
Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts
Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R.; Bernstam, Elmer V.; Cohen, Trevor
2012-01-01
The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI. PMID:23304369
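The key property, that term vectors can be regenerated on demand rather than stored, can be sketched by deriving each binary term vector deterministically from a hash; the hash choice, dimensionality, and majority-vote superposition below are illustrative assumptions, not the paper's exact construction.

```python
import hashlib

def term_vector(term, dim=64):
    """Deterministic binary vector for a term, derived from a hash so that
    no table of random term vectors needs to be stored (dim <= 256 here)."""
    h = int.from_bytes(hashlib.sha256(term.encode("utf-8")).digest(), "big")
    return [(h >> i) & 1 for i in range(dim)]

def doc_vector(terms, dim=64):
    """Binary document vector: bitwise majority vote over its term vectors."""
    counts = [0] * dim
    for term in terms:
        for i, bit in enumerate(term_vector(term, dim)):
            counts[i] += bit
    return [1 if 2 * c >= len(terms) else 0 for c in counts]

def hamming_sim(u, v):
    """Fraction of agreeing bits; a cheap similarity for nearest-neighbor search."""
    return sum(a == b for a, b in zip(u, v)) / len(u)
```

Because any term's vector can be recomputed from the term alone, indexing can be distributed across machines without synchronizing a shared store of random vectors.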
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative display method that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are textured and displayed with the information from the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower than that of deterministic and other standard statistical techniques. PMID:25650281
Deterministic Migration-Based Separation of White Blood Cells.
Kim, Byeongyeon; Choi, Young Joon; Seo, Hyekyung; Shin, Eui-Cheol; Choi, Sungyoung
2016-10-01
Functional and phenotypic analyses of peripheral white blood cells provide useful clinical information. However, separation of white blood cells from peripheral blood requires a time-consuming, inconvenient process, and thus analyses of separated white blood cells are limited in clinical settings. To overcome this limitation, a microfluidic separation platform is developed to enable deterministic migration of white blood cells, directing the cells into designated positions according to a ridge pattern. The platform uses slanted ridge structures on the channel top to induce the deterministic migration, which allows efficient and high-throughput separation of white blood cells from unprocessed whole blood. The extent of the deterministic migration under various rheological conditions is explored, enabling highly efficient migration of white blood cells in whole blood and achieving high-throughput separation of the cells (processing 1 mL of whole blood in less than 7 min). In the separated cell population, the composition of lymphocyte subpopulations is well preserved, and T cells secrete cytokines without any functional impairment. On the basis of these results, this microfluidic platform is a promising tool for the rapid enrichment of white blood cells, and it is useful for functional and phenotypic analyses of peripheral white blood cells.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
How Does Quantum Uncertainty Emerge from Deterministic Bohmian Mechanics?
NASA Astrophysics Data System (ADS)
Solé, A.; Oriols, X.; Marian, D.; Zanghì, N.
2016-10-01
Bohmian mechanics is a theory that provides a consistent explanation of quantum phenomena in terms of point particles whose motion is guided by the wave function. In this theory, the state of a system of particles is defined by the actual positions of the particles and the wave function of the system; and the state of the system evolves deterministically. Thus, the Bohmian state can be compared with the state in classical mechanics, which is given by the positions and momenta of all the particles, and which also evolves deterministically. However, while in classical mechanics it is usually taken for granted and considered unproblematic that the state is, at least in principle, measurable, this is not the case in Bohmian mechanics. Due to the linearity of the quantum dynamical laws, one essential component of the Bohmian state, the wave function, is not directly measurable. Moreover, it turns out that the measurement of the other component of the state — the positions of the particles — must be mediated by the wave function; a fact that in turn implies that the positions of the particles, though measurable, are constrained by absolute uncertainty. This is the key to understanding how Bohmian mechanics, despite being deterministic, can account for all quantum predictions, including quantum randomness and uncertainty.
Demographic noise can reverse the direction of deterministic selection
Constable, George W. A.; Rogers, Tim; McKane, Alan J.; Tarnita, Corina E.
2016-01-01
Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to r−K theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085
Probabilistic vs deterministic views in facing natural hazards
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Coviello, Velio
2015-04-01
Natural hazards can be mitigated through active or passive measures. Among the latter, Early Warning Systems (EWSs) are playing an increasingly significant role. In particular, a growing number of studies investigate the reliability of landslide EWSs, their comparability to alternative protection measures and their cost-effectiveness. EWSs, however, inevitably and intrinsically imply the concept of probability of occurrence and/or probability of error. Science long ago accepted and integrated the probabilistic nature of reality and its phenomena. The same cannot be said of other fields of knowledge, such as law or politics, with which scientists sometimes have to interact. These disciplines are in fact still linked to more deterministic views of life. The same is true of public opinion, which often demands, or even expects, deterministic answers to its needs. So, as an example, it might be easy for people to feel completely safe because an EWS has been installed. It is also easy for an administrator or a politician to contribute to spreading this false feeling, together with the idea of having dealt with the problem and done something definitive to face it. May geoethics play a role in creating a link between the probabilistic world of nature and science and society's tendency toward a more deterministic view of things? Answering this question could help scientists feel more confident in planning and performing their research activities.
Deterministic form correction of extreme freeform optical surfaces
NASA Astrophysics Data System (ADS)
Lynch, Timothy P.; Myer, Brian W.; Medicus, Kate; DeGroote Nelson, Jessica
2015-10-01
The blistering pace of recent technological advances has led lens designers to rely increasingly on freeform optical components as crucial pieces of their designs. As these freeform components increase in geometrical complexity and continue to deviate further from traditional optical designs, the optical manufacturing community must rethink their fabrication processes in order to keep pace. To meet these new demands, Optimax has developed a variety of new deterministic freeform manufacturing processes. Combining traditional optical fabrication techniques with cutting edge technological innovations has yielded a multifaceted manufacturing approach that can successfully handle even the most extreme freeform optical surfaces. In particular, Optimax has placed emphasis on refining the deterministic form correction process. By developing many of these procedures in house, changes can be implemented quickly and efficiently in order to rapidly converge on an optimal manufacturing method. Advances in metrology techniques allow for rapid identification and quantification of irregularities in freeform surfaces, while deterministic correction algorithms precisely target features on the part and drastically reduce overall correction time. Together, these improvements have yielded significant advances in the realm of freeform manufacturing. With further refinements to these and other aspects of the freeform manufacturing process, the production of increasingly radical freeform optical components is quickly becoming a reality.
Non-equilibrium Thermodynamics of Piecewise Deterministic Markov Processes
NASA Astrophysics Data System (ADS)
Faggionato, A.; Gabrielli, D.; Ribezzi Crivellari, M.
2009-10-01
We consider a class of stochastic dynamical systems, called piecewise deterministic Markov processes, with states ( x, σ)∈Ω×Γ, Ω being a region in ℝ d or the d-dimensional torus, Γ being a finite set. The continuous variable x follows a piecewise deterministic dynamics, the discrete variable σ evolves by a stochastic jump dynamics and the two resulting evolutions are fully-coupled. We study stationarity, reversibility and time-reversal symmetries of the process. Increasing the frequency of the σ-jumps, the system behaves asymptotically as deterministic and we investigate the structure of its fluctuations (i.e. deviations from the asymptotic behavior), recovering in a non Markovian frame results obtained by Bertini et al. (Phys. Rev. Lett. 87(4):040601, 2001; J. Stat. Phys. 107(3-4):635-675, 2002; J. Stat. Mech. P07014, 2007; Preprint available online at http://www.arxiv.org/abs/0807.4457, 2008), in the context of Markovian stochastic interacting particle systems. Finally, we discuss a Gallavotti-Cohen-type symmetry relation with involution map different from time-reversal.
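A minimal simulation illustrates the class of processes studied: a continuous variable following a deterministic flow between stochastic jumps of a discrete variable. The relaxation dynamics, two-state Γ, and jump rate below are illustrative choices, not taken from the paper:

```python
import math, random

def simulate_pdmp(t_end, lam, targets=(0.0, 1.0), x0=0.5, seed=1):
    """Piecewise deterministic Markov process: between jumps, x relaxes
    as dx/dt = targets[sigma] - x (integrated exactly), while the discrete
    variable sigma flips 0 <-> 1 at rate lam."""
    rng = random.Random(seed)
    t, x, sigma = 0.0, x0, 0
    path = [(t, x, sigma)]
    while t < t_end:
        dt = min(rng.expovariate(lam), t_end - t)  # waiting time to next jump
        a = targets[sigma]
        x = a + (x - a) * math.exp(-dt)            # exact deterministic flow
        t += dt
        sigma = 1 - sigma                          # stochastic jump
        path.append((t, x, sigma))
    return path

path = simulate_pdmp(t_end=50.0, lam=10.0)
```

Increasing `lam` here mimics the high-jump-frequency limit discussed in the paper: x hovers ever closer to the average of the two deterministic flows.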
A deterministic version of Pollard's p-1 algorithm
NASA Astrophysics Data System (ADS)
Zrałek, Bartosz
2010-01-01
In this article we present applications of smooth numbers to the unconditional derandomization of some well-known integer factoring algorithms. We begin with Pollard's p-1 algorithm, which finds in random polynomial time the prime divisors p of an integer n such that p-1 is smooth. We show that these prime factors can be recovered in deterministic polynomial time. We further generalize this result to give a partial derandomization of the k-th cyclotomic method of factoring (k ≥ 2) devised by Bach and Shallit. We also investigate reductions of factoring to computing Euler's totient function ϕ. We point out some explicit sets of integers n that are completely factorable in deterministic polynomial time given ϕ(n). These sets consist, roughly speaking, of products of primes p satisfying, with the exception of at most two, certain conditions somewhat weaker than the smoothness of p-1. Finally, we prove that O(ln n) oracle queries for values of ϕ are sufficient to completely factor any integer n in less than exp((1+o(1))(ln n)^{1/3}(ln ln n)^{2/3}) deterministic time.
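The starting point of the derandomization, the classical Pollard p-1 method, can be sketched in a few lines; the smoothness bound and the test number below are illustrative:

```python
import math

def pollard_p_minus_1(n, bound=100):
    """Classical Pollard p-1: succeeds when n has a prime factor p such
    that p-1 is a product of small primes (here, in the simple form that
    exponentiates successively by 2, 3, ..., bound)."""
    a = 2
    for j in range(2, bound + 1):
        a = pow(a, j, n)                  # a = 2^(2*3*...*j) mod n
        d = math.gcd(a - 1, n)
        if 1 < d < n:
            return d
    return None

# 299 = 13 * 23, and 13 - 1 = 12 = 2^2 * 3 is very smooth:
print(pollard_p_minus_1(299))             # -> 13
```

Once the accumulated exponent is a multiple of p-1, Fermat's little theorem forces a ≡ 1 (mod p), so gcd(a-1, n) reveals p.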
Extinction thresholds in deterministic and stochastic epidemic models.
Allen, Linda J S; Lahodny, Glenn E
2012-01-01
The basic reproduction number, ℛ0, one of the most well-known thresholds in deterministic epidemic theory, predicts a disease outbreak if ℛ0 > 1. In stochastic epidemic theory, there are also thresholds that predict a major outbreak. In the case of a single infectious group, if ℛ0 > 1 and i infectious individuals are introduced into a susceptible population, then the probability of a major outbreak is approximately 1 - (1/ℛ0)^i. With multiple infectious groups from which the disease could emerge, this result no longer holds. Stochastic thresholds for multiple groups depend on the number of individuals within each group, i_j, j = 1, …, n, and on the probability of disease extinction for each group, q_j. It follows from multitype branching processes that the probability of a major outbreak is approximately 1 - q_1^{i_1} q_2^{i_2} ⋯ q_n^{i_n}. In this investigation, we summarize some of the deterministic and stochastic threshold theory, illustrate how to calculate the stochastic thresholds, and derive some new relationships between the deterministic and stochastic thresholds.
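The stochastic thresholds quoted above are straightforward to evaluate. A minimal sketch, assuming the multitype branching-process approximation stated in the abstract:

```python
def outbreak_probability(q, i):
    """P(major outbreak) ~ 1 - prod_j q_j**i_j, the multitype
    branching-process approximation: q[j] is the extinction probability
    for an infection starting in group j, i[j] the number of infectious
    individuals initially in group j."""
    p_extinct = 1.0
    for qj, ij in zip(q, i):
        p_extinct *= qj ** ij
    return 1.0 - p_extinct

# The single-group case reduces to 1 - (1/R0)**i:
R0, i0 = 2.0, 3
print(outbreak_probability([1.0 / R0], [i0]))   # -> 0.875
```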
Traffic-driven epidemic spreading in correlated networks
NASA Astrophysics Data System (ADS)
Yang, Han-Xin; Tang, Ming; Lai, Ying-Cheng
2015-06-01
In spite of the extensive previous efforts on traffic dynamics and epidemic spreading in complex networks, the problem of traffic-driven epidemic spreading on correlated networks has not been addressed. Interestingly, we find that the epidemic threshold, a fundamental quantity underlying the spreading dynamics, exhibits a nonmonotonic behavior in that it can be minimized for some critical value of the assortativity coefficient, a parameter characterizing the network correlation. To understand this phenomenon, we use the degree-based mean-field theory to calculate the traffic-driven epidemic threshold for correlated networks. The theory predicts that the threshold is inversely proportional to the packet-generation rate and the largest eigenvalue of the betweenness matrix. We obtain consistency between theory and numerics. Our results may provide insights into the important problem of controlling and/or harnessing real-world epidemic spreading dynamics driven by traffic flows.
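The quoted mean-field result (threshold inversely proportional to the packet-generation rate and the largest eigenvalue of the betweenness matrix) can be sketched numerically. The toy betweenness matrix below is an illustrative assumption, and the omitted proportionality constant is set to 1:

```python
def largest_eigenvalue(M, iters=500):
    """Power iteration for the dominant eigenvalue of a nonnegative matrix."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

def epidemic_threshold(betweenness_matrix, packet_rate):
    """beta_c ~ 1 / (packet_rate * Lambda_max), per the mean-field result
    quoted in the abstract (proportionality constant omitted)."""
    return 1.0 / (packet_rate * largest_eigenvalue(betweenness_matrix))

B = [[0.0, 2.0],
     [2.0, 0.0]]            # toy symmetric "betweenness matrix"
beta_c = epidemic_threshold(B, packet_rate=0.5)
```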
Design of automated system for management of arrival traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1989-01-01
The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. The discussion focuses on the design of two of these tools: the Descent Advisor, which provides automation tools for managing descent traffic, and the Traffic Management Advisor, which generates optimum landing schedules. The algorithms, automation modes, and graphical interfaces incorporated in the design are described.
Steering Kids to Traffic Safety.
ERIC Educational Resources Information Center
PTA Today, 1991
1991-01-01
Guidelines to help parents explain traffic safety to children cover the following: school bus safety (e.g., remain seated, do not shout); walking (e.g., obey traffic signals, cross at crosswalks); driving (e.g., wear seatbelts, enter and exit from the curb side); and biking (e.g., wear helmets, do not ride at night). (SM)
Probabilistic description of traffic flow
NASA Astrophysics Data System (ADS)
Mahnke, R.; Kaupužs, J.; Lubashevsky, I.
2005-03-01
A stochastic description of traffic flow, called probabilistic traffic flow theory, is developed. The general master equation is applied to relatively simple models to describe the formation and dissolution of traffic congestions. Our approach is mainly based on spatially homogeneous systems like periodically closed circular rings without on- and off-ramps. We consider a stochastic one-step process of growth or shrinkage of a car cluster (jam). As generalization we discuss the coexistence of several car clusters of different sizes. The basic problem is to find a physically motivated ansatz for the transition rates of the attachment and detachment of individual cars to a car cluster consistent with the empirical observations in real traffic. The emphasis is put on the analogy with first-order phase transitions and nucleation phenomena in physical systems like supersaturated vapour. The results are summarized in the flux-density relation, the so-called fundamental diagram of traffic flow, and compared with empirical data. Different regimes of traffic flow are discussed: free flow, congested mode as stop-and-go regime, and heavy viscous traffic. The traffic breakdown is studied based on the master equation as well as the Fokker-Planck approximation to calculate mean first passage times or escape rates. Generalizations are developed to allow for on-ramp effects. The calculated flux-density relation and characteristic breakdown times coincide with empirical data measured on highways. Finally, a brief summary of the stochastic cellular automata approach is given.
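The one-step growth and shrinkage process for a single car cluster can be sketched with a Gillespie-type simulation of the master equation. The constant attachment and detachment rates below are illustrative stand-ins for the physically motivated, cluster-size-dependent ansatz discussed in the paper:

```python
import random

def simulate_cluster(w_plus, w_minus, n0=10, t_end=100.0, seed=2):
    """Gillespie simulation of a one-step master equation for the size n
    of a single car cluster (jam): attachment at rate w_plus, detachment
    at rate w_minus while the cluster is nonempty."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    history = [(t, n)]
    while t < t_end:
        rate = w_plus + (w_minus if n > 0 else 0.0)
        t += rng.expovariate(rate)          # time to the next event
        if rng.random() < w_plus / rate:
            n += 1                          # a car attaches to the jam
        else:
            n -= 1                          # the leading car escapes
        history.append((t, n))
    return history

hist = simulate_cluster(w_plus=0.4, w_minus=0.5)
```

With w_plus < w_minus the cluster tends to dissolve; reversing the inequality makes it grow, the analogue of nucleation beyond the critical cluster size.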
Traffic Safety for Special Children
ERIC Educational Resources Information Center
Wilson, Val; MacKenzie, R. A.
1974-01-01
In a 6 weeks' unit on traffic education using flannel graphs, filmstrips and models, 12 special class students (IQ 55-82) ages 7- to 11-years-old learned six basic skills including crossing a road, obeying traffic lights and walking on country roads. (CL)
Traffic and Transportation (Business Technology).
ERIC Educational Resources Information Center
North Carolina State Dept. of Community Colleges, Raleigh.
The preemployment, six-quarter curriculum is for use in technical institutes and community colleges. Its purpose is to provide training in new techniques and understanding of the latest state and federal regulations applicable to traffic and transportation. Graduates of this curriculum may seek career opportunities as traffic representatives, claims…
Traffic Calming: A Social Issue
ERIC Educational Resources Information Center
Crouse, David W.
2004-01-01
Substantial urban growth fueled by a strong economy often results in heavy traffic thus making streets less hospitable. Traffic calming is one response to the pervasiveness of the automobile. The issues concern built environments and involve multiple actors reflecting different interests. The issues are rarely technical and involve combinations of…
Stochastic Processes in Physics: Deterministic Origins and Control
NASA Astrophysics Data System (ADS)
Demers, Jeffery
Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments, which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit in which a billiard system is indistinguishable from a stochastic system, and in which the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. Finally, we study the problem of stabilizing a stochastic Brownian particle with
The Effect of Damaged Vehicles Evacuation on Traffic Flow Behavior
NASA Astrophysics Data System (ADS)
Mhirech, Abdelaziz; Ez-Zahraouy, Hamid; Ismaili, Assia Alaoui
The effect of damaged-car evacuation on traffic flow behavior is investigated in the one-dimensional deterministic Nagel-Schreckenberg model, using parallel dynamics. A realistic model applied to the cars involved in collisions is considered: we suppose that the damaged cars must be removed from the ring with a probability Pexit. This investigation enables us to understand how the combination of the two probabilities, namely Pcol and Pexit, acts on density and current. It is found that the current and density at the steady state depend strongly on the initial density of cars in the ring. For intermediate initial density ρi, the current J decreases when either Pexit or Pcol increases. For high initial density, J increases, passes through a maximum, and then decreases for large values of Pexit. Furthermore, the current can decrease or increase with the collision probability depending on the initial density.
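For reference, one parallel-update step of the underlying deterministic Nagel-Schreckenberg model (without the collision and removal mechanisms studied in the paper, and without the random braking of the stochastic version) can be sketched as follows; the ring size, car count, and speed limit are illustrative:

```python
def nasch_step(positions, velocities, v_max, road_length):
    """One parallel update of the deterministic Nagel-Schreckenberg model
    on a periodic ring: accelerate by 1 up to v_max, then brake to the
    gap to the car ahead; finally all cars move simultaneously."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_v = list(velocities)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_length
        new_v[i] = min(velocities[i] + 1, v_max, gap)
    new_p = [(positions[i] + new_v[i]) % road_length for i in range(len(positions))]
    return new_p, new_v

# Three cars, evenly spaced on a ring of 12 cells, v_max = 2:
p, v = [0, 4, 8], [0, 0, 0]
for _ in range(10):
    p, v = nasch_step(p, v, v_max=2, road_length=12)
```

At this low density the system settles into free flow with every car at v_max, the deterministic steady state below the jamming transition.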
Systematic and deterministic graph minor embedding for Cartesian products of graphs
NASA Astrophysics Data System (ADS)
Zaribafiyan, Arman; Marchand, Dominic J. J.; Changiz Rezaei, Seyed Saeed
2017-05-01
The limited connectivity of current and next-generation quantum annealers motivates the need for efficient graph minor embedding methods. These methods allow non-native problems to be adapted to the target annealer's architecture. The overhead of the widely used heuristic techniques is quickly proving to be a significant bottleneck for solving real-world applications. To alleviate this difficulty, we propose a systematic and deterministic embedding method, exploiting the structures of both the specific problem and the quantum annealer. We focus on the specific case of the Cartesian product of two complete graphs, a regular structure that occurs in many problems. We decompose the embedding problem by first embedding one of the factors of the Cartesian product in a repeatable pattern. The resulting simplified problem comprises the placement and connecting together of these copies to reach a valid solution. Aside from the obvious advantage of a systematic and deterministic approach with respect to speed and efficiency, the embeddings produced are easily scaled for larger processors and show desirable properties for the number of qubits used and the chain length distribution. We conclude by briefly addressing the problem of circumventing inoperable qubits by presenting possible extensions of our method.
Godt, J.W.; Baum, R.L.; Savage, W.Z.; Salciarini, D.; Schulz, W.H.; Harp, E.L.
2008-01-01
Application of transient deterministic shallow landslide models over broad regions for hazard and susceptibility assessments requires information on rainfall, topography and the distribution and properties of hillside materials. We survey techniques for generating the spatial and temporal input data for such models and present an example using a transient deterministic model that combines an analytic solution to assess the pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Pore-pressures and factors of safety are computed on a cell-by-cell basis and can be displayed or manipulated in a grid-based GIS. Input data are high-resolution (1.8??m) topographic information derived from LiDAR data and simple descriptions of initial pore-pressure distribution and boundary conditions for a study area north of Seattle, Washington. Rainfall information is taken from a previously defined empirical rainfall intensity-duration threshold and material strength and hydraulic properties were measured both in the field and laboratory. Results are tested by comparison with a shallow landslide inventory. Comparison of results with those from static infinite-slope stability analyses assuming fixed water-table heights shows that the spatial prediction of shallow landslide susceptibility is improved using the transient analyses; moreover, results can be depicted in terms of the rainfall intensity and duration known to trigger shallow landslides in the study area.
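The cell-by-cell stability calculation can be sketched as below. This is a minimal sketch assuming the standard Iverson/TRIGRS-type infinite-slope factor of safety with a transient pore-pressure head ψ; all parameter values are illustrative, not taken from the study:

```python
import math

def factor_of_safety(slope_deg, depth, cohesion, phi_deg,
                     psi, gamma_s=20e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety with pore-pressure head psi (m).
    SI units throughout: cohesion in Pa, depth in m, unit weights in N/m^3.
    FS = tan(phi)/tan(a) + [c - psi*gamma_w*tan(phi)] / (gamma_s*z*sin(a)*cos(a))."""
    a = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(a)
    cohesive = (cohesion - psi * gamma_w * math.tan(phi)) / (
        gamma_s * depth * math.sin(a) * math.cos(a))
    return frictional + cohesive

# Rising pore pressure during rainfall infiltration drives FS below 1:
dry = factor_of_safety(35.0, 1.5, 4e3, 33.0, psi=0.0)
wet = factor_of_safety(35.0, 1.5, 4e3, 33.0, psi=1.0)
```

With these illustrative numbers the slope is stable when dry (FS > 1) and predicted to fail after a 1 m rise in pressure head (FS < 1), the behavior the transient analyses capture on a per-cell basis.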
Barcelo, Steven J; Kim, Ansoon; Wu, Wei; Li, Zhiyong
2012-07-24
Deterministic patterning or assembly of nanoparticles often requires complex processes that are not easily incorporated into system architectures of arbitrary design. We have developed a technique to fabricate deterministic nanoparticle assemblies using simple and inexpensive nanoimprinting equipment and procedures. First, a metal film is evaporated onto flexible polymer pillars made by nanoimprinting. The resulting metal caps on top of the pillars can be pulled into assemblies of arbitrary design by collapsing the pillars in a well-controlled manner. The nanoparticle assemblies are then transferred from the pillars onto a new substrate via nanoimprinting with the aid of either cold welding or chemical bonding. Using this technique, a variety of patterned nanoparticle assemblies of Au and Ag with a critical dimension less than 2 nm were fabricated and transferred to silicon-, glass-, and metal-coated substrates. Separating the nanostructure assembly from the final architecture removes significant design constraints from devices incorporating nanoparticle assemblies. The application of this process as a technique for generating surface-enhanced Raman spectroscopy substrates is presented.
Varouchakis, Epsilon A; Hristopulos, D T
2013-01-01
In sparsely monitored basins, accurate mapping of the spatial variability of groundwater level requires the interpolation of scattered data. This paper presents a comparison of deterministic interpolation methods, i.e. inverse distance weight (IDW) and minimum curvature (MC), with stochastic methods, i.e. ordinary kriging (OK), universal kriging (UK) and kriging with Delaunay triangulation (DK). The study area is the Mires Basin of Mesara Valley in Crete (Greece). This sparsely sampled basin has limited groundwater resources which are vital for the island's economy; spatial variations of the groundwater level are important for developing management and monitoring strategies. We evaluate the performance of the interpolation methods with respect to different statistical measures. The Spartan variogram family is applied for the first time to hydrological data and is shown to be optimal with respect to stochastic interpolation of this dataset. The three stochastic methods (OK, DK and UK) perform overall better than the deterministic counterparts (IDW and MC). DK, which is herein for the first time applied to hydrological data, yields the most accurate cross-validation estimate for the lowest value in the dataset. OK and UK lead to smooth isolevel contours, whilst DK and IDW generate more edges. The stochastic methods deliver estimates of prediction uncertainty which becomes highest near the southeastern border of the basin.
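Of the deterministic interpolators compared, IDW is simple enough to sketch directly; the data points, head values, and power parameter below are illustrative:

```python
def idw(points, values, x, y, power=2.0):
    """Inverse distance weighted estimate at (x, y) from scattered data:
    a weighted mean with weights 1/d**power, exact at the data points."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return v                     # exact interpolation at a sample
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
heads = [10.0, 12.0, 14.0]
estimate = idw(pts, heads, 0.5, 0.5)
```

Unlike the kriging variants, this estimator carries no model of spatial correlation and hence no prediction uncertainty, one reason the stochastic methods perform better on this dataset.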
NASA Astrophysics Data System (ADS)
Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Ruiz de la Cruz, A.; Solis, J.
2014-02-01
Femtosecond laser pulse temporal shaping techniques have led to important advances in different research fields like photochemistry, laser physics, non-linear optics, biology, or materials processing. This success is partly related to the use of optimal control algorithms. Due to the high dimensionality of the solution and control spaces, evolutionary algorithms are extensively applied and, among them, genetic ones have reached the status of a standard adaptive strategy. Still, their use is normally accompanied by a reduction of the problem complexity by different modalities of parameterization of the spectral phase. Exploiting Rabitz and co-authors' ideas about the topology of quantum landscapes, in this work we analyze the optimization of two different problems under a deterministic approach, using a multiple one-dimensional search (MODS) algorithm. In the first case we explore the determination of the optimal phase mask required for generating arbitrary temporal pulse shapes and compare the performance of the MODS algorithm to the standard iterative Gerchberg-Saxton algorithm. Based on the good performance achieved, the same method has been applied for optimizing two-photon absorption starting from temporally broadened laser pulses, or from laser pulses temporally and spectrally distorted by non-linear absorption in air, obtaining similarly good results which confirm the validity of the deterministic search approach.
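The iterative Gerchberg-Saxton algorithm used here as the performance baseline can be sketched in a toy discrete form: alternate between the time and frequency domains, each time keeping the computed phase but imposing the known amplitude. The 8-sample grids and flat spectral amplitude are illustrative, and a hand-rolled DFT stands in for a library FFT:

```python
import cmath, math

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(n^2)), adequate for a sketch."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def gerchberg_saxton(spec_amp, target_amp, iters=200):
    """Retrieve a spectral phase mask mapping a known spectral amplitude
    onto a target temporal amplitude profile."""
    n = len(spec_amp)
    spec = [complex(a) for a in spec_amp]          # start with a flat phase
    for _ in range(iters):
        field = dft(spec, inverse=True)            # frequency -> time
        field = [target_amp[k] * cmath.exp(1j * cmath.phase(field[k]))
                 for k in range(n)]                # keep phase, impose target amplitude
        spec = dft(field)                          # time -> frequency
        spec = [spec_amp[k] * cmath.exp(1j * cmath.phase(spec[k]))
                for k in range(n)]                 # keep phase, impose known spectrum
    return [cmath.phase(s) for s in spec]          # the phase mask

phase = gerchberg_saxton([1.0] * 8, [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
```

The deterministic MODS search discussed in the paper replaces this projection-style iteration with direct one-dimensional line searches over the phase values.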
Sub-surface single ion detection in diamond: A path for deterministic color center creation
NASA Astrophysics Data System (ADS)
Abraham, John; Aguirre, Brandon; Pacheco, Jose; Camacho, Ryan; Bielejec, Edward; Sandia National Laboratories Team
Deterministic single color center creation remains a critical milestone for the integrated use of diamond color centers. It depends on three components: focused ion beam implantation to control the location, yield improvement to control the activation, and single ion implantation to control the number of implanted ions. A surface electrode detector has been fabricated on diamond where the electron hole pairs generated during ion implantation are used as the detection signal. Results will be presented demonstrating single ion detection. The detection efficiency of the device will be described as a function of implant energy and device geometry. It is anticipated that the controlled introduction of single dopant atoms in diamond will provide a basis for deterministic single localized color centers. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy Office of Science. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Delimata, Paweł; Marszał-Paszek, Barbara; Moshkov, Mikhail; Paszek, Piotr; Skowron, Andrzej; Suraj, Zbigniew
We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but direct generation of the rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that, by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality.
A JESD204B-Compliant Architecture for Remote and Deterministic-Latency Operation
NASA Astrophysics Data System (ADS)
Giordano, Raffaele; Izzo, Vincenzo; Perrella, Sabrina; Aloisio, Alberto
2017-06-01
High-speed analog-to-digital converters (ADCs) are key components in a huge variety of systems, including trigger and data acquisition (TDAQ) systems of nuclear and subnuclear physics experiments. Over the last decades, the sample rate and dynamic range of high-speed ADCs have undergone continuous growth, which has required the development of suitable interface protocols, such as the new JESD204B serial interface protocol. In this paper, we present an original JESD204B-compliant architecture we designed, which is able to operate an ADC in a remote fashion. Our design includes a deterministic-latency high-speed serial link, which is the only connection between the local and remote logic of the architecture and which preserves the deterministic timing features of the protocol. By means of our solution, it is possible to read data out of several converters, even remote to each other, and keep them operating synchronously. Our link also supports forward error correction (FEC) capabilities, in view of operation in radiation areas (e.g., on-detector in TDAQ systems). We describe an implementation of our concept in a latest-generation field programmable gate array (Xilinx Kintex-7 325T) for reading data from a high-speed JESD204B-compliant ADC. We present measurements of the jitter of JESD204B timing-critical signals forwarded over the link and of latency determinism of the FEC-protected link.
Systematic and Deterministic Graph-Minor Embedding of Cartesian Products of Complete Graphs
NASA Astrophysics Data System (ADS)
Zaribafiyan, Arman; Marchand, Dominic J. J.; Changiz Rezaei, Seyed Saeed
The limited connectivity of current and next-generation quantum annealers motivates the need for efficient graph-minor embedding methods. The overhead of the widely used heuristic techniques is quickly proving to be a significant bottleneck for real-world applications. To alleviate this obstacle, we propose a systematic deterministic embedding method that exploits the structures of both the input graph of the specific combinatorial optimization problem and the quantum annealer. We focus on the specific case of the Cartesian product of two complete graphs, a regular structure that occurs in many problems. We first divide the problem by embedding one of the factors of the Cartesian product in a repeatable unit. The resulting simplified problem consists of placing copies of this unit and connecting them together appropriately. Aside from the obvious speed and efficiency advantages of a systematic deterministic approach, the embeddings produced can be easily scaled for larger processors and show desirable properties with respect to the number of qubits used and the chain length distribution.
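The structure being embedded can be constructed directly. A short sketch (standard graph-theoretic definitions, not the authors' code) builds the Cartesian product of two complete graphs, in which every vertex has (m-1)+(n-1) neighbours:

```python
# Sketch: the Cartesian product K_m x K_n (the "rook's graph").
# Vertices are pairs (i, j); two vertices are adjacent when exactly one
# coordinate is equal and the other differs, since both factors are complete.
from itertools import product

def cartesian_product_complete(m, n):
    vertices = list(product(range(m), range(n)))
    adj = {v: set() for v in vertices}
    for u in vertices:
        for w in vertices:
            same_row, same_col = u[0] == w[0], u[1] == w[1]
            if same_row != same_col:  # exactly one coordinate equal: adjacent
                adj[u].add(w)
    return adj

adj = cartesian_product_complete(3, 4)
deg = len(next(iter(adj.values())))
print(deg)  # (m-1) + (n-1) = 5
```

The regularity visible here (every vertex has the same degree, and the graph decomposes into identical row/column units) is what makes a repeatable embedding unit possible.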
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
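For illustration, the Area-ratio (Sjöstrand) technique mentioned above can be sketched numerically: in a pulsed subcritical system the reactivity in dollars is ρ($) = -A_prompt/A_delayed, the ratio of the prompt-decay area to the delayed-neutron area under the detector response. The values below are synthetic, not GUINEVERE data:

```python
# Hedged numeric sketch of the area-ratio method on a synthetic pulse response:
# prompt exponential decay sitting on a constant delayed-neutron background.
import math

def area_ratio_reactivity(times, counts, background):
    """Trapezoidal integration; prompt area = total area minus delayed area."""
    total = 0.0
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        total += 0.5 * (counts[k] + counts[k - 1]) * dt
    delayed = background * (times[-1] - times[0])
    prompt = total - delayed
    return -prompt / delayed  # reactivity in dollars

alpha, A0, bg = 250.0, 1.0e4, 50.0  # prompt decay constant (1/s), amplitude, background
T = 0.1                             # pulse period (s), long enough for the prompt decay to die out
times = [i * T / 20000 for i in range(20001)]
counts = [A0 * math.exp(-alpha * t) + bg for t in times]

rho = area_ratio_reactivity(times, counts, bg)
# Analytic check: A_prompt ~ A0/alpha = 40, A_delayed = bg*T = 5, so rho ~ -8 $
print(round(rho, 2))  # -8.0
```

In practice the delayed background is itself estimated from the tail of the measured response rather than known exactly, which is one source of the uncertainties discussed above.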
Deterministic time-reversible thermostats: chaos, ergodicity, and the zeroth law of thermodynamics
NASA Astrophysics Data System (ADS)
Patra, Puneet Kumar; Sprott, Julien Clinton; Hoover, William Graham; Griswold Hoover, Carol
2015-09-01
The relative stability and ergodicity of deterministic time-reversible thermostats, both singly and in coupled pairs, are assessed through their Lyapunov spectra. Five types of thermostat are coupled to one another through a single Hooke's-law harmonic spring. The resulting dynamics shows that three specific thermostat types, Hoover-Holian, Ju-Bulgac, and Martyna-Klein-Tuckerman, have very similar Lyapunov spectra in their equilibrium four-dimensional phase spaces and when coupled in equilibrium or nonequilibrium pairs. All three of these oscillator-based thermostats are shown to be ergodic, with smooth analytic Gaussian distributions in their extended phase spaces (coordinate, momentum, and two control variables). Evidently these three ergodic and time-reversible thermostat types are particularly useful as statistical-mechanical thermometers and thermostats. Each of them generates Gibbs' universal canonical distribution internally as well as for systems to which they are coupled. Thus they obey the zeroth law of thermodynamics, as a good heat bath should. They also provide dissipative heat flow with relatively small nonlinearity when two or more such temperature baths interact and provide useful deterministic replacements for the stochastic Langevin equation.
NASA Astrophysics Data System (ADS)
Sohn, Hyunmin; Liang, Cheng-yen; Nowakowski, Mark E.; Hwang, Yongha; Han, Seungoh; Bokor, Jeffrey; Carman, Gregory P.; Candler, Robert N.
2017-10-01
We demonstrate deterministic multi-step rotation of a magnetic single-domain (SD) state in Nickel nanodisks using the multiferroic magnetoelastic effect. Ferromagnetic Nickel nanodisks are fabricated on a piezoelectric Lead Zirconate Titanate (PZT) substrate, surrounded by patterned electrodes. With the application of a voltage between opposing electrode pairs, we generate anisotropic in-plane strains that reshape the magnetic energy landscape of the Nickel disks, reorienting magnetization toward a new easy axis. By applying a series of voltages sequentially to adjacent electrode pairs, circulating in-plane anisotropic strains are applied to the Nickel disks, deterministically rotating a SD state in the Nickel disks by increments of 45°. The rotation of the SD state is numerically predicted by a fully-coupled micromagnetic/elastodynamic finite element analysis (FEA) model, and the predictions are experimentally verified with magnetic force microscopy (MFM). This experimental result will provide a new pathway to develop energy efficient magnetic manipulation techniques at the nanoscale.
NASA Astrophysics Data System (ADS)
Sakai, Kenshi; Upadhyaya, Shrinivasa K.; Andrade-Sanchez, Pedro; Sviridova, Nina V.
2017-03-01
Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns from stochastic processes under specific soil conditions. We normalized the deterministic nonlinear prediction considering autocorrelation and propose it as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material. The results obtained here are expected to be applicable to granular materials in general. From the global scale to the nanoscale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology. The results and discussions presented here are applicable in these wide research areas. The proposed method and our findings are useful with respect to the application of nonlinear dynamics to investigate complex motions generated from granular materials.
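The core idea of deterministic nonlinear prediction can be sketched with the classical nearest-neighbour (method of analogues) forecast: a deterministic signal is predictable from its analogues, pure noise is not. This sketch uses synthetic data and omits the autocorrelation-based normalization the paper proposes:

```python
# Sketch of deterministic nonlinear prediction: one-step nearest-neighbour
# forecasting distinguishes a chaotic (deterministic) series from white noise.
import random

def nn_predict_skill(series, train_frac=0.5):
    """Predict each test value's successor by its nearest training analogue's
    successor; return the correlation between predictions and actual values."""
    split = int(len(series) * train_frac)
    train, test = series[:split], series[split:]
    preds, actual = [], []
    for i in range(len(test) - 1):
        j = min(range(len(train) - 1), key=lambda k: abs(train[k] - test[i]))
        preds.append(train[j + 1])
        actual.append(test[i + 1])
    mp, ma = sum(preds) / len(preds), sum(actual) / len(actual)
    cov = sum((p - mp) * (a - ma) for p, a in zip(preds, actual))
    vp = sum((p - mp) ** 2 for p in preds) ** 0.5
    va = sum((a - ma) ** 2 for a in actual) ** 0.5
    return cov / (vp * va)

random.seed(0)
x, chaos = 0.3, []
for _ in range(600):
    x = 4.0 * x * (1.0 - x)       # fully chaotic logistic map: deterministic
    chaos.append(x)
noise = [random.random() for _ in range(600)]

skill_chaos = nn_predict_skill(chaos)
skill_noise = nn_predict_skill(noise)
print(round(skill_chaos, 2), round(skill_noise, 2))
```

High skill on the chaotic series and near-zero skill on noise is the signature of emerging determinism the abstract refers to.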
Photonics approach to traffic signs
NASA Astrophysics Data System (ADS)
Litwin, Dariusz; Galas, Jacek; CzyŻewski, Adam; Rymsza, Barbara; Kornalewski, Leszek; Kryszczyński, Tadeusz; Mikucki, Jerzy; Wikliński, Piotr; Daszkiewicz, Marek; Malasek, Jacek
2016-12-01
The automotive industry has always been a driving force for all economies. Despite its benefits to every society, it also brings many issues, including the wide area of road safety. The latter has been reinforced by the increasing number of cars and the dynamic development of traffic as a whole. Road signs and traffic lights are crucial in the context of good traffic arrangement and its fluency. Traffic designers tend to treat horizontal road signs independently of vertical signs. However, modern light sources and growing flexibility in shaping optical systems create an opportunity to design more advanced and smart solutions. In this paper we present an innovative, multidisciplinary approach that consists in the tight interdependence of different traffic signals. We describe new optical systems together with their influence on the perception of the road user. The analysis includes maintenance and visibility in different weather conditions. Special attention has been focused on intersections of complex geometry.
NASA Astrophysics Data System (ADS)
Raeesi, M.; Mesgari, M. S.; Mahmoudi, P.
2014-10-01
Short-term prediction is one of the most important factors in intelligent transportation systems (ITS). In this research, the use of a feed-forward neural network for traffic time-series prediction is presented. In this paper, the traffic in one direction of the road segment is predicted. The input of the neural network is the time-delay data exported from the road traffic data of Monroe city. The time-delay data are used for training the network. For generating the time-delay data, the traffic data related to the first 300 days of 2008 are used. The performance of the feed-forward neural network model is validated using the real observation data of the 301st day.
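The time-delay input construction at the heart of this setup can be sketched as follows (the delay length and the toy counts are assumptions, not values from the paper): each training sample is a window of d past observations and the target is the next observation.

```python
# Sketch of time-delay data generation for a feed-forward traffic predictor.
def time_delay_pairs(series, d):
    """Return (inputs, targets): inputs[i] = series[i:i+d], target = series[i+d]."""
    inputs = [series[i:i + d] for i in range(len(series) - d)]
    targets = [series[i + d] for i in range(len(series) - d)]
    return inputs, targets

# Toy hourly volumes standing in for the Monroe city counts.
volumes = [120, 135, 160, 210, 280, 260, 230, 190]
X, y = time_delay_pairs(volumes, d=3)
print(X[0], y[0])  # [120, 135, 160] 210
```

Held-out validation as described in the abstract corresponds to building these pairs from days 1-300 for training and from day 301 for testing.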
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided in this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, whose data were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false
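The two label overlap measures used in the comparison can be illustrated on toy voxel masks (a sketch; the study computes them on full 3-D depictions against anatomical ground truth):

```python
# Sketch of the two overlap measures: Dice similarity coefficient and
# false-positive error, computed on voxel coordinate sets (toy masks).
def dice(a, b):
    """2|A intersect B| / (|A| + |B|): 1 for identical masks, 0 for disjoint."""
    return 2 * len(a & b) / (len(a) + len(b))

def false_positive_error(tracked, truth):
    """Fraction of tracked voxels lying outside the ground-truth mask."""
    return len(tracked - truth) / len(tracked)

truth   = {(0, 0), (0, 1), (1, 0), (1, 1)}
tracked = {(0, 1), (1, 0), (1, 1), (2, 2)}
print(dice(tracked, truth), false_positive_error(tracked, truth))  # 0.75 0.25
```

Gradually raising the FA or PICo threshold shrinks the tracked mask; the threshold at which the Dice coefficient peaks is the working point the gradual-increase method looks for.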
Traffic information computing platform for big data
Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun
2014-10-06
The big data environment creates data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and more intelligent and personalized traffic information services can be provided to traffic information users.
Automated Conflict Resolution For Air Traffic Control
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2005-01-01
The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have been studied for many years, but they have yet to be tested under realistic traffic conditions to demonstrate that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and it must be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors, and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.
Low Earth Orbit satellite traffic simulator
NASA Technical Reports Server (NTRS)
Hoelzel, John
1995-01-01
This paper describes a significant tool for Low Earth Orbit (LEO) capacity analysis, needed to support marketing, economic, and design analysis, known as a Satellite Traffic Simulator (STS). LEO satellites typically use multiple beams to help achieve the desired communication capacity, but the traffic demand in these beams is usually not uniform. Simulations of dynamic, average, and peak expected demand per beam are a very critical part of the marketing, economic, and design analysis necessary to field a viable LEO system. An STS is described in this paper which can simulate voice, data, and FAX traffic carried by LEO satellite beams and Earth Station Gateways. It is applicable world-wide for any LEO satellite constellations operating over any regions. For aeronautical applications of LEO satellites, the anticipated aeronautical traffic (Erlangs for each hour of the day to be simulated) is prepared for geographically defined 'area targets' (each major operational region for the respective aircraft) and used as input to the STS. The STS was designed by Constellations Communications Inc. (CCI) and E-Systems for usage in Brazil in accordance with an ESCA/INPE Statement Of Work, and developed by Analytical Graphics Inc. (AGI) to execute on top of its Satellite Tool Kit (STK) commercial software. The STS simulates constellations of LEO satellite orbits, with input of traffic intensity (Erlangs) for each hour of the day generated from area targets (such as Brazilian states), accumulated in custom LEO satellite beams, and then accumulated in Earth Station Gateways. The STS is a very general simulator which can accommodate: many forms of orbital element and Walker Constellation input; simple beams or any user-defined custom beams; and any location of Gateways. The paper describes some of these features, including Manual Mode dynamic graphical display of communication links, to illustrate which Gateway links are accessible and which links are not, at each 'step' of the
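The simulator's traffic inputs are specified in Erlangs per beam. As a hedged aside (the STS internals are not described beyond this abstract), the standard Erlang B formula relates such an offered load to the blocking probability of a beam or gateway with a fixed number of channels, computable with the usual stable recursion:

```python
# Sketch: Erlang B blocking probability for m channels at offered load E Erlangs,
# via the numerically stable recursion B(E,0)=1, B(E,k) = E*B(E,k-1)/(k + E*B(E,k-1)).
def erlang_b(E, m):
    b = 1.0
    for k in range(1, m + 1):
        b = E * b / (k + E * b)
    return b

print(round(erlang_b(2.0, 2), 2))  # 0.4: 2 Erlangs offered to a 2-channel beam
```

This is the classic sizing calculation for deciding how much beam or gateway capacity a given hourly Erlang demand requires at an acceptable blocking rate.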
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of the favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps and allowed us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
NASA Astrophysics Data System (ADS)
Balouchestani, Mohammadreza
2017-05-01
Network traffic, or data traffic, in a Wireless Local Area Network (WLAN) is the amount of network packets moving across a wireless network from one wireless node to another, which provides the sampling load in a wireless network. A WLAN's network traffic is the main component of network traffic measurement, network traffic control, and simulation. Traffic classification is an essential tool for improving Quality of Service (QoS) in different wireless networks and in complex applications such as local area networks, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, and wide area networks. Network traffic classification is also an essential component of products for QoS control in different wireless network systems and applications. Classifying network traffic in a WLAN allows us to see what kinds of traffic are present in each part of the network, to organize the various kinds of traffic on each path into different classes, and to generate a network traffic matrix in order to identify and organize network traffic, which is an important key to improving QoS. To achieve effective network traffic classification, a Real-time Network Traffic Classification (RNTC) algorithm for WLANs based on Compressed Sensing (CS) is presented in this paper. The fundamental goal of this algorithm is to solve difficult wireless network management problems. The proposed architecture reduces the False Detection Rate (FDR) to 25% and the Packet Delay (PD) to 15%. It also increases the accuracy of wireless transmission by 10%, which provides a good basis for establishing high-quality wireless local area networks.
Air Traffic Management Research at NASA
NASA Technical Reports Server (NTRS)
Farley, Todd
2012-01-01
The U.S. air transportation system is the most productive in the world, moving far more people and goods than any other. It is also the safest system in the world, thanks in part to its venerable air traffic control system. But as demand for air travel continues to grow, the air traffic control system's aging infrastructure and labor-intensive procedures are impinging on its ability to keep pace with demand. And that constrains the growth of our economy. Part of NASA's current mission in aeronautics research is to invent new technologies and procedures for ATC that will enable our national airspace system to accommodate the increasing demand for air transportation well into the next generation while still maintaining its excellent record for safety. It is a challenging mission, as efforts to modernize have, for decades, been hamstrung by the inability to assure safety to the satisfaction of system operators, system regulators, and/or the traveling public. In this talk, we'll provide a brief history of air traffic control, focusing on the tension between efficiency and safety assurance, and we'll highlight some new NASA technologies coming down the pike.
Automated Traffic Management System and Method
NASA Technical Reports Server (NTRS)
Glass, Brian J. (Inventor); Spirkovska, Liljana (Inventor); McDermott, William J. (Inventor); Reisman, Ronald J. (Inventor); Gibson, James (Inventor); Iverson, David L. (Inventor)
2000-01-01
A data management system and method that enables acquisition, integration, and management of real-time data generated at different rates by multiple heterogeneous, incompatible data sources. The system achieves this functionality by using an expert system to fuse data from a variety of airline, airport operations, ramp control, and air traffic control tower sources, to establish and update reference data values for every aircraft surface operation. The system may be configured as a real-time airport surface traffic management system (TMS) that electronically interconnects air traffic control, airline data, and airport operations data to facilitate information sharing and improve taxi queuing. In the TMS operational mode, empirical data shows substantial benefits in ramp operations for airlines, reducing departure taxi times by about one minute per aircraft in operational use, translating to $12 to $15 million per year in savings to airlines at the Atlanta, Georgia airport. The data management system and method may also be used for scheduling the movement of multiple vehicles in other applications, such as marine vessels in harbors and ports, trucks or railroad cars in ports or shipping yards, and railroad cars in switching yards. Finally, the data management system and method may be used for managing containers at a shipping dock, stock on a factory floor or in a warehouse, or as a training tool for improving situational awareness of FAA tower controllers, ramp and airport operators, or commercial airline personnel in airfield surface operations.
Fluctuations in Urban Traffic Networks
NASA Astrophysics Data System (ADS)
Chen, Yu-Dong; Li, Li; Zhang, Yi; Hu, Jian-Ming; Jin, Xue-Xiang
An urban traffic network is a typical complex system, in which the movements of a tremendous number of microscopic traffic participants (pedestrians, bicyclists, and vehicles) form complicated spatial and temporal dynamics. We collected flow volume data on the time-dependent activity of a typical urban traffic network, finding that the coupling between the average flux and the fluctuation on individual links obeys a certain scaling law, with a wide variety of scaling exponents between 1/2 and 1. These scaling phenomena can explain the interaction between the nodes' internal dynamics (i.e. queuing at intersections, car-following in driving) and changes in the external (network-wide) traffic demand (i.e. the everyday increase of traffic during peak hours and shocks caused by traffic accidents), allowing us to further understand the mechanisms governing the transportation system's collective behavior. Multiscaling and hotspot features are observed in the traffic flow data as well. However, the reason why the separated internal dynamics are comparable to the external dynamics in magnitude is still unclear and needs further investigation.
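The flux-fluctuation analysis can be sketched on synthetic link data (not the collected volumes): for each link, compute the mean and standard deviation of the flow over many observation windows, then fit sigma ~ <f>^alpha by log-log least squares. Independent (Poisson-like) arrivals give alpha near 1/2; fully demand-driven links push alpha toward 1.

```python
# Sketch of estimating the flux-fluctuation scaling exponent alpha.
# Gaussian surrogates for independent Poisson-like arrival counts per link.
import math, random

random.seed(1)
T = 2000  # observation windows per link
means, stds = [], []
for rate in [5, 10, 20, 40, 80, 160, 320]:
    counts = [random.gauss(rate, math.sqrt(rate)) for _ in range(T)]
    mu = sum(counts) / T
    var = sum((c - mu) ** 2 for c in counts) / T
    means.append(mu)
    stds.append(math.sqrt(var))

# alpha = least-squares slope of log(std) against log(mean)
lx = [math.log(m) for m in means]
ly = [math.log(s) for s in stds]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
alpha = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
print(round(alpha, 2))  # close to 0.5 for independent arrivals
```

Adding a common network-wide demand factor to all links (e.g. multiplying every window's counts by a shared random level) shifts the fitted exponent toward 1, which is the mechanism the abstract invokes for the observed spread of exponents.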
Memory effects in microscopic traffic models and wide scattering in flow-density data.
Treiber, Martin; Helbing, Dirk
2003-10-01
By means of microscopic simulations we show that noninstantaneous adaptation of the driving behavior to the traffic situation together with the conventional method to measure flow-density data provides a possible explanation for the observed inverse-lambda shape and the wide scattering of flow-density data in "synchronized" congested traffic. We model a memory effect in the response of drivers to the traffic situation for a wide class of car-following models by introducing an additional dynamical variable (the "subjective level of service") describing the adaptation of drivers to the surrounding traffic situation during the past few minutes and couple this internal state to parameters of the underlying model that are related to the driving style. For illustration, we use the intelligent-driver model (IDM) as the underlying model, characterize the level of service solely by the velocity, and couple the internal variable to the IDM parameter "time gap" to model an increase of the time gap in congested traffic ("frustration effect"), which is supported by single-vehicle data. We simulate open systems with a bottleneck and obtain flow-density data by implementing "virtual detectors." The shape, relative size, and apparent "stochasticity" of the region of the scattered data points agree nearly quantitatively with empirical data. Wide scattering is even observed for identical vehicles, although the proposed model is a time-continuous, deterministic, single-lane car-following model with a unique fundamental diagram.
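The memory mechanism described above can be sketched by grafting a slowly adapting time-gap parameter onto the IDM (parameter values here are illustrative, not those of the paper): the internal "subjective level of service" relaxes toward the current velocity ratio and drags the time gap T from its free-flow value toward a larger congested value.

```python
# Sketch: IDM follower with a memory variable that raises the time gap in
# congestion (the "frustration effect"). Parameters are illustrative.
import math

V0, A, B, S0 = 30.0, 1.0, 1.5, 2.0   # desired speed, max accel, comfort decel, min gap
T_FREE, T_JAM, TAU = 1.0, 2.0, 60.0  # free/congested time gaps (s), adaptation time (s)

def idm_accel(v, dv, s, T):
    """IDM: a = A * [1 - (v/V0)^4 - (s*/s)^2], s* = S0 + v*T + v*dv/(2*sqrt(A*B))."""
    s_star = S0 + v * T + v * dv / (2.0 * math.sqrt(A * B))
    return A * (1.0 - (v / V0) ** 4 - (s_star / s) ** 2)

# Follower trapped behind a leader crawling at 3 m/s.
v, s, T, los = 3.0, 12.0, T_FREE, 1.0  # los = subjective level of service in [0, 1]
dt = 0.1
for _ in range(6000):                   # 10 minutes of congested driving
    acc = idm_accel(v, v - 3.0, s, T)   # dv = v_follower - v_leader
    v = max(0.0, v + acc * dt)
    s = max(S0, s + (3.0 - v) * dt)
    los += (v / V0 - los) * dt / TAU    # memory: slow adaptation to experienced speed
    T = T_JAM + los * (T_FREE - T_JAM)  # low service level -> larger time gap
print(round(T, 2))  # T has grown from 1.0 toward the congested value 2.0
```

Because T (and hence the equilibrium gap) drifts while the measured flow-density point is taken, the same density maps to different flows at different times, which is how this mechanism widens the scatter in simulated detector data.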
Spatial continuity measures for probabilistic and deterministic geostatistics
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram σ(h), is appropriate for each framework. Although C(h) and σ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and σ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Levy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
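As a toy illustration of the two continuity measures (classical expected-value estimators on a synthetic 1-D transect; the paper's spatial-integral definitions differ in how the mean is handled):

```python
# Sketch: sample covariance and sample variogram at lag h on a 1-D transect.
import math, random

def sample_covariance(z, h):
    """Classical estimator over the h-lag pairs, centering head and tail separately."""
    head, tail = z[:len(z) - h], z[h:]
    mh, mt = sum(head) / len(head), sum(tail) / len(tail)
    return sum((a - mh) * (b - mt) for a, b in zip(head, tail)) / len(head)

def sample_variogram(z, h):
    """gamma(h) = half the mean squared increment at lag h (no mean needed)."""
    return sum((z[i + h] - z[i]) ** 2 for i in range(len(z) - h)) / (2 * (len(z) - h))

random.seed(2)
# Correlated transect: 5-point moving average of white noise.
noise = [random.gauss(0, 1) for _ in range(2005)]
z = [sum(noise[i:i + 5]) / 5 for i in range(2000)]

for h in (1, 5, 10):
    print(h, round(sample_covariance(z, h), 3), round(sample_variogram(z, h), 3))
```

On stationary data the two are linked by gamma(h) ≈ C(0) - C(h), the linear relationship mentioned above; on data with trend or drift the sample versions diverge, which is where the choice of measure starts to matter.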
Statistical methods of parameter estimation for deterministically chaotic time series.
Pisarenko, V F; Sornette, D
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamical system (the logistic map) contaminated by observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1), considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to, but simpler than, the previously proposed "multiple shooting" method, and has smaller bias. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not merely numerically).
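The paper's segmentation ML method is more involved, but the simplest baseline it is compared against, one-step least-squares fitting of the noisy logistic map, fits in a few lines. Since the map x(n+1) = a x(n)(1 - x(n)) is linear in the parameter a, the sum of squared one-step prediction errors has a unique minimum; a grid search suffices here. All values (true parameter, noise level, sample size) are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A_TRUE, SIGMA, N = 3.8, 1e-3, 500

# simulate the logistic map and add observational noise
x = np.empty(N); x[0] = 0.3
for n in range(N - 1):
    x[n + 1] = A_TRUE * x[n] * (1 - x[n])
y = x + SIGMA * rng.standard_normal(N)

# least-squares estimate: minimize one-step prediction error over a grid of a
grid = np.linspace(3.5, 4.0, 5001)
sse = [np.sum((y[1:] - a * y[:-1] * (1 - y[:-1])) ** 2) for a in grid]
a_hat = grid[int(np.argmin(sse))]
```

At low noise this recovers a accurately; the bias and consistency issues the paper analyzes appear as the noise level and trajectory length grow.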
Matching solute breakthrough with deterministic and stochastic aquifer models.
Lemke, Lawrence D; Barrack, William A; Abriola, Linda M; Goovaerts, Pierre
2004-01-01
Two different deterministic and two alternative stochastic (i.e., geostatistical) approaches to modeling the distribution of hydraulic conductivity (K) in a nonuniform (σ²ln(K) = 0.29) glacial sand aquifer were used to explore the influence of conceptual model selection on simulations of three-dimensional tracer movement. The deterministic K models employed included a homogeneous effective K and a perfectly stratified 14-layer model. Stochastic K models were constructed using sequential Gaussian simulation and sequential indicator simulation conditioned to available K values estimated from measured grain size distributions. Standard simulation software packages MODFLOW, MT3DMS, and MODPATH were used to model three-dimensional ground water flow and transport in a field tracer test, where a pulse of bromide was injected through an array of three fully screened wells and extracted through a single fully screened well approximately 8 m away. Agreement between observed and simulated transport behavior was assessed through direct comparison of breakthrough curves (BTCs) and selected breakthrough metrics at the extraction well and at 26 individual multilevel sample ports distributed irregularly between the injection and extraction wells. Results indicate that conceptual models incorporating formation variability are better able to capture observed breakthrough behavior. Root mean square (RMS) error of the deterministic models bracketed the ensemble mean RMS error of stochastic models for simulated concentration vs. time series, but not for individual BTC characteristic metrics. The spatial variability models evaluated here may be better suited to simulating breakthrough behavior measured in wells screened over large intervals than at arbitrarily distributed observation points within a nonuniform aquifer domain.
Optical image encryption technique based on deterministic phase masks
NASA Astrophysics Data System (ADS)
Zamrani, Wiam; Ahouzi, Esmail; Lizana, Angel; Campos, Juan; Yzuel, María J.
2016-10-01
The double-random phase encoding (DRPE) scheme, which is based on a 4f optical correlator system, is considered as a reference for the optical encryption field. We propose a modification of the classical DRPE scheme based on the use of a class of structured phase masks, the deterministic phase masks. In particular, we propose to conduct the encryption process by using two deterministic phase masks, which are built from linear combinations of several subkeys. For the decryption step, the input image is retrieved by using the complex conjugate of the deterministic phase masks, which were set in the encryption process. This concept of structured masks gives rise to encryption-decryption keys which are smaller and more compact than those required in the classical DRPE. In addition, we show that our method significantly improves the tolerance of the DRPE method to shifts of the decrypting phase mask; when no shift is applied, it provides similar performance to the DRPE scheme in terms of encryption-decryption results. This enhanced tolerance to the shift, which is proven by providing numerical simulation results for grayscale and binary images, may relax the rigidity of an encryption-decryption experimental implementation setup. To evaluate the effectiveness of the described method, the mean-square-error and the peak signal-to-noise ratio between the input images and the recovered images are calculated. Different studies based on simulated data are also provided to highlight the suitability and robustness of the method when applied to the image encryption-decryption processes.
Deterministic side-branching during thermal dendritic growth
NASA Astrophysics Data System (ADS)
Mullis, Andrew M.
2015-06-01
The accepted view on dendritic side-branching is that side-branches grow as the result of selective amplification of thermal noise and that in the absence of such noise dendrites would grow without the development of side-arms. However, recently there has been renewed speculation about dendrites displaying deterministic side-branching [see e.g. ME Glicksman, Metall. Mater. Trans A 43 (2012) 391]. Generally, numerical models of dendritic growth, such as phase-field simulation, have tended to display behaviour which is commensurate with the former view, in that simulated dendrites do not develop side-branches unless noise is introduced into the simulation. However, here we present simulations at high undercooling that show that under certain conditions deterministic side-branching may occur. We use a model formulated in the thin interface limit and a range of advanced numerical techniques to minimise the numerical noise introduced into the solution, including a multigrid solver. Not only are multigrid solvers one of the most efficient means of inverting the large, but sparse, system of equations that results from implicit time-stepping, they are also very effective at smoothing noise at all wavelengths. This is in contrast to most Jacobi or Gauss-Seidel iterative schemes which are effective at removing noise with wavelengths comparable to the mesh size but tend to leave noise at longer wavelengths largely undamped. From an analysis of the tangential thermal gradients on the solid-liquid interface the mechanism for side-branching appears to be consistent with the deterministic model proposed by Glicksman.
The Future of Air Traffic Management
NASA Technical Reports Server (NTRS)
Denery, Dallas G.; Erzberger, Heinz; Edwards, Thomas A. (Technical Monitor)
1998-01-01
A system for the control of terminal area traffic to improve productivity, referred to as the Center-TRACON Automation System (CTAS), is being developed at NASA's Ames Research Center under a joint program with the FAA. CTAS consists of a set of integrated tools that provide computer-generated advisories for en-route and terminal area controllers. The premise behind the design of CTAS has been that successful planning of traffic requires accurate trajectory prediction. Databases consisting of representative aircraft performance models, airline-preferred operational procedures, and a three-dimensional wind model support the trajectory prediction. The research effort has been the design of a set of automation tools that make use of this trajectory prediction capability to assist controllers in overall management of traffic. The first tool, the Traffic Management Advisor (TMA), provides the overall flow management between the en route and terminal areas. A second tool, the Final Approach Spacing Tool (FAST), provides terminal area controllers with sequence and runway advisories to allow optimal use of the runways. The TMA and FAST are now being used in daily operations at Dallas/Ft. Worth airport. Additional activities include the development of several other tools. These include: 1) the En Route Descent Advisor, which assists the en route controller in issuing conflict-free descents and ascents; 2) the extension of FAST to include speed and heading advisories, and the Expedite Departure Path (EDP), which assists the terminal controller in management of departures; and 3) the Collaborative Arrival Planner (CAP), which will assist the airlines in operational decision making. The purpose of this presentation is to review the CTAS concept and to present the results of recent field tests. The paper will first discuss the overall concept and then discuss the status of the individual tools.
Air Traffic Management Research at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Lee, Katharine
2005-01-01
Since the late 1980's, NASA Ames researchers have been investigating ways to improve the air transportation system through the development of decision support automation. These software advances, such as the Center-TRACON Automation System (CTAS), have been developed with teams of engineers, software developers, human factors experts, and air traffic controllers; some NASA Ames decision support tools are currently operational in Federal Aviation Administration (FAA) facilities and some are in use by the airlines. These tools have provided air traffic controllers and traffic managers the capabilities to help reduce overall delays and holding, and provide significant cost savings to the airlines as well as more manageable workload levels for air traffic service providers. NASA is continuing to collaborate with the FAA, as well as other government agencies, to plan and develop the next generation of decision support tools that will support anticipated changes in the air transportation system, including a projected increase to three times today's air-traffic levels by 2025. The presentation will review some of NASA Ames' recent achievements in air traffic management research, and discuss future tool developments and concepts currently under consideration.
Deterministic Remote State Preparation via the χ State
NASA Astrophysics Data System (ADS)
Zhang, Pei; Li, Xian; Ma, Song-Ya; Qu, Zhi-Guo
2017-05-01
Two deterministic schemes using the χ state as the entangled channel are put forward to realize the remote preparation of arbitrary two- and three-qubit states. To design the schemes, we construct sets of ingenious measurement bases, which have no restrictions on the coefficients of the prepared state. At variance with the existing schemes via the χ state, the success probabilities of the proposed schemes are greatly improved. Supported by the National Natural Science Foundation of China under Grant Nos. 61201253, 61373131, 61572246, Priority Academic Program Development of Jiangsu Higher Education Institutions, and Collaborative Innovation Center of Atmospheric Environment and Equipment Technology
CALTRANS: A parallel, deterministic, 3D neutronics code
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
Deterministic Smoluchowski-Feynman ratchets driven by chaotic noise.
Chew, Lock Yue
2012-01-01
We have elucidated the effect of statistical asymmetry on the directed current in Smoluchowski-Feynman ratchets driven by chaotic noise. Based on the inhomogeneous Smoluchowski equation and its generalized version, we arrive at analytical expressions of the directed current that includes a source term. The source term indicates that statistical asymmetry can drive the system further away from thermodynamic equilibrium, as exemplified by the constant flashing, the state-dependent, and the tilted deterministic Smoluchowski-Feynman ratchets, with the consequence of an enhancement in the directed current.
Deterministic regularization of three-dimensional optical diffraction tomography
Sung, Yongjin; Dasari, Ramachandra R.
2012-01-01
In this paper we discuss a deterministic regularization algorithm to handle the missing cone problem of three-dimensional optical diffraction tomography (ODT). The missing cone problem arises in most practical applications of ODT and is responsible for elongation of the reconstructed shape and underestimation of the value of the refractive index. By applying positivity and piecewise-smoothness constraints in an iterative reconstruction framework, we effectively suppress the missing cone artifact and recover sharp edges rounded out by the missing cone, and we significantly improve the accuracy of the predictions of the refractive index. We also show the noise handling capability of our algorithm in the reconstruction process. PMID:21811316
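The iterative scheme described above can be illustrated in one dimension: missing high spatial frequencies play the role of the missing cone, and alternating projections onto the positivity constraint and the measured-data constraint recover part of the lost information. This is a projection-onto-convex-sets sketch of the idea only; the paper additionally uses a piecewise-smoothness constraint, and the object, cutoff frequency, and iteration count below are arbitrary illustrative choices.

```python
import numpy as np

N = 256
x_true = np.zeros(N); x_true[100:130] = 1.0      # nonnegative object
F = np.fft.fft(x_true)
known = np.abs(np.fft.fftfreq(N)) < 0.1          # only low frequencies measured

# naive reconstruction: zero-fill the unmeasured frequencies
x = np.real(np.fft.ifft(np.where(known, F, 0)))
err0 = np.linalg.norm(x - x_true)

for _ in range(200):
    x = np.clip(x, 0, None)                      # positivity constraint
    Fx = np.fft.fft(x)
    Fx[known] = F[known]                         # enforce the measured data
    x = np.real(np.fft.ifft(Fx))

err = np.linalg.norm(np.clip(x, 0, None) - x_true)
```

Because both constraint sets are convex and contain the true object, each projection cannot increase the distance to it, so the iterate is never worse than the naive zero-filled reconstruction and in practice is substantially better.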
Non-deterministic analysis of ocean environment loads
Fang Huacan; Xu Fayan; Gao Guohua; Xu Xingping
1995-12-31
Ocean environment loads consist of the wind force, the sea wave force, etc. The sea wave force not only has randomness but also fuzziness. Hence a non-deterministic description of the wave environment must be used in designing an offshore structure or in evaluating the safety of offshore structure members in service. To account for the randomness of sea waves, a wind-speed single-parameter sea wave spectrum is proposed in this paper, and a new fuzzy grading statistical method for treating the fuzziness of the sea wave height H and period T is given. The principle and procedure for calculating the fuzzy random sea wave spectrum are presented last.
Deterministic versus stochastic aspects of superexponential population growth models
NASA Astrophysics Data System (ADS)
Grosjean, Nicolas; Huillet, Thierry
2016-08-01
Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
The deterministic optical alignment of the HERMES spectrograph
NASA Astrophysics Data System (ADS)
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Role of infinite invariant measure in deterministic subdiffusion
NASA Astrophysics Data System (ADS)
Akimoto, Takuma; Miyaguchi, Tomoshige
2010-09-01
Statistical properties of the transport coefficient for deterministic subdiffusion are investigated from the viewpoint of infinite ergodic theory. We find that the averaged diffusion coefficient is characterized by the infinite invariant measure of the reduced map. We also show that when the time difference is much smaller than the total observation time, the time-averaged mean square displacement depends linearly on the time difference. Furthermore, the diffusion coefficient becomes a random variable and its limit distribution is characterized by the universal law called the Mittag-Leffler distribution.
Demonstration of deterministic and high fidelity squeezing of quantum information
Yoshikawa, Jun-ichi; Takei, Nobuyuki; Furusawa, Akira; Hayashi, Toshiki; Akiyama, Takayuki; Huck, Alexander; Andersen, Ulrik L.
2007-12-15
By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an ancillary squeezed vacuum state, thus direct interaction between a strong pump and the quantum state is circumvented. We demonstrate three different squeezing levels for a coherent state input. This scheme is highly suitable for the fault-tolerant squeezing transformation in a continuous variable quantum computer.
Deterministic Ants in Labyrinth — Information Gained by Map Sharing
NASA Astrophysics Data System (ADS)
Malinowski, Janusz; Kantelhardt, Jan W.; Kułakowski, Krzysztof
2013-06-01
A few ant robots are placed in a labyrinth, formed by a square lattice with a small number of corridors removed. Ants move according to a deterministic algorithm designed to explore all corridors. Each ant remembers the shape of corridors which it has visited. Once two ants meet, they share the information acquired. We evaluate how the time of getting a complete information by an ant depends on the number of ants, and how the length known by an ant depends on time. Numerical results are presented in the form of scaling relations.
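The setup lends itself to a compact simulation. The sketch below simplifies the paper's model: the labyrinth is a full square lattice (no corridors removed), each ant explores by deterministic depth-first search with a fixed direction order, and two ants share their maps only when they occupy the same cell; sharing does not alter an ant's route, it only accelerates the moment its map becomes complete. Lattice size and start positions are arbitrary assumptions.

```python
from itertools import product

L = 6                                    # L x L lattice, all corridors present
CELLS = set(product(range(L), range(L)))
DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # fixed deterministic order

def neighbors(c):
    return [(c[0] + dx, c[1] + dy) for dx, dy in DIRS
            if (c[0] + dx, c[1] + dy) in CELLS]

def simulate(starts, max_t=10_000):
    """Return the first time step at which some ant knows the whole lattice."""
    pos = list(starts)
    stack = [[] for _ in starts]
    visited = [{s} for s in starts]      # own DFS memory (drives movement)
    known = [{s} for s in starts]        # own discoveries plus shared maps
    for t in range(1, max_t + 1):
        for i, c in enumerate(pos):
            nxt = next((n for n in neighbors(c) if n not in visited[i]), None)
            if nxt is not None:          # explore a new corridor
                stack[i].append(c)
                pos[i] = nxt
                visited[i].add(nxt)
                known[i].add(nxt)
            elif stack[i]:               # dead end: backtrack
                pos[i] = stack[i].pop()
        for i in range(len(pos)):        # ants at the same cell share maps
            for j in range(i + 1, len(pos)):
                if pos[i] == pos[j]:
                    known[i] |= known[j]
                    known[j] |= known[i]
        if any(len(k) == len(CELLS) for k in known):
            return t
    return max_t

t_single = simulate([(0, 0)])
t_multi = simulate([(0, 0), (L - 1, L - 1), (0, L - 1)])
```

Since ant 0 follows the same deterministic route in both runs and sharing can only add knowledge, the multi-ant completion time is never later than the single-ant one; how much earlier it is, as a function of the number of ants, is the scaling question the paper studies.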
Deterministic shape control in plasma-aided nanotip assembly
NASA Astrophysics Data System (ADS)
Tam, E.; Levchenko, I.; Ostrikov, K.
2006-08-01
The possibility of deterministic plasma-assisted reshaping of capped cylindrical seed nanotips by manipulating the plasma parameter-dependent sheath width is shown. Multiscale hybrid gas phase/solid surface numerical experiments reveal that under wide-sheath conditions the nanotips widen at the base, and when the sheath is narrow, they sharpen up. By combining the wide- and narrow-sheath stages in a single process, it becomes possible to synthesize wide-base nanotips with long, narrow apex spikes, ideal for electron microemitter applications. This plasma-based approach is generic and can be applied to a wide range of multipurpose nanoassemblies.
Deterministic Models of Channel Headwall Erosion: Initiation and Propagation
1991-06-14
A Deterministic Transport Code for Space Environment Electrons
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-01-01
A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.
Lasing in an optimized deterministic aperiodic nanobeam cavity
NASA Astrophysics Data System (ADS)
Moon, Seul-Ki; Jeong, Kwang-Yong; Noh, Heeso; Yang, Jin-Kyu
2016-12-01
We have demonstrated lasing action from partially extended modes in deterministic aperiodic nanobeam cavities based on the Rudin-Shapiro sequence with two different air holes at room temperature. By varying the size ratio of the holes, and hence the structural aperiodicity, different optical lasing modes were obtained with maximized quality factors. The lasing characteristics of the partially extended modes were confirmed by numerical simulations based on scanning microscope images of the fabricated samples. We believe that these partially extended nanobeam modes will be useful for label-free optical biosensors.
[Comics for traffic education: evaluation of a traffic safety campaign].
Bonfadelli, H
1989-01-01
Traffic safety campaigns are often ineffective in changing driving behavior because they do not reach the target group or are noticed only by people who are already interested or concerned. The evaluation of a traffic safety campaign called "Leo Lässig", addressed to young new drivers, shows that recognition and acceptance by the target group were stimulated by the age-appropriate medium of comic strips.
The effect of traffic tickets on road traffic crashes.
Factor, Roni
2014-03-01
Road traffic crashes are globally a leading cause of death. The current study tests the effect of traffic tickets issued to drivers on subsequent crashes, using a unique dataset that overcomes some shortcomings of previous studies. The study takes advantage of a national longitudinal dataset at the individual level that merges Israeli census data with data on traffic tickets issued by the police and official data on involvement in road traffic crashes over seven years. The results show that the estimated probability of involvement in a subsequent fatal or severe crash was more than eleven times higher for drivers with six traffic tickets per year compared to those with one ticket per year, while controlling for various confounders. However, the majority of fatal and severe crashes involved the larger population of drivers who received up to one ticket on average per year. The current findings indicate that reducing traffic violations may contribute significantly to crash and injury reduction. In addition, mass random enforcement programs may be more effective in reducing fatal and severe crashes than targeting high-risk recidivist drivers.
Characteristics of synchronized traffic in mixed traffic flow
NASA Astrophysics Data System (ADS)
Ning, Hong-Xin; Xue, Yu
2012-04-01
In this paper, the characteristics of synchronized traffic in mixed traffic flow are investigated based on the braking light model. By introducing the energy dissipation and the distribution of slowdown vehicles, the effects of the maximum velocity, the mixing ratio, and the length of vehicles on the synchronized flow are discussed. It is found that the maximum velocity plays a great role in the synchronized flow of mixed traffic. The energy dissipation and the distribution of slowdown vehicles in the synchronized-flow region differ greatly from those in the free-flow and jammed regions. When all vehicles have the same maximum velocity with Vmax > 15, the mixed traffic clearly displays synchronized flow, as demonstrated by the relation between flow rate and occupancy and by estimation of the cross-correlation function. Moreover, the energy dissipation in the synchronized-flow region does not increase with occupancy, and the distribution of slowdown vehicles shows an unchanging plateau there. These findings help in understanding synchronized flow and in greatly reducing the energy dissipation of traffic flow.
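A much-reduced deterministic cellular automaton already shows the dominant role of the slowest vehicles in mixed traffic: on a free-flowing ring, every fast vehicle eventually platoons behind a slow one, so the mean speed collapses to the slow vehicles' maximum. This sketch deliberately omits the braking-light rules, randomization, and anticipation of the model used in the paper; the ring size, density, and speed mix are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
C, N = 200, 20                        # ring of C cells, N vehicles (density 0.1)
vmax = np.full(N, 5)
vmax[rng.choice(N, 2, replace=False)] = 3   # two "slow" vehicles in the mix
pos = np.sort(rng.choice(C, N, replace=False))
v = np.zeros(N, dtype=int)

def step(pos, v):
    gap = (np.roll(pos, -1) - pos - 1) % C  # empty cells to the vehicle ahead
    v = np.minimum(np.minimum(v + 1, vmax), gap)   # accelerate, capped by gap
    return (pos + v) % C, v

speeds = []
for t in range(2000):
    pos, v = step(pos, v)
    if t >= 1900:                     # average over the steady state only
        speeds.append(v.mean())
mean_v = float(np.mean(speeds))
flow = (N / C) * mean_v               # one fundamental-diagram point
```

At this low density the steady-state mean speed equals the slow vehicles' maximum (here 3 cells per step), illustrating why the mixing ratio and the maximum velocities matter so much for the flow-occupancy relation.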
Bauer, Miriam H. A.; Kuhnt, Daniela; Barbieri, Sebastiano; Klein, Jan; Becker, Andreas; Freisleben, Bernd; Hahn, Horst K.; Nimsky, Christopher
2013-01-01
Diffusion Tensor Imaging (DTI) and fiber tractography are established methods to reconstruct major white matter tracts in the human brain in-vivo. Particularly in the context of neurosurgical procedures, reliable information about the course of fiber bundles is important to minimize postoperative deficits while maximizing the tumor resection volume. Since routinely used deterministic streamline tractography approaches often underestimate the spatial extent of white matter tracts, a novel approach to improve fiber segmentation is presented here, considering clinical time constraints. Therefore, fiber tracking visualization is enhanced with statistical information from multiple tracking applications to determine uncertainty in reconstruction based on clinical DTI data. After initial deterministic fiber tracking and centerline calculation, new seed regions are generated along the result’s midline. Tracking is applied to all new seed regions afterwards, varying in number and applied offset. The number of fibers passing each voxel is computed to model different levels of fiber bundle membership. Experimental results using an artificial data set of an anatomical software phantom are presented, using the Dice Similarity Coefficient (DSC) as a measure of segmentation quality. Different parameter combinations were classified to be superior to others providing significantly improved results with DSCs of 81.02%±4.12%, 81.32%±4.22% and 80.99%±3.81% for different levels of added noise in comparison to the deterministic fiber tracking procedure using the two-ROI approach with average DSCs of 65.08%±5.31%, 64.73%±6.02% and 65.91%±6.42%. Whole brain tractography based on the seed volume generated by the calculated seeds delivers average DSCs of 67.12%±0.86%, 75.10%±0.28% and 72.91%±0.15%, original whole brain tractography delivers DSCs of 67.16%, 75.03% and 75.54%, using initial ROIs as combined include regions, which is clearly improved by the repeated fiber
Deterministic Coupling of Quantum Emitters in 2D Materials to Plasmonic Nanocavity Arrays.
Tran, Toan Trong; Wang, Danqing; Xu, Zai-Quan; Yang, Ankun; Toth, Milos; Odom, Teri W; Aharonovich, Igor
2017-04-12
Quantum emitters in two-dimensional materials are promising candidates for studies of light-matter interaction and next generation, integrated on-chip quantum nanophotonics. However, the realization of integrated nanophotonic systems requires the coupling of emitters to optical cavities and resonators. In this work, we demonstrate hybrid systems in which quantum emitters in 2D hexagonal boron nitride (hBN) are deterministically coupled to high-quality plasmonic nanocavity arrays. The plasmonic nanoparticle arrays offer a high-quality, low-loss cavity in the same spectral range as the quantum emitters in hBN. The coupled emitters exhibit enhanced emission rates and reduced fluorescence lifetimes, consistent with Purcell enhancement in the weak coupling regime. Our results provide the foundation for a versatile approach for achieving scalable, integrated hybrid systems based on low-loss plasmonic nanoparticle arrays and 2D materials.
Deterministic secure quantum communication using a single d-level system
Jiang, Dong; Chen, Yuanyuan; Gu, Xuemei; Xie, Ling; Chen, Lijun
2017-01-01
Deterministic secure quantum communication (DSQC) can transmit secret messages between two parties without first generating a shared secret key. Compared with quantum key distribution (QKD), DSQC avoids the waste of qubits arising from basis reconciliation and thus reaches higher efficiency. In this paper, based on data block transmission and order rearrangement technologies, we propose a DSQC protocol. It utilizes a set of single d-level systems as message carriers, which are used to directly encode the secret message in one communication process. Theoretical analysis shows that these employed technologies guarantee the security, and the use of a higher dimensional quantum system makes our protocol achieve higher security and efficiency. Since only quantum memory is required for implementation, our protocol is feasible with current technologies. Furthermore, Trojan horse attack (THA) is taken into account in our protocol. We give a THA model and show that THA significantly increases the multi-photon rate and can thus be detected. PMID:28327557
Hong-Ou-Mandel Interference between Two Deterministic Collective Excitations in an Atomic Ensemble
NASA Astrophysics Data System (ADS)
Li, Jun; Zhou, Ming-Ti; Jing, Bo; Wang, Xu-Jie; Yang, Sheng-Jun; Jiang, Xiao; Mølmer, Klaus; Bao, Xiao-Hui; Pan, Jian-Wei
2016-10-01
We demonstrate deterministic generation of two distinct collective excitations in one atomic ensemble, and we realize the Hong-Ou-Mandel interference between them. Using Rydberg blockade we create single collective excitations in two different Zeeman levels, and we use stimulated Raman transitions to perform a beam-splitter operation between the excited atomic modes. By converting the atomic excitations into photons, the two-excitation interference is measured by photon coincidence detection with a visibility of 0.89(6). The Hong-Ou-Mandel interference witnesses an entangled NOON state of the collective atomic excitations, and we demonstrate its two times enhanced sensitivity to a magnetic field compared with a single excitation. Our work implements a minimal instance of boson sampling and paves the way for further multimode and multiexcitation studies with collective excitations of atomic ensembles.
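The bunching statistics behind the reported visibility can be sketched numerically. The snippet below is a minimal textbook illustration, not the experimental analysis of the paper: for two single photons meeting at a lossless 50:50 beam splitter, the coincidence probability is (1 - M)/2, where M is the wave-packet overlap, and the dip visibility follows from the minimum and maximum coincidence levels (the 0.055 residual level used in the example is an assumed value chosen to reproduce a 0.89 visibility).

```python
def coincidence_probability(overlap):
    """Two single photons at a lossless 50:50 beam splitter: the
    coincidence probability is (1 - M) / 2, where M = |<phi1|phi2>|^2
    is the wave-packet overlap. Perfectly indistinguishable photons
    (M = 1) bunch and never coincide; fully distinguishable photons
    (M = 0) coincide half the time."""
    return 0.5 * (1.0 - overlap)

def hom_visibility(p_min, p_max):
    """Conventional Hong-Ou-Mandel dip visibility."""
    return (p_max - p_min) / p_max

# Hypothetical example: a residual coincidence level of 0.055 against
# the distinguishable baseline of 0.5 gives a visibility of 0.89,
# comparable in magnitude to the 0.89(6) quoted in the abstract.
dip_visibility = hom_visibility(0.055, 0.5)
```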
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
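The variance-reduction idea underlying CADIS can be illustrated on a toy problem. The sketch below is not ADVANTG or CADIS itself; it only shows, under simplified 1-D assumptions, how biasing the sampling distribution toward the important region (here, deep penetration through a 10-mean-free-path slab) while carrying statistical weights preserves the mean but cuts the variance dramatically compared with analog Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 10.0                      # slab thickness in mean free paths (assumed)
exact = np.exp(-L)            # exact penetration probability
n = 1_000_000

# Analog Monte Carlo: score 1 when a sampled free flight crosses the slab.
analog_scores = (rng.exponential(1.0, n) > L).astype(float)

# Importance-sampled version: stretch the flight-length distribution
# (rate b < 1) toward deep penetration and carry the pdf-ratio weight,
# in the spirit of adjoint-informed source/transport biasing.
b = 1.0 / L
s = rng.exponential(1.0 / b, n)
weights = np.exp(-s) / (b * np.exp(-b * s))   # true pdf / biased pdf
biased_scores = np.where(s > L, weights, 0.0)

print(analog_scores.mean(), biased_scores.mean(), exact)
```

Both estimators target the same mean, but the biased one concentrates samples where they matter, which is the essence of the speedups reported above.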
Latanision, R.M.
1990-12-01
Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems. 1 fig.
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Obstructing traffic. 4.13... VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Obstructing traffic. 4.13... VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Obstructing traffic. 4.13... VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Obstructing traffic. 4.13... VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Obstructing traffic. 4.13... VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
14 CFR 25 - Traffic and Capacity Elements
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Traffic and Capacity Elements Section 25... Traffic Reporting Requirements Section 25 Traffic and Capacity Elements General Instructions. (a) All prescribed reporting for traffic and capacity elements shall conform with the data compilation standards...
NASA Astrophysics Data System (ADS)
Fischer, P.; Jardani, A.; Lecoq, N.
2017-03-01
An inverse problem makes it possible to map subsurface properties from a few observed data. The inverse problem can be physically constrained by a priori information on the property distribution in order to limit the nonuniqueness of the solution. Geostatistical information is often chosen as the a priori information; however, when the field properties present a spatially localized high variability, the geostatistical approach becomes inefficient. Therefore, we propose a new method adapted for fields presenting linear structures (such as a fractured field). The Cellular Automata-based Deterministic Inversion (CADI) method is, to the best of our knowledge at the time of writing, the first inversion method that permits a deterministic inversion based on a Bayesian approach and uses a dynamic optimization to generate different linear structures iteratively. The model is partitioned into cellular automaton subspaces, each one controlling a different zone of the model. A cellular automaton subspace structures the properties of the model in two units ("structure" and "background") and controls their dispensing direction and their values. The partitioning of the model into subspaces makes it possible to monitor a large-scale structural model with only a few pilot parameters and to generate linear structures with local direction changes. Thereby, the algorithm can easily handle large-scale structures, and a sensitivity analysis is possible on these structural pilot parameters, which considerably accelerates the optimization process in order to find the best structural geometry. The algorithm has been successfully tested on simple, to more complex, theoretical models with different inversion techniques by using seismic and hydraulic data.
Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation
NASA Astrophysics Data System (ADS)
Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco
2017-08-01
Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveforms modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better fit the need of producing realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant pertinent progresses in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.
NASA Astrophysics Data System (ADS)
Jardani, A.; Fischer, P.; Lecoq, N.
2016-12-01
An inverse problem makes it possible to map subsurface properties from the data of a field investigation. The inverse problem can be physically constrained by a priori information on the property distribution in order to limit the non-uniqueness of the solution. In this case, geostatistical information is often chosen as the a priori information, because it is simple to incorporate as a covariance function and produces realistic models in many cases. But when field properties present a spatially localized high variability, a geostatistical approach to the property distribution becomes inefficient. Therefore, we propose a new method adapted for fields presenting linear structures (such as a fractured field). The Cellular Automata-based Deterministic Inversion (CADI) method is, as far as we know, the first inversion method that permits a deterministic inversion based on a Bayesian approach and uses a dynamic optimization to generate different linear structures iteratively. The model is partitioned into cellular automaton subspaces, each one controlling a different zone of the model. A cellular automaton subspace structures the properties of the model in two units ('structure' and 'background') and controls their dispensing direction and their values. The partitioning of the model into subspaces makes it possible to monitor a large-scale structural model with only a few pilot parameters and to generate linear structures with local direction changes. Thereby, the algorithm can easily handle large-scale structures, and a sensitivity analysis is possible on these structural pilot parameters, which considerably accelerates the optimization process in order to find the best structural geometry to reproduce the data. The algorithm has been successfully tested on simple, to more complex, theoretical models with different inversion techniques (linear, non-linear and joint inversion), by using seismic and hydraulic data.
Road traffic injuries: a stocktaking.
Mohan, Dinesh
2008-08-01
Once we accept that road traffic injury control is a public health problem, and that we have an ethical responsibility to arrange for the safety of individuals, then it follows that health and medical professionals have to assume responsibility for participating in efforts to control this pandemic. Over 1.2 million people die of road traffic crashes annually. Road traffic injuries are among the second to the sixth leading causes of death in the age groups 15-60 years in all countries around the world. Control of road traffic injuries is going to require very special efforts as patterns are different in high- and lower-income countries, and while some countermeasures are applicable internationally, others will need further research and innovation. We will need to focus on the safety of pedestrians, bicyclists and motorcyclists, speed control, and prevention of driving under the influence of alcohol.
Kinetic model of network traffic
NASA Astrophysics Data System (ADS)
Antoniou, I.; Ivanov, V. V.; Kalinovsky, Yu. L.
2002-05-01
We present the first results on the application of the Prigogine-Herman kinetic approach (Kinetic Theory of Vehicular Traffic, American Elsevier Publishing Company, Inc., New York, 1971) to network traffic. We discuss the solution of the kinetic equation for homogeneous time-independent situations and for the desired speed distribution function obtained from traffic measurements analysis. For the log-normal desired speed distribution function the solution clearly shows two modes corresponding to individual flow patterns (low-concentration mode) and to collective flow patterns (traffic jam mode). For low-concentration situations we found an almost linear dependence of the information flow on the concentration, and that the higher the average speed, the lower the concentration at which the optimum flow takes place. When approaching the critical concentration there are no essential differences in the flow for different desired average speeds, whereas for the individual flow regions there are dramatic differences.
2013 Traffic Safety Culture Index
... term vision is to create a “social climate in which traffic safety is highly valued and ... consider it unacceptable to do so in a school zone. Attitudes and behavior: Red-light running Most ...
Real-Time Traffic Signal Control for Optimization of Traffic Jam Probability
NASA Astrophysics Data System (ADS)
Cui, Cheng-You; Shin, Ji-Sun; Miyazaki, Michio; Lee, Hee-Hyol
Real-time traffic signal control is an integral part of urban traffic control system. It can control traffic signals online according to variation of traffic flow. In this paper, we propose a new method for the real-time traffic signal control system. The system uses a Cellular Automaton model and a Bayesian Network model to predict probabilistic distributions of standing vehicles, and uses a Particle Swarm Optimization method to calculate optimal traffic signals. A simulation based on real traffic data was carried out to show the effectiveness of the proposed real-time traffic signal control system CAPSOBN using a micro traffic simulator.
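The optimization step can be sketched with a minimal global-best particle swarm optimizer. This is not the CAPSOBN system, which searches signal timings against Bayesian-network queue predictions; here a simple quadratic "expected delay" surrogate with an assumed optimum at a 42-second green split stands in for that objective.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=200, seed=0):
    """Minimal global-best particle swarm optimization in one dimension:
    each particle is pulled toward its own best position and the swarm's
    best, with an inertia term damping the velocity."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)
    v = np.zeros(n_particles)
    pbest = x.copy()
    pbest_val = np.array([f(xi) for xi in x])
    g = pbest[pbest_val.argmin()]
    w, c1, c2 = 0.7, 1.5, 1.5       # standard inertia/attraction weights
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(xi) for xi in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()]
    return g

# Hypothetical surrogate objective: delay minimized at a 42 s green split.
best_split = pso_minimize(lambda t: (t - 42.0) ** 2, (0.0, 120.0))
```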
Evidence of Long Range Dependence and Self-similarity in Urban Traffic Systems
Thakur, Gautam S; Helmy, Ahmed; Hui, Pan
2015-01-01
Transportation simulation technologies should accurately model traffic demand, distribution, and assignment parameters for urban environment simulation. These three parameters significantly impact the transportation engineering benchmark process and are also critical in realizing realistic traffic modeling situations. In this paper, we model and characterize the traffic density distribution of thousands of locations around the world. The traffic densities are generated from millions of images collected over several years and processed using computer vision techniques. The resulting traffic density distribution time series are then analyzed. It is found using the goodness-of-fit test that the traffic density distributions follow heavy-tail models such as Log-gamma, Log-logistic, and Weibull in over 90% of analyzed locations. Moreover, a heavy tail gives rise to long-range dependence and self-similarity, which we studied by estimating the Hurst exponent (H). Our analysis based on seven different Hurst estimators strongly indicates that the traffic distribution patterns are stochastically self-similar (0.5 < H < 1.0). We believe this is an important finding that will influence the design and development of the next generation of traffic simulation techniques and also aid in accurately modeling the traffic engineering of urban systems. In addition, it shall provide a much needed input for the development of smart cities.
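A rescaled-range (R/S) estimate, one of the classical Hurst estimators of the kind the authors combine, can be sketched as follows. This is a simplified single-estimator version for illustration, not the paper's seven-estimator analysis; the window sizes are assumed values.

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Rescaled-range (R/S) estimate of the Hurst exponent H: the slope
    of log(mean R/S) versus log(window size). H near 0.5 indicates an
    uncorrelated process; 0.5 < H < 1.0 indicates long-range dependence."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())      # cumulative deviation profile
            r = dev.max() - dev.min()          # range of the profile
            s = w.std()                        # window standard deviation
            if s > 0:
                rs.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope
```

On synthetic white noise the estimate should land near 0.5, whereas a self-similar traffic series of the kind described above would yield a larger H.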
Mechanism of the jamming transition in the two-dimensional traffic networks. II
NASA Astrophysics Data System (ADS)
Ishibashi, Yoshihiro; Fukui, Minoru
2014-01-01
The jamming transition in a two-dimensional traffic network is investigated based upon cellular automaton simulations, where the update rule is deterministic, though the initial car configuration is randomly set. The lifetime of the system is defined as the time at which all cars in the system come to a stop; it increases as the car density decreases from the high-density side. The critical car density is defined as the car density at which the corresponding lifetime diverges. An analytical expression for the critical car density is proposed.
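The lifetime notion can be made concrete with a toy deterministic update in the spirit of two-species network models such as Biham-Middleton-Levine. The conventions below (east cars move on even steps, north cars on odd steps, periodic boundaries) are assumptions for illustration, not the authors' exact rule; the lifetime is the time at which no car can ever move again.

```python
import numpy as np

EMPTY, EAST, NORTH = 0, 1, 2

def lifetime(grid, max_steps=1000):
    """Deterministic two-species traffic CA on a periodic grid.
    East cars advance along axis 1 on even steps, north cars along
    axis 0 on odd steps, each moving one cell iff the target cell is
    empty. Returns the time at which every car has stopped for good,
    or None if motion persists up to max_steps."""
    g = np.array(grid, dtype=int)
    idle = 0
    for t in range(max_steps):
        kind, axis = (EAST, 1) if t % 2 == 0 else (NORTH, 0)
        # Cars of this species whose downstream cell (periodic) is empty.
        movable = (g == kind) & (np.roll(g, -1, axis=axis) == EMPTY)
        if movable.any():
            g[movable] = EMPTY
            g[np.roll(movable, 1, axis=axis)] = kind
            idle = 0
        else:
            idle += 1
            if idle == 2:       # a full east+north cycle with no motion:
                return t - 1    # deterministic rule => frozen forever
    return None
```

A fully occupied grid is jammed from the start (finite lifetime), while a single car on an empty ring circulates forever (infinite lifetime), matching the high- and low-density limits discussed above.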
Fully automated urban traffic system
NASA Technical Reports Server (NTRS)
Dobrotin, B. M.; Hansen, G. R.; Peng, T. K. C.; Rennels, D. A.
1977-01-01
The replacement of the driver with an automatic system which could perform the functions of guiding and routing a vehicle with a human's capability of responding to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed but was technically feasible.
Broadcast control of air traffic
NASA Technical Reports Server (NTRS)
Litchford, G. B.
1971-01-01
Concepts of increased pilot participation in air traffic control are presented. The design of an air traffic control system for pilot usage is considered. The operating and safety benefits of LF/VLF approaches in comparison to current nonprecision approach procedures and systems are discussed. With a good national system plan, flight testing and validation, and the use of local differential, or general diurnal, corrections, the LF/VLF system would provide service superior to that presently available.
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
On the deterministic and stochastic use of hydrologic models
NASA Astrophysics Data System (ADS)
Farmer, William H.; Vogel, Richard M.
2016-07-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
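The bias the authors describe, and its repair, can be reproduced in a few lines. This is a schematic linear-model illustration on assumed synthetic data, not their watershed models: fitted responses always have a smaller variance than the observations, and resampling the calibration residuals back onto the simulations restores the distributional properties.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=1.0, size=n)   # synthetic "observations"

# Deterministic use: calibrate and take the fitted responses as-is.
slope, intercept = np.polyfit(x, y, 1)
y_det = slope * x + intercept                 # variance deficit ~ var(residuals)

# Stochastic use: reintroduce resampled residuals into the simulations.
residuals = y - y_det
y_stoch = y_det + rng.choice(residuals, size=n, replace=True)

print(np.var(y), np.var(y_det), np.var(y_stoch))
```

The deterministic simulations systematically understate the observed variance, while the stochastic version recovers it, which is the operational point of the paper.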
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells to successfully and synchronously reprogram remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, result in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to establishing pluripotency at unprecedented flexibility and resolution.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
NASA Astrophysics Data System (ADS)
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. So the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as "fingerprinting". The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt_m ∝ m^1.5, is stronger than the thermal broadening, δt_m ∝ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
Deterministic composite nanophotonic lattices in large area for broadband applications
Xavier, Jolly; Probst, Jürgen; Becker, Christiane
2016-01-01
Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin film solar cells, white light emitting diodes, novel plasmonic ensembles, etc. Through a generic deterministic nanotechnological route here we show subwavelength-scale silicon (Si) nanostructures on nanoimprinted glass substrate in large area (4 cm²) with advanced functional features of aperiodic composite nanophotonic lattices. These nanophotonic aperiodic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements and they show rich Fourier spectra. The presented nanophotonic lattices are designed functionally akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed with a range of nanophotonic structures with conventional lattice geometries of periodic, disordered random as well as in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid phase crystallized (LPC) Si thin film fabrication, the experimental structural analysis is further extended to double-side-textured deterministic aperiodic lattice-structured 10 μm thick large area LPC Si film on nanoimprinted substrates. PMID:27941869
Predictability of normal heart rhythms and deterministic chaos
NASA Astrophysics Data System (ADS)
Lefebvre, J. H.; Goodings, D. A.; Kamath, M. V.; Fallen, E. L.
1993-04-01
The evidence for deterministic chaos in normal heart rhythms is examined. Electrocardiograms were recorded of 29 subjects falling into four groups—a young healthy group, an older healthy group, and two groups of patients who had recently suffered an acute myocardial infarction. From the measured R-R intervals, a time series of 1000 first differences was constructed for each subject. The correlation integral of Grassberger and Procaccia was calculated for several subjects using these relatively short time series. No evidence was found for the existence of an attractor having a dimension less than about 4. However, a prediction method recently proposed by Sugihara and May and an autoregressive linear predictor both show that there is a measure of short-term predictability in the differenced R-R intervals. Further analysis revealed that the short-term predictability calculated by the Sugihara-May method is not consistent with the null hypothesis of a Gaussian random process. The evidence for a small amount of nonlinear dynamical behavior together with the short-term predictability suggest that there is an element of deterministic chaos in normal heart rhythms, although it is not strong or persistent. Finally, two useful parameters of the predictability curves are identified, namely, the `first step predictability' and the `predictability decay rate,' neither of which appears to be significantly correlated with the standard deviation of the R-R intervals.
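The Grassberger-Procaccia correlation integral used above can be written compactly. This is a minimal delay-embedding sketch with assumed parameters, not the authors' full analysis: C(r) is the fraction of pairs of embedded points closer than r, and the correlation dimension is the log-log slope of C(r) in the scaling region.

```python
import numpy as np

def correlation_integral(series, dim, delay, radii):
    """Grassberger-Procaccia correlation integral C(r): the fraction of
    pairs of delay-embedded points whose distance is below r. An attractor
    of dimension D shows C(r) ~ r^D over a scaling range of r."""
    x = np.asarray(series, dtype=float)
    m = len(x) - (dim - 1) * delay
    # Delay embedding: rows are points (x_i, x_{i+delay}, ..., x_{i+(dim-1)delay}).
    emb = np.column_stack([x[i * delay : i * delay + m] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    pair_d = dists[np.triu_indices(m, k=1)]      # each pair counted once
    return np.array([(pair_d < r).mean() for r in radii])
```

Applied to differenced R-R intervals, a plateau in the local slope of log C(r) versus log r at some dimension would signal a low-dimensional attractor; the paper finds no such plateau below a dimension of about 4.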
Deterministic composite nanophotonic lattices in large area for broadband applications.
Xavier, Jolly; Probst, Jürgen; Becker, Christiane
2016-12-12
Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin film solar cells, white light emitting diodes, novel plasmonic ensembles, etc. Through a generic deterministic nanotechnological route we show here subwavelength-scale silicon (Si) nanostructures on a nanoimprinted glass substrate over a large area (4 cm²) with the advanced functional features of aperiodic composite nanophotonic lattices. These nanophotonic aperiodic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements, and they show rich Fourier spectra. The presented nanophotonic lattices are designed to be functionally akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed against a range of nanophotonic structures with conventional lattice geometries: periodic, disordered random, and in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid phase crystallized (LPC) Si thin film fabrication, the experimental structural analysis is further extended to a double-side-textured deterministic aperiodic lattice-structured 10 μm thick large-area LPC Si film on nanoimprinted substrates.
Made-to-order nanocarbons through deterministic plasma nanotechnology
NASA Astrophysics Data System (ADS)
Ren, Yuping; Xu, Shuyan; Rider, Amanda Evelyn; Ostrikov, Kostya (Ken)
2011-02-01
Through a combinatorial approach involving experimental measurement and plasma modelling, it is shown that a high degree of control over the diamond-like nanocarbon film sp3/sp2 ratio (and hence film properties) may be exercised, starting at the level of electrons (through modification of the plasma electron energy distribution function). Hydrogenated amorphous carbon nanoparticle films with high percentages of diamond-like bonds are grown using a middle-frequency (2 MHz) inductively coupled Ar + CH4 plasma. The sp3 fractions measured in the thin films by X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy are explained qualitatively using sp3/sp2 ratios (1) derived from calculated sp3- and sp2-hybridized precursor species densities in a global plasma discharge model and (2) measured experimentally. It is shown that at high discharge power and lower CH4 concentrations, the sp3/sp2 fraction is higher. Our results suggest that a combination of predictive modeling and experimental studies is instrumental in achieving deterministically grown, made-to-order diamond-like nanocarbons suitable for a variety of applications spanning from nano-magnetic resonance imaging to spin-flip quantum information devices. This deterministic approach can be extended to graphene, carbon nanotips, nanodiamond and other nanocarbon materials for a variety of applications.
Deterministic photon-emitter coupling in chiral photonic circuits
NASA Astrophysics Data System (ADS)
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
Deterministic doping and the exploration of spin qubits
Schenkel, T.; Weis, C. D.; Persaud, A.; Lo, C. C.; Chakarov, I.; Schneider, D. H.; Bokor, J.
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor-quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S is attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant-specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal, but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
Deterministic composite nanophotonic lattices in large area for broadband applications
NASA Astrophysics Data System (ADS)
Xavier, Jolly; Probst, Jürgen; Becker, Christiane
2016-12-01
Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin film solar cells, white light emitting diodes, novel plasmonic ensembles, etc. Through a generic deterministic nanotechnological route we show here subwavelength-scale silicon (Si) nanostructures on a nanoimprinted glass substrate over a large area (4 cm²) with the advanced functional features of aperiodic composite nanophotonic lattices. These nanophotonic aperiodic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements, and they show rich Fourier spectra. The presented nanophotonic lattices are designed to be functionally akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed against a range of nanophotonic structures with conventional lattice geometries: periodic, disordered random, and in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid phase crystallized (LPC) Si thin film fabrication, the experimental structural analysis is further extended to a double-side-textured deterministic aperiodic lattice-structured 10 μm thick large-area LPC Si film on nanoimprinted substrates.
Made-to-order nanocarbons through deterministic plasma nanotechnology.
Ren, Yuping; Xu, Shuyan; Rider, Amanda Evelyn; Ostrikov, Kostya Ken
2011-02-01
Through a combinatorial approach involving experimental measurement and plasma modelling, it is shown that a high degree of control over the diamond-like nanocarbon film sp3/sp2 ratio (and hence film properties) may be exercised, starting at the level of electrons (through modification of the plasma electron energy distribution function). Hydrogenated amorphous carbon nanoparticle films with high percentages of diamond-like bonds are grown using a middle-frequency (2 MHz) inductively coupled Ar + CH4 plasma. The sp3 fractions measured in the thin films by X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy are explained qualitatively using sp3/sp2 ratios (1) derived from calculated sp3- and sp2-hybridized precursor species densities in a global plasma discharge model and (2) measured experimentally. It is shown that at high discharge power and lower CH4 concentrations, the sp3/sp2 fraction is higher. Our results suggest that a combination of predictive modeling and experimental studies is instrumental in achieving deterministically grown, made-to-order diamond-like nanocarbons suitable for a variety of applications spanning from nano-magnetic resonance imaging to spin-flip quantum information devices. This deterministic approach can be extended to graphene, carbon nanotips, nanodiamond and other nanocarbon materials for a variety of applications.
A deterministic approach to modeling a scintillator cell for NICADD
NASA Astrophysics Data System (ADS)
Barendregt, Alan Carl
CERN uses many detectors in the particle accelerator to find the building blocks of the universe. One type of detector used is a scintillator cell. These detectors are being optimized by NICADD to have a uniform detection range at the lowest cost. To assist in this endeavor, computer modeling gives the designer the ability to test many designs in a virtual environment before building a physical prototype. Current virtual models in this field have been stochastic, which means the designer must repeatedly run the same simulation many times to average out the statistical "noise" in the results. In the field of nuclear science and engineering, deterministic software has proven to be a valid prediction tool for problems such as neutron embrittlement. These models account for the probabilities up front and provide a single result that can help confirm improvements in design in a single step. This is an advantage especially when comparing two different models. This thesis discusses the method of using TransMED, a deterministic software package, to assist NICADD in optimizing scintillator cell design. It provides a road map for how NICADD can use TransMED in their design work for this project as well as for other anisotropic scattering problems.
Stochastic and deterministic causes of streamer branching in liquid dielectrics
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-08-14
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make branching inevitable depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the propagating streamer head, even in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the born branches agree qualitatively with experimental images of streamer branching.
Deterministic nature of the underlying dynamics of surface wind fluctuations
NASA Astrophysics Data System (ADS)
Sreelekshmi, R. C.; Asokan, K.; Satheesh Kumar, K.
2012-10-01
Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of the atmosphere, besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N, 76.950° E) from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic, suggesting the possibility of accurate short-term prediction. As most chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.
Deterministic photon-emitter coupling in chiral photonic circuits.
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.
Shock-induced explosive chemistry in a deterministic sample configuration.
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Distributed Traffic Complexity Management by Preserving Trajectory Flexibility
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Garcia-Chico, Jose-Luis; Wing, David J.
2007-01-01
In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. This paper presents preliminary research investigating a distributed trajectory-oriented approach to manage traffic complexity, based on preserving trajectory flexibility. The underlying hypotheses are that preserving trajectory flexibility autonomously by aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by collaboratively minimizing trajectory constraints without jeopardizing the intended air traffic management objectives. This paper presents an analytical framework in which flexibility is defined in terms of robustness and adaptability to disturbances, and preliminary metrics are proposed that can be used to preserve trajectory flexibility. The hypothesized impacts are illustrated by analyzing a trajectory solution space in a simple scenario with only speed as a degree of freedom, and in constraint situations involving meeting multiple times of arrival and resolving conflicts.
Switching performance of OBS network model under prefetched real traffic
NASA Astrophysics Data System (ADS)
Huang, Zhenhua; Xu, Du; Lei, Wen
2005-11-01
Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next-generation optical Internet, so it is very important to precisely evaluate the performance of the OBS network model. That performance varies with conditions, but the most important question is how the model behaves under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and the performance of the OBS network is evaluated from it. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system. It is therefore easy to conclude that this model is closer to the real OBS system than the traditional ones. The simulation results also indicate that this model evaluates the performance of the OBS network system more accurately, and that its results are closer to the actual situation.
Optimal structure of complex networks for minimizing traffic congestion.
Zhao, Liang; Cupertino, Thiago Henrique; Park, Kwangho; Lai, Ying-Cheng; Jin, Xiaogang
2007-12-01
To design complex networks to minimize traffic congestion, it is necessary to understand how traffic flow depends on network structure. We study data packet flow on complex networks, where the packet delivery capacity of each node is not fixed. The optimal configuration of capacities to minimize traffic congestion is derived and the critical packet generating rate is determined, below which the network is in a free flow state but above which congestion occurs. Our analysis reveals a direct relation between network topology and traffic flow. Optimal network structure, free of traffic congestion, should have two features: uniform distribution of load over all nodes and small network diameter. This finding is confirmed by numerical simulations. Our analysis also makes it possible to theoretically compare the congestion conditions for different types of complex networks. In particular, we find that a network with a low critical generating rate is more susceptible to congestion. The comparison has been made for the following complex-network topologies: random, scale-free, and regular.
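The link between topology and the congestion threshold can be sketched as below: betweenness is computed with a Brandes-style accumulation (standard), while the R_c formula C(N-1)/B_max is one common estimate from the congestion literature whose exact prefactor convention varies, so it should be read as an assumption rather than this paper's derivation.

```python
from collections import deque

def betweenness(adj):
    """Unnormalized shortest-path betweenness (Brandes' accumulation),
    counting ordered source-target pairs; adj maps node -> neighbor list."""
    b = {v: 0.0 for v in adj}
    for s in adj:
        dist, preds, order = {s: 0}, {v: [] for v in adj}, []
        q = deque([s])
        while q:                          # BFS shortest-path DAG from s
            u = q.popleft()
            order.append(u)
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
                if dist[w] == dist[u] + 1:
                    preds[w].append(u)
        sigma = {v: 0.0 for v in adj}     # number of shortest s->v paths
        sigma[s] = 1.0
        for u in order[1:]:
            sigma[u] = sum(sigma[p] for p in preds[u])
        delta = {v: 0.0 for v in adj}     # dependency accumulation
        for u in reversed(order):
            for p in preds[u]:
                delta[p] += sigma[p] / sigma[u] * (1.0 + delta[u])
            if u != s:
                b[u] += delta[u]
    return b

def critical_rate(adj, capacity=1.0):
    """One common estimate of the congestion threshold, R_c ~ C(N-1)/B_max
    (an assumed convention, not necessarily the paper's exact formula)."""
    bmax = max(betweenness(adj).values())
    return capacity * (len(adj) - 1) / bmax if bmax else float("inf")
```

On a path graph 0-1-2, the middle node carries both ordered pairs (0,2) and (2,0), so its betweenness is 2 and the estimated R_c is C(3-1)/2 = C.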
Dynamical jamming transition induced by a car accident in traffic-flow model of a two-lane roadway
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1994-01-01
A deterministic cellular automaton model is presented to simulate the traffic jam induced by a car accident in a two-lane roadway. We study the traffic flow of the system when the translation invariance is broken by the insertion of a blockage caused by a car accident in the first lane. Using computer simulation, it is shown that dynamical jamming transitions occur successively from phase 1 to phase 4 as the density of cars increases. In phase 1, no cars exist in the first lane and the cars in the second lane move with the maximal velocity. In phase 2, a discontinuity appears in the traffic-flow pattern and the cars move with the maximal current. This discontinuity segregates the system into two regions with different densities. In phase 3, the discontinuity disappears. In phase 4, the cars do not move ahead but oscillate between the first and second lanes.
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and both were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross section data were generated using MCNP5. The k_eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k_eff, the 9-group MCNP5/DIF3D calculation has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Emergence of bistable states and phase diagrams of traffic flow at an unsignalized intersection
NASA Astrophysics Data System (ADS)
Li, Qi-Lang; Jiang, Rui; Wang, Bing-Hong
2015-02-01
This paper studies phase diagrams of traffic states induced by the bottleneck of an unsignalized intersection consisting of two perpendicular one-lane roads. Parallel update rules are employed for both roads. At the crossing point, yield dynamics are considered in order to avoid collisions. Different from previous studies, the deterministic Nagel-Schreckenberg model is adopted in this work. Based on theoretical analysis and computer simulations, the phase diagrams of traffic flow are presented and the flow formulas are derived for all regions of the phase diagram. The results of the theoretical analysis are in good agreement with those of the computer simulations. An interesting phenomenon is found: bistable states exist in some regions of the phase diagrams.
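The deterministic Nagel-Schreckenberg model mentioned above (the stochastic model with randomization probability p = 0) reduces to two parallel steps: accelerate by one up to the speed limit, then brake to the gap. A minimal single-ring sketch, with illustrative parameters; the paper's intersection coupling and yield dynamics are not reproduced here:

```python
import numpy as np

def nasch_step(pos, vel, vmax, L):
    """One parallel update of the deterministic (p = 0) Nagel-Schreckenberg
    model on a ring of L cells; pos/vel are per-car arrays."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos) % L - 1   # empty cells to the car ahead
    vel = np.minimum(vel + 1, vmax)           # accelerate
    vel = np.minimum(vel, gaps)               # brake to avoid collision
    return (pos + vel) % L, vel

# illustrative run: 6 cars on a 60-cell ring (density 0.1) with vmax = 5
pos = np.arange(0, 60, 10)
vel = np.zeros(6, dtype=int)
for _ in range(30):
    pos, vel = nasch_step(pos, vel, 5, 60)
```

Below the critical density, every car deterministically relaxes to the speed limit, which is the free-flow branch of the fundamental diagram.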
An improved multi-value cellular automata model for heterogeneous bicycle traffic flow
NASA Astrophysics Data System (ADS)
Jin, Sheng; Qu, Xiaobo; Xu, Cheng; Ma, Dongfang; Wang, Dianhai
2015-10-01
This letter develops an improved multi-value cellular automata model for heterogeneous bicycle traffic flow, taking the higher maximum speed of electric bicycles into consideration. The update rules of both regular and electric bicycles are improved, with maximum speeds of two and three cells per second, respectively. Numerical simulation results for deterministic and stochastic cases are obtained. The fundamental diagrams and multiple-state effects under different model parameters are analyzed and discussed. Field observations were made to calibrate the slowdown probabilities. The results imply that the improved extended Burgers cellular automata (IEBCA) model is more consistent with the field observations than previous models and greatly enhances the realism of the bicycle traffic model.
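The multi-value CA family behind the IEBCA allows up to M bicycles per cell. A minimal sketch of the underlying speed-one Burgers CA update (the letter's multi-speed extension for electric bicycles and the slowdown probabilities are omitted; this only illustrates the multi-value hopping rule):

```python
import numpy as np

def bca_step(u, M):
    """One parallel update of the Burgers cellular automaton on a ring:
    cell j holds u[j] <= M bicycles, and min(u_j, M - u_{j+1}) of them
    hop forward into the free capacity of the next cell."""
    hop = np.minimum(u, M - np.roll(u, -1))   # bicycles able to advance
    return u - hop + np.roll(hop, 1)

# illustrative use: 3 bicycles on a 4-cell ring with capacity M = 2
u = np.array([2, 0, 1, 0])
u_next = bca_step(u, M=2)
```

The update conserves the number of bicycles and never exceeds the cell capacity, which is what makes the fundamental diagram of the model well defined.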
Self-Organized Criticality and Scaling in Lifetime of Traffic Jams
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-01-01
The deterministic cellular automaton rule 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents the traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. The typical lifetime
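Rule 184 with open boundaries can be sketched as follows; injecting and extracting a car at every step is an illustrative deterministic choice standing in for the paper's injection/extraction mechanism, whose exact rates are not given in the abstract.

```python
def rule184_step(cells, inject=True, extract=True):
    """One parallel update of CA rule 184 on an open road: a car (1)
    advances iff the cell ahead is empty (0); conditions are evaluated
    on the old configuration, so the update is parallel."""
    n = len(cells)
    new = cells[:]
    for i in range(n - 1):
        if cells[i] == 1 and cells[i + 1] == 0:
            new[i], new[i + 1] = 0, 1
    if extract and new[-1] == 1:
        new[-1] = 0          # car leaves at the right boundary
    if inject and new[0] == 0:
        new[0] = 1           # car enters at the left boundary
    return new
```

A single car drifts right one cell per step, while a jam against the right boundary dissolves from its downstream end, which is the basic mechanism behind the model's high- and low-density phases.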
A traffic analyzer for multiple SpaceWire links
NASA Astrophysics Data System (ADS)
Liu, Scige J.; Giusi, Giovanni; Di Giorgio, Anna M.; Vertolli, Nello; Galli, Emanuele; Biondi, David; Farina, Maria; Pezzuto, Stefano; Spinoglio, Luigi
2014-07-01
Modern space missions are becoming increasingly complex: the interconnection of the units in a satellite is now a network of terminals linked together through routers, where devices with different levels of automation and intelligence share the same data network. The traceability of the network transactions is performed mostly at terminal level through log analysis, and hence it is difficult to verify in real time the reliability of the interconnections and the interchange protocols. To improve and ease traffic analysis in a SpaceWire network, we implemented a low-level link analyzer, with the specific goal of simplifying the integration and test phases in the development of space instrumentation. The traffic analyzer collects signals coming from pod probes connected in series on the links of interest between two SpaceWire terminals. With respect to standard traffic analyzers, the design of this new tool includes the possibility to internally reshape the LVDS signal. This improvement increases the robustness of the analyzer against environmental noise and guarantees a deterministic delay on all analyzed signals. The analyzer core is implemented on a Xilinx FPGA, programmed to decode the bidirectional LVDS signals at Link and Network level. Subsequently, the core packetizes protocol characters into homogeneous sets of time-ordered events. The analyzer provides time-tagging functionality for each character set, with a precision down to the FPGA clock, i.e., about 20 ns in the adopted HW environment. The use of a common time reference for each character stream allows synchronous performance measurements. The collected information is then routed to an external computer for quick analysis via a high-speed USB2 connection. With this analyzer it is possible to verify the link performance in terms of induced delays in the transmitted signals. A case study focused on the analysis of the Time-Code synchronization in the presence of a SpaceWire Router is
Ground Motion and Variability from 3-D Deterministic Broadband Simulations
NASA Astrophysics Data System (ADS)
Withers, Kyle Brett
The accuracy of earthquake source descriptions is a major limitation in high-frequency (> 1 Hz) deterministic ground motion prediction, which is critical for performance-based design by building engineers. With the recent addition of realistic fault topography in 3D simulations of earthquake source models, ground motion can be deterministically calculated more realistically up to higher frequencies. We first introduce a technique to model frequency-dependent attenuation and compare its impact on strong ground motions recorded for the 2008 Chino Hills earthquake. Then, we model dynamic rupture propagation for both a generic strike-slip event and blind thrust scenario earthquakes matching the fault geometry of the 1994 Mw 6.7 Northridge earthquake along rough faults up to 8 Hz. We incorporate frequency-dependent attenuation via a power law above a reference frequency in the form Q_0 f^n, with high accuracy down to Q values of 15, and include nonlinear effects via Drucker-Prager plasticity. We model the region surrounding the fault with and without small-scale medium complexity in both a 1D layered model characteristic of southern California rock and a 3D medium extracted from the SCEC CVMSi.426 including a near-surface geotechnical layer. We find that the spectral acceleration from our models is within 1-2 interevent standard deviations from recent ground motion prediction equations (GMPEs) and compares well with that of recordings from strong ground motion stations at both short and long periods. At periods shorter than 1 second, Q(f) is needed to match the decay of spectral acceleration seen in the GMPEs as a function of distance from the fault. We find that the similarity between the intraevent variability of our simulations and observations increases when small-scale heterogeneity and plasticity are included; this is extremely important, as uncertainty in ground motion estimates dominates the overall uncertainty in seismic risk. In addition to GMPEs, we compare with simple
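The power-law attenuation form Q_0 f^n above a reference frequency can be written down directly; holding Q constant below the reference frequency is a common convention assumed here, and the parameter values are placeholders, not the simulation's actual settings.

```python
def q_of_f(f, q0=100.0, n=0.5, f0=1.0):
    """Frequency-dependent quality factor: Q(f) = Q0 * (f/f0)**n for
    f above the reference frequency f0, constant Q0 below it.
    (Assumed convention; q0, n, f0 here are illustrative values.)"""
    return q0 * (f / f0) ** n if f > f0 else q0

# e.g. with q0 = 100 and n = 0.5, attenuation weakens (Q grows) with frequency
q_low, q_high = q_of_f(0.5), q_of_f(4.0)
```

Larger Q at high frequency means less attenuation per wavelength there, which is why Q(f) changes the distance decay of short-period spectral acceleration.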
Effect of degree correlations on networked traffic dynamics
NASA Astrophysics Data System (ADS)
Sun, Jin-Tu; Wang, Sheng-Jun; Huang, Zi-Gang; Wang, Ying-Hai
2009-08-01
In order to enhance the transport capacity of scale-free networks, we study the relation between the degree correlation and the transport capacity of the network. We calculate the degree-degree correlation coefficient, the maximal betweenness and the critical value of the generating rate Rc (traffic congestion occurs for R>Rc). Numerical experiments indicate that both assortative mixing and disassortative mixing can enhance the transport capacity. We also reveal how the network structure affects the transport capacity. Assortative (disassortative) mixing changes distributions of nodes’ betweennesses, and as a result, the traffic decreases through nodes with the highest degree while it increases through the initially idle nodes.
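The degree-degree correlation coefficient computed above is the Pearson correlation of the degrees found at the two ends of each edge (assortative mixing gives positive values, disassortative mixing negative values). A minimal pure-Python sketch, with an illustrative graph not taken from the paper:

```python
def degree_assortativity(edges):
    """Pearson correlation of the degrees at the two ends of each edge
    (Newman's degree-degree correlation coefficient, undirected graph)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # each undirected edge contributes both orderings (du, dv) and (dv, du)
    xs = [deg[u] for u, v in edges] + [deg[v] for u, v in edges]
    ys = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# star graph: the hub connects only to leaves -> perfectly disassortative
star = [(0, i) for i in range(1, 5)]
print(degree_assortativity(star))  # -1.0
```

On a star graph every edge joins the degree-4 hub to a degree-1 leaf, so the coefficient is exactly -1; assortative networks (high-degree nodes linking to each other) push it toward +1.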
Traffic-driven SIR epidemic spreading in networks
NASA Astrophysics Data System (ADS)
Pu, Cunlai; Li, Siyuan; Yang, XianXia; Xu, Zhongqi; Ji, Zexuan; Yang, Jian
2016-03-01
We study SIR epidemic spreading in networks driven by traffic dynamics, which are further governed by static routing protocols. We obtain the maximum instantaneous population of infected nodes and the maximum population of ever infected nodes through simulation. We find that generally a more balanced load distribution leads to a more intense and widespread epidemic in networks. Increasing either the average node degree or the homogeneity of the degree distribution will facilitate epidemic spreading. When the packet generation rate ρ is small, increasing ρ favors epidemic spreading. However, when ρ is large enough, traffic congestion appears, which inhibits epidemic spreading.
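The coupling of routing and contagion described above can be illustrated with a toy model; this is a simplified stand-in, not the authors' simulation: packets hop along shortest paths, and infection passes only when a packet is forwarded from an infected node to a susceptible one, so congestion and routing shape the epidemic.

```python
import random
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src to every node via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def traffic_sir(adj, beta, mu, rho, steps, seed=0):
    """Toy traffic-driven SIR (illustrative, not the paper's model).
    Each step: a new packet is generated with prob. rho and routed one hop
    per step along a shortest path; a susceptible next hop receiving a
    packet from an infected node is infected with prob. beta; infected
    nodes recover with prob. mu. Returns the number of ever-infected nodes."""
    rng = random.Random(seed)
    nodes = list(adj)
    dist = {d: bfs_dist(adj, d) for d in nodes}   # distances to each destination
    state = {u: 'S' for u in nodes}
    state[nodes[0]] = 'I'                          # a single initial seed
    ever = {nodes[0]}
    packets = []                                   # (current node, destination)
    for _ in range(steps):
        if rng.random() < rho:
            s, d = rng.sample(nodes, 2)
            packets.append((s, d))
        moved = []
        for cur, d in packets:
            if cur == d:
                continue                           # delivered: drop the packet
            nxt = min(adj[cur], key=lambda v: dist[d].get(v, 1e9))
            if state[cur] == 'I' and state[nxt] == 'S' and rng.random() < beta:
                state[nxt] = 'I'
                ever.add(nxt)
            moved.append((nxt, d))
        packets = moved
        for u in nodes:
            if state[u] == 'I' and rng.random() < mu:
                state[u] = 'R'
    return len(ever)

# small ring network as a placeholder topology
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(traffic_sir(ring, beta=0.5, mu=0.1, rho=0.9, steps=200))
```

With beta = 0 the epidemic never leaves the seed node, which is a quick sanity check on the traffic-mediated transmission rule.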
Synchronized flow in oversaturated city traffic.
Kerner, Boris S; Klenov, Sergey L; Hermanns, Gerhard; Hemmerle, Peter; Rehborn, Hubert; Schreckenberg, Michael
2013-11-01
Based on numerical simulations with a stochastic three-phase traffic flow model, we reveal that moving queues (moving jams) in oversaturated city traffic dissolve at some distance upstream of the traffic signal while transforming into synchronized flow. It is found that, as in highway traffic [Kerner, Phys. Rev. E 85, 036110 (2012)], such a jam-absorption effect in city traffic is explained by a strong driver's speed adaptation: Time headways (space gaps) between vehicles increase upstream of a moving queue (moving jam), resulting in moving queue dissolution. It turns out that at given traffic signal parameters, the stronger the speed adaptation effect, the shorter the mean distance between the signal location and the road location at which moving queues dissolve fully and oversaturated traffic consists of synchronized flow only. A comparison of the synchronized flow in city traffic found in this Brief Report with synchronized flow in highway traffic is made.
Deterministic simulation of thermal neutron radiography and tomography
NASA Astrophysics Data System (ADS)
Pal Chowdhury, Rajarshi; Liu, Xin
2016-05-01
In recent years, thermal neutron radiography and tomography have gained much attention as nondestructive testing methods. However, their application is hindered by technical complexity, radiation shielding requirements, and time-consuming data collection processes. Monte Carlo simulations have been developed in the past to improve the capabilities of neutron imaging facilities. In this paper, a new deterministic simulation approach is proposed and demonstrated that simulates neutron radiographs numerically using a ray tracing algorithm. This approach makes the simulation of neutron radiographs much faster than the previously used stochastic methods (i.e., Monte Carlo methods). The major difficulty in simulating neutron radiography and tomography is finding a suitable scatter model. In this paper, an analytic scatter model is proposed and validated against a Monte Carlo simulation.
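The core of a deterministic ray-traced radiograph is Beer-Lambert attenuation accumulated along each ray through the object. The sketch below is a minimal parallel-beam version with scatter ignored; the grid, names, and values are illustrative, not the paper's model:

```python
import math

def radiograph(mu, dx, i0=1.0):
    """Deterministic parallel-beam radiograph of a 2D attenuation map.

    mu : 2D list of linear attenuation coefficients (1/cm); each row is the
         sequence of cells crossed by one detector pixel's ray
    dx : cell size along the beam (cm)
    Each pixel records I = I0 * exp(-sum(mu * dx)) along its ray
    (Beer-Lambert law; scattered neutrons are ignored in this sketch).
    """
    return [i0 * math.exp(-sum(row) * dx) for row in mu]

# 3-pixel detector: empty path, a thin absorber, a thick absorber
mu = [
    [0.0, 0.0, 0.0, 0.0],   # vacuum -> full transmitted intensity
    [0.0, 0.5, 0.0, 0.0],   # one absorbing cell
    [0.5, 0.5, 0.5, 0.5],   # absorber along the whole path
]
print(radiograph(mu, dx=1.0))
```

A real ray tracer would also compute the exact chord length of each ray through each voxel (e.g., with a Siddon-style traversal) rather than assuming a fixed dx per cell.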
Validation of a Deterministic Vibroacoustic Response Prediction Model
NASA Technical Reports Server (NTRS)
Caimi, Raoul E.; Margasahayam, Ravi
1997-01-01
This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.
YALINA analytical benchmark analyses using the deterministic ERANOS code system.
Gohar, Y.; Aliberti, G.; Nuclear Engineering Division
2009-08-31
The growing stockpile of nuclear waste constitutes a severe challenge for mankind for more than a hundred thousand years. To reduce the radiotoxicity of the nuclear waste, the Accelerator Driven System (ADS) has been proposed. One of the most important issues of ADS technology is the choice of the appropriate neutron spectrum for the transmutation of Minor Actinides (MA) and Long Lived Fission Products (LLFP). This report presents the analytical analyses obtained with the deterministic ERANOS code system for the YALINA facility within: (a) the collaboration between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research (JIPNR) Sosny of Belarus; and (b) the IAEA coordinated research projects for accelerator driven systems (ADS). This activity is conducted as a part of the Russian Research Reactor Fuel Return (RRRFR) Program and the Global Threat Reduction Initiative (GTRI) of DOE/NNSA.
Deterministic Production of Photon Number States via Quantum Feedback Control
NASA Astrophysics Data System (ADS)
Geremia, J. M.
2006-05-01
It is well-known that measurements reduce the state of a quantum system, at least approximately, to an eigenstate of the operator associated with the physical property being measured. Here, we employ a continuous measurement of cavity photon number to achieve a robust, nondestructively verifiable procedure for preparing number states of an optical cavity mode. Such Fock states are highly sought after for the enabling role they play in quantum computing, networking and precision metrology. Furthermore, we demonstrate that the particular Fock state produced in each application of the continuous photon number measurement can be controlled using techniques from real-time quantum feedback control. The result of the feedback-stabilized measurement is a deterministic source of (nearly ideal) cavity Fock states. An analysis of feedback stability and the experimental viability of a quantum optical implementation currently underway at the University of New Mexico will be presented.
Capillary-mediated interface perturbations: Deterministic pattern formation
NASA Astrophysics Data System (ADS)
Glicksman, Martin E.
2016-09-01
Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.
Deterministic spin-wave interferometer based on the Rydberg blockade
Wei Ran; Deng Youjin; Pan Jianwei; Zhao Bo; Chen Yuao
2011-06-15
The spin-wave (SW) N-particle path-entangled |N,0>+|0,N> (NOON) state is an N-particle Fock state with two atomic spin-wave modes maximally entangled. Owing to the property that the phase is sensitive to collective atomic motion, the SW NOON state can be utilized as an atomic interferometer and has promising applications in quantum-enhanced measurement. In this paper we propose an efficient protocol to deterministically produce the atomic SW NOON state by employing the Rydberg blockade. Possible errors in practical manipulations are analyzed. A feasible experimental scheme is suggested. Our scheme is far more efficient than the recently demonstrated experimental one, which only creates a heralded second-order SW NOON state.
Safe microburst penetration techniques: A deterministic, nonlinear, optimal control approach
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1987-01-01
A relatively large amount of computer time was used for the calculation of an optimal trajectory, but this is subject to reduction with moderate effort. The Deterministic, Nonlinear, Optimal Control algorithm yielded excellent aircraft performance in trajectory tracking for the given microburst. It did so by varying the angle of attack to counteract the lift effects of microburst-induced airspeed variations. Throttle saturation and aerodynamic stall limits were not a problem for the case considered, proving that the aircraft's performance capabilities were not violated by the given wind field. All closed-loop control laws previously considered performed very poorly in comparison, and therefore did not come near to taking full advantage of aircraft performance.
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
Deterministic secure communications using two-mode squeezed states
Marino, Alberto M.; Stroud, C. R. Jr.
2006-08-15
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state.
Qubit-mediated deterministic nonlinear gates for quantum oscillators.
Park, Kimin; Marek, Petr; Filip, Radim
2017-09-14
Quantum nonlinear operations for harmonic oscillator systems play a key role in the development of analog quantum simulators and computers. Since strong, highly nonlinear operations are often unavailable in the existing physical systems, it is a common practice to approximate them by using conditional measurement-induced methods. The conditional approach has several drawbacks, the most severe of which is the exponentially decreasing success rate of the strong and complex nonlinear operations. We show that by using a suitable two-level system sequentially interacting with the oscillator, it is possible to resolve these issues and implement a nonlinear operation both nearly deterministically and nearly perfectly. We explicitly demonstrate the approach by constructing self-Kerr and cross-Kerr couplings in a realistic situation, which require a feasible dispersive coupling between the two-level system and the oscillator.
Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations
Leininger, L D
2004-10-26
This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high fidelity finite element and discrete element codes on massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when the intelligence on which the modeling is based is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability-of-damage curves that account for uncertainty within the sample are computed, enabling the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
Deterministic processes vary during community assembly for ecologically dissimilar taxa
Powell, Jeff R.; Karunaratne, Senani; Campbell, Colin D.; Yao, Huaiying; Robinson, Lucinda; Singh, Brajesh K.
2015-01-01
The continuum hypothesis states that both deterministic and stochastic processes contribute to the assembly of ecological communities. However, the contextual dependency of these processes remains an open question that imposes strong limitations on predictions of community responses to environmental change. Here we measure community and habitat turnover across multiple vertical soil horizons at 183 sites across Scotland for bacteria and fungi, both dominant and functionally vital components of all soils but which differ substantially in their growth habit and dispersal capability. We find that habitat turnover is the primary driver of bacterial community turnover in general, although its importance decreases with increasing isolation and disturbance. Fungal communities, however, exhibit a highly stochastic assembly process, both neutral and non-neutral in nature, largely independent of disturbance. These findings suggest that an increased focus on dispersal limitation and biotic interactions is necessary to manage and conserve the key ecosystem services provided by these assemblages. PMID:26436640
Location deterministic biosensing from quantum-dot-nanowire assemblies
Liu, Chao; Kim, Kwanoh; Fan, D. L.
2014-08-25
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoretic (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could lead to advances in QD-based biomedical detection and inspire an innovative approach for fabricating various QD-based nanodevices.
Deterministic single-file dynamics in collisional representation.
Marchesoni, F; Taloni, A
2007-12-01
We re-examine numerically the diffusion of a deterministic, or ballistic single file with preassigned velocity distribution (Jepsen's gas) from a collisional viewpoint. For a two-modal velocity distribution, where half the particles have velocity +/-c, the collisional statistics is analytically proven to reproduce the continuous time representation. For a three-modal velocity distribution with equal fractions, where less than 1/2 of the particles have velocity +/-c, with the remaining particles at rest, the collisional process is shown to be inhomogeneous; its stationary properties are discussed here by combining exact and phenomenological arguments. Collisional memory effects are then related to the negative power-law tails in the velocity autocorrelation functions, predicted earlier in the continuous time formalism. Numerical and analytical results for Gaussian and four-modal Jepsen's gases are also reported for the sake of a comparison.
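A Jepsen's gas admits a classic simulation shortcut: equal-mass hard-point particles that exchange velocities on collision are statistically equivalent to non-interacting particles whose positions are re-sorted at every instant, so a tagged particle's trajectory is an order statistic of the free-flight gas. A sketch, assuming a two-modal +/-c velocity distribution (parameters illustrative):

```python
import random

def tagged_particle_trajectory(x0, v, t):
    """Position of the central particle of a hard-point single file at
    time t, via the Jepsen mapping: evolve all particles freely, then
    take the middle order statistic of the resulting positions."""
    n = len(x0)
    xt = sorted(x0[i] + v[i] * t for i in range(n))
    return xt[n // 2]          # trajectory of the tagged (central) particle

random.seed(1)
n = 101
x0 = sorted(random.uniform(-50.0, 50.0) for _ in range(n))
# two-modal velocity distribution: each particle has velocity +c or -c (c = 1)
v = [1.0 if random.random() < 0.5 else -1.0 for _ in range(n)]
print([round(tagged_particle_trajectory(x0, v, t), 3) for t in (0.0, 1.0, 2.0)])
```

Because every free-flight position moves by at most c*t, each order statistic (and hence the tagged particle) can drift by at most c*t over time t, which is a cheap consistency check on the mapping.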
Testing for chaos in deterministic systems with noise
NASA Astrophysics Data System (ADS)
Gottwald, Georg A.; Melbourne, Ian
2005-12-01
Recently, we introduced a new test for distinguishing regular from chaotic dynamics in deterministic dynamical systems and argued that the test had certain advantages over the traditional test for chaos using the maximal Lyapunov exponent. In this paper, we investigate the capability of the test to cope with moderate amounts of noisy data. Comparisons are made between an improved version of our test and both the “tangent space method” and “direct method” for computing the maximal Lyapunov exponent. The evidence of numerical experiments, ranging from the logistic map to an eight-dimensional Lorenz system of differential equations (the Lorenz 96 system), suggests that our method is superior to tangent space methods and that it compares very favourably with direct methods.
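The test referred to here is the 0-1 test for chaos: the observed series drives a pair of translation variables p and q, and the growth rate of their mean square displacement gives a statistic K near 1 for chaotic dynamics and near 0 for regular dynamics. Below is a bare-bones sketch (a single fixed c, no median over random c values, and none of the oscillatory-term corrections of the improved version):

```python
import math

def k_statistic(phi, c):
    """0-1 test for chaos (Gottwald-Melbourne style): the series phi
    drives translation variables p, q; K is the correlation of the mean
    square displacement M(n) with n (K ~ 1 chaotic, K ~ 0 regular)."""
    N = len(phi)
    p, q = [0.0], [0.0]
    for j, x in enumerate(phi, 1):
        p.append(p[-1] + x * math.cos(j * c))
        q.append(q[-1] + x * math.sin(j * c))
    ncut = N // 10                       # only n << N gives reliable M(n)
    M = []
    for n in range(1, ncut + 1):
        m = sum((p[j + n] - p[j]) ** 2 + (q[j + n] - q[j]) ** 2
                for j in range(N - n)) / (N - n)
        M.append(m)
    ns = list(range(1, ncut + 1))
    mn, mM = sum(ns) / ncut, sum(M) / ncut
    cov = sum((a - mn) * (b - mM) for a, b in zip(ns, M))
    var = (sum((a - mn) ** 2 for a in ns) *
           sum((b - mM) ** 2 for b in M)) ** 0.5
    return cov / var

def logistic(r, x0=0.3, n=2000, skip=200):
    """Logistic-map time series after discarding a transient."""
    x, series = x0, []
    for i in range(n + skip):
        x = r * x * (1 - x)
        if i >= skip:
            series.append(x)
    return series

c = 1.7  # in practice K is taken as the median over many random c values
print(round(k_statistic(logistic(4.0), c), 2))   # chaotic regime: close to 1
print(round(k_statistic(logistic(3.2), c), 2))   # period-2 regime: close to 0
```

For regular dynamics p and q stay bounded, so M(n) oscillates without a linear trend and K collapses toward zero; for chaos p and q diffuse and M(n) grows linearly in n.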
A deterministic computational procedure for space environment electron transport
NASA Astrophysics Data System (ADS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-08-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.
Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms
NASA Astrophysics Data System (ADS)
Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui
2013-10-01
Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for importing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning the inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than that of DSCs fabricated with Pt. This successful strategy provides a rational design for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the commonly used transparent conductive oxide (TCO) in DSCs can itself serve as the counter electrode material means that, besides decreasing the cost, the device structure and processing techniques of DSCs can be simplified in the future.
Reinforcement learning output feedback NN control using deterministic learning technique.
Xu, Bin; Yang, Chenguang; Shi, Zhongke
2014-03-01
In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs, whereas the critic NN is used to approximate the strategic utility function, and the action NN is employed to minimize both the strategic utility function and the tracking error. A deterministic learning technique has been employed to guarantee that the partial persistent excitation condition of internal states is satisfied during tracking control to a periodic reference orbit. The uniformly ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control.
Deterministic Direct Aperture Optimization Using Multiphase Piecewise Constant Segmentation
NASA Astrophysics Data System (ADS)
Nguyen, Dan Minh
Purpose: Direct Aperture Optimization (DAO) attempts to incorporate machine constraints in the inverse optimization to eliminate the post-processing steps in fluence map optimization (FMO) that degrade plan quality. Current commercial DAO methods utilize a stochastic or greedy approach to search a small aperture solution space. In this study, we propose a novel deterministic direct aperture optimization that integrates the segmentation of the fluence map in the optimization problem using the multiphase piecewise constant Mumford-Shah formulation. Methods: The deterministic DAO problem was formulated to include an L2-norm dose fidelity term to penalize differences between the projected dose and the prescribed dose, an anisotropic total variation term to promote piecewise continuity in the fluence maps, and the multiphase piecewise constant Mumford-Shah function to partition the fluence into pairwise discrete segments. A proximal-class, first-order primal-dual solver was implemented to solve the large scale optimization problem, and an alternating module strategy was implemented to update fluence and delivery segments. Three patients of varying complexity (one glioblastoma multiforme (GBM) patient, one lung (LNG) patient, and one bilateral head and neck (H&N) patient with 3 PTVs) were selected to test the new DAO method. For comparison, a popular and successful approach to DAO known as simulated annealing, a stochastic approach, was replicated. Each patient was planned using the Mumford-Shah based DAO (DAOMS) and the simulated annealing based DAO (DAOSA). PTV coverage, PTV homogeneity (D95/D5), and OAR sparing were assessed for each plan. In addition, high dose spillage, defined as the 50% isodose volume divided by the tumor volume, as well as conformity, defined as the van't Riet conformation number, were evaluated. Results: DAOMS achieved essentially the same OAR doses compared with the DAOSA plans for the GBM case. The average difference of OAR Dmax and Dmean between the
Conservative deterministic spectral Boltzmann solver near the grazing collisions limit
NASA Astrophysics Data System (ADS)
Haack, Jeffrey R.; Gamba, Irene M.
2012-11-01
We present new results building on the conservative deterministic spectral method for the space homogeneous Boltzmann equation developed by Gamba and Tharkabhushaman. This approach is a two-step process that acts on the weak form of the Boltzmann equation, and uses the machinery of the Fourier transform to reformulate the collisional integral into a weighted convolution in Fourier space. A constrained optimization problem is solved to preserve the mass, momentum, and energy of the resulting distribution. Within this framework we have extended the formulation to the more general case of collision operators with anisotropic scattering mechanisms, which requires a new formulation of the convolution weights. We also derive the grazing collisions limit for the method, and show that it is consistent with the Fokker-Planck-Landau equations as the grazing collisions parameter goes to zero.
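The constrained optimization that restores the collision invariants can be sketched as a least-squares projection onto the moment constraints: f* = f + C^T (C C^T)^(-1) (m - C f), where the rows of C are the discrete moments 1, v, v^2. This is a simplified stand-in for the paper's conservation routine; the velocity grid and target moments below are illustrative.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            fac = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= fac * A[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (A[i][3] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    return x

def conserve(f, v, mass, mom, energy):
    """Least-squares projection of a discrete distribution f on velocity
    grid v onto the set with prescribed mass, momentum, and energy:
    f* = f + C^T (C C^T)^(-1) (m - C f), C being the moment matrix."""
    C = [[1.0] * len(v), list(v), [vi * vi for vi in v]]
    m = [mass, mom, energy]
    defect = [mi - sum(ci * fi for ci, fi in zip(row, f))
              for row, mi in zip(C, m)]
    G = [[sum(a * b for a, b in zip(r1, r2)) for r2 in C] for r1 in C]
    lam = solve3(G, defect)
    return [fi + sum(lam[k] * C[k][i] for k in range(3))
            for i, fi in enumerate(f)]

# illustrative 1D velocity grid and a distribution that violates the invariants
v = [-2.0 + 0.5 * i for i in range(9)]
f = [0.1] * 9
fstar = conserve(f, v, mass=1.0, mom=0.0, energy=0.5)
```

After the projection the three discrete moments of fstar match the prescribed values to floating-point precision, which is exactly the role this step plays after each spectral evaluation of the collision operator.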