Deterministic models for traffic jams
NASA Astrophysics Data System (ADS)
Nagel, Kai; Herrmann, Hans J.
1993-10-01
We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
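As an illustration of this class of models, here is a minimal sketch of a deterministic single-lane update with integer positions and velocities on a ring road (an assumed variant in the spirit of the Nagel-Schreckenberg rules with the random-slowdown step removed, not the authors' exact model):

```python
def step(pos, vel, v_max, road_len):
    """One deterministic update of a 1D ring-road traffic model.

    Each car accelerates by 1, brakes to the gap ahead of it
    (no random slowdown, so the dynamics are deterministic),
    then advances by its new velocity.
    """
    n = len(pos)
    new_vel = []
    for i in range(n):
        # empty cells between car i and the car ahead (ring topology)
        gap = (pos[(i + 1) % n] - pos[i] - 1) % road_len
        new_vel.append(min(vel[i] + 1, gap, v_max))
    new_pos = [(p + v) % road_len for p, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

At low density every car accelerates up to v_max (free flow); at high density the deterministic braking rule alone sustains jams, consistent with the two phases described above.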
Traffic chaotic dynamics modeling and analysis of deterministic network
NASA Astrophysics Data System (ADS)
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and direct factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have greatly aided the analysis of nondeterministic networks. Previous research held that chaotic behavior was caused by random factors, and that deterministic networks, lacking such factors, would not exhibit it. In this paper, we first apply chaos theory to traffic data collected from a testbed of a typical deterministic network, avionics full-duplex switched Ethernet (AFDX), and find that chaotic behavior also exists in deterministic networks. To explore the mechanism that generates this chaos, we then apply mean-field theory to construct a traffic dynamics equation (TDE) that models deterministic network traffic without any random factors. By studying the derived TDE, we propose that chaotic dynamics is an intrinsic property of network traffic and can be regarded as the effect of the TDE control parameters. A network simulation verified that network congestion produces chaotic dynamics in a deterministic network, consistent with the predictions of the TDE. Our research should help in analyzing the complicated dynamic behavior of deterministic network traffic and contribute to the design and analysis of network reliability.
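The paper's TDE is not reproduced here; as a sketch of how chaotic behavior in a measured traffic series can be quantified, the following estimates the largest Lyapunov exponent of a one-dimensional map by tracking the divergence of two nearby trajectories (the fully chaotic logistic map stands in for the traffic dynamics, which is an assumption for illustration only):

```python
import math

def largest_lyapunov(f, x0, eps=1e-9, steps=5000, transient=200):
    """Estimate the largest Lyapunov exponent of the map x -> f(x)
    by repeatedly measuring how fast a separation of size eps grows,
    rescaling it back to eps after every step."""
    x = x0
    for _ in range(transient):          # discard transient behavior
        x = f(x)
    y = x + eps
    total = 0.0
    for _ in range(steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d == 0.0:                    # trajectories collapsed; restart
            d = eps
            y = x + eps
        else:
            y = x + eps * (y - x) / d   # rescale separation back to eps
        total += math.log(d / eps)
    return total / steps                # > 0 indicates chaos
```

For the logistic map f(x) = 4x(1 - x) the exact exponent is ln 2 ≈ 0.693; a positive estimate on recorded traffic data is the kind of evidence for chaos the paper describes.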
Classification and unification of the microscopic deterministic traffic models
NASA Astrophysics Data System (ADS)
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
Deterministic generation of multiparticle entanglement by quantum Zeno dynamics.
Barontini, Giovanni; Hohmann, Leander; Haas, Florian; Estève, Jérôme; Reichel, Jakob
2015-09-18
Multiparticle entangled quantum states, a key resource in quantum-enhanced metrology and computing, are usually generated by coherent operations exclusively. However, unusual forms of quantum dynamics can be obtained when environment coupling is used as part of the state generation. In this work, we used quantum Zeno dynamics (QZD), based on nondestructive measurement with an optical microcavity, to deterministically generate different multiparticle entangled states in an ensemble of 36 qubit atoms in less than 5 microseconds. We characterized the resulting states by performing quantum tomography, yielding a time-resolved account of the entanglement generation. In addition, we studied the dependence of quantum states on measurement strength and quantified the depth of entanglement. Our results show that QZD is a versatile tool for fast and deterministic entanglement generation in quantum engineering applications.
Deterministic generation of remote entanglement with active quantum feedback
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-10
We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Deterministic generation of remote entanglement with active quantum feedback
NASA Astrophysics Data System (ADS)
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-01
We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Finally, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Condition for generating the same scattered spectral density by random and deterministic media.
Wang, Tao; Ding, Yi; Ji, Xiaoling; Zhao, Daomu
2015-02-01
We present a condition for generating the same scattered spectral density by random and deterministic media. Examples of light waves scattering from a Gaussian-centered deterministic medium and a Gaussian-correlated quasi-homogeneous random medium are discussed. It is shown that the normalized far-zone scattered spectral densities produced by a Gaussian-centered deterministic medium and by a Gaussian-correlated quasi-homogeneous random medium will be identical provided that the square of the effective width of the normalized correlation coefficient of the quasi-homogeneous random medium is twice the square of the effective width of the scattering potential of the deterministic medium.
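In symbols (notation assumed here rather than taken from the paper), with $\sigma_\mu$ the effective width of the normalized correlation coefficient of the quasi-homogeneous random medium and $\sigma_F$ the effective width of the scattering potential of the deterministic medium, the stated condition reads

$$\sigma_\mu^2 = 2\,\sigma_F^2.$$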
Deterministic photonic cluster state generation from quantum dot molecules
NASA Astrophysics Data System (ADS)
Economou, Sophia; Gimeno-Segovia, Mercedes; Rudolph, Terry
2014-03-01
Currently, the most promising approach for photon-based quantum information processing is measurement-based, or one-way, quantum computing. In this scheme, a large entangled state of photons is prepared upfront and the computation is implemented with single-qubit measurements alone. Available approaches to generating the cluster state are probabilistic, which makes scalability challenging. We propose to generate the cluster state using a quantum dot molecule with one electron spin per quantum dot. The two spins are coupled by exchange interaction and are periodically pulsed to produce photons. We show that the entanglement created by free evolution between the spins is transferred to the emitted photons, and thus a 2D photonic ladder can be created. Our scheme only utilizes single-spin gates and measurement, and is thus fully consistent with available technology.
Mesh generation and energy group condensation studies for the jaguar deterministic transport code
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J.
2012-07-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined as well as thermal flux solution comparisons with the Monte Carlo code MC21. (authors)
Transforming the NAS: The Next Generation Air Traffic Control System
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2004-01-01
The next-generation air traffic control system must be designed to safely and efficiently accommodate the large growth of traffic expected in the near future. It should be sufficiently scalable to contend with the factor of 2 or more increase in demand expected by the year 2020. Analysis has shown that the current method of controlling air traffic cannot be scaled up to provide such levels of capacity. Therefore, to achieve a large increase in capacity while also giving pilots increased freedom to optimize their flight trajectories requires a fundamental change in the way air traffic is controlled. The key to achieving a factor of 2 or more increase in airspace capacity is to automate separation monitoring and control and to use an air-ground data link to send trajectories and clearances directly between ground-based and airborne systems. In addition to increasing capacity and offering greater flexibility in the selection of trajectories, this approach also has the potential to increase safety by reducing controller and pilot errors that occur in routine monitoring and voice communication tasks.
Deterministic generation of multiparticle entanglement in a coupled cavity-fiber system.
Li, Peng-Bo; Li, Fu-Li
2011-01-17
We develop a one-step scheme for generating multiparticle entangled states between two cold atomic clouds in distant cavities coupled by an optical fiber. We show that, by suitably choosing the intensities and detunings of the fields and precisely tuning the time evolution of the system, multiparticle entanglement between the separated atomic clouds can be engineered deterministically, with quantum manipulations insensitive to the states of the cavity and losses of the fiber. The experimental feasibility of this scheme is analyzed based on recent experimental advances in the realization of strong coupling between cold 87Rb clouds and a fiber-based cavity. This scheme may open up promising perspectives for implementing quantum communication and networking with coupled cavities connected by optical fibers.
Deterministic generation of many-photon GHZ states using quantum dots in a cavity
NASA Astrophysics Data System (ADS)
Leuenberger, Michael N.; Erementchouk, Mikhail
2014-05-01
Compared to classical light sources, quantum sources based on N00N states consisting of N photons achieve an N-times higher phase sensitivity, giving rise to super-resolution [1-3]. N00N-state creation schemes based on linear optics and projective measurements alone have a success probability p that decreases exponentially with N [4-6], e.g. p = 4.4 x 10^-14 for N = 20 [7]. Feed-forward improves the scaling, but N fluctuates nondeterministically in each attempt [8, 9]. Schemes based on parametric down-conversion suffer from low production efficiency and low fidelity [9]. A recent scheme based on atoms in a cavity combines deterministic time evolution, local unitary operations, and projective measurements [10]. Here we propose a novel scheme based on the off-resonant interaction of N photons with four semiconductor quantum dots (QDs) in a cavity to create GHZ states, also called polarization N00N states, deterministically with p = 1 and fidelity above 90% for N <= 60, without the need for any projective measurement or local unitary operation. Using our measure we obtain maximum N-photon entanglement E_N = 1 for arbitrary N. Our method paves the way to the miniaturization of N00N- and GHZ-state sources to the nanoscale regime, with the possibility of integrating them on a computer chip based on semiconductor materials.
Network Traffic Generator for Low-rate Small Network Equipment Software
Lanzisera, Steven
2013-05-28
Application that uses the Python low-level socket interface to pass network traffic between devices on the local side of a NAT router and the WAN side of the NAT router. This application is designed to generate traffic that complies with the Energy Star Small Network Equipment Test Method.
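The Energy Star test traffic itself is not reproduced here; the following is a minimal sketch of generating and counting UDP traffic with Python's low-level socket interface (the function names, payload size, and localhost addressing are placeholder assumptions, not the application's actual design):

```python
import socket

def send_burst(dst_host, dst_port, payload, count):
    """Send `count` UDP datagrams of `payload` to the destination and
    return the total number of bytes handed to the kernel."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    try:
        for _ in range(count):
            sent += sock.sendto(payload, (dst_host, dst_port))
    finally:
        sock.close()
    return sent

def recv_burst(sock, count):
    """Receive `count` datagrams on an already-bound socket and
    return the total number of payload bytes received."""
    received = 0
    for _ in range(count):
        data, _addr = sock.recvfrom(2048)
        received += len(data)
    return received
```

Binding the receiver to port 0 lets the OS pick a free port, which keeps a loopback test self-contained on one machine; in the actual application the endpoints would sit on the LAN and WAN sides of the NAT router.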
MMPP Traffic Generator for the Testing of the SCAR 2 Fast Packet Switch
NASA Technical Reports Server (NTRS)
Chren, William A., Jr.
1995-01-01
A prototype MMPP Traffic Generator (TG) has been designed for testing of the COMSAT-supplied SCAR II Fast Packet Switch. By generating packets distributed according to a Markov-Modulated Poisson Process (MMPP) model, it allows assessment of switch performance under traffic conditions that are more realistic than could be generated using the COMSAT-supplied Traffic Generator Module. The MMPP model is widely believed to accurately model real-world superimposed voice and data communications traffic. The TG was designed to be, as much as possible, a "drop-in" replacement for the COMSAT Traffic Generator Module. The latter fit on two Altera EPM7256EGC 192-pin CPLDs and produced traffic for one switch input port. No board changes are necessary because the TG has been partitioned to use the existing board traces. The TG, consisting of parts "TGDATPROC" and "TGRAMCTL", must merely be reprogrammed into the Altera devices of the same name. However, the 040 controller software must be modified to provide TG initialization data. This data is given in Section II.
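As an illustration of the arrival model, here is a sketch of a two-state MMPP packet-arrival generator (the rates, the two-state restriction, and the function name are assumptions for illustration; the SCAR II TG is a hardware implementation, not this software one):

```python
import random

def mmpp_arrivals(lams, q, t_end, seed=0):
    """Generate packet arrival times on [0, t_end) from a 2-state MMPP.

    lams[k] : Poisson arrival rate while the modulating chain is in state k
    q[k]    : rate of leaving state k (exponential sojourn times)
    All rates must be positive.
    """
    rng = random.Random(seed)
    t, state = 0.0, 0
    arrivals = []
    t_switch = rng.expovariate(q[0])        # next modulating-state change
    while t < t_end:
        dt = rng.expovariate(lams[state])   # candidate next arrival
        if t + dt < t_switch:
            t += dt
            if t < t_end:
                arrivals.append(t)
        else:
            # Candidate arrival falls past the state change: by memorylessness
            # we may discard it and resample at the new state's rate.
            t = t_switch
            state = 1 - state
            t_switch = t + rng.expovariate(q[state])
    return arrivals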
Generation of deterministic tsunami hazard maps in the Bay of Cadiz, south-west Spain
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, J. A.; Otero, L.; Olabarrieta, M.; González, M.; Carreño, E.; Baptista, M. A.; Miranda, J. M.; Medina, R.; Lima, V.
2009-04-01
free surface elevation, maximum water depth, maximum current speed, maximum Froude number and maximum impact forces (hydrostatic and dynamic forces). The fault rupture and sea bottom displacement have been computed by means of the Okada equations. As a result, a set of more than 100 deterministic thematic maps has been created in a GIS environment incorporating geographical data and high-resolution orthorectified satellite images. These thematic maps form an atlas of inundation maps that will be distributed to different government authorities and civil protection and emergency agencies. The authors gratefully acknowledge the financial support provided by the EU under the frame of the European Project TRANSFER (Tsunami Risk And Strategies For the European Region), 6th Framework Programme.
Dual-monitor deterministic hardware for visual stimuli generation in neuroscience experiments.
Gazziro, Mario; Almeida, Lirio
2010-01-01
This article describes the development of a dual-monitor visual stimulus generator used in neuroscience experiments with invertebrates such as flies. The experiment consists of the visualization of two fixed images that are displaced horizontally according to the stimulus data. The system was developed using off-the-shelf FPGA kits and is capable of displaying 640x480 pixels with 256 intensity levels at 200 frames per second (FPS) on each monitor. A raster plot of the experiment with the superimposed stimuli was generated as the result of this work. A novel architecture was developed, using the same dot clock for both monitors; its implementation achieves perfect synchronization between the two devices. PMID:21096378
Deterministic generation of a three-dimensional entangled state via quantum Zeno dynamics
Li Wenan; Huang Guangyao
2011-02-15
A scheme is proposed for the generation of a three-dimensional entangled state for two atoms trapped in a cavity via quantum Zeno dynamics. Because the scheme is based on the resonant interaction, the time required to produce entanglement is very short compared with the dispersive protocols. We show that the resulting effective dynamics allows for the creation of robust qutrit-qutrit entanglement. The influence of various decoherence processes such as spontaneous emission and photon loss on the fidelity of the entangled state is investigated. Numerical results show that the scheme is robust against the cavity decay since the evolution of the system is restricted to a subspace with null-excitation cavity fields. Furthermore, the present scheme has been generalized to realize N-dimensional entanglement for two atoms.
Yang, W. S.; Lee, C. H.
2008-05-16
Under the fast reactor simulation program launched in April 2007, development of an advanced multigroup cross section generation code was initiated in July 2007, in conjunction with the development of the high-fidelity deterministic neutron transport code UNIC. The general objectives are to simplify the existing multi-step schemes and to improve the resolved and unresolved resonance treatments. Based on the review results of current methods and the fact that they have been applied successfully to fast critical experiment analyses and fast reactor designs for the last three decades, the methodologies of the ETOE-2/MC²-2/SDX code system were selected as the starting set of methodologies for multigroup cross section generation for fast reactor analysis. As the first step toward coupling with the UNIC code and use in a parallel computing environment, the MC²-2 code was updated by modernizing the memory structure and replacing old data management package subroutines and functions with FORTRAN 90 based routines. Various modifications were also made in the ETOE-2 and MC²-2 codes to process the ENDF/B-VII.0 data properly. Using the updated ETOE-2/MC²-2 code system, the ENDF/B-VII.0 data was successfully processed for major heavy and intermediate nuclides employed in sodium-cooled fast reactors. Initial verification tests of the MC²-2 libraries generated from ENDF/B-VII.0 data were performed by inter-comparison of twenty-one-group infinite-dilute total cross sections obtained from MC²-2, VIM, and NJOY. For almost all nuclides considered, MC²-2 cross sections agreed very well with those from VIM and NJOY. Preliminary validation tests of the ENDF/B-VII.0 libraries of MC²-2 were also performed using a set of sixteen fast critical benchmark problems. The deterministic results based on MC²-2/TWODANT calculations were in good agreement with MCNP solutions within ~0.25% Δρ, except for a few small LANL fast assemblies
Lehr, D.; Dietrich, K.; Siefke, T.; Kley, E.-B.; Alaee, R.; Filter, R.; Lederer, F.; Rockstuhl, C.; Tünnermann, A.
2014-10-06
A double-patterning process for scalable, efficient, and deterministic nanoring array fabrication is presented. It enables gaps and features below a size of 20 nm. A writing time of 3 min/cm² makes this process extremely appealing for scientific and industrial applications. Numerical simulations are in agreement with experimentally measured optical spectra. Therefore, a platform and a design tool for upcoming next generation plasmonic devices like hybrid plasmonic quantum systems are delivered.
Trajectory Assessment and Modification Tools for Next Generation Air Traffic Management Operations
NASA Technical Reports Server (NTRS)
Brasil, Connie; Lee, Paul; Mainini, Matthew; Lee, Homola; Lee, Hwasoo; Prevot, Thomas; Smith, Nancy
2011-01-01
This paper reviews three Next Generation Air Transportation System (NextGen) based high-fidelity air traffic control human-in-the-loop (HITL) simulations, with a focus on the expected requirement of enhanced automated trajectory assessment and modification tools to support future air traffic flow management (ATFM) planning positions. The simulations were conducted at the National Aeronautics and Space Administration (NASA) Ames Research Center's Airspace Operations Laboratory (AOL) in 2009 and 2010. The test airspace for all three simulations assumed the mid-term NextGen En-Route high-altitude environment, utilizing high-altitude sectors from the Kansas City and Memphis Air Route Traffic Control Centers. Trajectory assessment, modification, and coordination decision support tools were developed at the AOL in order to perform future ATFM tasks. Overall tool usage results and user acceptability ratings were collected across three areas of NextGen operations to evaluate the tools. In addition to the usefulness and usability feedback, feasibility issues, benefits, and future requirements were also addressed. Overall, the tool sets were rated very useful and usable, and many elements of the tools received high scores and were used frequently and successfully. Tool utilization results in all three HITLs showed both user and system benefits, including better airspace throughput, reduced controller workload, and highly effective communication protocols in both full Data Comm and mixed-equipage environments.
Barellini, A; Bogi, L; Licitra, G; Silvi, A M; Zari, A
2009-12-01
Air traffic control (ATC) primary radars are 'classical' radars that use echoes of radiofrequency (RF) pulses from aircraft to determine their position. High-power RF pulses radiated from radar antennas may produce high electromagnetic field levels in the surrounding area. Measurements of the electromagnetic fields produced by RF-pulsed radar by means of a swept-tuned spectrum analyser are investigated here. Measurements have been carried out both in the laboratory and in situ on signals generated by an ATC primary radar. PMID:19864331
NASA Technical Reports Server (NTRS)
Khambatta, Cyrus F.
2007-01-01
A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios, with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from data files from the McTMA system and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.
Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei
2014-11-12
Single photons are attractive candidates of quantum bits (qubits) for quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuation of driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.
Scripted drives: A robust protocol for generating exposures to traffic-related air pollution
NASA Astrophysics Data System (ADS)
Patton, Allison P.; Laumbach, Robert; Ohman-Strickland, Pamela; Black, Kathy; Alimokhtari, Shahnaz; Lioy, Paul J.; Kipen, Howard M.
2016-10-01
Commuting in automobiles can contribute substantially to total traffic-related air pollution (TRAP) exposure, yet measuring commuting exposures for studies of health outcomes remains challenging. To estimate real-world TRAP exposures, we developed and evaluated the robustness of a scripted drive protocol on the NJ Turnpike and local roads between April 2007 and October 2014. Study participants were driven in a car with closed windows and open vents during morning rush hours on 190 days. Real-time measurements of PM2.5, PNC, CO, and BC, and integrated samples of NO2, were made in the car cabin. Exposure measures included in-vehicle concentrations on the NJ Turnpike and local roads and the differences and ratios of these concentrations. Median in-cabin concentrations were 11 μg/m³ PM2.5, 40,000 particles/cm³ PNC, 0.3 ppm CO, 4 μg/m³ BC, and 20.6 ppb NO2. In-cabin concentrations on the NJ Turnpike were higher than in-cabin concentrations on local roads by a factor of 1.4 for PM2.5, 3.5 for PNC, 1.0 for CO, and 4 for BC. Median concentrations of NO2 for full rides were 2.4 times higher than ambient concentrations. Results were generally robust relative to season, traffic congestion, ventilation setting, and study year, except for PNC and PM2.5, which had secular and seasonal trends. Ratios of concentrations were more stable than differences or absolute concentrations. Scripted drives can be used to generate reasonably consistent in-cabin increments of exposure to traffic-related air pollution.
Characterization of highway traffic noise generated by rigid pavement contraction joints
NASA Astrophysics Data System (ADS)
Ellis, Lawrin T.; Niezrecki, Christopher; Bloomquist, David
2003-04-01
Contraction joints in rigid (concrete) pavements are required to permit expansion of each monolithic section of roadway. At higher speeds, the major source of highway noise is attributed to vehicle tire/roadway interaction. Current concerns about limiting the impact of highway traffic noise have forced transportation agencies to consider strategies to control noise generated by tire/roadway interaction. Within this work, the difference in noise generated by 1/4- vs. 3/8-in. joint widths is examined. The study focuses on passenger vehicles, including a sedan and a light-duty van/truck. Both in-cabin and roadside noise levels are measured for vehicle speeds of 50, 60, and 70 miles per hour. For the sedan, the minimum and maximum observed in-cabin differences were determined to be 1.08 and 1.82 dB(A), respectively. Minimum and maximum observed roadside differences are 1.19 and 2.58 dB(A), respectively. Van tests resulted in minimum and maximum observed in-cabin differences of 0.60 and 1.09 dB(A) and minimum and maximum observed roadside differences of 1.05 and 3.18 dB(A), respectively. This paper contains details of reference standards, test methods, and the results obtained.
Williams, K.A.; Delene, J.G.; Fuller, L.C.; Bowers, H.I.
1987-06-01
The total busbar electric generating costs were estimated for locations in ten regions of the United States for base-load nuclear and coal-fired power plants with a startup date of January 2000. For the Midwest region a complete data set that specifies each parameter used to obtain the comparative results is supplied. When based on the reference set of input variables, the comparison of power generation costs is found to favor nuclear in most regions of the country. Nuclear power is most favored in the northeast and western regions where coal must be transported over long distances; however, coal-fired generation is most competitive in the north central region where large reserves of cheaply mineable coal exist. In several regions small changes in the reference variables could cause either option to be preferred. The reference data set reflects the better of recent electric utility construction cost experience (BE) for nuclear plants. This study assumes as its reference case a stable regulatory environment and improved planning and construction practices, resulting in nuclear plants typically built at the present BE costs. Today's BE nuclear-plant capital investment cost model is then being used as a surrogate for projected costs for the next generation of light-water reactor plants. An alternative analysis based on today's median experience (ME) nuclear-plant construction cost experience is also included. In this case, coal is favored in all ten regions, implying that typical nuclear capital investment costs must improve for nuclear to be competitive.
NASA Astrophysics Data System (ADS)
Oh, Seok-Geun; Suh, Myoung-Seok
2016-03-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member of each category. However, they were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories with systematic biases and various correlation coefficients. EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it was strongly sensitive to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed superior skills in both accuracy and reliability across all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
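The skill-weighted averaging described above can be sketched as follows, in the spirit of WEA_RAC: each member is weighted by its training-period skill (here correlation divided by RMSE, clipped and normalized). This exact weight formula is an assumption of the sketch, not the paper's score.

```python
import numpy as np

def weighted_ensemble_average(members, truth_train):
    """Skill-weighted ensemble average of model time series.

    members: (n_members, n_time) array whose first n_train steps overlap
    with truth_train (the training period). The weight formula
    (max(corr, 0) / RMSE, normalized to sum to one) is illustrative.
    """
    n_train = truth_train.size
    weights = []
    for m in members:
        err = m[:n_train] - truth_train
        rmse = np.sqrt(np.mean(err ** 2))
        corr = np.corrcoef(m[:n_train], truth_train)[0, 1]
        weights.append(max(corr, 0.0) / max(rmse, 1e-12))
    w = np.array(weights)
    w /= w.sum()
    return w @ members  # weighted average over the full period
```

A member with low training error and high correlation dominates the average, which is why such schemes tolerate systematic biases better than plain equal-weight averaging.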
Traffic-generated changes in the chemical characteristics of size-segregated urban aerosols.
Rogula-Kozłowska, Wioletta
2014-10-01
The road traffic impact on the concentrations of 13 fractions of particulate matter (PM) and their components was assessed. PM was sampled at two points in Katowice (southern Poland): a background point beyond the effects of road traffic, and a near-highway traffic point. The samples were analyzed for organic and elemental carbon, 8 water-soluble ions, 24 elements, and 16 polycyclic aromatic hydrocarbons (PAHs). The traffic emissions (mainly particles from car exhaust) enriched the ultrafine, submicron, and fine PM particles with elemental carbon. The traffic-caused re-suspension of road and soil dust affected the concentrations and chemical composition of the coarse PM fraction. However, for each PM fraction, the carcinogenic equivalent ratios, assumed as a measure of the hazard from 16 PAHs in this paper, were similar at the two sampling points. The traffic emissions from the highway appeared to have a weaker influence on the concentrations and chemical composition of PM in a typical urban area of southern Poland than elsewhere in Europe.
Deterministic Walks with Choice
Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.
2014-01-10
This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
Deterministic relativistic quantum bit commitment
NASA Astrophysics Data System (ADS)
Adlam, Emily; Kent, Adrian
2015-06-01
We describe new unconditionally secure bit commitment schemes whose security is based on Minkowski causality and the monogamy of quantum entanglement. We first describe an ideal scheme that is purely deterministic, in the sense that neither party needs to generate any secret randomness at any stage. We also describe a variant that allows the committer to proceed deterministically, requires only local randomness generation from the receiver, and allows the commitment to be verified in the neighborhood of the unveiling point. We show that these schemes still offer near-perfect security in the presence of losses and errors, which can be made perfect if the committer uses an extra single random secret bit. We discuss scenarios where these advantages are significant.
NASA Astrophysics Data System (ADS)
Hazelhoff, Lykele; Creusen, Ivo M.; Woudsma, Thomas; de With, Peter H. N.
2015-11-01
Combined databases of road markings and traffic signs provide a complete and full description of the present traffic legislation and instructions. Such databases contribute to efficient signage maintenance, improve navigation, and benefit autonomous driving vehicles. A system is presented for the automated creation of such combined databases, which additionally investigates the benefit of this combination for automated contextual placement analysis. This analysis involves verification of the co-occurrence of traffic signs and road markings to retrieve a list of potentially incorrectly signaled (and thus potentially unsafe) road situations. This co-occurrence verification is specifically explored for both pedestrian crossings and yield situations. Evaluations on 420 km of road have shown that individual detection of traffic signs and road markings denoting these road situations can be performed with accuracies of 98% and 85%, respectively. Combining both approaches shows that over 95% of the pedestrian crossings and give-way situations can be identified. An exploration toward additional co-occurrence analysis of signs and markings shows that inconsistently signaled situations can successfully be extracted, such that specific safety actions can be directed toward cases lacking signs or markings, while most consistently signaled situations can be omitted from this analysis.
A queuing model for road traffic simulation
Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.
2015-03-10
We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme.
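The deterministic Godunov concatenation that this queuing model mirrors can be sketched with a triangular density-flow fundamental diagram: the flux between adjacent sections is the minimum of upstream demand and downstream supply. All parameter values below are illustrative assumptions, not calibrated ones.

```python
# Godunov-style (cell-transmission) update over concatenated road sections,
# with a triangular density-flow fundamental diagram.
V_FREE = 25.0            # free-flow speed (m/s), illustrative
RHO_JAM = 0.2            # jam density (veh/m), illustrative
Q_MAX = 0.6              # capacity (veh/s), illustrative
RHO_C = Q_MAX / V_FREE   # critical density
W = Q_MAX / (RHO_JAM - RHO_C)  # backward wave speed

def demand(rho):
    """Upstream demand: the flow a section wants to send downstream."""
    return min(V_FREE * rho, Q_MAX)

def supply(rho):
    """Downstream supply: the flow a section can accept from upstream."""
    return min(Q_MAX, W * (RHO_JAM - rho))

def step(rho, dx, dt, inflow_demand, outflow_supply):
    """One conservative update of the section densities (veh/m)."""
    n = len(rho)
    flux = [0.0] * (n + 1)
    flux[0] = min(inflow_demand, supply(rho[0]))
    for i in range(1, n):
        flux[i] = min(demand(rho[i - 1]), supply(rho[i]))
    flux[n] = min(demand(rho[-1]), outflow_supply)
    return [rho[i] + dt / dx * (flux[i] - flux[i + 1]) for i in range(n)]
```

Boundary demand and supply play exactly the roles of the upstream traffic demand and downstream traffic supply mentioned in the abstract.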
Deterministic uncertainty analysis
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig.
Deterministic hierarchical networks
NASA Astrophysics Data System (ADS)
Barrière, L.; Comellas, F.; Dalfó, C.; Fiol, M. A.
2016-06-01
It has been shown that many networks associated with complex systems are small-world (they have both a large local clustering coefficient and a small diameter) and also scale-free (the degrees are distributed according to a power law). Moreover, these networks are very often hierarchical, as they describe the modularity of the systems that are modeled. Most of the studies for complex networks are based on stochastic methods. However, a deterministic method, with an exact determination of the main relevant parameters of the networks, has proven useful. Indeed, this approach complements and enhances the probabilistic and simulation techniques and, therefore, it provides a better understanding of the modeled systems. In this paper we find the radius, diameter, clustering coefficient and degree distribution of a generic family of deterministic hierarchical small-world scale-free networks that has been considered for modeling real-life complex systems.
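One well-known deterministic construction of this kind is the pseudofractal scale-free graph, which can be grown without any randomness; it is an illustrative example of a deterministic hierarchical small-world scale-free family, not necessarily the exact family analyzed in the paper.

```python
def pseudofractal_steps(t):
    """Grow the pseudofractal scale-free graph for t steps, starting
    from a triangle: at every step, each existing edge spawns a new
    vertex joined to both of its endpoints. Fully deterministic."""
    edges = [(0, 1), (1, 2), (0, 2)]
    n = 3
    for _ in range(t):
        new_edges = []
        for (u, v) in edges:
            new_edges.append((u, n))
            new_edges.append((v, n))
            n += 1
        edges += new_edges
    return n, edges

def degrees(n, edges):
    """Exact degree sequence, from which the power-law tail can be read off."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg
```

Because the construction is deterministic, quantities such as the degree distribution and clustering coefficient can be computed exactly at every step, which is precisely the advantage over stochastic models that the abstract emphasizes.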
Basic model for traffic interweave
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2015-09-01
We propose a three-parameter traffic model. The system consists of a loop with two junctions. The three parameters control the inflow, the outflow (from the junctions), and the interweave (in the loop). The dynamics is deterministic; the boundary conditions are stochastic. We present preliminary results for a complete phase diagram and all possible phase transitions. We observe four distinct traffic phases: free flow, congestion, bottleneck, and gridlock. The proposed model presents, economically, a clear perspective on these four phases. Free flow and congestion are caused by the traffic conditions at the junctions. Both bottleneck and gridlock are caused by the traffic interweave in the loop. Rather than being directly related to conventional congestion, gridlock can be taken as an extreme limit of the bottleneck phase. This model can be useful for clarifying the characteristics of traffic phases and can also be extended for practical applications.
Chengjiang Mao
1996-12-31
In typical AI systems, we employ so-called non-deterministic reasoning (NDR), which resorts to systematic search with backtracking in the search spaces defined by knowledge bases (KBs). A salient property of NDR is that it facilitates programming, especially for difficult AI problems such as natural language processing, where it is hard to find algorithms that tell computers what to do at every step. However, the poor efficiency of NDR remains an open problem. Our work aims at overcoming this efficiency problem.
NASA Technical Reports Server (NTRS)
Roske-Hofstrand, Renate J.
1990-01-01
The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
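The hard-assignment character of the DIB can be sketched as an iterative clustering of inputs x over the joint distribution p(x, y): each x is deterministically assigned to the cluster t maximizing log q(t) - beta * KL(p(y|x) || q(y|t)). The update below is a simplified sketch of that idea; the deterministic initialization heuristic and convergence handling are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dib_cluster(p_xy, n_clusters, beta, n_iter=50):
    """Hard-assignment (DIB-style) clustering of inputs x.

    p_xy: (n_x, n_y) joint distribution. Each x is assigned to the
    cluster t maximizing log q(t) - beta * KL(p(y|x) || q(y|t)).
    """
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / p_x[:, None]
    # heuristic deterministic initialization (an assumption of this sketch)
    assign = np.argmax(p_y_given_x, axis=1) % n_clusters
    eps = 1e-12
    for _ in range(n_iter):
        q_t = np.array([p_x[assign == t].sum() for t in range(n_clusters)])
        q_y_t = np.full((n_clusters, n_y), 1.0 / n_y)  # fallback for empty clusters
        for t in range(n_clusters):
            if q_t[t] > 0:
                q_y_t[t] = p_xy[assign == t].sum(axis=0) / q_t[t]
        # KL(p(y|x) || q(y|t)) for every (x, t) pair
        kl = (p_y_given_x[:, None, :] *
              np.log((p_y_given_x[:, None, :] + eps) /
                     (q_y_t[None, :, :] + eps))).sum(axis=-1)
        new = np.argmax(np.log(q_t + eps)[None, :] - beta * kl, axis=1)
        if np.array_equal(new, assign):
            break
        assign = new
    return assign
```

The argmax replaces the soft Boltzmann assignment of the standard IB, which is the stochastic-to-deterministic change the abstract describes.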
Prediction of interior noise in buildings generated by underground rail traffic
NASA Astrophysics Data System (ADS)
Nagy, A. B.; Fiala, P.; Márki, F.; Augusztinovicz, F.; Degrande, G.; Jacobs, S.; Brassenx, D.
2006-06-01
The prediction of the sound field in cavities surrounded by vibrating walls is a simple task nowadays, provided that the velocity distribution along the walls is known in sufficient detail. This information can be obtained from a structural finite element (FE) calculation of the building, and the results can be fed directly into a conventional boundary element (BE) analysis. Though methodologically simple, this is not an attractive way of prediction from the practical point of view: the matrices needed for the BE calculation are too large, so their inversion is very cumbersome and computationally intensive. The paper introduces a modified numerical calculation method appropriate for practical calculations without the need to construct and invert large matrices. The suggested method is based on the Rayleigh radiation integral and standard direct (collocational) BE techniques, where the necessary input data are generated from measured or calculated velocity values at just a few points. The technique has been compared and validated against an extensive measurement series performed in a reinforced concrete frame building close to a tunnel of line RER B of the underground railway network in Paris.
Near real-time traffic routing
NASA Technical Reports Server (NTRS)
Yang, Chaowei (Inventor); Cao, Ying (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor)
2012-01-01
A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into a multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple of traffic simulation computing nodes. The common input data includes static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.
The Traffic Management Advisor
NASA Technical Reports Server (NTRS)
Nedell, William; Erzberger, Heinz; Neuman, Frank
1990-01-01
The traffic management advisor (TMA) is comprised of algorithms, a graphical interface, and interactive tools for controlling the flow of air traffic into the terminal area. The primary algorithm incorporated in it is a real-time scheduler which generates efficient landing sequences and landing times for arrivals within about 200 n.m. from touchdown. A unique feature of the TMA is its graphical interface that allows the traffic manager to modify the computer-generated schedules for specific aircraft while allowing the automatic scheduler to continue generating schedules for all other aircraft. The graphical interface also provides convenient methods for monitoring the traffic flow and changing scheduling parameters during real-time operation.
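The core scheduling step can be sketched as a greedy first-come-first-served assignment of landing times with a minimum separation constraint; this is a greatly simplified stand-in for the TMA's real-time scheduler, and the separation value is an assumption.

```python
def schedule_landings(etas, min_sep=90.0):
    """Greedy FCFS landing scheduler: order flights by estimated time
    of arrival (seconds) and delay each just enough to stay at least
    min_sep seconds behind the previous lander.

    Returns {flight_index: scheduled_landing_time}."""
    order = sorted(range(len(etas)), key=lambda i: etas[i])
    slots, last = {}, None
    for i in order:
        t = etas[i] if last is None else max(etas[i], last + min_sep)
        slots[i] = t
        last = t
    return slots
```

In the TMA, a traffic manager can override individual computer-generated slots while the scheduler keeps updating the rest; in this sketch that would amount to pinning an entry of `slots` before rescheduling the remaining flights.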
Visualization of Traffic Accidents
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong; Khattak, Asad
2010-01-01
Traffic accidents have tremendous impact on society. Annually approximately 6.4 million vehicle accidents are reported by police in the US and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, has the capabilities to display events associated with a road network, such as accident locations, and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper aims to address this problem and proposes an improved method that utilizes all relevant information of traffic accidents, namely, route number, direction, and mile post, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shape file for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
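The corrected event-location step described above is, at its core, linear referencing: interpolating a position along a route polyline from a milepost value. The sketch below illustrates that idea with a hypothetical route geometry; handling of direction (e.g., selecting the carriageway polyline for the travel direction) is left to the caller.

```python
def locate_event(route, milepost):
    """Place an event (e.g., an accident) on a route by linear referencing.

    route: list of (x, y, mp) vertices with strictly increasing
    milepost mp. Returns the interpolated (x, y) coordinate."""
    for (x0, y0, m0), (x1, y1, m1) in zip(route, route[1:]):
        if m0 <= milepost <= m1:
            f = (milepost - m0) / (m1 - m0)
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
    raise ValueError("milepost outside route extent")
```

Using route number, direction, and milepost together, as the paper proposes, disambiguates cases where the same milepost value occurs on multiple routes or carriageways.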
Self-stabilizing Deterministic Gathering
NASA Astrophysics Data System (ADS)
Dieudonné, Yoann; Petit, Franck
In this paper, we investigate the possibility to deterministically solve the gathering problem (GP) with weak robots (anonymous, autonomous, disoriented, oblivious, deaf, and dumb). We introduce strong multiplicity detection as the ability for the robots to detect the exact number of robots located at a given position. We show that with strong multiplicity detection, there exists a deterministic self-stabilizing algorithm solving GP for n robots if, and only if, n is odd.
NASA Astrophysics Data System (ADS)
Merkisz, Jerzy; Fuc, Pawel; Lijewski, Piotr; Ziolkowski, Andrzej; Wojciechowski, Krzysztof T.
2015-06-01
We present an analysis of thermal energy recovery through a proprietary thermoelectric generator (TEG) in an actual vehicle driving cycle reproduced on a dynamic engine test bed. The tests were performed on a 1.3-L 66-kW diesel engine. The TEG was fitted in the vehicle exhaust system. In order to assess the thermal energy losses in the exhaust system, advanced portable emission measurement system research tools were used, such as Semtech DS by Sensors. Aside from the exhaust emissions, the said analyzer measures the exhaust mass flow and exhaust temperature, vehicle driving parameters and reads and records the engine parameters. The difficulty related to the energy recovery measurements under actual traffic conditions, particularly when passenger vehicles and TEGs are used, spurred the authors to develop a proprietary method of transposing the actual driving cycle as a function v = f(t) onto the engine test bed, on which the driving profile, previously recorded in city traffic, was reproduced. The length of the cycle was 12.6 km. Along with the motion parameters, the authors reproduced the parameters of the vehicle and its transmission. The adopted methodology enabled high repeatability of the research trials while still ensuring engine dynamic states occurring in city traffic.
Semiautomated Management Of Arriving Air Traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1992-01-01
System of computers, graphical workstations, and computer programs developed for semiautomated management of approach and arrival of numerous aircraft at airport. System comprises three subsystems: traffic-management advisor, used for controlling traffic into terminal area; descent advisor generates information integrated into plan-view display of traffic on monitor; and final-approach-spacing tool used to merge traffic converging on final approach path while making sure aircraft are properly spaced. Not intended to restrict decisions of air-traffic controllers.
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
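The deterministic idea can be sketched as follows: replace each random Poisson gap with (a rounding of) its expected value, under a sine weight that enlarges gaps toward the end of the Nyquist grid. The sine placement and the linear search over the gap scale are implementation assumptions of this sketch, not the authors' exact gap equation.

```python
import math

def deterministic_gap_schedule(n_grid, n_samples):
    """Deterministic analogue of Poisson-gap sampling on a 1D grid.

    Each gap is the (rounded) expected gap of a sine-weighted Poisson
    schedule rather than a random deviate, so repeated calls always
    return the identical point set."""
    def schedule(lam):
        pts, pos = [], 0
        while pos < n_grid:
            pts.append(pos)
            # expected gap grows toward the end of the grid (sine weight)
            gap = lam * math.sin((pos + 0.5) / n_grid * math.pi / 2.0)
            pos += 1 + int(round(gap))
        return pts
    # crude linear search for a scale lam yielding at most n_samples points
    lam, pts = 0.0, schedule(0.0)
    while len(pts) > n_samples:
        lam += 0.05
        pts = schedule(lam)
    return pts
```

Complete determinism is the practical benefit highlighted in the abstract: the sampling schedule is reproducible without storing a random seed.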
Research on characterization of wireless LANs traffic
NASA Astrophysics Data System (ADS)
Feng, Huifang; Shu, Yantai; Yang, Oliver W. W.
2011-08-01
In this paper, we employ actual wireless data drawn from well-known archives of network traffic traces and investigate the characterization of wireless LAN traffic. First, useful preliminary information regarding the general wireless traffic dynamics is obtained using a standard statistical technique, the Fourier power spectrum. Then, estimates of parameters such as the correlation dimension and the largest Lyapunov exponent, together with principal components analysis, indicate the existence of low-dimensional deterministic chaos in the wireless traffic time series. Our results also show that the choice of phase-space reconstruction parameters influences the values of the correlation dimension and the largest Lyapunov exponent, but does not affect the diagnosis of the chaotic nature of wireless traffic.
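The phase-space reconstruction underlying these estimates is time-delay embedding, and the correlation dimension is read off from the Grassberger-Procaccia correlation sum. A minimal sketch of both (the embedding dimension and delay are left to the caller, since, as the abstract notes, their choice affects the estimates):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase space from a scalar series by time-delay
    embedding: rows are [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(points, r):
    """Fraction of point pairs closer than r (Grassberger-Procaccia);
    its log-log slope in r estimates the correlation dimension."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.mean(d[iu] < r)
```

Scanning `correlation_sum` over a range of radii and fitting the slope of log C(r) versus log r gives the correlation-dimension estimate used to diagnose low-dimensional chaos.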
NASA Technical Reports Server (NTRS)
1992-01-01
Mestech's X-15 "Eye in the Sky," a traffic monitoring system, incorporates NASA imaging and robotic vision technology. A camera or "sensor box" is mounted in a housing. The sensor detects vehicles approaching an intersection and sends the information to a computer, which controls the traffic light according to the traffic rate. Jet Propulsion Laboratory technical support packages aided in the company's development of the system. The X-15's "smart highway" can also be used to count vehicles on a highway and compute the number in each lane and their speeds, important information for freeway control engineers. Additional applications are in airport and railroad operations. The system is intended to replace loop-type traffic detectors.
NASA Astrophysics Data System (ADS)
Treiber, Martin; Kesting, Arne; Helbing, Dirk
2006-07-01
We investigate the adaptation of the time headways in car-following models as a function of the local velocity variance, which is a measure of the inhomogeneity of traffic flow. We apply this mechanism to several car-following models and simulate traffic breakdowns in open systems with an on-ramp as bottleneck and in a closed ring road. Single-vehicle data and one-minute aggregated data generated by several virtual detectors show a semiquantitative agreement with microscopic and flow-density data from the Dutch freeway A9. This includes the observed distributions of the net time headways for free and congested traffic, the velocity variance as a function of density, and the fundamental diagram. The modal value of the time headway distribution is shifted by a factor of about 2 under congested conditions. Macroscopically, this corresponds to the capacity drop at the transition from free to congested traffic. The simulated fundamental diagram shows free, synchronized, and jammed traffic, and a wide scattering in the congested traffic regime. We explain this by a self-organized variance-driven process that leads to the spontaneous formation and decay of long-lived platoons even for a deterministic dynamics on a single lane.
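The adaptation mechanism can be sketched on a standard car-following model such as the Intelligent Driver Model: the desired time headway is enlarged where the locally measured velocity variance is high. The IDM is one of the car-following models such a mechanism applies to; the linear adaptation law and all parameter values below are assumptions of this sketch, not the paper's calibration.

```python
def idm_accel(v, dv, gap, T, v0=30.0, a=1.0, b=1.5, s0=2.0):
    """Intelligent Driver Model acceleration with desired time headway T.

    v: own speed (m/s); dv: approach rate v - v_lead; gap: net distance
    to the leader (m). Parameter values are illustrative."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * (a * b) ** 0.5))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

def adapted_headway(T0, variance, lam=2.0, v_scale=25.0):
    """Enlarge the desired time headway with the local velocity variance
    (a measure of flow inhomogeneity); the linear form is an assumption."""
    return T0 * (1.0 + lam * variance / v_scale ** 2)
```

With a mechanism of this kind, headways roughly double in congested (high-variance) traffic, reproducing the shift of the headway distribution and the capacity drop described above.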
NASA Astrophysics Data System (ADS)
Nuckelt, J.; Schack, M.; Kürner, T.
2011-08-01
This paper presents a physical (PHY) layer simulator of the IEEE 802.11p standard for Wireless Access in Vehicular Environments (WAVE). This simulator allows the emulation of data transmission via different radio channels as well as the analysis of the resulting system behavior. The PHY layer simulator is part of an integrated simulation platform including a traffic model to generate realistic mobility of vehicles and a 3D ray-optical model to calculate the multipath propagation channel between transmitter and receiver. Besides deterministic channel modeling by means of ray-optical modeling, the simulator can also be used with stochastic channel models of typical vehicular scenarios. With the aid of this PHY layer simulator and the integrated channel models, the resulting performance of the system in terms of bit and packet error rates of different receiver designs can be analyzed in order to achieve a robust data transmission.
Analysis of FBC deterministic chaos
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines are affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes are expected to provide important competitive advantages for U.S. industry.
State Traffic Data: Traffic Safety Facts, 2001.
ERIC Educational Resources Information Center
National Center for Statistics and Analysis (NHTSA), Washington, DC.
This brief provides statistical information on U.S. traffic accidents delineated by state. A map details the 2001 traffic fatalities by state and the percent change from 2000. Data tables include: (1) traffic fatalities and fatality rates, 2001; (2) traffic fatalities and percent change, 1975-2001; (3) alcohol involvement in fatal traffic crashes,…
Deterministic implementation of weak quantum cubic nonlinearity
Marek, Petr; Filip, Radim; Furusawa, Akira
2011-11-15
We propose a deterministic implementation of weak cubic nonlinearity, which is a basic building block of a full-scale continuous-variable quantum computation. Our proposal relies on preparation of a specific ancillary state and transferring its nonlinear properties onto the desired target by means of deterministic Gaussian operations and feed forward. We show that, despite the imperfections arising from the deterministic nature of the operation, the weak quantum nonlinearity can be implemented and verified with the current level of technology.
Survivability of Deterministic Dynamical Systems
NASA Astrophysics Data System (ADS)
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-07-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures.
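The definition above suggests a direct Monte Carlo estimator: sample random initial conditions, integrate the deterministic flow over the transient, and count the trajectories that never leave the desirable region. The Euler integrator and the user-supplied sampling region are simplifying assumptions of this sketch.

```python
import random

def survivability(flow, in_desirable, sample_x0, t_max, dt, n=1000, seed=1):
    """Monte Carlo estimate of survivability.

    flow(x) -> dx/dt (list); in_desirable(x) -> bool; sample_x0(rng)
    draws a random initial condition. Returns the fraction of sampled
    trajectories staying in the desirable region over [0, t_max]."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n):
        x = sample_x0(rng)
        ok, t = True, 0.0
        while t < t_max:
            x = [xi + dt * fi for xi, fi in zip(x, flow(x))]  # Euler step
            if not in_desirable(x):
                ok = False
                break
            t += dt
        survived += ok
    return survived / n
```

Note that this measures transient behaviour: a fixed point can be asymptotically stable and still have low survivability if typical transients overshoot the desirable region.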
Deterministic weak localization in periodic structures.
Tian, C; Larkin, A
2005-12-01
In some perfect periodic structures classical motion exhibits deterministic diffusion. For such systems we present the weak localization theory. As a manifestation, the velocity autocorrelation function is predicted to exhibit a universal power-law decay appearing at four Ehrenfest times. This deterministic weak localization is robust against weak quenched disorder, which may be confirmed by coherent backscattering measurements on periodic photonic crystals.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus that causes the Lassa fever is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed. This is done in order to determine the relative importance of the model parameters to the disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
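The standard tool behind such rankings is the normalized forward sensitivity index of the basic reproduction number, (dR0/dp)(p/R0), which can be computed numerically for any parameter. The R0 expression used below is a generic SIR-style illustration, not the paper's five-compartment Lassa fever model.

```python
def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index of R0 = f(params) with
    respect to parameter `name`: (dR0/dp) * (p / R0), computed by
    central differences. An index of +1 means a 1% increase in the
    parameter raises R0 by about 1%."""
    p = dict(params)
    v = p[name]
    p[name] = v * (1.0 + h)
    hi = f(p)
    p[name] = v * (1.0 - h)
    lo = f(p)
    dfdp = (hi - lo) / (2.0 * v * h)
    return dfdp * v / f(params)

# generic SIR-style basic reproduction number (illustration only)
r0 = lambda p: p["beta"] / p["gamma"]
```

Ranking the indices by absolute value identifies the parameters, such as immigration or contact rates, that control strategies should target first.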
Analysis of Malicious Traffic in Modbus/TCP Communications
NASA Astrophysics Data System (ADS)
Kobayashi, Tiago H.; Batista, Aguinaldo B.; Medeiros, João Paulo S.; Filho, José Macedo F.; Brito, Agostinho M.; Pires, Paulo S. Motta
This paper presents the results of our analysis of the influence of Information Technology (IT) malicious traffic on an IP-based automation environment. We used a traffic generator, called MACE (Malicious trAffic Composition Environment), to inject malicious traffic into a Modbus/TCP communication system, and a sniffer to capture and analyze network traffic. The tests show that malicious traffic represents a serious risk to critical information infrastructures. We show that this kind of traffic can increase the latency of Modbus/TCP communication and, in some cases, can put Modbus/TCP devices out of communication.
Deterministic quantum teleportation with atoms.
Riebe, M; Häffner, H; Roos, C F; Hänsel, W; Benhelm, J; Lancaster, G P T; Körber, T W; Becher, C; Schmidt-Kaler, F; James, D F V; Blatt, R
2004-06-17
Teleportation of a quantum state encompasses the complete transfer of information from one particle to another. The complete specification of the quantum state of a system generally requires an infinite amount of information, even for simple two-level systems (qubits). Moreover, the principles of quantum mechanics dictate that any measurement on a system immediately alters its state, while yielding at most one bit of information. The transfer of a state from one system to another (by performing measurements on the first and operations on the second) might therefore appear impossible. However, it has been shown that the entangling properties of quantum mechanics, in combination with classical communication, allow quantum-state teleportation to be performed. Teleportation using pairs of entangled photons has been demonstrated, but such techniques are probabilistic, requiring post-selection of measured photons. Here, we report deterministic quantum-state teleportation between a pair of trapped calcium ions. Following closely the original proposal, we create a highly entangled pair of ions and perform a complete Bell-state measurement involving one ion from this pair and a third source ion. State reconstruction conditioned on this measurement is then performed on the other half of the entangled pair. The measured fidelity is 75%, demonstrating unequivocally the quantum nature of the process.
Connecting deterministic and stochastic metapopulation models.
Barbour, A D; McVinish, R; Pollett, P K
2015-12-01
In this paper, we study the relationship between certain stochastic and deterministic versions of Hanski's incidence function model and the spatially realistic Levins model. We show that the stochastic version can be well approximated in a certain sense by the deterministic version when the number of habitat patches is large, provided that the presence or absence of individuals in a given patch is influenced by a large number of other patches. Explicit bounds on the deviation between the stochastic and deterministic models are given. PMID:25735440
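The large-patch approximation described above can be sketched with a toy mean-field model. The colonization and extinction rates are illustrative, and every patch is coupled to the overall occupancy fraction, which mimics the "influenced by a large number of other patches" regime where the deterministic approximation is expected to hold:

```python
import random

random.seed(1)
N = 200          # number of habitat patches
c, e = 0.4, 0.2  # colonization and extinction parameters (illustrative)

def col_prob(frac_occupied):
    # Mean-field colonization: each empty patch "feels" the fraction
    # of occupied patches rather than a few specific neighbours.
    return c * frac_occupied

def stochastic_step(occ):
    frac = sum(occ) / N
    new = []
    for o in occ:
        if o:
            new.append(0 if random.random() < e else 1)
        else:
            new.append(1 if random.random() < col_prob(frac) else 0)
    return new

def deterministic_step(p):
    # p is the expected fraction of occupied patches
    return p + (1 - p) * col_prob(p) - p * e

occ = [1] * (N // 2) + [0] * (N // 2)
p = 0.5
for _ in range(100):
    occ = stochastic_step(occ)
    p = deterministic_step(p)

print(sum(occ) / N, p)
```

With these rates the deterministic fixed point is 1 − e/c = 0.5, and the stochastic occupancy fraction fluctuates around it, with deviations shrinking as N grows.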
Human gait recognition via deterministic learning.
Zeng, Wei; Wang, Cong
2012-11-01
Recognition of temporal/dynamical patterns is among the most difficult pattern recognition tasks. Human gait recognition is a typically difficult problem in the area of dynamical pattern recognition. It classifies and identifies individuals by their time-varying gait signature data. Recently, a new dynamical pattern recognition method based on deterministic learning theory was presented, in which a time-varying dynamical pattern can be effectively represented in a time-invariant manner and rapidly recognized. In this paper, we present a new model-based approach for human gait recognition via the aforementioned method, specifically for recognizing people by gait. The approach consists of two phases: a training (learning) phase and a test (recognition) phase. In the training phase, side-silhouette lower-limb joint angles and angular velocities are selected as gait features. A five-link biped model for human gait locomotion is employed to demonstrate that functions containing joint angle and angular velocity state vectors characterize the gait system dynamics. Due to the quasi-periodic and symmetrical characteristics of human gait, the gait system dynamics can be simplified to be described by functions of joint angles and angular velocities of one side of the human body, and thus the feature dimension is effectively reduced. Locally accurate identification of the gait system dynamics is achieved by using radial basis function (RBF) neural networks (NNs) through deterministic learning. The obtained knowledge of the approximated gait system dynamics is stored in constant RBF networks. A gait signature is then derived from the extracted gait system dynamics along the phase portrait of joint angles versus angular velocities. A bank of estimators is constructed using constant RBF networks to represent the training gait patterns. In the test phase, by comparing the set of estimators with the test gait pattern, a set of recognition errors are generated, and the average L(1) norms
NASA Astrophysics Data System (ADS)
Bukowiecki, N.; Lienemann, P.; Hill, M.; Furger, M.; Richard, A.; Amato, F.; Prévôt, A. S. H.; Baltensperger, U.; Buchmann, B.; Gehrig, R.
2010-06-01
Recent studies have shown clear contributions of non-exhaust emissions to the traffic related PM10 load of the ambient air. These emissions consist of particles produced by abrasion from brakes, road wear, tire wear, as well as vehicle induced resuspension of deposited road dust. The main scope of the presented work was to identify and quantify the non-exhaust fraction of traffic related PM10 for two roadside locations in Switzerland with different traffic regimes. The two investigated locations, an urban street canyon with heavily congested traffic and an interurban freeway, are considered as being typical for Central Europe. Mass-relevant contributions from abrasion particles and resuspended road dust mainly originated from particles in the size range 1-10 μm. The results showed a major influence of vehicle induced resuspension of road dust. In the street canyon, the traffic related PM10 emissions (LDV: 24 ± 8 mg km⁻¹ vehicle⁻¹, HDV: 498 ± 86 mg km⁻¹ vehicle⁻¹) were assigned to 21% brake wear, 38% resuspended road dust and 41% exhaust emissions. Along the freeway (LDV: 50 ± 13 mg km⁻¹ vehicle⁻¹, HDV: 288 ± 72 mg km⁻¹ vehicle⁻¹), respective contributions were 3% brake wear, 56% resuspended road dust and 41% exhaust emissions. There was no indication of relevant contributions from tire wear or abrasion from undamaged pavements.
Deterministic algorithm with agglomerative heuristic for location problems
NASA Astrophysics Data System (ADS)
Kazakovtsev, L.; Stupina, A.
2015-10-01
The authors consider the clustering problem solved with the k-means method, and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are among the most popular models of location theory. They are used to solve clustering problems and many practically important logistics problems, such as optimal location of factories, warehouses, oil or gas wells, offshore drilling sites, and steam generators in heavy oil fields. The authors propose a new deterministic heuristic algorithm based on ideas from Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. Results of running the new algorithm on various data sets are compared with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while achieving comparable accuracy.
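The k-means special case mentioned above can be sketched with a fully deterministic pipeline: greedy farthest-first seeding followed by standard Lloyd iterations. This is a generic illustration of the problem class, not the authors' algorithm:

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_first_centers(points, k):
    """Deterministic greedy seeding: start from the first point, then
    repeatedly add the point farthest from the chosen centers."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=50):
    """Lloyd's algorithm with deterministic seeding."""
    centers = farthest_first_centers(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        centers = [
            tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers

pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
print(sorted(kmeans(pts, 2)))
```

Because both the seeding and the Lloyd updates are deterministic, repeated runs on the same data give identical centers, which is the property the abstract contrasts with stochastic methods.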
From deterministic dynamics to probabilistic descriptions
Misra, B.; Prigogine, I.; Courbage, M.
1979-01-01
The present work is devoted to the following question: What is the relationship between the deterministic laws of dynamics and probabilistic description of physical processes? It is generally accepted that probabilistic processes can arise from deterministic dynamics only through a process of “coarse graining” or “contraction of description” that inevitably involves a loss of information. In this work we present an alternative point of view toward the relationship between deterministic dynamics and probabilistic descriptions. Speaking in general terms, we demonstrate the possibility of obtaining (stochastic) Markov processes from deterministic dynamics simply through a “change of representation” that involves no loss of information provided the dynamical system under consideration has a suitably high degree of instability of motion. The fundamental implications of this finding for statistical mechanics and other areas of physics are discussed. From a mathematical point of view, the theory we present is a theory of invertible, positivity-preserving, and necessarily nonunitary similarity transformations that convert the unitary groups associated with deterministic dynamics to contraction semigroups associated with stochastic Markov processes. We explicitly construct such similarity transformations for the so-called Bernoulli systems. This construction illustrates also the construction of the so-called Lyapounov variables and the operator of “internal time,” which play an important role in our approach to the problem of irreversibility. The theory we present can also be viewed as a theory of entropy-increasing evolutions and their relationship to deterministic dynamics. PMID:16592691
Jamitons: Phantom Traffic Jams
ERIC Educational Resources Information Center
Kowszun, Jorj
2013-01-01
Traffic on motorways can slow down for no apparent reason. Sudden changes in speed by one or two drivers can create a chain reaction that causes a traffic jam for the vehicles that are following. This kind of phantom traffic jam is called a "jamiton" and the article discusses some of the ways in which traffic engineers produce…
Deterministic Superreplication of One-Parameter Unitary Transformations
NASA Astrophysics Data System (ADS)
Dür, W.; Sekatski, P.; Skotiniotis, M.
2015-03-01
We show that one can deterministically generate, out of N copies of an unknown unitary operation, up to N² almost perfect copies. The result holds for all operations generated by a Hamiltonian with an unknown interaction strength. This generalizes a similar result in the context of phase-covariant cloning where, however, superreplication comes at the price of an exponentially reduced probability of success. We also show that multiple copies of unitary operations can be emulated by operations acting on a much smaller space, e.g., a magnetic field acting on a single n-level system allows one to emulate the action of the field on n² qubits.
George, L.L.
1988-09-16
The Federal Aviation Administration plans to consolidate several hundred air traffic control centers and TRACONs into area control facilities while maintaining air traffic coverage. This paper defines air traffic coverage, a performance measure of the air traffic control system. Air traffic coverage measures performance without controversy regarding delay and collision probabilities and costs. Coverage measures help evaluate alternative facility architectures and help schedule consolidation. They also help evaluate protocols for handing off one facility's air traffic to another facility in case of facility failure, and help evaluate radar, communications and other air traffic control systems and procedures. 4 refs., 2 figs.
Deterministic phase retrieval employing spherical illumination
NASA Astrophysics Data System (ADS)
Martínez-Carranza, J.; Falaggis, K.; Kozacki, T.
2015-05-01
Deterministic Phase Retrieval techniques (DPRTs) employ a series of paraxial beam intensities in order to recover the phase of a complex field. These paraxial intensities are usually generated in systems that employ plane-wave illumination. This type of illumination allows a direct processing of the captured intensities with DPRTs for recovering the phase. Furthermore, it has been shown that intensities for DPRTs can be acquired from systems that use spherical illumination as well. However, this type of illumination presents a major setback for DPRTs: the captured intensities change their size for each position of the detector on the propagation axis. In order to apply DPRTs, rescaling of the captured intensities has to be applied. This condition can increase the error sensitivity of the final phase result if it is not carried out properly. In this work, we introduce a novel system based on a Phase Light Modulator (PLM) for capturing the intensities when employing spherical illumination. The proposed optical system enables us to capture the diffraction pattern of under-, in-, and over-focus intensities. The employment of the PLM allows capturing the corresponding intensities without displacing the detector. Moreover, with the proposed optical system we can accurately control the magnification of the captured intensities. Thus, the stack of captured intensities can be used in DPRTs, overcoming the problems related to resizing of the images. In order to prove our claims, the corresponding numerical experiments are carried out. These simulations show that the phases retrieved with spherical illumination are accurate and comparable with those obtained using plane-wave illumination. We demonstrate that with the employment of the PLM the proposed optical system has several advantages: it is compact, the beam size on the detector plane is controlled accurately, and errors coming from mechanical motion can be suppressed easily.
Cellular automata for traffic flow modeling. Final report
Benjaafar, S.; Dooley, K.; Setyawan, W.
1997-12-01
In this paper, the authors explore the usefulness of cellular automata for traffic flow modeling. The authors extend some of the existing CA models to capture characteristics of traffic flow that have not been possible to model using either conventional analytical models or existing simulation techniques. In particular, the authors examine higher moments of traffic flow and evaluate their effect on overall traffic performance. The behavior of these higher moments is found to be surprising, somewhat counter-intuitive, and to have important implications for the design and control of traffic systems. For example, the authors show that the density of maximum throughput is near the density of maximum speed variance. Contrary to current practice, traffic should, therefore, be steered away from this density region. For deterministic systems the authors found traffic flow to possess a finite period which is highly sensitive to density in a non-monotonic fashion. The authors show that knowledge of this periodic behavior is very useful in designing and controlling automated systems. These results are obtained for both single and two lane systems. For two lane systems, the authors also examine the relationship between lane changing behavior and flow performance. The authors show that the density of maximum lane changing frequency occurs past the density of maximum throughput. Therefore, traffic should also be steered away from this density region.
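A single-lane deterministic traffic CA of the kind discussed here (a Nagel-Schreckenberg-type model with the random-braking step removed; parameters illustrative) can be sketched in a few lines:

```python
V_MAX = 5
L = 100  # ring-road length in cells

def step(cars):
    """One parallel update of a deterministic (no random braking)
    Nagel-Schreckenberg-type cellular automaton on a ring.
    cars is a list of (position, velocity) pairs."""
    cars = sorted(cars)
    n = len(cars)
    new = []
    for i, (x, v) in enumerate(cars):
        gap = (cars[(i + 1) % n][0] - x - 1) % L
        v = min(v + 1, V_MAX, gap)   # accelerate, but never into the leader
        new.append(((x + v) % L, v))
    return new

cars = [(4 * i, 0) for i in range(20)]  # 20 cars: density 0.2, jammed regime
flow = 0.0
for t in range(300):
    cars = step(cars)
    if t >= 200:                        # average flow after relaxation
        flow += sum(v for _, v in cars) / L
flow /= 100
print(round(flow, 2))
```

Above the critical density 1/(V_MAX + 1) the deterministic model relaxes to a congested steady state with flow close to 1 − density, and, as the abstract notes, the dynamics settles into a finite period.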
Effect of Uncertainty on Deterministic Runway Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison covers both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
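The FCFS baseline used in such comparisons is easy to sketch: serve aircraft in arrival order, delaying each operation just enough to satisfy pairwise separation. The weight classes and separation values below are illustrative, not the paper's:

```python
# Assumed separation times (seconds) between consecutive runway operations,
# indexed by (leading class, trailing class); values are illustrative.
SEP = {("D", "D"): 60, ("D", "A"): 50, ("A", "D"): 45, ("A", "A"): 70}

def fcfs_schedule(aircraft):
    """aircraft: list of (ready_time, class) in first-come-first-serve order.
    Returns runway usage times satisfying the pairwise separation matrix."""
    times = []
    prev_t, prev_c = None, None
    for ready, cls in aircraft:
        t = ready
        if prev_t is not None:
            t = max(t, prev_t + SEP[(prev_c, cls)])
        times.append(t)
        prev_t, prev_c = t, cls
    return times

queue = [(0, "D"), (10, "D"), (15, "A"), (20, "A")]
print(fcfs_schedule(queue))
```

A deterministic scheduler would instead search over permutations of the queue for the sequence minimizing delay, then (as in the paper's frozen-sequence approach) keep that sequence fixed while re-adjusting times.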
Stochastic search with Poisson and deterministic resetting
NASA Astrophysics Data System (ADS)
Bhat, Uttam; De Bacco, Caterina; Redner, S.
2016-08-01
We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers that all start at x0 seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate r, or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T for 1/T → 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than Poisson resetting.
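A one-dimensional, single-searcher version of deterministic resetting can be sketched as a Monte Carlo estimate of the mean search time; the step size, reset time, and cutoff below are illustrative:

```python
import random

random.seed(7)

def search_time(x0, T, dt=0.01, max_t=200.0):
    """First time a 1-D random walker started at x0 reaches the origin,
    with deterministic resetting back to x0 every T time units."""
    x, t, t_since_reset = x0, 0.0, 0.0
    step = dt ** 0.5          # lattice step giving unit-order diffusivity
    while t < max_t:
        if x <= 0:
            return t
        if t_since_reset >= T:
            x, t_since_reset = x0, 0.0   # deterministic reset
        x += random.choice((-step, step))
        t += dt
        t_since_reset += dt
    return max_t              # cutoff; reached with negligible probability

mean = sum(search_time(1.0, T=2.0) for _ in range(300)) / 300
print(round(mean, 2))
```

Without resetting the mean first-passage time of a 1-D diffuser is infinite; resetting makes it finite, and sweeping T in this sketch exhibits the non-trivial optimum the abstract describes.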
Deterministic dense coding with partially entangled states
Mozes, Shay; Reznik, Benni; Oppenheim, Jonathan
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d>2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d²−1. We also find that states with less entanglement can have a greater deterministic communication capacity than other more entangled states.
Optimal partial deterministic quantum teleportation of qubits
Mista, Ladislav Jr.; Filip, Radim
2005-02-01
We propose a protocol implementing optimal partial deterministic quantum teleportation for qubits. This is a teleportation scheme deterministically realizing an optimal 1→2 asymmetric universal cloning, where one imperfect copy of the input state emerges at the sender's station while the other copy emerges at the receiver's possibly distant station. The optimality means that the fidelities of the copies saturate the asymmetric cloning inequality. The performance of the protocol relies on a partial deterministic nondemolition Bell measurement that allows us to continuously control the flow of information among the outgoing qubits. We also demonstrate that the measurement is the optimal two-qubit operation in the sense of the trade-off between state disturbance and information gain.
Nine challenges for deterministic epidemic models.
Roberts, Mick; Andreasen, Viggo; Lloyd, Alun; Pellis, Lorenzo
2015-03-01
Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern the uses and limitations of deterministic models as approximations to stochastic systems.
Deterministic Single-Phonon Source Triggered by a Single Photon.
Söllner, Immo; Midolo, Leonardo; Lodahl, Peter
2016-06-10
We propose a scheme that enables the deterministic generation of single phonons at gigahertz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on chip in an optomechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new optomechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nanofabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.
ERIC Educational Resources Information Center
Hart, Vincent G.
1981-01-01
Two examples are given of ways traffic engineers estimate traffic flow. The first, the Floating Car Method, involves some basic ideas and the notion of relative velocity. The second, Maximum Traffic Flow, involves simple applications of calculus. The material provides insight into specialized applications of mathematics. (MP)
Deterministic Quantization by Dynamical Boundary Conditions
Dolce, Donatello
2010-06-15
We propose an unexplored quantization method. It is based on the assumption of dynamical space-time intrinsic periodicities for relativistic fields, which in turn can be regarded as dual to extra-dimensional fields. As a consequence we obtain a unified and consistent interpretation of Special Relativity and Quantum Mechanics in terms of Deterministic Geometrodynamics.
A deterministic discrete ordinates transport proxy application
2014-06-03
Kripke is a simple 3D deterministic discrete ordinates (Sn) particle transport code that maintains the computational load and communications pattern of a real transport code. It is intended to be a research tool to explore different data layouts, new programming paradigms and computer architectures.
Linear Deterministic Accumulator Models of Simple Choice
Heathcote, Andrew; Love, Jonathon
2012-01-01
We examine theories of simple choice as a race among evidence accumulation processes. We focus on the class of deterministic race models, which assume that, in behavioral data (i.e., response times and choices), the effects of fluctuations in the parameters of the accumulation processes between choice trials (between-choice noise) dominate the effects of fluctuations occurring while making a choice (within-choice noise). This deterministic approximation, when combined with the assumption that accumulation is linear, leads to a class of models that can be readily applied to simple-choice behavior because they are computationally tractable. We develop a new and mathematically simple exemplar within the class of linear deterministic models, the Lognormal race (LNR). We then examine how the LNR, and another widely applied linear deterministic model, Brown and Heathcote's (2008) LBA, account for a range of benchmark simple-choice effects in lexical-decision task data reported by Wagenmakers et al. (2008). Our results indicate that the LNR provides an accurate description of this data. Although the LBA model provides a slightly better account, both models support similar psychological conclusions. PMID:22936920
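The Lognormal race is simple to simulate: each accumulator's finishing time is a lognormal draw (all between-trial noise, no within-trial noise), and the fastest accumulator determines the response and the response time. The parameter values below are illustrative:

```python
import math
import random

random.seed(3)

def lnr_trial(mus, sigma=0.4, t0=0.2):
    """One trial of a Lognormal race: accumulator i finishes at
    t0 + exp(N(mus[i], sigma)); the fastest wins. Values illustrative."""
    finish = [t0 + math.exp(random.gauss(mu, sigma)) for mu in mus]
    choice = min(range(len(finish)), key=finish.__getitem__)
    return choice, finish[choice]

# accumulator 0 (the "correct" response) is faster: smaller log-mean
trials = [lnr_trial([-1.0, -0.3]) for _ in range(2000)]
accuracy = sum(1 for c, _ in trials if c == 0) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(round(accuracy, 2), round(mean_rt, 2))
```

Because the finishing-time distributions are fully determined once the trial's parameters are drawn, the model is deterministic within a trial, which is what keeps its likelihood computationally tractable.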
Creighton, H. ); Allen, R.; Stewart, S.; Hayto, S. )
1990-11-01
The traffic congestion on our roads today is becoming a critical problem. Fuel consumption increases as cars wait along poorly timed arterials, and safety is threatened as poor traffic flow leads to collisions. This paper reports that Transport Canada and the Ministry of Transportation of Ontario have developed an integrated traffic system (ITS), designed to enable the optimization of traffic flow on existing roadways. The ITS contains a database management system for traffic data (including accidents, roadway volumes, and signal timing details) and links this database to the traffic analysis programs. This will ease data management within the municipalities, standardize traffic operations, and reduce duplication of development efforts.
McQuinn, Ian H; Lesage, Véronique; Carrier, Dominic; Larrivée, Geneviève; Samson, Yves; Chartrand, Sylvain; Michaud, Robert; Theriault, James
2011-12-01
The threatened resident beluga population of the St. Lawrence Estuary shares the Saguenay-St. Lawrence Marine Park with significant anthropogenic noise sources, including marine commercial traffic and a well-established, vessel-based whale-watching industry. Frequency-dependent (FD) weighting was used to approximate beluga hearing sensitivity to determine how noise exposure varied in time and space at six sites of high beluga summer residency. The relative contribution of each source to acoustic habitat degradation was estimated by measuring noise levels throughout the summer and noise signatures of typical vessel classes with respect to traffic volume and sound propagation characteristics. Rigid-hulled inflatable boats were the dominant noise source with respect to estimated beluga hearing sensitivity in the studied habitats due to their high occurrence and proximity, high correlation with site-specific FD-weighted sound levels, and the dominance of mid-frequencies (0.3-23 kHz) in their noise signatures. Median C-weighted sound pressure level (SPL(RMS)) had a range of 19 dB re 1 μPa between the noisiest and quietest sites. Broadband SPL(RMS) exceeded 120 dB re 1 μPa 8-32% of the time depending on the site. Impacts of these noise levels on St. Lawrence beluga will depend on exposure recurrence and individual responsiveness.
Deterministic dynamics in the minority game
NASA Astrophysics Data System (ADS)
Jefferies, P.; Hart, M. L.; Johnson, N. F.
2002-01-01
The minority game (MG) behaves as a stochastically disturbed deterministic system due to the coin toss invoked to resolve tied strategies. Averaging over this stochasticity yields a description of the MG's deterministic dynamics via mapping equations for the strategy score and global information. The strategy-score map contains both restoring-force and bias terms, whose magnitudes depend on the game's quenched disorder. Approximate analytical expressions are obtained and the effect of ``market impact'' is discussed. The global-information map represents a trajectory on a de Bruijn graph. For small quenched disorder, an Eulerian trail represents a stable attractor. It is shown analytically how antipersistence arises. The response to perturbations and different initial conditions is also discussed.
Bayesian Uncertainty Analyses Via Deterministic Model
NASA Astrophysics Data System (ADS)
Krzysztofowicz, R.
2001-05-01
Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
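In the simplest normal-linear setting, a BPO-style posterior update can be sketched as a standard conjugate normal update on the model output; this is a generic illustration of the idea, not the paper's formulation, and all numbers are illustrative:

```python
def bpo_posterior(prior_mean, prior_var, a, b, like_var, x):
    """Posterior of predictand W given deterministic-model output X,
    assuming prior W ~ N(prior_mean, prior_var) and likelihood
    X = a*W + b + noise with noise ~ N(0, like_var)."""
    post_prec = 1.0 / prior_var + a * a / like_var
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + a * (x - b) / like_var)
    return post_mean, post_var

# Climatological prior of 10 units; model outputs 14; the posterior
# blends the two in proportion to their precisions.
m, v = bpo_posterior(prior_mean=10.0, prior_var=4.0, a=1.0, b=0.0,
                     like_var=1.0, x=14.0)
print(round(m, 2), round(v, 2))
```

The posterior variance (0.8 here) is smaller than both the prior variance and the likelihood variance, quantifying how the deterministic model output reduces, but does not eliminate, the total uncertainty.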
Deterministic signal associated with a random field.
Kim, Taewoo; Zhu, Ruoyu; Nguyen, Tan H; Zhou, Renjie; Edwards, Chris; Goddard, Lynford L; Popescu, Gabriel
2013-09-01
Stochastic fields do not generally possess a Fourier transform. This makes the second-order statistics calculation very difficult, as it requires solving a fourth-order stochastic wave equation. This problem was alleviated by Wolf who introduced the coherent mode decomposition and, as a result, space-frequency statistics propagation of wide-sense stationary fields. In this paper we show that if, in addition to wide-sense stationarity, the fields are also wide-sense statistically homogeneous, then monochromatic plane waves can be used as an eigenfunction basis for the cross spectral density. Furthermore, the eigenvalue associated with a plane wave, exp[i(k · r-ωt)], is given by the spatiotemporal power spectrum evaluated at the frequency (k, ω). We show that the second-order statistics of these fields is fully described by the spatiotemporal power spectrum, a real, positive function. Thus, the second-order statistics can be efficiently propagated in the wavevector-frequency representation using a new framework of deterministic signals associated with random fields. Analogous to the complex analytic signal representation of a field, the deterministic signal is a mathematical construct meant to simplify calculations. Specifically, the deterministic signal associated with a random field is defined such that it has the identical autocorrelation as the actual random field. Calculations for propagating spatial and temporal correlations are simplified greatly because one only needs to solve a deterministic wave equation of second order. We illustrate the power of the wavevector-frequency representation with calculations of spatial coherence in the far zone of an incoherent source, as well as coherence effects induced by biological tissues.
Shape-Controlled Deterministic Assembly of Nanowires.
Zhao, Yunlong; Yao, Jun; Xu, Lin; Mankin, Max N; Zhu, Yinbo; Wu, Hengan; Mai, Liqiang; Zhang, Qingjie; Lieber, Charles M
2016-04-13
Large-scale, deterministic assembly of nanowires and nanotubes with rationally controlled geometries could expand the potential applications of one-dimensional nanomaterials in bottom-up integrated nanodevice arrays and circuits. Control of the positions of straight nanowires and nanotubes has been achieved using several assembly methods, although simultaneous control of position and geometry has not been realized. Here, we demonstrate a new concept combining simultaneous assembly and guided shaping to achieve large-scale, high-precision shape controlled deterministic assembly of nanowires. We lithographically pattern U-shaped trenches and then shear transfer nanowires to the patterned substrate wafers, where the trenches serve to define the positions and shapes of transferred nanowires. Studies using semicircular trenches defined by electron-beam lithography yielded U-shaped nanowires with radii of curvature defined by inner surface of the trenches. Wafer-scale deterministic assembly produced U-shaped nanowires for >430,000 sites with a yield of ∼90%. In addition, mechanistic studies and simulations demonstrate that shaping results in primarily elastic deformation of the nanowires and show clearly the diameter-dependent limits achievable for accessible forces. Last, this approach was used to assemble U-shaped three-dimensional nanowire field-effect transistor bioprobe arrays containing 200 individually addressable nanodevices. By combining the strengths of wafer-scale top-down fabrication with diverse and tunable properties of one-dimensional building blocks in novel structural configurations, shape-controlled deterministic nanowire assembly is expected to enable new applications in many areas including nanobioelectronics and nanophotonics. PMID:26999059
Ada programming guidelines for deterministic storage management
NASA Technical Reports Server (NTRS)
Auty, David
1988-01-01
Previous reports have established that a program can be written in the Ada language such that the program's storage management requirements are determinable prior to its execution. Specific guidelines for ensuring such deterministic usage of Ada dynamic storage requirements are described. Because requirements may vary from one application to another, guidelines are presented in a most-restrictive to least-restrictive fashion to allow the reader to match appropriate restrictions to the particular application area under investigation.
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
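A minimal numerical sketch of deterministic (water-level) deconvolution with a known source wavelet; the wavelet, reflectivity, and stabilization constant are invented for illustration and do not reproduce the authors' field processing.

```python
import numpy as np

n = 512
# Sparse reflectivity series (two reflectors).
r = np.zeros(n)
r[100], r[130] = 1.0, -0.7

# A "ringy" source wavelet, standing in for one measured in air.
t = np.arange(64)
w = np.sin(2 * np.pi * t / 16) * np.exp(-t / 20.0)

# Recorded trace: circular convolution of reflectivity with the wavelet.
W = np.fft.fft(w, n)
trace = np.fft.ifft(np.fft.fft(r) * W).real

# Deterministic deconvolution using the known wavelet spectrum,
# with a small water level to stabilize the spectral division.
eps = 1e-4 * np.max(np.abs(W))**2
R_est = np.fft.fft(trace) * np.conj(W) / (np.abs(W)**2 + eps)
r_est = np.fft.ifft(R_est).real
```

Because the operator is the known wavelet itself rather than a statistical estimate, the division compresses the ringing and restores the reflector positions and polarities.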
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
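For orientation, the standard stochastic EnKF analysis step that the DMFEnKF is compared against can be sketched in a scalar Gaussian toy problem; the setup is an assumption of this illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar state, identity observation y = x + noise.
M = 200_000                       # ensemble size
prior = rng.normal(1.0, 1.0, M)   # forecast ensemble ~ N(1, 1)
y_obs, R = 3.0, 1.0               # observation and its error variance

# Stochastic EnKF analysis step with perturbed observations.
P = np.var(prior)                 # sample forecast covariance
K = P / (P + R)                   # Kalman gain
perturbed = y_obs + rng.normal(0.0, np.sqrt(R), M)
analysis = prior + K * (perturbed - prior)

# Exact Gaussian posterior for comparison: N(2, 0.5).
```

The Monte Carlo sampling error of this ensemble approximation is what a density-based deterministic approximation of the mean-field limit replaces with PDE-solver and quadrature error.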
An efficient method to detect periodic behavior in botnet traffic by analyzing control plane traffic
AsSadhan, Basil; Moura, José M.F.
2013-01-01
Botnets are large networks of bots (compromised machines) that are under the control of a small number of bot masters. They pose a significant threat to the Internet's communications and applications. A botnet relies on command and control (C2) communications channel traffic between its members for its attack execution. C2 traffic occurs prior to any attack; hence, detecting a botnet's C2 traffic enables detection of the botnet's members before any real harm happens. We analyze C2 traffic and find that it exhibits periodic behavior, due to the pre-programmed behavior of bots that check for updates to download every T seconds. We exploit this periodic behavior to detect C2 traffic: the detection involves evaluating the periodogram of the monitored traffic and then applying Walker's large-sample test to the periodogram's maximum ordinate to determine whether it is due to a periodic component. If the periodogram of the monitored traffic contains a periodic component, then it is highly likely to be due to a bot's C2 traffic. The test looks only at aggregate control-plane traffic behavior, which makes it more scalable than techniques that involve deep packet inspection (DPI) or tracking the communication flows of different hosts. We apply the test to two types of botnet traffic, tinyP2P and IRC, generated by SLINGbot. We verify the periodic behavior of their C2 traffic and compare it to the results we get on real traffic obtained from a secured enterprise network. We further study the characteristics of the test in the presence of injected HTTP background traffic and the effect of the duty cycle on the periodic behavior. PMID:25685512
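A simplified stand-in for this detection pipeline, using a Fisher-style large-sample approximation to the null distribution of the periodogram's maximum ordinate rather than the authors' exact Walker test; the signal parameters are invented.

```python
import numpy as np

def periodic_component_test(x, alpha=0.01):
    """Flag a hidden periodicity via the periodogram's maximum ordinate.

    Under white noise, the normalized maximum ordinate g approximately
    satisfies P(g > z) ~ m * (1 - z)**(m - 1) for large m (the leading
    term of the Fisher g-statistic null distribution).
    """
    n = len(x)
    pgram = np.abs(np.fft.rfft(x - x.mean()))**2
    pgram = pgram[1:n // 2]            # drop the DC and Nyquist ordinates
    m = len(pgram)
    g = pgram.max() / pgram.sum()
    p_value = min(1.0, m * (1.0 - g)**(m - 1))
    return p_value < alpha

rng = np.random.default_rng(3)
t = np.arange(2048)
c2_like = np.sin(2 * np.pi * t / 64) + rng.standard_normal(2048)  # periodic "C2"
noise = rng.standard_normal(2048)                                  # background only
```

Aggregate traffic with a bot-like update period stands out sharply, while unstructured background traffic does not trip the test.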
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer.
ERIC Educational Resources Information Center
Roman, Harry T.
2014-01-01
Traffic lights are an important part of the transportation infrastructure, regulating traffic flow and maintaining safety when crossing busy streets. When they go awry or become nonfunctional, a great deal of havoc and danger can result. During power outages, the street lights go out all over the affected area. It would be good to be able to…
Computers in Traffic Education.
ERIC Educational Resources Information Center
Alexander, O. P.
1983-01-01
Traffic education covers basic road skills, legal/insurance aspects, highway code, accident causation/prevention, and vehicle maintenance. Microcomputer applications to traffic education are outlined, followed by a selected example of programs currently available (focusing on drill/practice, simulation, problem-solving, data manipulation, games,…
NASA Technical Reports Server (NTRS)
DEVALUEZ
1922-01-01
The ways in which the international and internal French air traffic accords interact with each other are outlined in this report. The principal questions covered by the present legislation are as follows: 1) Conditions of safety which must be fulfilled by aircraft; 2) Licenses for members of the crew; 3) Traffic rules to be observed by French and foreign aircraft.
NASA Technical Reports Server (NTRS)
1973-01-01
The traffic analyses and system requirements data generated in the study resulted in the development of two traffic models: the baseline traffic model and the new traffic model. The baseline traffic model provides traceability between the numbers and types of geosynchronous missions considered in the study and the entire spectrum of missions foreseen in the total national space program. The information presented pertaining to the baseline traffic model includes: (1) definition of the baseline traffic model, including identification of specific geosynchronous missions and their payload delivery schedules through 1990; (2) satellite location criteria, including the resulting distribution of the satellite population; (3) geosynchronous orbit saturation analyses, including the effects of satellite physical proximity and potential electromagnetic interference; and (4) platform system requirements analyses, including satellite and mission equipment descriptions, the options and limitations in grouping satellites, and on-orbit servicing criteria (both remotely controlled and man-attended).
Trafficability and workability of soils
Technology Transfer Automated Retrieval System (TEKTRAN)
Trafficability and workability are soil capabilities supporting operations of agricultural machinery. Trafficability is a soil's capability to support agricultural traffic without degrading soils and ecosystems. Workability is a soil capability supporting tillage. Agriculture is associated with mech...
Deterministic photon bias in speckle imaging
NASA Technical Reports Server (NTRS)
Beletic, James W.
1989-01-01
A method for determining photon bias terms in speckle imaging is presented, and photon bias is shown to be a deterministic quantity that can be calculated without the use of the expectation operator. The quantities obtained are found to be identical to previous results. The present results extend photon bias calculations to the important case of the bispectrum where photon events are assigned different weights, in which regime the bias is a frequency-dependent complex quantity that must be calculated for each frame.
Deterministic Switching in Bismuth Ferrite Nanoislands.
Morelli, Alessio; Johann, Florian; Burns, Stuart R; Douglas, Alan; Gregg, J Marty
2016-08-10
We report deterministic selection of the polarization variant in bismuth ferrite (BiFeO3) nanoislands via a two-step scanning probe microscopy procedure. The polarization orientation in a nanoisland is toggled to the desired variant after a reset operation by scanning a conductive atomic force probe in contact over the surface while a bias is applied. The final polarization variant is determined by the direction of the inhomogeneous in-plane trailing field associated with the moving probe tip. This work provides the framework for better control of switching in rhombohedral ferroelectrics and for a deeper understanding of exchange coupling in multiferroic nanoscale heterostructures toward the realization of magnetoelectric devices. PMID:27454612
Minimal Deterministic Physicality Applied to Cosmology
NASA Astrophysics Data System (ADS)
Valentine, John S.
This report summarizes ongoing research and development since our 2012 foundation paper, including the emergent effects of a deterministic mechanism for fermion interactions: (1) the coherence of black holes and particles using a quantum chaotic model; (2) wide-scale (anti)matter prevalence from exclusion and weak interaction during the fermion reconstitution process; and (3) red-shift due to variations of vacuum energy density. We provide a context for Standard Model fields, and show how gravitation can be accountably unified in the same mechanism, but not as a unified field.
Deterministic quantum computation with one photonic qubit
NASA Astrophysics Data System (ADS)
Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.
2015-07-01
We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimension matrix, limited only by the resolution of the modulator panel and technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which is able to represent a system of approximately 21 qubits.
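The DQC1 trace-estimation circuit can be simulated classically for a small register; the sketch below checks that the control qubit's coherence encodes Tr(U). The register size and the random unitary are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 2                              # system qubits; one control qubit on top
d = 2**n

# Random unitary U via QR decomposition of a complex Gaussian matrix.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
U, _ = np.linalg.qr(A)

# DQC1 input: control in |+><+|, system maximally mixed.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
rho = np.kron(plus, np.eye(d) / d)

# Controlled-U on the full register (block-diagonal form).
CU = np.block([[np.eye(d), np.zeros((d, d))], [np.zeros((d, d)), U]])
rho_out = CU @ rho @ CU.conj().T

# Reduce to the control qubit; its off-diagonal element encodes Tr(U)/2^n.
rho_c = np.trace(rho_out.reshape(2, d, 2, d), axis1=1, axis2=3)
trace_estimate = d * 2 * rho_c[1, 0]      # equals Tr(U)
```

In the optical implementation the same off-diagonal coherence is read out from polarization measurements, so the trace of a very large matrix is obtained from single-qubit statistics.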
Advanced information feedback in intelligent traffic systems.
Wang, Wen-Xu; Wang, Bing-Hong; Zheng, Wen-Chen; Yin, Chuan-Yang; Zhou, Tao
2005-12-01
The optimal information feedback is very important to many socioeconomic systems like stock market and traffic systems aiming to make full use of resources. As to traffic flow, a reasonable real-time information feedback can improve the urban traffic condition by providing route guidance. In this paper, the influence of a feedback strategy named congestion coefficient feedback strategy is introduced, based on a two-route scenario in which dynamic information can be generated and displayed on the board to guide road users to make a choice. Simulation results adopting this optimal information feedback strategy have demonstrated high efficiency in controlling spatial distribution of traffic patterns compared with the other two information feedback strategies, i.e., travel time and mean velocity.
Prediction feedback in intelligent traffic systems
NASA Astrophysics Data System (ADS)
Dong, Chuan-Fei; Ma, Xu; Wang, Guan-Wen; Sun, Xiao-Yan; Wang, Bing-Hong
2009-11-01
The optimal information feedback has a significant effect on many socioeconomic systems like stock market and traffic systems aiming to make full use of resources. In this paper, we studied dynamics of traffic flow with real-time information provided and the influence of a feedback strategy named prediction feedback strategy is introduced, based on a two-route scenario in which dynamic information can be generated and displayed on the board to guide road users to make a choice. Our model incorporates the effects of adaptability into the cellular automaton models of traffic flow and simulation results adopting this optimal information feedback strategy have demonstrated high efficiency in controlling spatial distribution of traffic patterns compared with the other three information feedback strategies, i.e., vehicle number and flux.
CHAOS AND STOCHASTICITY IN DETERMINISTICALLY GENERATED MULTIFRACTAL MEASURES. (R824780)
More on exact state reconstruction in deterministic digital control systems
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1988-01-01
Presented is a special form of the Ideal State Reconstructor for deterministic digital control systems which is simpler to implement than the most general form. The Ideal State Reconstructor is so named because, if the plant parameters are known exactly, its output will exactly equal, not just approximate, the true state of the plant and accomplish this without any knowledge of the plant's initial state. Besides this, it adds no new states or eigenvalues to the system. Nor does it affect the plant equation for the system in any way; it affects the measurement equation only. It is characterized by the fact that discrete measurements are generated every T/N seconds and input into a multi-input/multi-output moving-average (MA) process. The output of this process is sampled every T seconds and utilized in reconstructing the state of the system.
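Not Polites's exact construction, but a minimal sketch of the underlying idea: with N noise-free measurements per period T, the state can be reconstructed exactly, not just approximated, by solving the stacked observation equations. The 2-state rotation plant and all numbers are invented for the example.

```python
import numpy as np

# Plant sampled every T/N seconds (illustrative 2-state rotation).
N = 4
theta = 0.3
Ad = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])   # one T/N step
C = np.array([[1.0, 0.0]])                         # scalar measurement

x0 = np.array([2.0, -1.0])                         # true (unknown) state

# N measurements taken at T/N intervals within one period T.
ys = np.array([(C @ np.linalg.matrix_power(Ad, j) @ x0)[0] for j in range(N)])

# Stack the observation equations and solve exactly (moving-average form).
O = np.vstack([C @ np.linalg.matrix_power(Ad, j) for j in range(N)])
x0_hat, *_ = np.linalg.lstsq(O, ys, rcond=None)

# State at the end of the period, reconstructed without knowing x0 a priori.
xT_hat = np.linalg.matrix_power(Ad, N) @ x0_hat
```

Because the stacked matrix has full column rank for an observable plant, the solve is exact with known parameters, mirroring the "exactly equal, not just approximate" property of the Ideal State Reconstructor.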
Additivity principle in high-dimensional deterministic systems.
Saito, Keiji; Dhar, Abhishek
2011-12-16
The additivity principle (AP), conjectured by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)], is discussed for the case of heat conduction in three-dimensional disordered harmonic lattices to consider the effects of deterministic dynamics, higher dimensionality, and different transport regimes, i.e., ballistic, diffusive, and anomalous transport. The cumulant generating function (CGF) for heat transfer is accurately calculated and compared with the one given by the AP. In the diffusive regime, we find a clear agreement with the conjecture even if the system is high dimensional. Surprisingly, even in the anomalous regime the CGF is also well fitted by the AP. Lower-dimensional systems are also studied and the importance of three dimensionality for the validity is stressed. PMID:22243060
Connection between stochastic and deterministic modelling of microbial growth.
Kutalik, Zoltán; Razaz, Moe; Baranyi, József
2005-01-21
We present in this paper various links between individual and population cell growth. Deterministic models of the lag and subsequent growth of a bacterial population and their connection with stochastic models for the lag and subsequent generation times of individual cells are analysed. We derived the individual lag time distribution inherent in population growth models, which shows that the Baranyi model allows a wide range of shapes for the individual lag time distribution. We demonstrate that individual cell lag time distributions cannot be retrieved from population growth data. We also present the results of our investigation on the effect of the mean and variance of the individual lag time and the initial cell number on the mean and variance of the population lag time. These relationships are analysed theoretically, and their consequences for predictive microbiology research are discussed.
Deterministic nonclassicality for quantum-mechanical oscillators in thermal states
NASA Astrophysics Data System (ADS)
Marek, Petr; Lachman, Lukáš; Slodička, Lukáš; Filip, Radim
2016-07-01
Quantum nonclassicality is the basic building block for the vast majority of quantum information applications, and methods of its generation are at the forefront of research. One of the obstacles any method needs to clear is the looming presence of decoherence and noise, which act against the nonclassicality and often erase it completely. In this paper we show that nonclassical states of a quantum harmonic oscillator initially in a thermal equilibrium state can be deterministically created by coupling it to a single two-level system. This can be achieved even in the absorption regime, in which the two-level system is initially in the ground state. The method is resilient to noise and may actually benefit from it, as witnessed by systems with higher thermal energy producing more nonclassical states.
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
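The geometric approximation of exponential firing times mentioned above can be sketched numerically; the firing rate and grid steps are illustrative.

```python
import numpy as np

# Approximate an exponential firing time (rate lam) by a geometric
# distribution on a discrete time grid of step dt: the transition fires
# in each slot independently with probability p = 1 - exp(-lam * dt).
lam = 2.0

def geometric_mean_firing_time(dt):
    p = 1.0 - np.exp(-lam * dt)
    return dt / p          # E[geometric number of slots] * dt

# As dt -> 0 the mean firing time converges to the exponential mean 1/lam.
coarse = geometric_mean_firing_time(0.1)
fine = geometric_mean_firing_time(0.001)
```

This is the sense in which geometric distributions approximate exponential timing arbitrarily well on a sufficiently fine discrete time grid, keeping the underlying process a DTMC.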
Deterministic magnetorheological finishing of optical aspheric mirrors
NASA Astrophysics Data System (ADS)
Song, Ci; Dai, Yifan; Peng, Xiaoqiang; Li, Shengyi; Shi, Feng
2009-05-01
A new method, magnetorheological finishing (MRF), is applied to the deterministic finishing of optical aspheric mirrors to overcome disadvantages of conventional polishing, including low finishing efficiency, long iteration times, and unstable convergence. After an introduction to the basic principle of MRF, the key techniques needed to implement deterministic MRF are discussed. To demonstrate the method, a 200 mm diameter K9 glass concave asphere with a vertex radius of 640 mm was figured on an MRF polishing tool of our own development. In a single run of about two hours, the peak-to-valley (PV) surface accuracy improved from 0.216λ to 0.179λ and the root-mean-square (RMS) from 0.027λ to 0.017λ (λ = 0.6328 μm). This high-precision, high-efficiency convergence of the aspheric surface error shows that MRF is an advanced optical manufacturing method offering a high convergence ratio of surface figure, high surfacing precision, and a stable, controllable finishing process. Deterministic finishing of optical aspheric mirrors by MRF is therefore reliable and stable, and its advantages extend to other element types such as plane and spherical mirrors.
Deterministic forward scatter from surface gravity waves.
Deane, Grant B; Preisig, James C; Tindle, Chris T; Lavery, Andone; Stokes, M Dale
2012-12-01
Deterministic structures in sound reflected by gravity waves, such as focused arrivals and Doppler shifts, have implications for underwater acoustics and sonar, and the performance of underwater acoustic communications systems. A stationary phase analysis of the Helmholtz-Kirchhoff scattering integral yields the trajectory of focused arrivals and their relationship to the curvature of the surface wave field. Deterministic effects along paths up to 70 water depths long are observed in shallow water measurements of surface-scattered sound at the Martha's Vineyard Coastal Observatory. The arrival time and amplitude of surface-scattered pulses are reconciled with model calculations using measurements of surface waves made with an upward-looking sonar mounted mid-way along the propagation path. The root mean square difference between the modeled and observed pulse arrival amplitude and delay, respectively, normalized by the maximum range of amplitudes and delays, is found to be 0.2 or less for the observation periods analyzed. Cross-correlation coefficients for modeled and observed pulse arrival delays varied from 0.83 to 0.16 depending on surface conditions. Cross-correlation coefficients for normalized pulse energy for the same conditions were small and varied from 0.16 to 0.06. In contrast, the modeled and observed pulse arrival delay and amplitude statistics were in good agreement.
Deterministic prediction of surface wind speed variations
NASA Astrophysics Data System (ADS)
Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
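A hedged sketch of deterministic (phase-space, nearest-neighbor) prediction on a chaotic series, using the logistic map in place of measured wind speeds; the embedding is trivial here because the map is one-dimensional, whereas real wind data would need delay-coordinate embedding.

```python
import numpy as np

# Logistic map as a stand-in for a chaotic wind-speed-like series.
n = 3000
x = np.empty(n)
x[0] = 0.3
for i in range(n - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

train, test_pts = x[:2500], x[2500:-1]

def predict_next(v, history):
    """Predict the successor of v from its nearest neighbor in history."""
    j = np.argmin(np.abs(history[:-1] - v))
    return history[j + 1]

preds = np.array([predict_next(v, train) for v in test_pts])
actual = x[2501:]
nrmse = np.sqrt(np.mean((preds - actual)**2))   # series already lies in [0, 1]
```

Short-term predictability from past records alone, despite the irregular waveform, is the signature of deterministic chaos that the paper exploits.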
Deterministic Creation of Macroscopic Cat States
Lombardo, Daniel; Twamley, Jason
2015-01-01
Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
Bagieński, Zbigniew
2015-02-01
Vehicle emissions are responsible for a considerable share of urban air pollution concentrations. The traffic air quality index (TAQI) is proposed as a useful tool for evaluating air quality near roadways. The TAQI associates air quality with the equivalent emission from traffic sources and with street structure (roadway structure) as anthropogenic factors. The paper presents a method of determining the TAQI and defines the degrees of harmfulness of emitted pollution. It proposes a classification specifying a potential threat to human health based on the TAQI value and shows an example of calculating the TAQI value for real urban streets. It also considers the role that car traffic plays in creating a local UHI. PMID:25461063
Software for Simulating Air Traffic
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Bilimoria, Karl; Grabbe, Shon; Chatterji, Gano; Sheth, Kapil; Mulfinger, Daniel
2006-01-01
Future Air Traffic Management Concepts Evaluation Tool (FACET) is a system of software for performing computational simulations for evaluating advanced concepts of advanced air-traffic management. FACET includes a program that generates a graphical user interface plus programs and databases that implement computational models of weather, airspace, airports, navigation aids, aircraft performance, and aircraft trajectories. Examples of concepts studied by use of FACET include aircraft self-separation for free flight; prediction of air-traffic-controller workload; decision support for direct routing; integration of spacecraft-launch operations into the U.S. national airspace system; and traffic- flow-management using rerouting, metering, and ground delays. Aircraft can be modeled as flying along either flight-plan routes or great-circle routes as they climb, cruise, and descend according to their individual performance models. The FACET software is modular and is written in the Java and C programming languages. The architecture of FACET strikes a balance between flexibility and fidelity; as a consequence, FACET can be used to model systemwide airspace operations over the contiguous U.S., involving as many as 10,000 aircraft, all on a single desktop or laptop computer running any of a variety of operating systems. Two notable applications of FACET include: (1) reroute conformance monitoring algorithms that have been implemented in one of the Federal Aviation Administration s nationally deployed, real-time, operational systems; and (2) the licensing and integration of FACET with the commercially available Flight Explorer, which is an Internet- based, real-time flight-tracking system.
Statistical properties of deterministic Bernoulli flows
Radunskaya, A.E.
1992-12-31
This thesis presents several new theorems about the stability and the statistical properties of deterministic chaotic flows. Many concrete systems known to exhibit deterministic chaos have so far been shown to be of a class known as Bernoulli Flows. This class of flows is characterized by the Finitely Determined property, which can be checked in specific cases. The first theorem says that these flows can be modeled arbitrarily well for all time by continuous-time finite state Markov processes. In other words it is theoretically possible to model the flow arbitrarily well by a computer equipped with a roulette wheel. There follows a stability result, which says that one can distort the measurements made on the processes without affecting the approximation. These results are then applied to the problem of distinguishing deterministic chaos from stochastic processes in the analysis of time series. The second part of the thesis deals with a specific set of examples. Although it has been possible to analyze specific systems to determine whether they lie in the class of Bernoulli systems, the standard techniques rely on the construction of expanding and contracting fibers in the phase space of the system. These fibers are then used to coordinatize the phase space and to prove the existence of a hyperbolic structure. Unfortunately such methods may fail in the general case, where smoothness conditions and a small singular set cannot be assumed. For example, suppose the standard billiard flow on a square table with a perfectly round obstacle, which is known to be Bernoulli, is replaced by a similar flow on a table with a bumpy fractal-like obstacle: a model perhaps closer to nature. It is shown that these fibers no longer exist and hence cannot be used in the standard manner to prove Bernoulliness or ergodicity. But, one can use the fact that the class of Bernoulli flows is closed in the d-bar metric to show that this billiard flow with a bumpy obstacle is in fact Bernoulli.
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Jr., R M; Gilmer, J; Rubenchik, A; Shirk, M
2004-12-08
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, {micro}m-sized, 3-D features and with 100-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels, as well as materials such as diamond and vanadium. The motivation for this project was to investigate the physics and chemistry that control the interactions of solid surfaces with laser beams and ion beams, with a view towards their applicability to the desired deterministic fabrication processes. As part of this LDRD project, one of our goals was to advance the state of the art for experimental work, but, in order to ultimately create a deterministic capability for such precision micromachining, another goal was to form a new modeling/simulation capability that could also extend the state of the art in this field. We have achieved both goals. In this project, we have, for the first time, combined a 1-D hydrocode ("HYADES") with a 3-D molecular dynamics simulator ("MDCASK") in our modeling studies. In FY02 and FY03, we investigated the ablation/surface-modification processes that occur on copper, gold, and nickel substrates with the use of sub-ps laser pulses. In FY04, we investigated laser ablation of carbon, including laser-enhanced chemical reaction on the carbon surface for both vitreous carbon and carbon aerogels. Both experimental and modeling results will be presented in the report that follows. The immediate impact of our investigation was a much better understanding of the chemical and physical processes that ensue when solid materials are exposed to femtosecond laser pulses. More broadly, we have better positioned LLNL to design a cluster tool for fabricating mesoscale objects utilizing laser pulses and ion-beams as well as more traditional machining/manufacturing techniques for applications such as components in NIF targets, remote sensors, including
Deterministic approaches to coherent diffractive imaging
NASA Astrophysics Data System (ADS)
Allen, L. J.; D'Alfonso, A. J.; Martin, A. V.; Morgan, A. J.; Quiney, H. M.
2016-01-01
In this review we will consider the retrieval of the wave at the exit surface of an object illuminated by a coherent probe from one or more measured diffraction patterns. These patterns may be taken in the near-field (often referred to as images) or in the far field (the Fraunhofer diffraction pattern, where the wave is the Fourier transform of that at the exit surface). The retrieval of the exit surface wave from such data is an inverse scattering problem. This inverse problem has historically been solved using nonlinear iterative methods, which suffer from convergence and uniqueness issues. Here we review deterministic approaches to obtaining the exit surface wave which ameliorate those problems.
Deterministic polishing from theory to practice
NASA Astrophysics Data System (ADS)
Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary
2015-10-01
Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework - the share of production that does not meet specification in the first pass through the polishing equipment. Rework substantially adds to the part's processing and labor costs as well as bottlenecks in production lines and frustration for managers, operators and customers. The polishing process consists of several interacting variables including: glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Adjusting the process to get every variable under control while operating in a robust space can not only provide a deterministic polishing process which improves profitability but also produce a higher quality optic.
Deterministic-random separation in nonstationary regime
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to speed varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA have been proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
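The classical synchronous average underlying the generalized version above is easy to sketch. The following minimal Python example is illustrative only (the synthetic signal, cycle length, and noise level are assumptions, not values from the paper): folding a stationary-regime signal on its known cycle and averaging attenuates the random part by roughly 1/√N while the periodic part survives.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic vibration signal: a periodic (deterministic) part plus white noise.
samples_per_cycle, n_cycles = 128, 200
t = np.arange(samples_per_cycle) / samples_per_cycle
periodic = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
signal = np.tile(periodic, n_cycles) + rng.normal(0.0, 1.0, samples_per_cycle * n_cycles)

# Synchronous average: fold the signal on the known cycle length and average.
sync_avg = signal.reshape(n_cycles, samples_per_cycle).mean(axis=0)

# Residual noise std is ~1/sqrt(n_cycles) of the original noise std.
err = np.sqrt(np.mean((sync_avg - periodic) ** 2))
```

With 200 cycles and unit noise, the residual RMS error is around 0.07; a speed-varying regime breaks the fixed cycle length this folding relies on, which is exactly the gap the GSA addresses.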
Fourcassié, Vincent; Dussutour, Audrey; Deneubourg, Jean-Louis
2010-07-15
Many animals take part in flow-like collective movements. In most species, however, the flow is unidirectional. Ants are one of the rare group of organisms in which flow-like movements are predominantly bidirectional. This adds to the difficulty of the task of maintaining a smooth, efficient movement. Yet, ants seem to fare well at this task. Do they really? And if so, how do such simple organisms succeed in maintaining a smooth traffic flow, when even humans experience trouble with this task? How does traffic in ants compare with that in human pedestrians or vehicles? The experimental study of ant traffic is only a few years old but it has already provided interesting insights into traffic organization and regulation in animals, showing in particular that an ant colony as a whole can be considered as a typical self-organized adaptive system. In this review we will show that the study of ant traffic can not only uncover basic principles of behavioral ecology and evolution in social insects but also provide new insights into the study of traffic systems in general. PMID:20581264
Al-Shargabi, Mohammed A; Shaikh, Asadullah; Ismail, Abdulsamad S
2016-01-01
Optical burst switching (OBS) networks have been attracting much consideration as a promising approach to build the next generation optical Internet. A solution for enhancing the Quality of Service (QoS) for high priority real time traffic over OBS with fairness among the traffic types is absent in current OBS QoS schemes. In this paper we present a novel Real Time Quality of Service with Fairness Ratio (RT-QoSFR) scheme that can adapt the burst assembly parameters according to the traffic QoS needs in order to enhance the real time traffic QoS requirements and to ensure fairness for other traffic. The results show that the RT-QoSFR scheme is able to fulfill the real time traffic requirements (end-to-end delay and loss rate) while ensuring fairness for other traffic types under various conditions such as the type of real time traffic and traffic load. RT-QoSFR can guarantee that the delay of the real time traffic packets does not exceed the maximum packet transfer delay value. Furthermore, it can reduce real time traffic packet loss and at the same time guarantee fairness for non real time traffic packets by determining the ratio of real time traffic inside the burst to be 50-60%, 30-40%, and 10-20% for high, normal, and low traffic loads respectively. PMID:27583557
Simulating traffic flow with Lotus 1-2-3
Snelting, D.T.
1986-07-01
This article discusses the use of spreadsheet software in simulating traffic flow on an approach to a pretimed signalized intersection. Such a simulation model would serve the following purposes: 1. It could help traffic engineers realize the types of applications that are possible with spreadsheets or expand their current thinking in this area. 2. It should provide traffic engineers and transportation planners with a relatively simple tool for obtaining a "feel" for traffic flow characteristics. 3. Delay and stopping data generated from the model could be used to verify other research data and actual field data.
A superstatistical model of vehicular traffic flow
NASA Astrophysics Data System (ADS)
Kosun, Caglar; Ozdemir, Serhan
2016-02-01
In the analysis of vehicular traffic flow, a myriad of techniques have been implemented. In this study, superstatistics is used in modeling the traffic flow on a highway segment. Traffic variables such as vehicular speeds, volume, and headway were collected for three days. For the superstatistical approach, at least two distinct time scales must exist, so that a superposition of nonequilibrium systems assumption could hold. When the slow dynamics of the vehicle speeds exhibit a Gaussian distribution in between the fluctuations of the system at large, one speaks of a relaxation to a local equilibrium. These Gaussian distributions are found with corresponding standard deviations 1/√β. This translates into a series of fluctuating beta values, hence the statistics of statistics, superstatistics. The traffic flow model has generated an inverse temperature parameter (beta) distribution as well as the speed distribution. This beta distribution has shown that the fluctuations in beta are distributed with respect to a chi-square distribution. It must be mentioned that two distinct Tsallis q values are specified: one is time-dependent and the other is independent. A ramification of these q values is that the highway segment and the traffic flow generate separate characteristics. This highway segment in question is not only nonadditive in nature, but a nonequilibrium driven system, with frequent relaxations to a Gaussian.
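The two-time-scale mechanism can be sketched numerically: draw a slowly varying inverse temperature β from a chi-square law, then sample locally Gaussian speeds with standard deviation 1/√β. The window counts and degrees of freedom in this Python sketch are illustrative assumptions, not the paper's fitted values; the point is that the marginal speed distribution comes out heavier-tailed than a Gaussian, as superstatistics predicts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Slow dynamics: inverse temperature beta fluctuates, chi-square distributed
# (normalized to mean 1); dof is an illustrative choice.
n_windows, samples_per_window, dof = 2000, 50, 8
beta = rng.chisquare(dof, size=n_windows) / dof

# Fast dynamics: within each window, speed deviations are locally Gaussian
# with standard deviation 1/sqrt(beta).
speeds = rng.normal(0.0, 1.0 / np.sqrt(beta)[:, None],
                    size=(n_windows, samples_per_window)).ravel()

# The marginal mixture has heavier tails than a Gaussian: excess kurtosis > 0.
kurtosis = ((speeds - speeds.mean()) ** 4).mean() / speeds.var() ** 2 - 3.0
```

With a Gamma-distributed precision this mixture is Student-t-like, which is one standard route to the q-Gaussians of Tsallis statistics mentioned in the abstract.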
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques which are used in radiation detection systems comprise one of two categories namely probabilistic and deterministic. However, probabilistic methods are typically used in pulse height distribution simulation by recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In current work linear transport equation is solved using two methods including collided components of the scalar flux algorithm which is applied by iterating on the scattering source and ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, multi-group gamma cross-section library required for this numerical transport simulation is generated in a discrete appropriate form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods that approvingly compare with those from Monte Carlo based codes namely MCNPX and FLUKA.
Benke, G. |; Brandt, J.; Chen, H.; Dastangoo, S.; Miller, G.J.
1996-05-01
Recent empirical studies of traffic measurements of packet switched networks have demonstrated that actual network traffic is self-similar, or long range dependent, in nature. That is, the measured traffic is bursty over a wide range of time intervals. Furthermore, the emergence of high-speed network backbones demands the study of accurate models of aggregated traffic to assess network performance. This paper provides a method for generation of self-similar traffic, which can be used to drive network simulation models. The authors present the results of a simulation study of a two-node ATM network configuration that supports the ATM Forum's Available Bit Rate (ABR) service. In this study, the authors compare the state of the queue at the source router at the edge of the ATM network under both Poisson and self-similar traffic loading. These findings indicate an order of magnitude increase in queue length for self-similar traffic loading as compared to Poisson loading. Moreover, when background VBR traffic is present, self-similar ABR traffic causes more congestion at the ATM switches than does Poisson traffic.
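One standard way to generate self-similar traffic for such simulations is to superpose many ON/OFF sources with heavy-tailed (Pareto) period lengths. The abstract does not specify the authors' generator, so the following Python sketch is a generic illustration with arbitrary parameters; the signature of self-similarity is that the variance of block means decays more slowly than the 1/m of Poisson-like traffic.

```python
import numpy as np

rng = np.random.default_rng(2)

def pareto_on_off_source(n_slots, alpha=1.5):
    """One source alternating heavy-tailed ON periods (1 cell/slot) and OFF periods."""
    out = np.zeros(n_slots)
    t, on = 0, True
    while t < n_slots:
        length = int(np.ceil(rng.pareto(alpha) + 1.0))  # classic Pareto, x_m = 1
        if on:
            out[t:t + length] = 1.0
        t += length
        on = not on
    return out

def block_var(x, m):
    """Variance of the aggregated (block-mean) series at time scale m."""
    nb = len(x) // m
    return x[:nb * m].reshape(nb, m).mean(axis=1).var()

# Superposing many sources yields asymptotically self-similar traffic;
# alpha = 1.5 corresponds to a Hurst parameter H = (3 - alpha)/2 = 0.75.
traffic = sum(pareto_on_off_source(20000) for _ in range(50))
```

For short-range-dependent traffic `block_var` falls off as 1/m; here it falls off roughly as m^(2H-2) = m^(-0.5), so burstiness persists across aggregation scales, as the measurements cited in the abstract show.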
Virginia's traffic management system
Morris, J.; Marber, S.
1992-07-01
This paper reports that Northern Virginia, like most other urban areas, faces the challenge of moving more and more vehicles on roads that are already overloaded. Traffic in Northern Virginia is continually increasing, but the development surrounding Interstates 395, 495, and 66 leaves little room for roadway expansion. Even if land were unlimited, the strict requirements of the Clean Air Act make building roads difficult. Ensuring the most efficient use of the interstate highways is the goal of the Virginia Department of Transportation's (VDOT's) traffic management system (TMS). TMS is a computerized highway surveillance and control system that monitors 30 interstate miles on I-395, I-495, and I-66. The system helps squeeze the most use from these interstates by detecting and helping clear accidents or disabled vehicles and by smoothing traffic flow. TMS spots and helps clear an average of two incidents a day and prevents accidents caused by erratic traffic flow from ramps onto the main line. For motorists, these TMS functions translate into decreased travel time, vehicle operating costs, and air pollution. VDOT's TMS is the foundation for the intelligent vehicle-highway systems of tomorrow. It employs several elements that work together to improve traffic flow.
Theory and Simulation for Traffic Characteristics on the Highway with a Slowdown Section
Xu, Dejie; Mao, Baohua; Rong, Yaping; Wei, Wei
2015-01-01
We study the traffic characteristics on a single-lane highway with a slowdown section using the deterministic cellular automaton (CA) model. Based on the theoretical analysis, the relationships among local mean densities, velocities, traffic fluxes, and global densities are derived. The results show that two critical densities exist in the evolutionary process of traffic state, and they are significant demarcation points for traffic phase transition. Furthermore, the changing laws of the two critical densities with different lengths of the slowdown section are also investigated. It is shown that only one critical density appears if a highway has no slowdown section; with the growing length of the slowdown section, this critical density separates into two critical densities; if the entire highway is a slowdown section, they finally merge into one. The contrastive analysis proves that the analytical results are consistent with the numerical ones. PMID:26089864
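A deterministic CA of this kind can be sketched in a few lines of Python. The lattice size, section boundaries, and speed limits below are illustrative assumptions (the abstract does not give the paper's parameters); the update rule is the randomization-free Nagel-Schreckenberg step v ← min(v+1, gap, v_max), with a lower v_max inside the slowdown section.

```python
import numpy as np

def simulate(density, L=400, slow_start=150, slow_end=250,
             vmax_fast=3, vmax_slow=1, steps=1000, warmup=500, seed=0):
    """Mean flux of a deterministic CA on a ring road with a slowdown section."""
    rng = np.random.default_rng(seed)
    n = int(density * L)
    pos = np.sort(rng.choice(L, size=n, replace=False))  # cars in cyclic order
    vel = np.zeros(n, dtype=int)
    flux = 0.0
    for t in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % L          # empty cells ahead
        limit = np.where((pos >= slow_start) & (pos < slow_end),
                         vmax_slow, vmax_fast)           # local speed limit
        vel = np.minimum(np.minimum(vel + 1, gaps), limit)
        pos = (pos + vel) % L                            # v <= gap: no passing
        if t >= warmup:
            flux += vel.sum() / L
    return flux / (steps - warmup)
```

Sweeping `density` from 0 to 1 traces out the fundamental diagram; the kinks in flux mark the critical densities where free flow first queues at the slowdown section and where the jam finally spans the whole ring.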
Curved paths in raptor flight: Deterministic models.
Lorimer, John W
2006-10-21
Two deterministic models for flight of Peregrine Falcons and possibly other raptors as they approach their prey are examined mathematically. Both models make two assumptions. The first, applicable to both models, is that the angle of sight between falcon and prey is constant, consistent with observations that the falcon keeps its head straight during flight and keeps on course by use of the deep foveal region in its eye which allows maximum acuity at an angle of sight of about 45 degrees. The second assumption for the first model (conical spiral) is that the initial direction of flight determines the overall path. For the second model (flight constrained to a tilted plane), a parameter that fixes the orientation of the plane is required. A variational calculation also shows that the tilted plane flight path is the shortest total path, and, consequently, the conical spiral is another shortest total path. Numerical calculations indicate that the flight paths for the two models are very similar for the experimental conditions under which observations have been made. However, the angles of flight and bank differ significantly. More observations are needed to investigate the applicability of the two models.
Quality control in a deterministic manufacturing environment
Barkman, W.E.; Babelay, E.F.; De Mint, P.D.; Lewis, J.C.; Woodard, L.M.
1985-01-24
An approach for establishing quality control in processes which exhibit undesired continual or intermittent excursions in key process parameters is discussed. The method is called deterministic manufacturing, and it is designed to employ automatic monitoring of the key process variables for process certification, but utilizes only sample certification of the process output to verify the validity of the measurement process. The system utilizes a local minicomputer to sample the appropriate process parameters that describe the condition of the machine tool, the cutting process, and the computer numerical control system. Sampled data are pre-processed by the minicomputer and then sent to a host computer that maintains a permanent data base describing the manufacturing conditions for each work piece. Parts are accepted if the various parameters remain within the required limits during the machining cycle. The need for additional actions is flagged if limits are exceeded. With this system it is possible to retrospectively examine the process status just prior to the occurrence of a problem.
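The accept/flag logic of such a scheme reduces to checking every sampled process variable against its certified limits. A minimal Python sketch follows; the variable names and limit values are hypothetical, chosen only to illustrate the pattern the abstract describes.

```python
import numpy as np

# Hypothetical certified limits for monitored process variables
# (illustrative names and values, not from the report).
LIMITS = {"spindle_rpm": (1900, 2100), "cut_force_N": (0, 150), "temp_C": (15, 30)}

def certify(samples):
    """Accept a work piece only if every sampled variable stayed within limits;
    otherwise flag the offending variables for further action."""
    flags = []
    for name, (lo, hi) in LIMITS.items():
        vals = np.asarray(samples[name])
        if (vals < lo).any() or (vals > hi).any():
            flags.append(name)
    return ("accept", []) if not flags else ("flag", flags)
```

Logging each sample stream alongside the verdict gives the retrospective capability the abstract mentions: when a part is flagged, the host database still holds the parameter history just before the excursion.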
Deterministic particle transport in a ratchet flow
NASA Astrophysics Data System (ADS)
Beltrame, Philippe; Makhoul, Mounia; Joelson, Maminirina
2016-01-01
This study is motivated by the issue of the pumping of particles through a periodically modulated channel. We focus on a simplified deterministic model of small inertia particles within the Stokes flow framework that we call "ratchet flow." A path-following method is employed in the parameter space in order to retrace the scenario that leads from bounded periodic solutions to particle transport. Depending on whether the magnitude of the particle drag is moderate or large, two main transport mechanisms are identified in which the role of the parity symmetry of the flow differs. For large drag, transport is induced by flow asymmetry, while for moderate drag, since the full transport solution bifurcation structure already exists for symmetric settings, flow asymmetry only makes the transport effective. We analyzed the scenarios of current reversals for each mechanism as well as the role of synchronization. In particular we show that, for large drag, the particle drift is similar to phase slip in a synchronization problem.
Electromagnetic field enhancement and light localization in deterministic aperiodic nanostructures
NASA Astrophysics Data System (ADS)
Gopinath, Ashwin
The control of light matter interaction in periodic and random media has been investigated in depth during the last few decades, yet structures with controlled degree of disorder such as Deterministic Aperiodic Nano Structures (DANS) have been relatively unexplored. DANS are characterized by non-periodic yet long-range correlated (deterministic) morphologies and can be generated by the mathematical rules of symbolic dynamics and number theory. In this thesis, I have experimentally investigated the unique light transport and localization properties in planar dielectric and metal (plasmonics) DANS. In particular, I have focused on the design, nanofabrication and optical characterization of DANS, formed by arranging metal/dielectric nanoparticles in an aperiodic lattice. This effort is directed towards development of on-chip nanophotonic applications with emphasis on label-free bio-sensing and enhanced light emission. The DANS designed as Surface Enhanced Raman Scattering (SERS) substrate is composed of multi-scale aperiodic nanoparticle arrays fabricated by e-beam lithography and are capable of reproducibly demonstrating enhancement factors as high as ∼10^7. Further improvement of SERS efficiency is achieved by combining DANS formed by top-down approach with bottom-up reduction of gold nanoparticles, to fabricate novel nanostructures called plasmonic "nano-galaxies" which increases the SERS enhancement factors by 2–3 orders of magnitude while preserving the reproducibility. In this thesis, along with presenting details of fabrication and SERS characterization of these "rationally designed" SERS substrates, I will also present results on using these substrates for detection of DNA nucleobases, as well as reproducible label-free detection of pathogenic bacteria with species specificity. In addition to biochemical detection, the combination of broadband light scattering behavior and the ability for the generation of reproducible high fields in DANS make these
Stochastic and Deterministic Assembly Processes in Subsurface Microbial Communities
Stegen, James C.; Lin, Xueju; Konopka, Allan; Fredrickson, Jim K.
2012-03-29
A major goal of microbial community ecology is to understand the forces that structure community composition. Deterministic selection by specific environmental factors is sometimes important, but in other cases stochastic or ecologically neutral processes dominate. Lacking is a unified conceptual framework aiming to understand why deterministic processes dominate in some contexts but not others. Here we work towards such a framework. By testing predictions derived from general ecological theory we aim to uncover factors that govern the relative influences of deterministic and stochastic processes. We couple spatiotemporal data on subsurface microbial communities and environmental parameters with metrics and null models of within and between community phylogenetic composition. Testing for phylogenetic signal in organismal niches showed that more closely related taxa have more similar habitat associations. Community phylogenetic analyses further showed that ecologically similar taxa coexist to a greater degree than expected by chance. Environmental filtering thus deterministically governs subsurface microbial community composition. More importantly, the influence of deterministic environmental filtering relative to stochastic factors was maximized at both ends of an environmental variation gradient. A stronger role of stochastic factors was, however, supported through analyses of phylogenetic temporal turnover. While phylogenetic turnover was on average faster than expected, most pairwise comparisons were not themselves significantly non-random. The relative influence of deterministic environmental filtering over community dynamics was elevated, however, in the most temporally and spatially variable environments. Our results point to general rules governing the relative influences of stochastic and deterministic processes across micro- and macro-organisms.
Surface plasmon field enhancements in deterministic aperiodic structures.
Shugayev, Roman
2010-11-22
In this paper we analyze optical properties and plasmonic field enhancements in large aperiodic nanostructures. We introduce an extension of the Generalized Ohm's Law approach to estimate the electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate, and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to random morphologies while offering better understanding of field localizations and improved substrate design controllability. Generalized Ohm's Law results for deterministic aperiodic structures are in good agreement with simulations obtained using the discrete dipole method.
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2011-05-17
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.
Simulation laboratory for evaluating dynamic traffic management systems
Ben-Akiva, M.E.; Mishalani, R.G.; Yang, Q.; Koutsopoulos, H.N.
1997-08-01
This paper presents a simulation laboratory for performance evaluation and design refinement of dynamic traffic management systems. The laboratory consists of four integrated components: (1) a traffic management simulator, which mimics the generation of route guidance and operations of traffic signals and signs; (2) a traffic flow simulator, which models individual vehicle movements and drivers' route choice decisions in the presence of real-time traffic information; (3) a surveillance system module, which collects real-time traffic data from sensors and probe vehicles in the simulated network; and (4) a control device module, which implements control strategies and route guidance generated by the traffic management system under evaluation. The simulation laboratory has been implemented in C++ using object-oriented programming and a distributed environment. It features a graphical user interface that allows users to visualize the simulation process, including animation of vehicle movements, state of surveillance sensors, traffic signals, signs, and so on. This modeling system provides a unique tool for evaluating integrated ATIS and ATMS applications in a computer-based laboratory environment.
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Jr., R M; Shirk, M; Gilmer, G; Rubenchik, A
2004-09-24
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, μm-sized, 3-dimensional features with 20-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels. For deterministic fabrication of high energy-density physics (HEDP) targets, it will be necessary both to fabricate features in a wide variety of materials as well as to understand and simulate the fabrication process. We continue to investigate, both in experiment and in modeling, the ablation/surface-modification processes that occur with the use of laser pulses that are near the ablation threshold fluence. During the first two years, we studied ablation of metals, and we used sub-ps laser pulses, because pulses shorter than the electron-phonon relaxation time offered the most precise control of the energy that can be deposited into a metal surface. The use of sub-ps laser pulses also allowed a decoupling of the energy-deposition process from the ensuing movement/ablation of the atoms from the solid, which simplified the modeling. We investigated the ablation of material from copper, gold, and nickel substrates. We combined the power of the 1-D hydrocode HYADES with the state-of-the-art 3-D molecular dynamics simulation MDCASK in our studies. For FY04, we have stretched ourselves to investigate laser ablation of carbon, including chemically-assisted processes. We undertook this research because the energy deposition required to perform direct sublimation of carbon is much higher than that needed to stimulate the reaction 2C + O₂ → 2CO. Thus, extremely fragile carbon aerogels might survive the chemically-assisted process more readily than ablation via direct laser sublimation. We had planned to start by studying vitreous carbon and move on to carbon aerogels. We were able to obtain flat, high-quality vitreous carbon, which was easy to work on
Surface Traffic Management Research
NASA Technical Reports Server (NTRS)
Jung, Yoo Chul
2012-01-01
This presentation discusses an overview of the surface traffic management research conducted by NASA Ames. The concept and human-in-the-loop simulation of the Spot and Runway Departure Advisor (SARDA), an integrated decision support tool for the tower controllers and airline ramp operators, is also discussed.
Reproducible and deterministic production of aspheres
NASA Astrophysics Data System (ADS)
Leitz, Ernst Michael; Stroh, Carsten; Schwalb, Fabian
2015-10-01
Aspheric lenses are ground in a single-point cutting mode. Subsequently, different iterative polishing methods are applied, followed by aberration measurements on external metrology instruments. For economical production, metrology and correction steps need to be reduced; more deterministic grinding and polishing is mandatory. Single-point grinding is a path-controlled process. The quality of a ground asphere is mainly influenced by the accuracy of the machine, so machine improvements must focus on path accuracy and thermal expansion. Optimized design, materials, and thermal management reduce thermal expansion. The path accuracy can be improved using ISO 230-2 standardized measurements: repeated interferometric measurements over the total travel of all CNC axes in both directions are recorded, and position deviations evaluated in correction tables improve the path accuracy and thus that of the ground surface. Aspheric polishing using a sub-aperture flexible polishing tool is a dwell-time-controlled process. For plano and spherical polishing, the amount of material removed during polishing is proportional to pressure, relative velocity, and time (Preston's law). For the use of flexible tools on aspheres or freeform surfaces, additional non-linear components are necessary. Satisloh ADAPT calculates a predicted removal function from lens geometry, tool geometry, and process parameters with FEM. Additionally, the tool's local removal characteristic is determined in a simple test: by oscillating the tool on a plano or spherical sample of the same lens material, a trench is created, and its 3-D profile is measured to calibrate the removal simulation. Remaining aberrations of the desired lens shape can be predicted, reducing iteration and metrology steps.
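The Preston relation cited above can be written compactly; this is the standard textbook form, not quoted from the paper:

$$\frac{\mathrm{d}z}{\mathrm{d}t} \;=\; k_p \, p(x,y) \, v_{\mathrm{rel}}(x,y)$$

where $\mathrm{d}z/\mathrm{d}t$ is the local removal rate, $p$ the contact pressure, $v_{\mathrm{rel}}$ the relative tool-surface velocity, and $k_p$ an empirically calibrated Preston coefficient. Integrating this rate over the dwell time gives the total local removal, which is why dwell-time control governs the polishing step described above.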
Understanding Vertical Jump Potentiation: A Deterministic Model.
Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L
2016-06-01
This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. There are a number of factors that must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered when designing an SPPC focused on vertical jump potentiation include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should seek SPPCs that are practical in nature regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take the athlete's training time restrictions into account, as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that take away from valuable training time. Practitioners may decrease the time needed to realize potentiation by improving their subject's relative strength. PMID:26712510
Traffic monitoring and reporting system
Madnick, P.A.; Sherwood, R.W.
1988-12-20
This patent describes a traffic monitoring and reporting system comprising: sensors, each sensor located at a designated location and designed to produce an output based upon traffic conditions at its designated location; an information receiving and analyzing computer. The output of each sensor is to be transmitted to and received by the information receiving and analyzing computer, the information receiving and analyzing computer to generate results based on the output of each sensor, the results being organized into a plurality of different zones within an overall geographical area; a message synthesis computer to receive the results of the information receiving and analyzing computer, the message synthesis computer to produce different messages, each message to be specially oriented to one of the zones; transmitting of the output of the message synthesis computer to a broadcasting means, the broadcasting means for transmitting of the different messages by radio waves; and receivers, each receiver to be adapted to be located within a vehicle, with there being a plurality of vehicles, each receiver having means to individually select and announce any one of the messages.
The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...
Are deterministic expert systems for computer-assisted structure elucidation obsolete?
Elyashberg, Mikhail E; Blinov, Kirill A; Williams, Antony J; Molodtsov, Sergey G; Martin, Gary E
2006-01-01
Expert systems for spectroscopic molecular structure elucidation have been developed since the mid-1960s. Algorithms associated with the structure generation process within these systems are deterministic; that is, they are based on graph theory and combinatorial analysis. A series of expert systems utilizing 2D NMR spectra have been described in the literature and are capable of determining the molecular structures of large organic molecules including complex natural products. Recently, an opinion was expressed in the literature that these systems would fail when elucidating structures containing more than 30 heavy atoms. A suggestion was put forward that stochastic algorithms for structure generation would be necessary to overcome this shortcoming. In this article, we describe a comprehensive investigation of the capabilities of the deterministic expert system Structure Elucidator. The results of performing the structure elucidation of 250 complex natural products with this program were studied and generalized. The conclusion is that 2D NMR deterministic expert systems are certainly capable of elucidating large structures (up to about 100 heavy atoms) and can deal with the complexities associated with both poor and contradictory spectral data.
Chang, T; Schiff, S J; Sauer, T; Gossard, J P; Burke, R E
1994-01-01
Long time series of monosynaptic Ia-afferent to alpha-motoneuron reflexes were recorded in the L7 or S1 ventral roots in the cat. Time series were collected before and after spinalization at T13 during constant amplitude stimulations of group Ia muscle afferents in the triceps surae muscle nerves. Using autocorrelation to analyze the linear correlation in the time series demonstrated oscillations in the decerebrate state (4/4) that were eliminated after spinalization (5/5). Three tests for determinism were applied to these series: 1) local flow, 2) local dispersion, and 3) nonlinear prediction. These algorithms were validated with time series generated from known deterministic equations. For each experimental and theoretical time series used, matched time series of stochastic surrogate data were generated to serve as mathematical and statistical controls. Two of the time series collected in the decerebrate state (2/4) demonstrated evidence for deterministic structure. This structure could not be accounted for by the autocorrelation in the data, and was abolished following spinalization. None of the time series collected in the spinalized state (0/5) demonstrated evidence of determinism. Although monosynaptic reflex variability is generally stochastic in the spinalized state, this simple driven system may display deterministic behavior in the decerebrate state. PMID:7948680
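The surrogate-data logic described above — score a nonlinear prediction statistic on the recorded series against phase-randomized surrogates that preserve the linear autocorrelation but destroy any determinism — can be sketched as follows. A chaotic logistic map stands in for the reflex data, and the embedding dimension, horizon, and neighbor count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def nonlinear_prediction_error(x, dim=3, horizon=1, n_neighbors=5):
    """Mean one-step prediction error from nearest-neighbor forecasting
    on a delay embedding (a simple 'nonlinear prediction' test)."""
    N = len(x) - (dim - 1) - horizon
    emb = np.array([x[i:i + dim] for i in range(N)])       # delay vectors
    targets = x[(dim - 1) + horizon:(dim - 1) + horizon + N]
    errs = []
    for i in range(N):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                                      # exclude self-match
        nn = np.argsort(d)[:n_neighbors]
        errs.append(abs(targets[nn].mean() - targets[i]))
    return float(np.mean(errs))

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum but random phases,
    destroying deterministic (nonlinear) structure."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                                        # keep the mean
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(0)
x = np.empty(1000); x[0] = 0.4
for t in range(999):                                       # chaotic logistic map
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
x = (x - x.mean()) / x.std()

err_data = nonlinear_prediction_error(x)
err_surr = [nonlinear_prediction_error(phase_randomized_surrogate(x, rng))
            for _ in range(5)]
print(err_data < min(err_surr))  # determinism: data predicts better than surrogates
```

A deterministic series beats all its surrogates on prediction skill; for a purely stochastic series the statistic falls inside the surrogate distribution, which is how the (0/5) spinalized results above read.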
Improving the realism of deterministic multi-strain models: implications for modelling influenza A.
Minayev, Pavlo; Ferguson, Neil
2009-06-01
Understanding the interaction between epidemiological and evolutionary dynamics for antigenically variable pathogens remains a challenge, particularly if analytical insight is wanted. In particular, while a variety of relatively complex simulation models have reproduced the evolutionary dynamics of influenza, simpler models have given less satisfying descriptions of the patterns seen in data. Here, we develop a set of relatively simple deterministic models of the transmission dynamics of multi-strain pathogens which give increased biological realism compared with past work. We allow the intensity of cross-immunity generated against one strain given exposure to a different strain to depend on the extent of genetic difference between the strains. We show that the dynamics of this model are determined by the interplay of parameters defining the cross-immune response function and can include fully symmetric equilibria, self-organized strain structures, regular periodic and chaotic regimes. We then extend the model by incorporating transient strain-transcending immunity that acts as a density-dependent mechanism to lower overall infection prevalence and thus pathogen diversity. We conclude that while some aspects of the evolution of influenza can be captured by deterministic models, overall, the description obtainable using a purely deterministic framework is unsatisfactory, implying that stochasticity of strain generation (via mutation) and extinction needs to be captured to appropriately capture influenza dynamics.
A deterministic method for transient, three-dimensional neutron transport
NASA Astrophysics Data System (ADS)
Goluoglu, Sedat
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable is the improved quasi-static (IQS) method. The position, energy, and angle variables of the neutron flux are computed using the three-dimensional (3-D) discrete ordinates code TORT. The resulting time-dependent, 3-D code is called TDTORT. The flux shape calculated by TORT is used to compute the point kinetics parameters (e.g., reactivity, generation time, etc.). The amplitude function is calculated by solving the point kinetics equations using LSODE (Livermore Solver for Ordinary Differential Equations). Several transient 1-D, 2-D, and 3-D benchmark problems are used to verify TDTORT. The results show that the methodology and code developed in this work have sufficient accuracy and speed to serve as a benchmarking tool for other less accurate models and codes. More importantly, a new computational tool based on transport theory now exists for analyzing the dynamic behavior of complex neutronic systems.
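The improved quasi-static method referenced above rests on a standard flux factorization (shown here in its textbook form, not quoted from the thesis):

$$\phi(\mathbf{r},E,\boldsymbol{\Omega},t) \;=\; A(t)\,\psi(\mathbf{r},E,\boldsymbol{\Omega},t)$$

The amplitude $A(t)$ carries the fast time dependence and obeys the point-kinetics equations, while the shape $\psi$ is recomputed (here by TORT) on a much coarser time grid; a normalization constraint such as $\langle \phi^{\dagger}, \psi/v \rangle = \text{const}$ makes the factorization unique and ties the shape calculation to the point-kinetics parameters.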
Deterministic phase encoding encryption in single shot digital holography
NASA Astrophysics Data System (ADS)
Chen, G.-L.; Yang, W.-K.; Wang, J. C.; Chang, C.-C.
2008-11-01
We demonstrate a deterministic phase-encoded encryption system based on digital holography, adopting a lenticular lens array (LLA) sheet as a phase modulator. In the proposed scheme the holographic patterns of encrypted images are captured by a digital CCD. This work also adopts a novel, simple, and effective technique to numerically suppress the major blurring caused by the zero-order image in the numerical reconstruction. The decryption key is acquired as a digital hologram, called the key hologram; the original information can therefore be retrieved by multiplying the encrypted hologram with a numerically generated phase-encoded wave. The storage and transmission of all holograms can be carried out by all-digital means. Simulation and experimental results demonstrate that the proposed approach operates in a single procedure and produces a satisfactory decrypted image. Finally, rotating and shifting the LLA is used to investigate the tolerance of decryption, demonstrating the feasibility of the holographic encryption scheme and showing that it can also provide higher security.
Entrepreneurs, chance, and the deterministic concentration of wealth.
Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
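The core mechanism — identical expected returns, with realized returns varying by entrepreneur and by time, compounded over many periods — can be illustrated with a minimal simulation. All parameter values (population size, volatility, horizon) are illustrative assumptions, not the paper's:

```python
import random

def top_share(wealth, k=10):
    """Fraction of total wealth held by the k richest individuals."""
    w = sorted(wealth, reverse=True)
    return sum(w[:k]) / sum(w)

random.seed(1)
n, years = 1000, 200
wealth = [1.0] * n          # everyone starts equal
early = None
for t in range(years):
    # Identical expected return, but the realized return varies by
    # entrepreneur and by year -- chance plus compounding only.
    wealth = [w * random.lognormvariate(0.0, 0.3) for w in wealth]
    if t == 9:
        early = top_share(wealth)
late = top_share(wealth)
print(round(early, 3), round(late, 3))  # concentration grows over time
```

No skill differences are modeled; the top share rises toward 1 purely because multiplicative noise compounds, which is the paper's central point.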
Traffic flow theory and characteristics
Hauer, E.; Pagitsas, E.; Shin, B.T.; Maze, T.H.; Hurley, J.W. Jr.
1981-01-01
Estimation of turning flows from automatic counts; a probabilistic model of gap acceptance behavior; sensitivity of fuel-consumption and delay values from traffic simulation; traffic data acquisition from small-format photography; decentralized control of congested street networks; improved estimation of traffic flow for real-time control; and MAXBAND, a program for setting signals on arteries and triangular networks, are discussed.
Identification of the FitzHugh-Nagumo Model Dynamics via Deterministic Learning
NASA Astrophysics Data System (ADS)
Dong, Xunde; Wang, Cong
In this paper, a new method is proposed for the identification of the FitzHugh-Nagumo (FHN) model dynamics via deterministic learning. The FHN model is a classic and simple model for studying spiral waves in excitable media, such as cardiac tissue and biological neural networks. First, the FHN model, described by partial differential equations (PDEs), is transformed into a set of ordinary differential equations (ODEs) using the finite difference method. Second, the dynamics of the ODEs are identified using deterministic learning theory. It is shown that, for the spiral waves generated by the FHN model, the dynamics underlying the recurrent trajectory corresponding to any spatial point can be accurately identified using the proposed approach. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
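As a sketch of the first step — discretizing the FHN PDEs into a set of ODEs by finite differences — the fragment below uses a 1-D medium for brevity (spiral waves require 2-D) and one common parameterization of the reaction terms; all coefficients and grid settings are illustrative assumptions, not the paper's:

```python
import numpy as np

# One common FitzHugh-Nagumo parameterization (illustrative values):
#   dv/dt = D * Laplacian(v) + v - v^3/3 - w
#   dw/dt = eps * (v + a - b*w)
a, b, eps, D = 0.7, 0.8, 0.08, 1.0
nx, dx, dt = 100, 1.0, 0.05

v = np.zeros(nx)
w = np.zeros(nx)
v[:10] = 1.0  # local stimulus at one end of the excitable medium

for _ in range(2000):
    # Finite-difference Laplacian with no-flux (Neumann) boundaries:
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
    lap[0] = (v[1] - v[0]) / dx**2
    lap[-1] = (v[-2] - v[-1]) / dx**2
    # Explicit Euler step of the resulting ODE system (one ODE per grid point):
    dv = D * lap + v - v**3 / 3 - w
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw

print(float(v.min()), float(v.max()))
```

Each grid point contributes one (v, w) pair of ODEs; it is the trajectories of these ODEs that the deterministic learning step then identifies. The explicit scheme is stable here because dt < dx²/(2D).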
Deterministic coupling of delta-doped nitrogen vacancy centers to a nanobeam photonic crystal cavity
Lee, Jonathan C.; Cui, Shanying; Zhang, Xingyu; Russell, Kasey J.; Magyar, Andrew P.; Hu, Evelyn L.; Bracher, David O.; Ohno, Kenichi; McLellan, Claire A.; Alemán, Benjamin; Bleszynski Jayich, Ania; Andrich, Paolo; Awschalom, David; Aharonovich, Igor
2014-12-29
The negatively charged nitrogen vacancy center (NV) in diamond has generated significant interest as a platform for quantum information processing and sensing in the solid state. For most applications, high quality optical cavities are required to enhance the NV zero-phonon line (ZPL) emission. An outstanding challenge in maximizing the degree of NV-cavity coupling is the deterministic placement of NVs within the cavity. Here, we report photonic crystal nanobeam cavities coupled to NVs incorporated by a delta-doping technique that allows nanometer-scale vertical positioning of the emitters. We demonstrate cavities with Q up to ∼24 000 and mode volume V ∼ 0.47(λ/n)³ as well as resonant enhancement of the ZPL of an NV ensemble with a Purcell factor of ∼20. Our fabrication technique provides a first step towards deterministic NV-cavity coupling using spatial control of the emitters.
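For context, the ideal Purcell factor for a resonant emitter with perfect spatial and polarization overlap is (standard cavity-QED result, not from the paper):

$$F_P \;=\; \frac{3}{4\pi^2}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V}$$

With $Q \approx 24\,000$ and $V \approx 0.47\,(\lambda/n)^3$ this upper bound is of order $10^3$; the measured enhancement of ∼20 is far lower chiefly because only a few percent of NV emission falls in the ZPL and the ensemble emitters are not all optimally positioned and tuned.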
On a class of quantum Turing machine halting deterministically
NASA Astrophysics Data System (ADS)
Liang, Min; Yang, Li
2013-05-01
We consider a subclass of quantum Turing machines (QTM), named stationary rotational quantum Turing machines (SR-QTM), which halt deterministically and have a deterministic tape head position. A quantum state transition diagram (QSTD) is proposed to describe SR-QTMs. With QSTD, we construct an SR-QTM which is universal for all near-trivial transformations; this indicates there exists a QTM which is universal for the above subclass. Finally we show that SR-QTM is computationally equivalent to ordinary QTM in the bounded-error setting. Since SR-QTMs have a deterministic tape head position and halt deterministically, the halting scheme problem does not arise for this class of QTMs.
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2011-08-23
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.
Stochastic Model of Traffic Jam and Traffic Signal Control
NASA Astrophysics Data System (ADS)
Shin, Ji-Sun; Cui, Cheng-You; Lee, Tae-Hong; Lee, Hee-Hyol
Traffic signal control is an effective method for alleviating traffic jams, and forecasting traffic density is known to be an important part of Intelligent Transportation Systems (ITS). Several traffic signal control methods are known, such as the random walk, neural network, and Bayesian network methods. In this paper, we propose a new traffic signal control method using a predicted distribution of traffic jams based on a Dynamic Bayesian Network model. First, a forecasting model is built to predict the probabilistic distribution of the traffic jam during each period of the traffic lights. As the forecasting model, a Dynamic Bayesian Network is used to predict the probabilistic distribution of the density of the traffic jam. From measurements at two crossing points for each cycle, the inflow and outflow in each direction and the number of standing vehicles at the previous cycle are obtained; the number of standing vehicles at the k-th cycle is then calculated synchronously. Next, the probabilistic distribution of the density of standing vehicles in each cycle is predicted using the Dynamic Bayesian Network constructed for the traffic jam. A control rule is then deduced that adjusts the split and the cycle to increase the probability that the number of standing vehicles lies between a lower limit and a ceiling. Simulation results using actual traffic data from Kitakyushu city show the effectiveness of the method.
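The per-cycle bookkeeping described above — the standing queue at cycle k derived from the previous queue plus inflow minus outflow — reduces to a simple conservation update. The function and sample numbers below are illustrative, not taken from the paper's data:

```python
# Per-cycle vehicle balance at a signalized approach: the standing queue
# carried into the next cycle is the previous queue plus arrivals minus
# departures, floored at zero (a queue cannot be negative).
def standing_vehicles(q_prev, inflow, outflow):
    return max(0, q_prev + inflow - outflow)

q = 4  # vehicles standing at the start
for inflow, outflow in [(10, 8), (12, 15), (6, 9)]:  # per-cycle counts
    q = standing_vehicles(q, inflow, outflow)
print(q)  # prints 0
```

In the paper this deterministic recursion is wrapped in a Dynamic Bayesian Network so that a full probability distribution over the queue length, not just a point value, is propagated from cycle to cycle.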
Identifying MMORPG Bots: A Traffic Analysis Approach
NASA Astrophysics Data System (ADS)
Chen, Kuan-Ta; Jiang, Jhih-Wei; Huang, Polly; Chu, Hao-Hua; Lei, Chin-Laung; Chen, Wen-Chin
2008-12-01
Massively multiplayer online role playing games (MMORPGs) have become extremely popular among network gamers. Despite their success, one of MMORPG's greatest challenges is the increasing use of game bots, that is, autoplaying game clients. The use of game bots is considered unsportsmanlike and is therefore forbidden. To keep games in order, game police, played by actual human players, often patrol game zones and question suspicious players. This practice, however, is labor-intensive and ineffective. To address this problem, we analyze the traffic generated by human players versus game bots and propose general solutions to identify game bots. Taking Ragnarok Online as our subject, we study the traffic generated by human players and game bots. We find that their traffic is distinguishable by 1) the regularity in the release time of client commands, 2) the trend and magnitude of traffic burstiness in multiple time scales, and 3) the sensitivity to different network conditions. Based on these findings, we propose four strategies and two ensemble schemes to identify bots. Finally, we discuss the robustness of the proposed methods against countermeasures of bot developers, and consider a number of possible ways to manage the increasingly serious bot problem.
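The first traffic feature above — regularity in the release time of client commands — is commonly quantified by the coefficient of variation (CV) of inter-command times: timer-driven bots produce near-constant intervals (CV near 0), while human play is irregular. The traces below are hypothetical, not data from the study:

```python
import statistics

def interarrival_cv(timestamps):
    """Coefficient of variation (stdev/mean) of command inter-arrival times.
    Scripted bots tend to issue commands at near-constant intervals."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# Hypothetical command timestamps in seconds: a timer-driven bot vs. a human
bot = [0.0, 0.50, 1.01, 1.50, 2.00, 2.51, 3.00]
human = [0.0, 0.21, 1.90, 2.05, 4.80, 5.01, 7.40]
print(interarrival_cv(bot) < interarrival_cv(human))  # True
```

A real detector would combine this with the study's other features (multi-scale burstiness, sensitivity to network conditions) rather than thresholding a single statistic.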
Expanding Regional Airport Usage to Accommodate Increased Air Traffic Demand
NASA Technical Reports Server (NTRS)
Russell, Carl R.
2009-01-01
Small regional airports present an underutilized source of capacity in the national air transportation system. This study sought to determine whether a 50 percent increase in national operations could be achieved by limiting demand growth at large hub airports and instead growing traffic levels at the surrounding regional airports. This demand scenario for future air traffic in the United States was generated and used as input to a 24-hour simulation of the national airspace system. Results of the demand generation process and metrics predicting the simulation results are presented, in addition to the actual simulation results. The demand generation process showed that sufficient runway capacity exists at regional airports to offload a significant portion of traffic from hub airports. Predictive metrics forecast a large reduction of delays at most major airports when demand is shifted. The simulation results then show that offloading hub traffic can significantly reduce nationwide delays.
Structural deterministic safety factors selection criteria and verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratio is rooted in the resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
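The probabilistic root of the factor ratio can be stated with the standard first-order reliability index for normally distributed resistive stress $R$ and applied stress $S$ (textbook form, not quoted from the paper):

$$\beta \;=\; \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}$$

A larger $\beta$ corresponds to a smaller failure probability $\Phi(-\beta)$; the paper's deterministic safety index maps the combined deterministic factors onto an index of this kind, which is how the reliability sensitivity of the method can be assessed.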
Single Ion Implantation and Deterministic Doping
Schenkel, Thomas
2010-06-11
The presence of single atoms, e.g. dopant atoms, in sub-100 nm scale electronic devices can affect the device characteristics, such as the threshold voltage of transistors, or the sub-threshold currents. Fluctuations of the number of dopant atoms thus pose a complication for transistor scaling. In a complementary view, new opportunities emerge when novel functionality can be implemented in devices deterministically doped with single atoms. The grand prize of the latter might be a large-scale quantum computer, where quantum bits (qubits) are encoded e.g. in the spin states of electrons and nuclei of single dopant atoms in silicon, or in color centers in diamond. Both the possible detrimental effects of dopant fluctuations and single-atom device ideas motivate the development of reliable single-atom doping techniques, which are the subject of this chapter. Single-atom doping can be approached with top-down and bottom-up techniques. Top-down refers to the placement of dopant atoms into a more or less structured matrix environment, like a transistor in silicon. Bottom-up refers to approaches that introduce single dopant atoms during the growth of the host matrix, e.g. by directed self-assembly and scanning-probe-assisted lithography. Bottom-up approaches are discussed in Chapter XYZ. Since the late 1960s, ion implantation has been a widely used technique to introduce dopant atoms into silicon and other materials in order to modify their electronic properties. It works particularly well in silicon since the damage to the crystal lattice that is induced by ion implantation can be repaired by thermal annealing. In addition, the introduced dopant atoms can be incorporated with high efficiency into lattice positions in the silicon host crystal, which makes them electrically active. This is not the case for e.g. diamond, which makes ion implantation doping to engineer the electrical properties of diamond, especially n-type doping, much harder than for silicon. Ion
Traffic and emission simulation in China based on statistical methodology
NASA Astrophysics Data System (ADS)
Liu, Huan; He, Kebin; Barth, Matthew
2011-02-01
To better understand how traffic control can affect vehicle emissions, a novel TRaffic And Vehicle Emission Linkage (TRAVEL) approach was developed based on local traffic activity and emission data. This approach consists of a two-stage mapping from general traffic information to traffic flow patterns, and then to aggregated emission rates. A total of 39 traffic flow patterns and corresponding emission rates for light-duty and heavy-duty vehicles, classified by emission standard, are generated. As a case study, vehicle activity and emissions during the Beijing Olympics were simulated and compared to a business-as-usual (BAU) scenario. Approximately 42-65% of the gaseous pollutants and 24% of the particle pollutants from cars, taxis, and buses were reduced. These results are validated by traffic and air quality monitoring data during the Olympics, as well as by other emission inventory studies. This approach improves the ability to rapidly predict emission variations from traffic control measures in several typical Chinese cities. Comments on the application of this approach, covering both its advantages and limitations, are included.
NASA Astrophysics Data System (ADS)
Beckenbauer, Thomas
Road traffic is the most disturbing noise source in developed countries. According to a publication of the European Union (EU) at the end of the twentieth century [1], about 40% of the population in the 15 EU member states is exposed to road traffic noise at mean levels exceeding 55 dB(A). Nearly 80 million people, 20% of the population, are exposed to levels exceeding 65 dB(A) during daytime, and more than 30% of the population is exposed to levels exceeding 55 dB(A) during night time. Such high noise levels cause health risks and social disorders (aggressiveness, protest, and helplessness), interference with communication, and disturbance of sleep; the long- and short-term consequences include adverse cardiovascular effects, detrimental hormonal responses (stress hormones), and possible disturbance of the human metabolism (nutrition) and the immune system. Even performance at work and school can be impaired.
NASA Astrophysics Data System (ADS)
Davis, L. C.
2015-03-01
The Texas A&M Transportation Institute estimated that traffic congestion cost the United States $121 billion in 2011 (the latest data available). The cost is due to wasted time and fuel. In addition to accidents and road construction, factors contributing to congestion include large demand, instability of high-density free flow, and selfish behavior of drivers, which produces self-organized traffic bottlenecks. Extensive data collected on instrumented highways in various countries have led to a better understanding of traffic dynamics. From these measurements, Boris Kerner and colleagues developed a new theory called three-phase theory. They identified three major phases of flow observed in the data: free flow, synchronous flow and wide moving jams. The intermediate phase is called synchronous because vehicles in different lanes tend to have similar velocities. This congested phase, characterized by lower velocities yet modestly high throughput, frequently occurs near on-ramps and lane reductions. At present there are only two widely used methods of congestion mitigation: ramp metering and the display of current travel-time information to drivers. To find more effective methods to reduce congestion, researchers perform large-scale simulations using models based on the new theories. An algorithm has been proposed to realize Wardrop equilibria with real-time route information. Such equilibria have equal travel time on alternative routes between a given origin and destination. An active area of current research is the dynamics of connected vehicles, which communicate wirelessly with other vehicles and the surrounding infrastructure. These systems show great promise for improving traffic flow and safety.
NASA Astrophysics Data System (ADS)
Davis, L. Craig
2006-03-01
Congestion in freeway traffic is an example of self-organization in the language of complexity theory. Nonequilibrium, first-order phase transitions from free flow cause complex spatiotemporal patterns. Two distinct phases of congestion are observed in empirical traffic data--wide moving jams and synchronous flow. Wide moving jams are characterized by stopped or slowly moving vehicles within the jammed region, which widens and moves upstream at 15-20 km/h. Above a critical density of vehicles, a sudden decrease in the velocity of a lead vehicle can initiate a transition from metastable states to this phase. Human behaviors, especially delayed reactions, are implicated in the formation of jams. The synchronous flow phase results from a bottleneck such as an on-ramp. Thus, in contrast to a jam, the downstream front is pinned at a fixed location. The name of the phase comes from the equilibration (or synchronization) of speed and flow rate across all lanes caused by frequent vehicle lane changes. Synchronous flow occurs when the mainline flow and the rate of merging from an on-ramp are sufficiently large. Large-scale simulations using car-following models reproduce the physical phenomena occurring in traffic and suggest methods to improve flow and mediate congestion.
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data, and (4) identification of any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features, and more accurate representation of geologic features, as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
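The core operation the abstract describes, dividing out a known source wavelet, can be sketched as frequency-domain deconvolution with water-level stabilization, a common choice for low dielectric-loss media. The wavelet, trace, and water-level value below are invented for illustration, not the paper's 400-MHz data:

```python
import numpy as np

def deterministic_deconvolve(trace, wavelet, water_level=1e-2):
    """Frequency-domain deconvolution with a known source wavelet.

    Divides the trace spectrum by the wavelet spectrum, stabilized by
    a water-level floor so near-zero spectral values do not blow up
    the quotient.
    """
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    power = np.abs(W) ** 2
    denom = np.maximum(power, water_level * power.max())
    return np.fft.irfft(T * np.conj(W) / denom, n)

# Toy check: a trace built as a spike series convolved with the wavelet.
wavelet = np.exp(-0.5 * ((np.arange(32) - 8.0) / 2.0) ** 2)
reflectivity = np.zeros(256)
reflectivity[40], reflectivity[120] = 1.0, -0.5
trace = np.convolve(reflectivity, wavelet)[:256]
est = deterministic_deconvolve(trace, wavelet)
# The strongest recovered spike sits near sample 40, the negative one near 120.
```

Because the division also removes the wavelet's phase delay, the recovered spikes land at the reflectivity positions rather than at the wavelet peaks.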
Non-deterministic analysis of a liquid polymeric-film drying process
Chen, K.S.; Cairncross, R.A.
1997-04-01
In this study the authors employed the Monte Carlo/Latin Hypercube sampling technique to generate input parameters with prescribed uncertainty distributions for a liquid polymeric-film drying model. The one-dimensional drying model employed was that developed by Cairncross et al. The authors found that non-deterministic analysis with Monte Carlo/Latin Hypercube sampling provides a useful tool for characterizing the two responses of a liquid polymeric-film drying process: residual solvent volume and maximum solvent partial vapor pressure. More precisely, such analysis not only provides estimates of the statistical variation of the response variables but also yields more realistic estimates of their mean values, which can differ significantly from those calculated by deterministic simulation. For input-parameter uncertainties in the range of 2 to 10% of their respective means, variations of the response variables were found to be comparable to the mean values.
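Latin Hypercube sampling as used here can be sketched in a few lines of NumPy. The stratum-plus-jitter construction below is the standard generic scheme, not taken from the paper:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Latin Hypercube sample on the unit hypercube.

    Each dimension is cut into n_samples equal strata; exactly one
    point lands in each stratum, with strata paired across dimensions
    by independent random permutations.
    """
    rng = np.random.default_rng(seed)
    jitter = rng.uniform(size=(n_samples, n_dims))
    strata = np.column_stack([rng.permutation(n_samples) for _ in range(n_dims)])
    return (strata + jitter) / n_samples

pts = latin_hypercube(100, 2, seed=0)
# Each of the 100 strata of dimension 0 holds exactly one sample:
counts = np.bincount((pts[:, 0] * 100).astype(int), minlength=100)
print(counts.min(), counts.max())  # 1 1
```

Samples on the unit cube are then mapped through the inverse CDFs of the prescribed input distributions before being fed to the drying model.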
A deterministic, gigabit serial timing, synchronization and data link for the RHIC LLRF
Hayes, T.; Smith, K.S.; Severino, F.
2011-03-28
A critical capability of the new RHIC low-level RF (LLRF) system is the ability to synchronize signals across multiple locations; the 'Update Link' provides this functionality. The Update Link is a deterministic serial data link, based on the Xilinx RocketIO protocol, that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time-stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed as a flexible, modular system constructed of numerous independent RF Controller chassis. The Update Link system was designed to provide synchronization among all of these chassis: it offers a low-latency, deterministic data path for broadcasting information to all receivers in the system. It is based on a central hub, the Update Link Master (ULM), which generates the data stream distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.
Urban daytime traffic noise prediction models.
da Paz, Elaine Carvalho; Zannin, Paulo Henrique Trombetta
2010-04-01
An evaluation was made of the acoustic environment generated by an urban highway using in situ measurements. Based on the data collected, mathematical models were designed for the main sound levels (Leq, L10, L50, and L90) as functions of the correlations between sound levels and between the equivalent sound pressure level and traffic variables. Four valid groups of mathematical models were generated to calculate daytime sound levels, and these were statistically validated. It was found that the new models are as accurate as other models in the literature for assessing and predicting daytime traffic noise, and that they stand out from existing models through two characteristics: their linearity and their use of class intervals.
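A model of the stated linear form can be sketched with an ordinary least-squares fit. The Leq = a + b·log10(Q) form and all numbers below are illustrative assumptions, not the paper's fitted models:

```python
import numpy as np

# Hypothetical hourly observations: traffic volume Q (veh/h) and the
# measured equivalent sound level Leq (dB(A)).
Q = np.array([200.0, 400.0, 800.0, 1600.0, 3200.0])
leq = np.array([62.1, 65.0, 68.2, 71.1, 74.0])

# Classic linear-in-log form: Leq = a + b * log10(Q).
b, a = np.polyfit(np.log10(Q), leq, 1)
print(round(a, 1), round(b, 1))
```

A slope near 10 dB per decade of traffic volume, with further terms for speed or heavy-vehicle fraction, is the usual shape such regression models take.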
NASA Astrophysics Data System (ADS)
Muthalif, Asan G. A.; Wahid, Azni N.; Nor, Khairul A. M.
2014-02-01
Engineering systems such as aircraft, ships, and automobiles are built-up structures. Dynamically, they are thought of as being fabricated from many components classified as 'deterministic subsystems' (DS) and 'non-deterministic subsystems' (Non-DS). The response of a DS is deterministic in nature and is analysed using deterministic modelling methods such as the finite element (FE) method. The response of a Non-DS is statistical in nature and is estimated using statistical modelling techniques such as statistical energy analysis (SEA). The SEA method uses a power-balance equation, in which any external input to a subsystem must be represented in terms of power. Often the input force is taken as a point force, and the ensemble-average power delivered by a point force is well established. However, the external input can also be applied in the form of moments exerted by a piezoelectric (PZT) patch actuator. To apply the SEA method to input moments, a mathematical representation of the moment generated by a PZT patch in the form of average power is needed, which is attempted in this paper. A simply supported plate with an attached PZT patch is taken as a benchmark model. An analytical solution for the average power is derived using a mobility approach. The ensemble average of the power delivered by the PZT patch actuator to the benchmark model under structural uncertainties is also simulated using a Lagrangian method and FEA software. The analytical estimate is compared with the Lagrangian model and the FE method for validation. The effects of the size and location of the PZT actuators on the power delivered to the plate are then investigated.
Chambers, David W
2005-01-01
Groups naturally promote their strengths and prefer values and rules that give them an identity and an advantage. This shows up as generational tensions across cohorts who share common experiences, including common elders. Dramatic cultural events in America since 1925 can help create an understanding of the differing value structures of the Silents, the Boomers, Gen Xers, and the Millennials. Differences in how these generations see motivation and values, fundamental reality, relations with others, and work are presented, as are some applications of these differences to the dental profession. PMID:16623137
Deterministic Modeling of the High Temperature Test Reactor
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic-reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of models for the initial criticality, a 19-column thin annular core, and the fully loaded critical core with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281-group energy structure is used in the DRAGON calculations. HEXPEDITE, the hexagonal-z full-core solver used in this study, is based on the Green's function solution of the transverse-integrated equations. In addition, two Monte Carlo (MC) codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for DRAGON and the nodal diffusion solver. The results from this study show a consistent bias of 2-3% in the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature; the ENDF/B-VII graphite and U-235 cross sections appear to be its main source. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
Observations on traffic flow patterns and traffic engineering practice
NASA Astrophysics Data System (ADS)
Wang, Feng; Gao, Lixin
2002-07-01
The Border Gateway Protocol (BGP) allows autonomous systems (ASs) to apply diverse routing policies for selecting routes and propagating reachability information to other ASs. This enables network operators to configure routing policies so as to control traffic flows between ASs. However, BGP was not designed for inter-AS traffic engineering, which makes it difficult to implement effective routing policies that address network performance and utilization problems. Network operators usually tweak routing policies to shift inter-domain traffic among the available links; this can lead to undesirable traffic flow patterns across the Internet and degrade Internet traffic performance. In this paper, we present several observations on Internet traffic flow patterns and derive the routing policies that give rise to them. First, our results show that an AS can reach as much as 20% of the prefixes via a peer link even though there is a path via a customer link, and as much as 80% of the prefixes via a provider link even though there is a path via a peer link. Second, we analyze the cause of the prevalence of these traffic patterns; our analysis shows that an AS typically does not receive the potential route from its customers or peers. Third, we find that for some prefixes alternate routes have lower propagation delay than the chosen routes, which shows that some traffic engineering practices might adversely affect Internet performance.
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computation experiments using real-world air traffic network data together with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, meaning that our proposed model reduces computational complexity. PMID:26180842
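The weighted biobjective 0-1 structure can be illustrated with a toy weighted-sum scalarization solved by enumeration. The delay/risk numbers and the two-of-four routing constraint are invented placeholders, not the SATNFO formulation:

```python
from itertools import product

# Toy biobjective 0-1 program: pick which flights to route (x_i = 1),
# trading total delay against total operational risk.
delay = [3.0, 5.0, 2.0, 4.0]
risk = [0.40, 0.10, 0.50, 0.20]

def best_assignment(w):
    """Minimize w*delay + (1-w)*risk subject to routing exactly two flights."""
    feasible = (x for x in product((0, 1), repeat=4) if sum(x) == 2)
    return min(feasible,
               key=lambda x: sum(xi * (w * d + (1 - w) * r)
                                 for xi, d, r in zip(x, delay, risk)))

print(best_assignment(1.0))  # pure delay objective -> (1, 0, 1, 0)
print(best_assignment(0.0))  # pure risk objective  -> (0, 1, 0, 1)
```

Sweeping the weight w between 0 and 1 traces out the Pareto-optimal trade-offs between the two objectives; real instances replace the enumeration with an integer-programming solver.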
An intelligent traffic controller
Kagolanu, K.; Fink, R.; Smartt, H.; Powell, R.; Larsen, E.
1995-12-01
A controller with advanced control logic can significantly improve traffic flows at intersections. In this vein, this paper explores fuzzy rules and algorithms to improve intersection operation by rationalizing phase changes and green times. The fuzzy control logic is enhanced by exploring neural networks for families of membership functions and for ideal cost functions. The concepts of fuzzy logic control are carried forward into the controller architecture. Finally, the architecture and its modules are discussed. In essence, the control logic and architecture of an intelligent controller are explored.
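A rule base of the kind described, fuzzy memberships feeding a green-time decision, might look like this minimal Mamdani-style sketch. The memberships, rules, and input universes are hypothetical, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_extension(queue_len, arrival_rate):
    """Fuzzy green-time extension in seconds (illustrative rules only)."""
    # Input memberships (assumed universes: queue 0-20 veh, rate 0-1 veh/s).
    q_short, q_long = tri(queue_len, -1, 0, 10), tri(queue_len, 5, 20, 35)
    r_low, r_high = tri(arrival_rate, -0.1, 0.0, 0.5), tri(arrival_rate, 0.3, 1.0, 1.7)
    # Rule firing strengths (min as fuzzy AND) paired with crisp outputs.
    rules = [
        (min(q_short, r_low), 0.0),   # light demand -> no extension
        (min(q_long, r_high), 10.0),  # heavy demand -> long extension
        (min(q_short, r_high), 4.0),  # few queued but arriving fast
        (min(q_long, r_low), 4.0),    # long queue, slow arrivals
    ]
    # Weighted-average defuzzification.
    den = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / den if den else 0.0

print(round(green_extension(18, 0.9), 1))   # heavy demand -> 10.0
print(round(green_extension(1, 0.05), 1))   # light demand -> 0.0
```

The neural-network enhancement the abstract mentions would tune the membership breakpoints and rule outputs instead of fixing them by hand as done here.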
Demonstration of alternative traffic information collection and management technologies
NASA Astrophysics Data System (ADS)
Knee, Helmut E.; Smith, Cy; Black, George; Petrolino, Joe
2004-03-01
Many of the components associated with the deployment of Intelligent Transportation Systems (ITS) to support a traffic management center (TMC), such as remote-control cameras, traffic speed detectors, and variable message signs, have been available for many years. Their deployment, however, has been expensive, applied primarily to freeways and interstates, and concentrated in the major metropolitan areas of the US rather than in smaller cities. The Knoxville (Tennessee) Transportation Planning Organization is sponsoring a project that will test the integration of several technologies to estimate near-real-time traffic information that could eventually be used by travelers to make better and more informed decisions about their travel needs. The uniqueness of this demonstration is that it will seek to predict traffic conditions based on cellular phone signals already being collected by cellular communications companies. Information about the average speed on various portions of local arterials and about incident identification (incident location) will be collected and compared to similar data generated by "probe vehicles". Successful validation of the speed information generated from cell phone data will allow traffic data to be generated much more economically, using technologies that are minimally invasive to the infrastructure. Furthermore, once validated, traffic information could be provided to the traveling public, allowing them to make better decisions about trips. More efficient trip planning and execution can reduce congestion and the associated vehicle emissions. This paper discusses the technologies, the demonstration project, the project details, and future directions.
Estimating the epidemic threshold on networks by deterministic connections
Li, Kezan; Zhu, Guanghu; Fu, Xinchu; Small, Michael
2014-12-15
For many epidemic networks, some connections between nodes are treated as deterministic, while the remainder are random with varying connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks from information about the deterministic connections alone. Moreover, these models account for generic nonuniform stochastic connections and heterogeneous community structure. The epidemic thresholds are estimated via inequalities giving upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic ones, this work provides a feasible and effective method for estimating epidemic thresholds in real epidemic networks.
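The spectral estimate underlying this approach can be sketched directly: for an SIS-type process the threshold is approximately the reciprocal of the adjacency spectral radius, and since adding (stochastic) edges can only increase the spectral radius of a nonnegative matrix, the deterministic subgraph alone yields a bound. The 4-node network below is a toy example, not one of the paper's models:

```python
import numpy as np

def epidemic_threshold(adj):
    """SIS threshold estimate: tau_c ~ 1 / lambda_max(adjacency)."""
    return 1.0 / np.linalg.eigvalsh(adj)[-1]  # eigvalsh sorts ascending

# Deterministic backbone of a toy network: a 4-node ring plus the
# chord 1-3 (symmetric 0/1 adjacency matrix).
A_det = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
], dtype=float)

# Any stochastic edges added on top only increase lambda_max, so the
# backbone's threshold upper-bounds the full network's threshold.
tau_upper = epidemic_threshold(A_det)
print(round(tau_upper, 3))
```

For this backbone the spectral radius is (1 + sqrt(17))/2, so the threshold bound is about 0.39; the paper's contribution is sharpening such bounds when the stochastic part is nonuniform.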
Deterministic teleportation of electrons in a quantum dot nanostructure.
de Visser, R L; Blaauboer, M
2006-06-23
We present a proposal for deterministic quantum teleportation of electrons in a semiconductor nanostructure consisting of a single and a double quantum dot. The central issue addressed in this Letter is how to design and implement the most efficient deterministic teleportation protocol for this system, in terms of the required number of single- and two-qubit operations. Using a group-theoretical analysis, we show that deterministic teleportation requires a minimum of three single-qubit rotations and two entangling (square-root-of-SWAP) operations. These can be implemented for spin qubits in quantum dots using electron spin resonance (for single-spin rotations) and the exchange interaction (for square-root-of-SWAP operations).
DETERMINISTIC TRANSPORT METHODS AND CODES AT LOS ALAMOS
J. E. MOREL
1999-06-01
The purposes of this paper are to: present a brief history of deterministic transport methods development at Los Alamos National Laboratory from the 1950s to the present; discuss the current status and capabilities of deterministic transport codes at Los Alamos; and discuss future transport needs and possible future research directions. Our discussion of methods research necessarily includes only a small fraction of the total research actually done. The works included represent a subjective choice on the part of the author, strongly influenced by his personal knowledge and experience. The remainder of this paper is organized in four sections: the first relates to deterministic methods research performed at Los Alamos, the second to production codes developed at Los Alamos, the third to the current status of transport codes at Los Alamos, and the fourth to future research directions at Los Alamos.
Deterministic sensing matrices in compressive sensing: a survey.
Nguyen, Thu L N; Shin, Yoan
2013-01-01
Compressive sensing is a sampling method that provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. These matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
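One classical deterministic construction, DeVore's binary matrices built from polynomials over a prime field, can be sketched and its coherence checked numerically. This is a standard example from the literature, offered here as an illustration rather than as the survey's specific content:

```python
import numpy as np
from itertools import product

def devore_matrix(p, r=2):
    """DeVore's deterministic binary sensing matrix (p prime).

    One column per polynomial of degree < r over Z_p; rows indexed by
    pairs (x, y) in Z_p x Z_p; the entry is 1 iff Q(x) = y.  Two
    distinct degree-<r polynomials agree at most r-1 points, which
    bounds the coherence by (r-1)/p after column normalization.
    """
    polys = list(product(range(p), repeat=r))  # coefficient tuples
    A = np.zeros((p * p, p ** r))
    for j, coeffs in enumerate(polys):
        for x in range(p):
            y = sum(c * x ** k for k, c in enumerate(coeffs)) % p
            A[x * p + y, j] = 1.0
    return A / np.sqrt(p)  # unit-norm columns

A = devore_matrix(5)
G = np.abs(A.T @ A) - np.eye(A.shape[1])  # off-diagonal Gram magnitudes
print(A.shape, round(float(G.max()), 2))  # (25, 25) 0.2
```

The measured coherence of 1/5 matches the (r-1)/p bound, and the 0/1 structure is exactly what gives deterministic matrices their storage and fast-implementation advantage over dense random ones.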
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing deterministic structural safety factor has been identified as an inherent violation of the error-propagation laws when statistical data are reduced to deterministic values and then combined algebraically through successive structural computations. These errors are restricted to the applied-stress computations, and because the means and variations of the tolerance-limit format are added serially, the errors are positive, cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniformly safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and on its improvement and transition to absolute reliability.
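The conservatism mechanism, serially adding tolerance limits instead of propagating variances, is easy to see numerically. The loads and standard deviations below are illustrative, not from the report:

```python
import math

# Three independent load contributions, each with mean m_i and
# standard deviation s_i (illustrative numbers).
means = [100.0, 60.0, 40.0]
sigmas = [5.0, 4.0, 3.0]

# Deterministic practice: add each 3-sigma tolerance limit serially.
serial_limit = sum(m + 3 * s for m, s in zip(means, sigmas))

# Error-propagation (root-sum-square) result at the same 3-sigma level:
# variances add, so the combined sigma is sqrt(sum of s_i^2).
rss_limit = sum(means) + 3 * math.sqrt(sum(s * s for s in sigmas))

print(serial_limit, round(rss_limit, 1))  # 236.0 221.2
```

The serial sum overstates the 3-sigma limit because it treats the worst cases of independent variables as if they occurred simultaneously; the gap grows with the number of combined terms, which is the cumulative conservatism the abstract describes.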
Deterministic and efficient quantum cryptography based on Bell's theorem
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-05-15
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology.
Risk estimates for deterministic health effects of inhaled weapons grade plutonium.
Scott, Bobby R; Peterson, Vern L
2003-09-01
Risk estimates for deterministic effects of inhaled weapons-grade plutonium (WG Pu) are needed to evaluate potential serious harm to (1) U.S. Department of Energy nuclear workers from accidental or other work-place releases of WG Pu; and (2) the public from terrorist actions resulting in the release of WG Pu to the environment. Deterministic health effects (the most serious radiobiological consequences to humans) can arise when large amounts of WG Pu are taken into the body. Inhalation is considered the most likely route of intake during work-place accidents or during a nuclear terrorism incident releasing WG Pu to the environment. Our current knowledge about radiation-related harm is insufficient for generating precise estimates of risk for a given WG Pu exposure scenario. This relates largely to uncertainties associated with currently available risk and dosimetry models. Thus, rather than generating point estimates of risk, distributions that account for variability/uncertainty are needed to properly characterize potential harm to humans from a given WG Pu exposure scenario. In this manuscript, we generate and summarize risk distributions for deterministic radiation effects in the lungs of nuclear workers from inhaled WG Pu particles (standard isotopic mix). These distributions were developed using NUREG/CR-4214 risk models and time-dependent, dose conversion factor data based on Publication 30 of the International Commission on Radiological Protection. Dose conversion factors based on ICRP Publication 30 are more relevant to deterministic effects than are the dose conversion factors based on ICRP Publication 66, which relate to targets for stochastic effects. Risk distributions that account for NUREG/CR-4214 parameter and model uncertainties were generated using the Monte Carlo method. Risks were evaluated for both lethality (from radiation pneumonitis) and morbidity (due to radiation-induced respiratory dysfunction) and were found to depend strongly on absorbed
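The Monte Carlo step can be sketched with a generic hazard-function (Weibull-form) dose-response model of the kind used for deterministic effects. The parameter distributions below are invented placeholders, not NUREG/CR-4214 values:

```python
import numpy as np

rng = np.random.default_rng(42)

def deterministic_risk(dose, d50, shape):
    """Weibull hazard-function dose-response for deterministic effects:
    R = 1 - exp(-ln2 * (D/D50)^V), so R = 0.5 at D = D50."""
    return 1.0 - np.exp(-np.log(2) * (dose / d50) ** shape)

# Propagate parameter uncertainty by Monte Carlo (illustrative
# distributions for the median dose D50 and the shape factor V).
n = 100_000
d50 = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)   # Gy
shape = rng.normal(loc=3.0, scale=0.5, size=n)
risks = deterministic_risk(8.0, d50, shape)

# The output is a risk *distribution*, not a point estimate.
print(round(float(np.median(risks)), 2))
```

Summarizing the sampled risks by percentiles rather than a single value is what distinguishes this uncertainty-aware treatment from a deterministic point calculation.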
Elliptical quantum dots as on-demand single photons sources with deterministic polarization states
Teng, Chu-Hsiang; Demory, Brandon; Ku, Pei-Cheng; Zhang, Lei; Hill, Tyler A.; Deng, Hui
2015-11-09
In quantum information, control of the single photon's polarization is essential. Here, we demonstrate single photon generation in a pre-programmed and deterministic polarization state, on a chip-scale platform, utilizing site-controlled elliptical quantum dots (QDs) synthesized by a top-down approach. The polarization from the QD emission is found to be linear with a high degree of linear polarization and parallel to the long axis of the ellipse. Single photon emission with orthogonal polarizations is achieved, and the dependence of the degree of linear polarization on the QD geometry is analyzed.
Investigating the Use of 3-D Deterministic Transport for Core Safety Analysis
H. D. Gougar; D. Scott
2004-04-01
An LDRD (Laboratory Directed Research and Development) project is underway at the Idaho National Laboratory (INL) to demonstrate the feasibility of using a three-dimensional multi-group deterministic neutron transport code (Attila®) to perform global (core-wide) criticality, flux and depletion calculations for safety analysis of the Advanced Test Reactor (ATR). This paper discusses the ATR, model development, capabilities of Attila, generation of the cross-section libraries, comparisons to experimental results for Advanced Fuel Cycle (AFC) concepts, and future work planned with Attila.
Deterministic and stochastic modifications of the Stokes formula
NASA Astrophysics Data System (ADS)
Ellmann, A.
2009-04-01
Several recent space technologies have improved our knowledge of the global gravity field and of the Earth's topography. However, space-borne data are limited not only in accuracy but also in spatial resolution. For instance, the ongoing satellite gravimetric mission GRACE has resolved the long-wavelength component of the global geoid with an accuracy of a few cm, whilst the spatial resolution of this information is limited to about 200 km. Even though the first satellite gradiometry mission GOCE (to be launched by the European Space Agency in 2009) will be able to further enhance the intermediate-wavelength information of the gravity field, it will do so only down to about 65 km spatial resolution. Further improvements to Earth gravity models (EGM) at shorter wavelengths must therefore still come from terrestrial surveys and satellite altimetry (over the oceans). The resolution of the new combined model EGM08 is 5 arc-minutes (corresponding to 9 km, i.e. to spectral degree 2160). For many applications, however, the resolution of EGM08 may not be sufficient: solving a large variety of engineering tasks requires a high-resolution (2-3 km) regional geoid model with 1 cm accuracy. Obviously, owing to the tremendous computational burden and the voids in terrestrial gravity data, it is unrealistic to develop such an ultra-high-degree spectral model of the global geoid. Therefore, local terrestrial data are still required for high-resolution regional geoid modelling. Regional improvements of global geoid models can be obtained by modifying Stokes's integral formula. This method combines an appropriate EGM spectrum with local terrestrial data in a truncated Stokes's integral. The integral argument is usually a residual gravity anomaly, obtained by subtracting the EGM-derived long-wavelength contribution from the complete gravity anomaly. This contribution focuses on the integral part of the formula, which generates the short-wavelength
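In symbols, a common way to write such a modified, truncated Stokes scheme is the following (generic textbook notation, assumed here rather than taken from the abstract):

```latex
% Geoid height from a truncated Stokes integral over the near-zone cap
% \sigma_0, with a modified kernel S^L(\psi) and the long-wavelength
% EGM contribution added back:
N \;=\; \frac{R}{4\pi\gamma} \iint_{\sigma_0} S^{L}(\psi)\, \Delta g \,\mathrm{d}\sigma
   \;+\; \frac{R}{2\gamma} \sum_{n=2}^{L} b_n\, \Delta g_n
```

where R is the mean Earth radius, gamma the normal gravity, psi the spherical distance, Delta g the gravity anomaly (with Delta g_n its spherical-harmonic degree components from the EGM), and b_n the modification coefficients that tune the split between the terrestrial and satellite-derived parts.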
Automatic Data Traffic Control on DSM Architecture
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry; Kwak, Dochan (Technical Monitor)
2000-01-01
We study data traffic on distributed shared memory machines and conclude that data placement and grouping improve the performance of scientific codes. We present several methods that users can employ to improve data traffic in their codes. We report on the implementation of a tool which detects the code fragments causing data congestion and advises the user on improvements to data routing in these fragments. The capabilities of the tool include deduction of data alignment and affinity from the source code; detection of code constructs having abnormally high cache or TLB misses; and generation of data placement constructs. We demonstrate the capabilities of the tool in experiments with the NAS parallel benchmarks and with a simple computational fluid dynamics application, ARC3D.
Toxicity of inhaled traffic related particulate matter
NASA Astrophysics Data System (ADS)
Gerlofs-Nijland, Miriam E.; Campbell, Arezoo; Miller, Mark R.; Newby, David E.; Cassee, Flemming R.
2009-02-01
Traffic-generated ultrafine particulates may play a major role in the development of adverse health effects. However, little is known about harmful effects caused by recurring exposure. We hypothesized that repeated exposure to particulate matter results in adverse pulmonary and systemic toxic effects. Exposure to diesel engine exhaust resulted in signs of oxidative stress in the lung, impaired coagulation, and changes in the immune system. Pro-inflammatory cytokine levels were decreased in some regions of the brain but increased in the striatum, implying that exposure to diesel engine exhaust may selectively aggravate neurological impairment. Data from these three studies suggest that exposure to traffic-related PM can mediate changes in the vasculature and brain of healthy rats. To what extent these changes may contribute to chronic neurodegenerative or vascular diseases is at present unclear.
Traffic Calming: A Social Issue
ERIC Educational Resources Information Center
Crouse, David W.
2004-01-01
Substantial urban growth fueled by a strong economy often results in heavy traffic, making streets less hospitable. Traffic calming is one response to the pervasiveness of the automobile. The issues concern built environments and involve multiple actors reflecting different interests. The issues are rarely technical and involve combinations of…
Traffic Safety for Special Children
ERIC Educational Resources Information Center
Wilson, Val; MacKenzie, R. A.
1974-01-01
In a 6-week unit on traffic education using flannel graphs, filmstrips, and models, 12 special class students (IQ 55-82), aged 7 to 11 years, learned six basic skills, including crossing a road, obeying traffic lights, and walking on country roads. (CL)
Probabilistic description of traffic flow
NASA Astrophysics Data System (ADS)
Mahnke, R.; Kaupužs, J.; Lubashevsky, I.
2005-03-01
A stochastic description of traffic flow, called probabilistic traffic flow theory, is developed. The general master equation is applied to relatively simple models to describe the formation and dissolution of traffic congestions. Our approach is mainly based on spatially homogeneous systems like periodically closed circular rings without on- and off-ramps. We consider a stochastic one-step process of growth or shrinkage of a car cluster (jam). As generalization we discuss the coexistence of several car clusters of different sizes. The basic problem is to find a physically motivated ansatz for the transition rates of the attachment and detachment of individual cars to a car cluster consistent with the empirical observations in real traffic. The emphasis is put on the analogy with first-order phase transitions and nucleation phenomena in physical systems like supersaturated vapour. The results are summarized in the flux-density relation, the so-called fundamental diagram of traffic flow, and compared with empirical data. Different regimes of traffic flow are discussed: free flow, congested mode as stop-and-go regime, and heavy viscous traffic. The traffic breakdown is studied based on the master equation as well as the Fokker-Planck approximation to calculate mean first passage times or escape rates. Generalizations are developed to allow for on-ramp effects. The calculated flux-density relation and characteristic breakdown times coincide with empirical data measured on highways. Finally, a brief summary of the stochastic cellular automata approach is given.
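The one-step master-equation picture described above can be sketched with a Gillespie simulation of the growth and shrinkage of a single car cluster. The constant attachment and detachment rates `w_plus` and `w_minus` below are illustrative assumptions, not the empirically motivated transition rates discussed in the paper.

```python
import random

def simulate_cluster(w_plus, w_minus, n0=0, t_max=500.0, seed=1):
    """Gillespie simulation of a one-step (birth-death) process for the
    size n of a single car cluster: n -> n+1 at rate w_plus (a car attaches),
    n -> n-1 at rate w_minus (a car detaches, only possible if n > 0)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    sizes = []
    while t < t_max:
        rate_up = w_plus
        rate_down = w_minus if n > 0 else 0.0
        total = rate_up + rate_down
        t += rng.expovariate(total)          # waiting time to next event
        if rng.random() * total < rate_up:
            n += 1                           # attachment
        else:
            n -= 1                           # detachment
        sizes.append(n)
    return sizes

# With attachment slower than detachment the cluster stays small (free flow);
# reversing the balance makes the jam grow steadily (congested phase).
free = simulate_cluster(w_plus=0.5, w_minus=1.0)
jam = simulate_cluster(w_plus=1.5, w_minus=1.0)
```

The analogy with nucleation enters when `w_plus` and `w_minus` are made size-dependent, so that clusters below a critical size tend to dissolve while larger ones grow, as in supersaturated vapour.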
From deterministic cellular automata to coupled map lattices
NASA Astrophysics Data System (ADS)
García-Morales, Vladimir
2016-07-01
A general mathematical method is presented for the systematic construction of coupled map lattices (CMLs) out of deterministic cellular automata (CAs). The entire CA rule space is addressed by means of a universal map for CAs that we have recently derived and that does not depend on any freely adjustable parameters. The CMLs thus constructed are termed real-valued deterministic cellular automata (RDCA) and encompass all deterministic CAs in rule space in the asymptotic limit κ → 0 of a continuous parameter κ. Thus, RDCAs generalize CAs in such a way that they constitute CMLs when κ is finite and nonvanishing. In the limit κ → ∞ all RDCAs are shown to exhibit a global homogeneous fixed point that attracts all initial conditions. A new bifurcation is discovered for RDCAs and its location is exactly determined from the linear stability analysis of the global quiescent state. In this bifurcation, fuzziness gradually begins to intrude into a purely deterministic CA-like dynamics. The mathematical method presented makes it possible to gain insight into some highly nontrivial behavior found after the bifurcation.
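For readers unfamiliar with CMLs, the sketch below shows the generic structure such lattices share: a ring of continuous-valued sites that each apply a local map and then mix with their neighbours. This is a standard diffusively coupled logistic-map lattice used purely for illustration, not the paper's universal-map RDCA construction; the map, coupling form, and parameter values are assumptions.

```python
import math  # (not strictly needed; kept for clarity of the real-valued setting)

def cml_step(x, eps, f=lambda v: 4.0 * v * (1.0 - v)):
    """One synchronous update of a diffusively coupled map lattice on a ring:
    each site first applies the local map f, then mixes a fraction eps of
    its value with its two nearest neighbours (as in a CA neighbourhood)."""
    n = len(x)
    fx = [f(v) for v in x]
    return [(1 - eps) * fx[i] + 0.5 * eps * (fx[i - 1] + fx[(i + 1) % n])
            for i in range(n)]

def count_perturbed(eps, steps=10):
    """Perturb one site of a uniform lattice and count how many sites
    differ from the synchronized background after `steps` updates."""
    x = [0.2] * 32
    x[16] = 0.5
    for _ in range(steps):
        x = cml_step(x, eps)
    return sum(1 for v in x if abs(v - x[0]) > 1e-6)

print(count_perturbed(0.0))  # 1: uncoupled sites evolve independently
print(count_perturbed(0.3))  # coupling spreads the perturbation along the ring
```

With `eps = 0` the dynamics are strictly local and the perturbation stays at one site; any nonzero coupling lets it propagate within a light cone of one site per step, which is the CA-like causal structure the continuous lattice inherits.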
Deterministic retrieval of complex Green's functions using hard X rays.
Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E
2009-01-30
A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory. PMID:19257417
A Unit on Deterministic Chaos for Student Teachers
ERIC Educational Resources Information Center
Stavrou, D.; Assimopoulos, S.; Skordoulis, C.
2013-01-01
A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…
Risk-based versus deterministic explosives safety criteria
Wright, R.E.
1996-12-01
The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, "DoD Ammunition and Explosives Safety Standard." Risk-based criteria selected for comparison are those used by the government of Switzerland, "Technical Requirements for the Storage of Ammunition (TLM 75)." The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.
Deterministic dense coding and faithful teleportation with multipartite graph states
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-05-15
We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the sufficient and necessary condition of a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
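The invertibility condition above can be checked mechanically; for graph states the natural arithmetic is over GF(2). The helper below is a sketch: it tests binary invertibility by Gaussian elimination with XOR row operations, and the example matrices are illustrative, not taken from the paper.

```python
def invertible_gf2(m):
    """Check whether a square 0/1 matrix is invertible over GF(2)
    by Gaussian elimination, using XOR as the row operation."""
    a = [row[:] for row in m]          # work on a copy
    n = len(a)
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return False               # no pivot in this column -> singular
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(n):
            if r != col and a[r][col]:
                a[r] = [x ^ y for x, y in zip(a[r], a[col])]
    return True

# Illustrative sender-receiver matrices: a perfect matching between two
# senders and two receivers is invertible, while two receivers connected
# identically to the senders give linearly dependent rows over GF(2).
print(invertible_gf2([[1, 0], [0, 1]]))  # True
print(invertible_gf2([[1, 1], [1, 1]]))  # False
```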
Integrating local static and dynamic information for routing traffic
NASA Astrophysics Data System (ADS)
Wang, Wen-Xu; Yin, Chuan-Yang; Yan, Gang; Wang, Bing-Hong
2006-07-01
The efficiency of traffic routing on complex networks can be reflected by two key measurements, i.e., the network capacity and the average travel time of data packets. In this paper we propose a mixing routing strategy, integrating local static and dynamic information, for enhancing the efficiency of traffic on scale-free networks. The strategy is governed by a single parameter. Simulation results show that there exists an optimal parameter value that maximizes the network capacity while reducing the packet travel time. Compared with the strategy of adopting exclusively local static information, the new strategy shows its advantages in improving the efficiency of the system. A detailed analysis of the mixing strategy is provided to explain its effects on traffic routing. The work indicates that effectively utilizing the larger-degree nodes plays a key role in scale-free traffic systems.
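One plausible reading of such a mixed local rule is to choose the next hop with probability proportional to k_i (n_i + 1)^β, combining a neighbor's degree k_i (static) with its current queue length n_i (dynamic) through a single exponent β. The weight form below is an assumption for illustration, not necessarily the paper's exact formula.

```python
import random

def choose_next_hop(neighbors, degree, queue_len, beta, rng=random):
    """Pick the next hop among `neighbors`, weighting candidate i by
    k_i * (n_i + 1)**beta: degree attracts traffic toward hubs, while
    beta < 0 steers packets away from congested (long-queue) nodes."""
    weights = [degree[i] * (queue_len[i] + 1) ** beta for i in neighbors]
    total = sum(weights)
    r = rng.random() * total
    for node, w in zip(neighbors, weights):
        r -= w
        if r <= 0:
            return node
    return neighbors[-1]   # numerical safety net

# Toy example: node 0 is a high-degree hub with a long queue, node 1 a
# low-degree but idle node; with beta = -3 most traffic avoids the hub.
degree = {0: 10, 1: 2}
queue_len = {0: 8, 1: 0}
rng = random.Random(0)
picks = [choose_next_hop([0, 1], degree, queue_len, beta=-3.0, rng=rng)
         for _ in range(1000)]
print(picks.count(1) / 1000)  # large majority of picks go to node 1
```

Setting β = 0 recovers the purely static, degree-only strategy, which is the baseline the abstract compares against.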
Design of automated system for management of arrival traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1989-01-01
The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. The design of two of these tools, the Descent Advisor, which provides automation tools for managing descent traffic, and the Traffic Management Advisor, which generates optimum landing schedules is focused on. The algorithms, automation modes, and graphical interfaces incorporated in the design are described.
Full randomness from arbitrarily deterministic events
NASA Astrophysics Data System (ADS)
Gallego, Rodrigo; Masanes, Lluis; de la Torre, Gonzalo; Dhara, Chirag; Aolita, Leandro; Acín, Antonio
2013-10-01
Do completely unpredictable events exist? Classical physics excludes fundamental randomness. Although quantum theory makes probabilistic predictions, this does not imply that nature is random, as randomness should be certified without relying on the complete structure of the theory being used. Bell tests approach the question from this perspective. However, they require prior perfect randomness, falling into circular reasoning. A Bell test that generates perfect random bits from bits possessing high—but less than perfect—randomness has recently been obtained. Yet, the main question remained open: does any initial randomness suffice to certify perfect randomness? Here we show that this is indeed the case. We provide a Bell test that uses arbitrarily imperfect random bits to produce bits that are, under the non-signalling principle assumption, perfectly random. This provides the first protocol attaining full randomness amplification. Our results have strong implications for the debate on whether there exist events that are fully random.
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model
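The implicit-capture (survival-biasing) technique mentioned above can be shown in a toy 1-D transport problem. The sketch below assumes purely forward-moving particles, so the analytic transmission is exp(-Σ_a L) and every history contributes a weight; the cross sections, weight cutoff, and roulette survival probability are illustrative choices, not values from the paper.

```python
import math
import random

def transmit_implicit(sigma_t, sigma_s, thickness, n_hist=20000, seed=2):
    """Estimate slab transmission with implicit capture: instead of killing
    a particle at an absorption event, its weight is multiplied by the
    scattering ratio at every collision, and low-weight particles are
    subjected to Russian roulette to keep histories finite."""
    rng = random.Random(seed)
    ratio = sigma_s / sigma_t          # survival (scattering) probability
    score = 0.0
    for _ in range(n_hist):
        x, w = 0.0, 1.0
        alive = True
        while alive:
            x += -math.log(1.0 - rng.random()) / sigma_t   # free flight
            if x >= thickness:
                score += w                                 # leaked: tally weight
                alive = False
            else:
                w *= ratio                                 # implicit capture
                if w < 1e-3:                               # Russian roulette
                    if rng.random() < 0.5:
                        alive = False                      # killed
                    else:
                        w *= 2.0                           # survives, reweighted
    return score / n_hist

sigma_t, sigma_s, L = 1.0, 0.5, 5.0
est = transmit_implicit(sigma_t, sigma_s, L)
exact = math.exp(-(sigma_t - sigma_s) * L)   # exp(-Sigma_a * L) for this toy
print(est, exact)
```

Because no history is lost to analog absorption, the deep-penetration tally receives contributions from every particle, which is exactly the variance-reduction effect the abstract describes; weight windows generalize this by also splitting high-weight particles.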
Traffic information computing platform for big data
Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun
2014-10-06
The big data environment creates the data conditions needed to improve the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through an in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this architecture helps to guarantee traffic safety and efficient operation, and enables more intelligent and personalized traffic information services for traffic information users.
Nearly deterministic preparation of the perfect W state with weak cross-Kerr nonlinearities
NASA Astrophysics Data System (ADS)
Dong, Li; Wang, Jun-Xi; Li, Qing-Yang; Shen, Hong-Zhi; Dong, Hai-Kuan; Xiu, Xiao-Ming; Gao, Ya-Jun; Oh, Choo Hiap
2016-01-01
Relying on weak cross-Kerr nonlinearities, we propose a nearly deterministic scheme for generating the three-photon polarization-entangled perfect W state, which can be applied to the perfect teleportation of an unknown single-photon state and whose entanglement is robust against the loss of one photon. The three photons become entangled by virtue of the bus function of the coherent state serving as the intermediary among them. In the scheme, three processes are executed successively and two kinds of modules are inserted into the circuit, where homodyne measurement and photon-number measurement are aptly performed. By means of classical feedforward techniques, single-photon unitary transformation operations are performed on the corresponding photons based on the obtained measurement outcomes, by which the generation efficiency of the perfect W state approaches unity. Moreover, only currently available optical elements are applied in the generation process, which facilitates practical implementation.
NASA Astrophysics Data System (ADS)
Raeesi, M.; Mesgari, M. S.; Mahmoudi, P.
2014-10-01
Short-time prediction is one of the most important factors in intelligent transportation systems (ITS). In this research, the use of a feed-forward neural network for traffic time-series prediction is presented. In this paper, the traffic in one direction of the road segment is predicted. The input of the neural network is the time-delay data exported from the road traffic data of Monroe city. The time-delay data are used for training the network. For generating the time-delay data, the traffic data for the first 300 days of 2008 are used. The performance of the feed-forward neural network model is validated using the real observation data of the 301st day.
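A minimal sketch of this approach, under assumed details (the lag count, network size, learning rate, and a synthetic periodic series standing in for the Monroe data are all illustrative choices): build a time-delay input matrix from the series and train a small one-hidden-layer feed-forward network by batch gradient descent.

```python
import numpy as np

def time_delay_matrix(series, lags):
    """Turn a 1-D traffic series into (X, y) pairs: each row of X holds
    the `lags` previous observations, and y is the next value."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = np.array(series[lags:])
    return X, y

def train_ffnn(X, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Train a one-hidden-layer feed-forward network (tanh hidden units,
    linear output) with plain batch gradient descent on squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)               # hidden activations
        pred = H @ W2 + b2
        err = pred - y
        gW2 = H.T @ err / len(y); gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H ** 2)  # backprop through tanh
        gW1 = X.T @ dH / len(y); gb1 = dH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

# Synthetic "daily" traffic signal: a noisy 24-step periodic pattern.
t = np.arange(400)
series = np.sin(2 * np.pi * t / 24) + 0.05 * np.random.default_rng(1).normal(size=400)
X, y = time_delay_matrix(series, lags=6)
model = train_ffnn(X[:300], y[:300])                       # "first 300 days"
rmse = float(np.sqrt(np.mean((model(X[300:]) - y[300:]) ** 2)))  # held-out check
```

The held-out RMSE plays the role of the paper's validation on the 301st day: the network is scored only on time steps it never saw during training.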
Automated Conflict Resolution For Air Traffic Control
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2005-01-01
The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be demonstrated under a sufficiently wide range of traffic conditions to show that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.
Low Earth Orbit satellite traffic simulator
NASA Technical Reports Server (NTRS)
Hoelzel, John
1995-01-01
This paper describes a significant tool for Low Earth Orbit (LEO) capacity analysis, needed to support marketing, economic, and design analysis, known as a Satellite Traffic Simulator (STS). LEO satellites typically use multiple beams to help achieve the desired communication capacity, but the traffic demand in these beams is usually not uniform. Simulations of the dynamic, average, and peak expected demand per beam are a very critical part of the marketing, economic, and design analysis necessary to field a viable LEO system. An STS is described in this paper which can simulate voice, data and FAX traffic carried by LEO satellite beams and Earth Station Gateways. It is applicable worldwide for any LEO satellite constellation operating over any region. For aeronautical applications of LEO satellites, the anticipated aeronautical traffic (Erlangs for each hour of the day to be simulated) is prepared for geographically defined 'area targets' (each major operational region for the respective aircraft) and used as input to the STS. The STS was designed by Constellations Communications Inc. (CCI) and E-Systems for usage in Brazil in accordance with an ESCA/INPE Statement Of Work, and developed by Analytical Graphics Inc. (AGI) to execute on top of its Satellite Tool Kit (STK) commercial software. The STS simulates constellations of LEO satellite orbits, with input of traffic intensity (Erlangs) for each hour of the day generated from area targets (such as Brazilian states), accumulated in custom LEO satellite beams, and then accumulated in Earth Station Gateways. The STS is a very general simulator which can accommodate many forms of orbital element and Walker constellation input; simple beams or any user-defined custom beams; and any location of Gateways. The paper describes some of these features, including Manual Mode dynamic graphical display of communication links, to illustrate which Gateway links are accessible and which links are not, at each 'step' of the
Fluctuations in Urban Traffic Networks
NASA Astrophysics Data System (ADS)
Chen, Yu-Dong; Li, Li; Zhang, Yi; Hu, Jian-Ming; Jin, Xue-Xiang
Urban traffic networks are typical complex systems, in which the movements of tremendous numbers of microscopic traffic participants (pedestrians, bicyclists and vehicles) form complicated spatial and temporal dynamics. We collected flow volume data on the time-dependent activity of a typical urban traffic network, finding that the coupling between the average flux and the fluctuation on individual links obeys a certain scaling law, with a wide variety of scaling exponents between 1/2 and 1. These scaling phenomena can explain the interaction between the nodes' internal dynamics (i.e. queuing at intersections, car-following in driving) and changes in the external (network-wide) traffic demand (i.e. the everyday increase of traffic during peak hours and shocks caused by traffic accidents), allowing us to further understand the mechanisms governing the transportation system's collective behavior. Multiscaling and hotspot features are observed in the traffic flow data as well. But the reason why the separated internal dynamics are comparable to the external dynamics in magnitude is still unclear and needs further investigation.
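The flux-fluctuation exponent α in σ ∝ ⟨f⟩^α can be estimated by a log-log regression of per-link standard deviations against per-link means. The sketch below uses synthetic link counts (an assumption, standing in for the collected data): purely internal, Poisson-like noise gives α ≈ 1/2, while a demand factor shared network-wide drives α toward 1.

```python
import math
import random
import statistics

def scaling_exponent(means, sigmas):
    """Least-squares slope of log(sigma) vs log(mean): the exponent alpha
    in the flux-fluctuation law sigma ~ <f>**alpha."""
    xs = [math.log(m) for m in means]
    ys = [math.log(s) for s in sigmas]
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

rng = random.Random(3)

# Internal dynamics only: counts with variance equal to the mean
# (Poisson-like queueing noise) give alpha close to 1/2.
means, sigmas = [], []
for lam in [10, 40, 160, 640, 2560]:
    counts = [rng.gauss(lam, math.sqrt(lam)) for _ in range(2000)]
    means.append(statistics.fmean(counts))
    sigmas.append(statistics.stdev(counts))
alpha = scaling_exponent(means, sigmas)
print(round(alpha, 2))  # close to 0.5

# A network-wide demand factor shared by all links scales fluctuations
# linearly with the mean flux, so alpha is driven to 1.
factors = [rng.gauss(1.0, 0.2) for _ in range(2000)]
means2, sigmas2 = [], []
for lam in [10, 40, 160, 640, 2560]:
    counts = [lam * f for f in factors]
    means2.append(statistics.fmean(counts))
    sigmas2.append(statistics.stdev(counts))
alpha_ext = scaling_exponent(means2, sigmas2)
print(round(alpha_ext, 2))  # close to 1.0
```

Real links mix both noise sources, which is why the measured exponents fall between the two limits.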
Air Traffic Management Research at NASA
NASA Technical Reports Server (NTRS)
Farley, Todd
2012-01-01
The U.S. air transportation system is the most productive in the world, moving far more people and goods than any other. It is also the safest system in the world, thanks in part to its venerable air traffic control system. But as demand for air travel continues to grow, the air traffic control system's aging infrastructure and labor-intensive procedures are impinging on its ability to keep pace with demand. And that impinges on the growth of our economy. Part of NASA's current mission in aeronautics research is to invent new technologies and procedures for ATC that will enable our national airspace system to accommodate the increasing demand for air transportation well into the next generation while still maintaining its excellent record for safety. It is a challenging mission, as efforts to modernize have, for decades, been hamstrung by the inability to assure safety to the satisfaction of system operators, system regulators, and/or the traveling public. In this talk, we'll provide a brief history of air traffic control, focusing on the tension between efficiency and safety assurance, and we'll highlight some new NASA technologies coming down the pike.
Automated Traffic Management System and Method
NASA Technical Reports Server (NTRS)
Glass, Brian J. (Inventor); Spirkovska, Liljana (Inventor); McDermott, William J. (Inventor); Reisman, Ronald J. (Inventor); Gibson, James (Inventor); Iverson, David L. (Inventor)
2000-01-01
A data management system and method that enables acquisition, integration, and management of real-time data generated at different rates, by multiple heterogeneous incompatible data sources. The system achieves this functionality by using an expert system to fuse data from a variety of airline, airport operations, ramp control, and air traffic control tower sources, to establish and update reference data values for every aircraft surface operation. The system may be configured as a real-time airport surface traffic management system (TMS) that electronically interconnects air traffic control, airline data, and airport operations data to facilitate information sharing and improve taxi queuing. In the TMS operational mode, empirical data shows substantial benefits in ramp operations for airlines, reducing departure taxi times by about one minute per aircraft in operational use, translating as $12 to $15 million per year savings to airlines at the Atlanta, Georgia airport. The data management system and method may also be used for scheduling the movement of multiple vehicles in other applications, such as marine vessels in harbors and ports, trucks or railroad cars in ports or shipping yards, and railroad cars in switching yards. Finally, the data management system and method may be used for managing containers at a shipping dock, stock on a factory floor or in a warehouse, or as a training tool for improving situational awareness of FAA tower controllers, ramp and airport operators, or commercial airline personnel in airfield surface operations.
Spreading of Traffic Jam in a Traffic Flow Model
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1993-04-01
A cellular automaton (CA) model is presented to simulate the traffic jam induced by a traffic accident. The spreading of jamming cars induced by a car crash is investigated by computer simulation. An analogy is proposed between crystal growth and traffic-jam spreading. The scaling behavior of the traffic-jam spreading is studied. It is shown that the number N of jamming cars scales as N ≈ t^{2.34±0.03} for p above the dynamical jamming transition p_c = 0.35 and N ≈ t^{1.07} below p_c, where t is the time and p is the density of cars. The time constant t_s, which is the time required for all cars to stop, scales as t_s ≈ p^{-1.07±0.03} for p
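The jamming transition in deterministic CA traffic models can be seen in a minimal rule-184-style automaton (a simplified stand-in for illustration, not Nagatani's accident model): below the critical density 1/2 every car eventually moves freely, while above it a finite fraction of cars stays blocked at every step.

```python
import random

def step(road):
    """One parallel update of the rule-184 traffic CA on a ring: a car (1)
    advances into the next cell when that cell is empty (0), else it waits.
    No write conflicts arise, since a car only enters a currently empty cell."""
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i] == 1:
            if road[(i + 1) % n] == 0:
                new[(i + 1) % n] = 1   # move forward
            else:
                new[i] = 1             # blocked: stay put
    return new

def moving_fraction(density, n=100, steps=200, seed=4):
    """Fraction of cars able to move after the transient, at a given density."""
    rng = random.Random(seed)
    cells = [1] * int(density * n) + [0] * (n - int(density * n))
    rng.shuffle(cells)
    for _ in range(steps):
        cells = step(cells)
    movable = sum(1 for i in range(n)
                  if cells[i] == 1 and cells[(i + 1) % n] == 0)
    return movable / max(1, sum(cells))

print(moving_fraction(0.3))  # 1.0: free-flow phase, every car moves
print(moving_fraction(0.7))  # < 1.0: congested phase, a jammed fraction remains
```

An accident of the kind studied in the abstract can be modeled on top of this by pinning one cell permanently occupied and watching the queue behind it grow.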
[Comics for traffic education: evaluation of a traffic safety campaign].
Bonfadelli, H
1989-01-01
Traffic safety campaigns often are ineffective to change driving behavior because they don't reach the target group or are recognized only by people who are already interested or concerned. The evaluation of a traffic safety campaign called "Leo Lässig", addressed to young new drivers, shows that recognition and acceptance by the target group were stimulated by the age-conform means of comic-strips.
The Future of Air Traffic Management
NASA Technical Reports Server (NTRS)
Denery, Dallas G.; Erzberger, Heinz; Edwards, Thomas A. (Technical Monitor)
1998-01-01
A system for the control of terminal area traffic to improve productivity, referred to as the Center-TRACON Automation System (CTAS), is being developed at NASA's Ames Research Center under a joint program with the FAA. CTAS consists of a set of integrated tools that provide computer-generated advisories for en-route and terminal area controllers. The premise behind the design of CTAS has been that successful planning of traffic requires accurate trajectory prediction. Databases consisting of representative aircraft performance models, airline preferred operational procedures, and a three-dimensional wind model support the trajectory prediction. The research effort has been the design of a set of automation tools that make use of this trajectory prediction capability to assist controllers in the overall management of traffic. The first tool, the Traffic Management Advisor (TMA), provides the overall flow management between the en route and terminal areas. A second tool, the Final Approach Spacing Tool (FAST), provides terminal area controllers with sequence and runway advisories to allow optimal use of the runways. The TMA and FAST are now being used in daily operations at Dallas/Ft. Worth airport. Additional activities include the development of several other tools. These include: 1) the En Route Descent Advisor, which assists the en route controller in issuing conflict-free descents and ascents; 2) the extension of FAST to include speed and heading advisories, and the Expedite Departure Path (EDP), which assists the terminal controller in the management of departures; and 3) the Collaborative Arrival Planner (CAP), which will assist the airlines in operational decision making. The purpose of this presentation is to review the CTAS concept and to present the results of recent field tests. The paper will first discuss the overall concept and then discuss the status of the individual tools.
Dynamics of traffic flow with real-time traffic information
NASA Astrophysics Data System (ADS)
Yokoya, Yasushi
2004-01-01
We studied the dynamics of traffic flow with real-time information provided. Provision of real-time traffic information, based on advancements in telecommunication technology, is expected to facilitate the efficient utilization of available road capacity. This system is of interest not only for road-usage engineering but also for the science of complexity. In the system, the information plays the role of a feedback connecting microscopic and macroscopic phenomena beyond the hierarchical structure of statistical physics. In this paper, we try to clarify how the information works in a network of traffic flow from the perspective of statistical physics. The dynamical features of the traffic flow are abstracted by a contrastive study between nonequilibrium statistical physics and a computer simulation based on cellular automata. We found that the information disrupts the local equilibrium of traffic flow by a characteristic dissipation process due to the interaction between the information and individual vehicles. A dissipative structure was observed in the time evolution of traffic flow driven far from equilibrium, as a consequence of the breakdown of the local-equilibrium hypothesis.
Modeling conflicts of heterogeneous traffic at urban uncontrolled intersections
Trinadha Rao, V.; Rengaraju, V.R.
1998-01-01
The behavior of traffic in the heterogeneous environment of an urban uncontrolled intersection is complex and difficult to model. The present study describes the methodology of simulating the traffic flow and thereby estimating the number of conflicts in varying traffic flow conditions. The arrival pattern of vehicles was represented by a multivariate distribution to generate input to the simulation model. The model was validated externally, using field observed data, and was found to predict the number of conflicts well. As an illustration of the usefulness of the model, the variation of conflict rate (the probability of a vehicle's getting involved in conflict) due to variation in traffic volume and the proportion of right-turning traffic has been quantified. Under the prevailing traffic composition and turning movements, the conflict rate is estimated to lie in the range of 0.66-0.70, 0.79-0.84, and 0.80-0.87 for intersection volumes of 2,000, 2,500, and 3,000 vehicles per hour, respectively. Issues related to the applicability of the proposed model are briefly discussed.
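A stripped-down version of conflict estimation by simulation can look like the sketch below. It is far simpler than the paper's validated multivariate model: the two crossing streams are assumed Poisson, and the 2-second occupancy time and flow rates are illustrative values, not the study's calibrated parameters.

```python
import random

def conflict_rate(q_major, q_minor, occupancy=2.0, horizon=3600.0, seed=5):
    """Crude estimate of the probability that a minor-stream vehicle meets a
    conflict at an uncontrolled crossing: both streams arrive as independent
    Poisson processes (rates in veh/s), and a conflict is counted when a
    minor-stream vehicle arrives within `occupancy` seconds after a
    major-stream vehicle has entered the conflict area."""
    rng = random.Random(seed)

    def arrivals(rate):
        t, out = 0.0, []
        while t < horizon:
            t += rng.expovariate(rate)
            out.append(t)
        return out

    major = arrivals(q_major)
    minor = arrivals(q_minor)
    conflicts = 0
    j = 0
    for t in minor:
        while j < len(major) and major[j] <= t:
            j += 1                                  # advance to last major <= t
        if j > 0 and t - major[j - 1] < occupancy:
            conflicts += 1
    return conflicts / len(minor)

# 1,000 veh/h on the major road and a 2 s occupancy: for Poisson arrivals
# the analytic conflict probability is roughly 1 - exp(-q * 2) ~ 0.43.
rate = conflict_rate(1000 / 3600, 300 / 3600)
print(round(rate, 2))
```

The paper's conflict rates (0.66 and above, at much higher total volumes and with heterogeneous turning traffic) come from a far richer interaction model; this sketch only illustrates the gap-acceptance logic behind such estimates.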
Air Traffic Management Research at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Lee, Katharine
2005-01-01
Since the late 1980's, NASA Ames researchers have been investigating ways to improve the air transportation system through the development of decision support automation. These software advances, such as the Center-TRACON Automation System (CTAS), have been developed with teams of engineers, software developers, human factors experts, and air traffic controllers; some NASA Ames decision support tools are currently operational in Federal Aviation Administration (FAA) facilities and some are in use by the airlines. These tools have provided air traffic controllers and traffic managers the capabilities to help reduce overall delays and holding, and provide significant cost savings to the airlines as well as more manageable workload levels for air traffic service providers. NASA is continuing to collaborate with the FAA, as well as other government agencies, to plan and develop the next generation of decision support tools that will support anticipated changes in the air transportation system, including a projected increase to three times today's air-traffic levels by 2025. The presentation will review some of NASA Ames' recent achievements in air traffic management research and discuss future tool developments and concepts currently under consideration.
Deterministic remote two-qubit state preparation in dissipative environments
NASA Astrophysics Data System (ADS)
Li, Jin-Fang; Liu, Jin-Ming; Feng, Xun-Li; Oh, C. H.
2016-05-01
We propose a new scheme for efficient remote preparation of an arbitrary two-qubit state, introducing two auxiliary qubits and using two Einstein-Podolsky-Rosen (EPR) states as the quantum channel in a non-recursive way. At variance with all existing schemes, our scheme accomplishes deterministic remote state preparation (RSP) with only one sender and the simplest entangled resource (say, EPR pairs). We construct the corresponding quantum logic circuit using a unitary matrix decomposition procedure and analytically obtain the average fidelity of the deterministic RSP process for dissipative environments. Our studies show that, while the average fidelity gradually decreases to a stable value without any revival in the Markovian regime, it decreases to the same stable value with a dampened revival amplitude in the non-Markovian regime. We also find that the average fidelity's approximate maximal value can be preserved for a long time if the non-Markovian and the detuning conditions are satisfied simultaneously.
Deterministic synthesis of mechanical NOON states in ultrastrong optomechanics
NASA Astrophysics Data System (ADS)
Macrí, V.; Garziano, L.; Ridolfo, A.; Di Stefano, O.; Savasta, S.
2016-07-01
We propose a protocol for the deterministic preparation of entangled NOON mechanical states. The system consists of two identical, optically coupled optomechanical systems. The protocol consists of two steps. In the first, one of the two optical resonators is excited by a resonant external π-like Gaussian optical pulse. When the optical excitation has coherently partly transferred to the second cavity, the second step starts. It consists of sending simultaneously two additional π-like Gaussian optical pulses, one at each optical resonator, with specific frequencies. In the optomechanical ultrastrong coupling regime, when the coupling strength becomes a significant fraction of the mechanical frequency, we show that NOON mechanical states with quite high Fock states can be deterministically obtained. The operating range of this protocol is carefully analyzed. Calculations have been carried out taking into account the presence of decoherence, thermal noise, and imperfect cooling.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement.
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-01-01
Hyperentanglement is an effective quantum resource for quantum communication networks owing to its high capacity, its low loss rate, and its ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of spatial-entanglement errors during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication. PMID:26861681
On the secure obfuscation of deterministic finite automata.
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
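The object being obfuscated above is, concretely, a transition table. As a reminder of what that object computes, here is a minimal DFA evaluator with a hypothetical example automaton (unrelated to the constructions in the paper):

```python
def run_dfa(delta, start, accept, word):
    """Evaluate a deterministic finite automaton.

    delta  : dict mapping (state, symbol) -> next state
    start  : initial state
    accept : set of accepting states
    """
    state = start
    for symbol in word:
        state = delta[(state, symbol)]  # exactly one move per symbol
    return state in accept

# Example: accepts strings over {a, b} with an even number of 'a's.
EVEN_A = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
```

An obfuscated version would expose the same input/output behavior while hiding the contents of `delta`.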
Approaches to implementing deterministic models in a probabilistic framework
Talbott, D.V.
1995-04-01
The increasing use of results from probabilistic risk assessments in the decision-making process makes it ever more important to eliminate simplifications in probabilistic models that might lead to conservative results. One area in which conservative simplifications are often made is modeling the physical interactions that occur during the progression of an accident sequence. This paper demonstrates and compares three approaches for incorporating deterministic models of physical parameters into probabilistic models: parameter range binning, response curves, and integral deterministic models. An example that combines all three approaches in a probabilistic model for the handling of an energetic material (e.g., high explosive, rocket propellant, ...) is then presented using a directed graph model.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement
NASA Astrophysics Data System (ADS)
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-01
Hyperentanglement is an effective quantum resource for quantum communication networks owing to its high capacity, its low loss rate, and its ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of spatial-entanglement errors during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication.
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Obstructing traffic. 4.13... VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Obstructing traffic. 4.13... VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
NASA Astrophysics Data System (ADS)
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code drafting, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with a log-normal distribution of PGA or response spectrum. The main positive aspect of this approach is that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their predictive reliability depends on the degree of knowledge of the source, wave propagation, and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites prone to liquefaction because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
Deterministic entanglement of two neutral atoms via Rydberg blockade
Zhang, X. L.; Isenhower, L.; Gill, A. T.; Walker, T. G.; Saffman, M.
2010-09-15
We demonstrate the deterministic entanglement of two individually addressed neutral atoms using a Rydberg blockade mediated controlled-NOT gate. Parity oscillation measurements reveal a Bell state fidelity of F=0.58±0.04, which is above the entanglement threshold of F=0.5, without any correction for atom loss, and F=0.71±0.05 after correcting for background collisional losses. The fidelity results are shown to be in good agreement with a detailed error model.
Comment on: Supervisory Asymmetric Deterministic Secure Quantum Communication
NASA Astrophysics Data System (ADS)
Kao, Shih-Hung; Tsai, Chia-Wei; Hwang, Tzonelih
2012-12-01
In 2010, Xiu et al. (Optics Communications 284:2065-2069, 2011) proposed several applications based on a new secure four-site distribution scheme using χ-type entangled states. This paper points out that one of these applications, namely, supervisory asymmetric deterministic secure quantum communication, is subject to an information leakage problem, in which the receiver can extract two bits of a three-bit secret message without the supervisor's permission. An enhanced protocol is proposed to resolve this problem.
The deterministic SIS epidemic model in a Markovian random environment.
Economou, Antonis; Lopez-Herrero, Maria Jesus
2016-07-01
We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population. PMID:26515172
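A minimal numerical sketch of such a model: the deterministic SIS equation with infection and recovery rates modulated by a two-state continuous-time Markov environment. The rates, switching intensities, and integration scheme below are hypothetical illustrations, not taken from the paper.

```python
import random

def sis_random_env(beta, gamma, q01, q10, i0=0.01, t_end=100.0, dt=0.01, seed=1):
    """Deterministic SIS dynamics di/dt = beta[e]*i*(1-i) - gamma[e]*i,
    where the environment e in {0, 1} is a two-state Markov chain with
    switching rates q01 (0 -> 1) and q10 (1 -> 0).  Euler integration;
    all parameter values are hypothetical."""
    rng = random.Random(seed)
    e, i, t = 0, i0, 0.0
    next_switch = rng.expovariate(q01)           # holding time in state 0
    while t < t_end:
        if t >= next_switch:                     # environment jumps
            e = 1 - e
            next_switch = t + rng.expovariate(q10 if e == 1 else q01)
        i += dt * (beta[e] * i * (1.0 - i) - gamma[e] * i)  # Euler step
        t += dt
    return i
```

When both environmental states are supercritical (beta > gamma), the infective fraction settles near the classical endemic level 1 - gamma/beta; environment switching makes that level fluctuate between the two regimes.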
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Beyond Dispersity: Deterministic Control of Polymer Molecular Weight Distribution.
Gentekos, Dillon T; Dupuis, Lauren N; Fors, Brett P
2016-02-17
The breadth of the molecular weight distributions (MWD) of polymers influences their physical properties; however, no synthetic methods allow precise control of the exact shape and composition of a distribution. We report a modular strategy that enables deterministic control over polymer MWD through temporal regulation of initiation in nitroxide-mediated polymerization reactions. This approach is applicable to any controlled polymerization that uses a discrete initiator, and it allows the use of MWD composition as a parameter to tune material properties.
Deterministic polarization-entanglement purification using spatial entanglement
Li Xihan
2010-10-15
We present an efficient entanglement purification protocol with hyperentanglement in which additional spatial entanglement is utilized to purify the two-particle polarization-entangled state. The bit-flip error and phase-flip error can be corrected and eliminated in one step. Two remote parties can obtain maximally entangled polarization states deterministically and only passive linear optics are employed. We also discuss the protocol with practical quantum source and noisy channel.
Turbulent Dispersion of Traffic Emissions
NASA Astrophysics Data System (ADS)
Staebler, R. M.; Gordon, M.; Liggio, J.; Makar, P.; Mihele, C.; Brook, J.; Wentzell, J. J.; Gong, S.; Lu, G.; Lee, P.
2010-12-01
Emissions from the transportation sector are a significant source of air pollution. Ongoing efforts to reduce the impacts require tools to provide guidance on policies regarding fuels, vehicle types and traffic control. The air quality models currently used to predict the effectiveness of policies typically treat traffic emissions as a source uniformly distributed across the surface of a model grid. In reality, emissions occur along lines above the surface, in an initially highly concentrated form, and are immediately mixed by traffic-enhanced turbulence. Differences between model and reality in terms of both chemistry and dispersion are to be expected. The ALMITEE (Advancing Local-scale Modeling through Inclusion of Transportation Emission Experiments) subproject FEVER (Fast Evolution of Vehicle Emissions from Roadways), conducted on multi-lane highways in the Toronto area in the summer of 2010, included measurements to quantify the evolution and dispersion of traffic emissions. Continuous micro-meteorological data (heat and momentum fluxes, temperature, humidity and incoming solar radiation) were collected 10m from the road, next to a traffic camera used to determine traffic density, composition and speed. Sonic anemometers and an aircraft turbulence probe mounted on a mobile lab provided measurements of turbulent dispersion both directly in traffic on the highway as well as on perpendicular side roads, as a function of distance from the highway. The mobile lab was equipped with instruments to characterize the aerosol size and mass distributions, aerosol composition including black carbon content, NO, NO2, CO2, CO, SO2 and VOCs at high time resolution. Preliminary results on the consequences of turbulent dispersion of traffic emissions levels under a variety of conditions will be disseminated.
Fully automated urban traffic system
NASA Technical Reports Server (NTRS)
Dobrotin, B. M.; Hansen, G. R.; Peng, T. K. C.; Rennels, D. A.
1977-01-01
The replacement of the driver with an automatic system that could perform the functions of guiding and routing a vehicle with a human's capability of responding to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed but was technically feasible.
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we are going to use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are going to be textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower as compared with deterministic and other standard statistical techniques. PMID:25650281
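The trilinear interpolation step mentioned above is standard and easy to state concretely. A plain-Python sketch, assuming a scalar intensity grid indexed as `grid[i][j][k]` (the grid and coordinate conventions are illustrative, not the paper's):

```python
def trilinear(grid, x, y, z):
    """Trilinearly interpolate a 3-D scalar grid (nested lists indexed as
    grid[i][j][k]) at fractional coordinates (x, y, z)."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    # Clamp the base cell so queries on the upper boundary stay in range.
    i = min(max(int(x), 0), nx - 2)
    j = min(max(int(y), 0), ny - 2)
    k = min(max(int(z), 0), nz - 2)
    fx, fy, fz = x - i, y - j, z - k
    g = lambda a, b, c: grid[a][b][c]
    # Interpolate along x on the four cell edges, then along y, then z.
    c00 = g(i, j, k)         * (1 - fx) + g(i + 1, j, k)         * fx
    c10 = g(i, j + 1, k)     * (1 - fx) + g(i + 1, j + 1, k)     * fx
    c01 = g(i, j, k + 1)     * (1 - fx) + g(i + 1, j, k + 1)     * fx
    c11 = g(i, j + 1, k + 1) * (1 - fx) + g(i + 1, j + 1, k + 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

For any grid sampled from a linear function, the interpolant reproduces the function exactly, which makes the routine easy to verify.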
Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts
Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R.; Bernstam, Elmer V.; Cohen, Trevor
2012-01-01
The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI. PMID:23304369
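The key trick (replacing a store of random term vectors with vectors that can be regenerated on demand) can be sketched by seeding a PRNG from a hash of the term itself. The dimensionality and the sign-superposition scheme below are illustrative choices, not the paper's exact construction.

```python
import hashlib
import random

DIM = 256  # vector dimensionality (illustrative)

def term_vector(term):
    """Deterministic 'elemental' vector: seed a PRNG from a hash of the
    term, so the same term always yields the same +/-1 pattern and no
    vector store is needed."""
    seed = int.from_bytes(hashlib.sha256(term.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def document_vector(terms):
    """Superpose term vectors, then binarize each component by its sign
    (ties broken positive)."""
    acc = [0] * DIM
    for t in terms:
        for d, v in enumerate(term_vector(t)):
            acc[d] += v
    return [1 if c >= 0 else -1 for c in acc]

def similarity(u, v):
    """Normalized agreement between binary vectors, in [-1, 1]."""
    return sum(a * b for a, b in zip(u, v)) / DIM
```

Because term vectors are recomputed from the hash, distributed indexers need share nothing but the hash function and the dimensionality.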
Demographic noise can reverse the direction of deterministic selection.
Constable, George W A; Rogers, Tim; McKane, Alan J; Tarnita, Corina E
2016-08-01
Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to [Formula: see text] theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085
Deterministic form correction of extreme freeform optical surfaces
NASA Astrophysics Data System (ADS)
Lynch, Timothy P.; Myer, Brian W.; Medicus, Kate; DeGroote Nelson, Jessica
2015-10-01
The blistering pace of recent technological advances has led lens designers to rely increasingly on freeform optical components as crucial pieces of their designs. As these freeform components increase in geometrical complexity and continue to deviate further from traditional optical designs, the optical manufacturing community must rethink their fabrication processes in order to keep pace. To meet these new demands, Optimax has developed a variety of new deterministic freeform manufacturing processes. Combining traditional optical fabrication techniques with cutting edge technological innovations has yielded a multifaceted manufacturing approach that can successfully handle even the most extreme freeform optical surfaces. In particular, Optimax has placed emphasis on refining the deterministic form correction process. By developing many of these procedures in house, changes can be implemented quickly and efficiently in order to rapidly converge on an optimal manufacturing method. Advances in metrology techniques allow for rapid identification and quantification of irregularities in freeform surfaces, while deterministic correction algorithms precisely target features on the part and drastically reduce overall correction time. Together, these improvements have yielded significant advances in the realm of freeform manufacturing. With further refinements to these and other aspects of the freeform manufacturing process, the production of increasingly radical freeform optical components is quickly becoming a reality.
Probabilistic vs deterministic views in facing natural hazards
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Coviello, Velio
2015-04-01
Natural hazards can be mitigated through active or passive measures. Among the latter countermeasures, Early Warning Systems (EWSs) are playing an increasingly significant role. In particular, a growing number of studies investigate the reliability of landslide EWSs, their comparability to alternative protection measures, and their cost-effectiveness. EWSs, however, inevitably and intrinsically imply the concept of probability of occurrence and/or probability of error. Science has long accepted and integrated the probabilistic nature of reality and its phenomena. The same cannot be said of other fields of knowledge, such as law or politics, with which scientists sometimes have to interact. These disciplines are in fact still tied to more deterministic views of life. The same is true of public opinion, which often requires, or even demands, a deterministic type of answer to its needs. For example, it is easy for people to feel completely safe because an EWS has been installed. It is also easy for an administrator or a politician to contribute to spreading this false sense of security, together with the idea of having dealt with the problem and done something definitive to face it. Can geoethics play a role in creating a link between the probabilistic world of nature and science and society's tendency toward a more deterministic view of things? Answering this question could help scientists feel more confident in planning and performing their research activities.
Convergence studies of deterministic methods for LWR explicit reflector methodology
Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.
2013-07-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to GII designs. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
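The fission matrix idea can be illustrated apart from Monte Carlo: once an (estimated) fission matrix F is in hand, k-effective and the fundamental fission source are its dominant eigenpair, obtainable by simple power iteration. The 2x2 matrix in the test below is a made-up example, not from the thesis.

```python
def mat_vec(F, s):
    """Multiply a dense matrix (list of rows) by a source vector."""
    return [sum(row[j] * s[j] for j in range(len(s))) for row in F]

def power_iteration(F, iters=200):
    """Source iteration s_{n+1} = F s_n / k_n; converges to the dominant
    eigenpair (k-effective, fundamental fission source).  The convergence
    rate is governed by the dominance ratio |lambda_2 / lambda_1| -- the
    quantity the acceleration methods described above are designed to beat."""
    s = [1.0] * len(F)
    k = 1.0
    for _ in range(iters):
        fs = mat_vec(F, s)
        k = sum(fs) / sum(s)       # eigenvalue estimate from source growth
        s = [x / k for x in fs]    # renormalized source for the next sweep
    return k, s
```

For systems with a dominance ratio near 1, this plain iteration stalls, which is exactly the regime where the fission matrix and FDSA accelerations pay off.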
A hierarchical framework for air traffic control
NASA Astrophysics Data System (ADS)
Roy, Kaushik
Air travel in recent years has been plagued by record delays, with over $8 billion in direct operating costs being attributed to 100 million flight delay minutes in 2007. Major contributing factors to delay include weather, congestion, and aging infrastructure; the Next Generation Air Transportation System (NextGen) aims to alleviate these delays through an upgrade of the air traffic control system. Changes to large-scale networked systems such as air traffic control are complicated by the need for coordinated solutions over disparate temporal and spatial scales. Individual air traffic controllers must ensure aircraft maintain safe separation locally with a time horizon of seconds to minutes, whereas regional plans are formulated to efficiently route flows of aircraft around weather and congestion on the order of every hour. More efficient control algorithms that provide a coordinated solution are required to safely handle a larger number of aircraft in a fixed amount of airspace. Improved estimation algorithms are also needed to provide accurate aircraft state information and situational awareness for human controllers. A hierarchical framework is developed to simultaneously solve the sometimes conflicting goals of regional efficiency and local safety. Careful attention is given in defining the interactions between the layers of this hierarchy. In this way, solutions to individual air traffic problems can be targeted and implemented as needed. First, the regional traffic flow management problem is posed as an optimization problem and shown to be NP-Hard. Approximation methods based on aggregate flow models are developed to enable real-time implementation of algorithms that reduce the impact of congestion and adverse weather. Second, the local trajectory design problem is solved using a novel slot-based sector model. This model is used to analyze sector capacity under varying traffic patterns, providing a more comprehensive understanding of how increased automation
Evidence of Long Range Dependence and Self-similarity in Urban Traffic Systems
Thakur, Gautam S; Helmy, Ahmed; Hui, Pan
2015-01-01
Transportation simulation technologies should accurately model traffic demand, distribution, and assignment parameters for urban environment simulation. These three parameters significantly impact the transportation engineering benchmarking process and are also critical in realizing realistic traffic modeling situations. In this paper, we model and characterize the traffic density distribution of thousands of locations around the world. The traffic densities are generated from millions of images collected over several years and processed using computer vision techniques. The resulting traffic density distribution time series are then analyzed. Using goodness-of-fit tests, it is found that the traffic density distributions follow heavy-tail models such as Log-gamma, Log-logistic, and Weibull in over 90% of the analyzed locations. Moreover, heavy tails give rise to long-range dependence and self-similarity, which we studied by estimating the Hurst exponent (H). Our analysis, based on seven different Hurst estimators, strongly indicates that the traffic distribution patterns are stochastically self-similar (0.5 < H < 1.0). We believe this is an important finding that will influence the design and development of next-generation traffic simulation techniques and also aid in accurately modeling the traffic engineering of urban systems. In addition, it shall provide a much needed input for the development of smart cities.
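The Hurst-exponent estimation described in this abstract can be illustrated with rescaled-range (R/S) analysis, the classical estimator among the several the authors mention. A minimal sketch under stated assumptions (the function name and the power-of-two window scheme are ours, not the authors' implementation):

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range
    (R/S) analysis: compute mean R/S over non-overlapping windows of
    growing size, then fit the slope of log(R/S) vs log(window size)."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    window_sizes, rs_values = [], []
    size = min_window
    while size <= n // 2:
        rs_list = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from mean
            r = dev.max() - dev.min()              # range of the deviation
            s = chunk.std()                        # standard deviation
            if s > 0:
                rs_list.append(r / s)
        if rs_list:
            window_sizes.append(size)
            rs_values.append(np.mean(rs_list))
        size *= 2
    # H is the slope of the log-log regression
    h, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return h
```

White noise should yield H near 0.5 (no long-range dependence), while an integrated random walk should yield H near 1, matching the self-similar regime (0.5 < H < 1.0) reported above.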
Optimal structure of complex networks for minimizing traffic congestion.
Zhao, Liang; Cupertino, Thiago Henrique; Park, Kwangho; Lai, Ying-Cheng; Jin, Xiaogang
2007-12-01
To design complex networks that minimize traffic congestion, it is necessary to understand how traffic flow depends on network structure. We study data packet flow on complex networks, where the packet delivery capacity of each node is not fixed. The optimal configuration of capacities to minimize traffic congestion is derived, and the critical packet generating rate is determined, below which the network is in a free-flow state but above which congestion occurs. Our analysis reveals a direct relation between network topology and traffic flow. An optimal network structure, free of traffic congestion, should have two features: uniform distribution of load over all nodes and small network diameter. This finding is confirmed by numerical simulations. Our analysis also makes it possible to theoretically compare the congestion conditions for different types of complex networks. In particular, we find that a network with a low critical generating rate is more susceptible to congestion. The comparison has been made on the following complex-network topologies: random, scale-free, and regular.
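The free-flow/congested transition governed by a critical generating rate can be illustrated on the simplest extreme case, a star graph, where every leaf-to-leaf shortest path crosses the hub, so the hub's load sets the threshold. A toy sketch (the model and all names are ours, not the paper's):

```python
import random
from collections import deque

def simulate_star(n_leaves, rho, capacity=1, steps=500, seed=0):
    """Packet routing on a star graph: node 0 is the hub, 1..n_leaves are
    leaves. Each step, every node generates a packet with probability rho
    addressed to a uniformly random other node; each node forwards at most
    `capacity` packets per step along the unique shortest path. Returns the
    final hub queue length, which grows without bound above the congestion
    threshold, roughly rho_c ~ capacity / (n_leaves - 1)."""
    rng = random.Random(seed)
    nodes = list(range(n_leaves + 1))
    queues = {v: deque() for v in nodes}  # each queued item is a destination
    for _ in range(steps):
        for v in nodes:                   # packet generation phase
            if rng.random() < rho:
                dest = rng.randrange(n_leaves + 1)
                while dest == v:
                    dest = rng.randrange(n_leaves + 1)
                queues[v].append(dest)
        hub_arrivals = []
        for v in nodes:                   # forwarding phase
            for _ in range(min(capacity, len(queues[v]))):
                dest = queues[v].popleft()
                if v != 0 and dest != 0:
                    hub_arrivals.append(dest)  # leaf-to-leaf traffic hops via hub
                # otherwise the next hop is the destination: delivered
        queues[0].extend(hub_arrivals)
    return len(queues[0])
```

Below the threshold the hub queue stays bounded; above it the queue diverges linearly, which is the congestion transition the abstract analyzes for general topologies.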
Distributed Traffic Complexity Management by Preserving Trajectory Flexibility
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Garcia-Chico, Jose-Luis; Wing, David J.
2007-01-01
In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. This paper presents preliminary research investigating a distributed trajectory-oriented approach to manage traffic complexity, based on preserving trajectory flexibility. The underlying hypotheses are that preserving trajectory flexibility autonomously by aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by collaboratively minimizing trajectory constraints without jeopardizing the intended air traffic management objectives. This paper presents an analytical framework in which flexibility is defined in terms of robustness and adaptability to disturbances, and preliminary metrics are proposed that can be used to preserve trajectory flexibility. The hypothesized impacts are illustrated by analyzing a trajectory solution space in a simple scenario with only speed as a degree of freedom, and in constraint situations involving meeting multiple times of arrival and resolving conflicts.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
Harrisson, G.; Marleau, G.
2012-07-01
The Canadian SCWR has the potential to achieve the goals that Generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment, a modified layout of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified layout of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation, by deterministic (the French code ERANOS) and Monte Carlo (the US code MCNPX) calculations, of three reactivity measurement techniques, Slope (α-fitting), Area-ratio, and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
Godt, J.W.; Baum, R.L.; Savage, W.Z.; Salciarini, D.; Schulz, W.H.; Harp, E.L.
2008-01-01
Application of transient deterministic shallow landslide models over broad regions for hazard and susceptibility assessments requires information on rainfall, topography and the distribution and properties of hillside materials. We survey techniques for generating the spatial and temporal input data for such models and present an example using a transient deterministic model that combines an analytic solution to assess the pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Pore-pressures and factors of safety are computed on a cell-by-cell basis and can be displayed or manipulated in a grid-based GIS. Input data are high-resolution (1.8 m) topographic information derived from LiDAR data and simple descriptions of initial pore-pressure distribution and boundary conditions for a study area north of Seattle, Washington. Rainfall information is taken from a previously defined empirical rainfall intensity-duration threshold and material strength and hydraulic properties were measured both in the field and laboratory. Results are tested by comparison with a shallow landslide inventory. Comparison of results with those from static infinite-slope stability analyses assuming fixed water-table heights shows that the spatial prediction of shallow landslide susceptibility is improved using the transient analyses; moreover, results can be depicted in terms of the rainfall intensity and duration known to trigger shallow landslides in the study area.
Barcelo, Steven J; Kim, Ansoon; Wu, Wei; Li, Zhiyong
2012-07-24
Deterministic patterning or assembly of nanoparticles often requires complex processes that are not easily incorporated into system architectures of arbitrary design. We have developed a technique to fabricate deterministic nanoparticle assemblies using simple and inexpensive nanoimprinting equipment and procedures. First, a metal film is evaporated onto flexible polymer pillars made by nanoimprinting. The resulting metal caps on top of the pillars can be pulled into assemblies of arbitrary design by collapsing the pillars in a well-controlled manner. The nanoparticle assemblies are then transferred from the pillars onto a new substrate via nanoimprinting with the aid of either cold welding or chemical bonding. Using this technique, a variety of patterned nanoparticle assemblies of Au and Ag with a critical dimension less than 2 nm were fabricated and transferred to silicon-, glass-, and metal-coated substrates. Separating the nanostructure assembly from the final architecture removes significant design constraints from devices incorporating nanoparticle assemblies. The application of this process as a technique for generating surface-enhanced Raman spectroscopy substrates is presented.
Deterministic time-reversible thermostats: chaos, ergodicity, and the zeroth law of thermodynamics
NASA Astrophysics Data System (ADS)
Patra, Puneet Kumar; Sprott, Julien Clinton; Hoover, William Graham; Griswold Hoover, Carol
2015-09-01
The relative stability and ergodicity of deterministic time-reversible thermostats, both singly and in coupled pairs, are assessed through their Lyapunov spectra. Five types of thermostat are coupled to one another through a single Hooke's-law harmonic spring. The resulting dynamics shows that three specific thermostat types, Hoover-Holian, Ju-Bulgac, and Martyna-Klein-Tuckerman, have very similar Lyapunov spectra in their equilibrium four-dimensional phase spaces and when coupled in equilibrium or nonequilibrium pairs. All three of these oscillator-based thermostats are shown to be ergodic, with smooth analytic Gaussian distributions in their extended phase spaces (coordinate, momentum, and two control variables). Evidently these three ergodic and time-reversible thermostat types are particularly useful as statistical-mechanical thermometers and thermostats. Each of them generates Gibbs' universal canonical distribution internally as well as for systems to which they are coupled. Thus they obey the zeroth law of thermodynamics, as a good heat bath should. They also provide dissipative heat flow with relatively small nonlinearity when two or more such temperature baths interact and provide useful deterministic replacements for the stochastic Langevin equation.
Sub-surface single ion detection in diamond: A path for deterministic color center creation
NASA Astrophysics Data System (ADS)
Abraham, John; Aguirre, Brandon; Pacheco, Jose; Camacho, Ryan; Bielejec, Edward; Sandia National Laboratories Team
Deterministic single color center creation remains a critical milestone for the integrated use of diamond color centers. It depends on three components: focused ion beam implantation to control the location, yield improvement to control the activation, and single ion implantation to control the number of implanted ions. A surface electrode detector has been fabricated on diamond where the electron hole pairs generated during ion implantation are used as the detection signal. Results will be presented demonstrating single ion detection. The detection efficiency of the device will be described as a function of implant energy and device geometry. It is anticipated that the controlled introduction of single dopant atoms in diamond will provide a basis for deterministic single localized color centers. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy Office of Science. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Systematic and Deterministic Graph-Minor Embedding of Cartesian Products of Complete Graphs
NASA Astrophysics Data System (ADS)
Zaribafiyan, Arman; Marchand, Dominic J. J.; Changiz Rezaei, Seyed Saeed
The limited connectivity of current and next-generation quantum annealers motivates the need for efficient graph-minor embedding methods. The overhead of the widely used heuristic techniques is quickly proving to be a significant bottleneck for real-world applications. To alleviate this obstacle, we propose a systematic deterministic embedding method that exploits the structures of both the input graph of the specific combinatorial optimization problem and the quantum annealer. We focus on the specific case of the Cartesian product of two complete graphs, a regular structure that occurs in many problems. We first divide the problem by embedding one of the factors of the Cartesian product in a repeatable unit. The resulting simplified problem consists of placing copies of this unit and connecting them together appropriately. Aside from the obvious speed and efficiency advantages of a systematic deterministic approach, the embeddings produced can be easily scaled for larger processors and show desirable properties with respect to the number of qubits used and the chain length distribution.
A traffic analyzer for multiple SpaceWire links
NASA Astrophysics Data System (ADS)
Liu, Scige J.; Giusi, Giovanni; Di Giorgio, Anna M.; Vertolli, Nello; Galli, Emanuele; Biondi, David; Farina, Maria; Pezzuto, Stefano; Spinoglio, Luigi
2014-07-01
Modern space missions are becoming increasingly complex: the interconnection of the units in a satellite is now a network of terminals linked together through routers, where devices with different levels of automation and intelligence share the same data network. The traceability of network transactions is performed mostly at the terminal level through log analysis, and hence it is difficult to verify in real time the reliability of the interconnections and the interchange protocols. To improve and ease traffic analysis in a SpaceWire network, we implemented a low-level link analyzer, with the specific goal of simplifying the integration and test phases in the development of space instrumentation. The traffic analyzer collects signals coming from pod probes connected in series on the links of interest between two SpaceWire terminals. With respect to standard traffic analyzers, the design of this new tool includes the possibility to internally reshape the LVDS signal. This improvement increases the robustness of the analyzer against environmental noise effects and guarantees a deterministic delay on all analyzed signals. The analyzer core is implemented on a Xilinx FPGA, programmed to decode the bidirectional LVDS signals at the Link and Network levels. Subsequently, the core packetizes protocol characters into homogeneous sets of time-ordered events. The analyzer provides time-tagging functionality for each character set, with a precision down to the FPGA clock, i.e. about 20 ns in the adopted HW environment. The use of a common time reference for each character stream allows synchronous performance measurements. The collected information is then routed to an external computer for quick analysis: this is done via a high-speed USB2 connection. With this analyzer it is possible to verify link performance in terms of induced delays in the transmitted signals. A case study focused on the analysis of Time-Code synchronization in presence of a SpaceWire Router is
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps and allowed us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
A traffic situation analysis system
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin
2011-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy; the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system, which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system is field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor-capable housing. Two cameras run vehicle detection software including license plate detection and recognition; one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.
Percolation properties in a traffic model
NASA Astrophysics Data System (ADS)
Wang, Feilong; Li, Daqing; Xu, Xiaoyun; Wu, Ruoqian; Havlin, Shlomo
2015-11-01
As a dynamical complex system, traffic is characterized by a transition from free flow to congestion, which is mostly studied in highways. However, despite its importance in developing congestion mitigation strategies, an understanding of this common traffic phenomenon at the city scale is still missing. An open question is how the traffic in the network collapses from globally efficient traffic to isolated local flows in small clusters, i.e. the question of traffic percolation. Here we study the traffic percolation properties on a lattice by simulation of an agent-based model for traffic. A critical traffic volume in this model distinguishes the free state from the congested state of traffic. Our results show that the threshold of traffic percolation decreases with increasing traffic volume and reaches a minimum value at the critical traffic volume. We show that this minimal threshold is the result of the longest spatial correlation between traffic flows at the critical traffic volume. These findings may help to develop congestion mitigation strategies in a network view.
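The percolation picture invoked here, a giant component of connected flow that fragments into isolated local clusters, can be illustrated with ordinary site percolation on a lattice. A toy sketch (plain site percolation, not the paper's agent-based traffic model):

```python
import random

def largest_cluster_fraction(n, p, seed=0):
    """Site percolation on an n x n lattice: occupy each site with
    probability p, then return the largest 4-connected cluster's share
    of all occupied sites (near 1 above the percolation threshold)."""
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    total = sum(row.count(True) for row in occ)
    best = 0
    for i in range(n):
        for j in range(n):
            if occ[i][j] and not seen[i][j]:
                # iterative flood fill of one cluster
                stack = [(i, j)]
                seen[i][j] = True
                size = 0
                while stack:
                    x, y = stack.pop()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and occ[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            stack.append((u, v))
                best = max(best, size)
    return best / max(total, 1)
```

Well below the 2-D site-percolation threshold (about 0.593) the largest cluster is a vanishing fraction of occupied sites; well above it, a single giant cluster dominates, the analogue of the globally connected free-flow state.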
Self-Organized Criticality and Scaling in Lifetime of Traffic Jams
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-01-01
The deterministic cellular automaton 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents the traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. The typical lifetime
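The base model being extended, CA rule 184 with parallel update, is compact enough to state in a few lines. Below is a sketch of the closed-ring version, without the injection/extraction boundary terms the paper adds; below density 1/2 the ring relaxes to free flow, where every car moves each step:

```python
def step_rule184(cells):
    """One parallel update of CA rule 184 on a ring: a car (1) advances
    one cell to the right iff that cell is empty (0); otherwise it stays.
    Particle number is conserved."""
    n = len(cells)
    new = [0] * n
    for i in range(n):
        if cells[i] == 1:
            if cells[(i + 1) % n] == 0:
                new[(i + 1) % n] = 1  # car advances into the empty cell
            else:
                new[i] = 1            # blocked car stays put
    return new
```

Iterating this map from a random initial configuration at density below 1/2 yields the free-flow phase; above 1/2, persistent jams (runs of blocked cars) survive, which is the transition the injection/extraction variant drives to criticality.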
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times in each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time of the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and efficiently determine effective green and red times for a signalized intersection.
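The effect of the effective green/red split on control delay can be sketched with a direct simulation under the same assumptions the paper states (Poisson arrivals, deterministic departures during green). The function and parameter names are illustrative, not the authors' formulation:

```python
import numpy as np

def signal_delay(lam, sat, green, cycle, n_cycles=400, seed=0):
    """Mean delay per vehicle (seconds) at a one-approach fixed-time
    signal: Poisson arrivals at lam veh/s, deterministic departures at
    sat veh/s during the green phase at the start of each cycle."""
    rng = np.random.default_rng(seed)
    queue = 0
    queue_seconds = 0   # accumulated vehicle-seconds of waiting
    vehicles = 0
    for t in range(n_cycles * cycle):
        a = rng.poisson(lam)          # arrivals in this one-second slice
        queue += a
        vehicles += a
        if t % cycle < green:         # green phase: serve up to sat veh/s
            queue = max(0, queue - sat)
        queue_seconds += queue        # every queued vehicle waits one more second
    return queue_seconds / max(vehicles, 1)
```

For a fixed demand below capacity, lengthening the effective green sharply reduces the average delay, which is the quantity the complementarity model trades off between the two phases.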
An improved multi-value cellular automata model for heterogeneous bicycle traffic flow
NASA Astrophysics Data System (ADS)
Jin, Sheng; Qu, Xiaobo; Xu, Cheng; Ma, Dongfang; Wang, Dianhai
2015-10-01
This letter develops an improved multi-value cellular automata model for heterogeneous bicycle traffic flow taking the higher maximum speed of electric bicycles into consideration. The update rules of both regular and electric bicycles are improved, with maximum speeds of two and three cells per second respectively. Numerical simulation results for deterministic and stochastic cases are obtained. The fundamental diagrams and multiple states effects under different model parameters are analyzed and discussed. Field observations were made to calibrate the slowdown probabilities. The results imply that the improved extended Burgers cellular automata (IEBCA) model is more consistent with the field observations than previous models and greatly enhances the realism of the bicycle traffic model.
Traffic-driven SIR epidemic spreading in networks
NASA Astrophysics Data System (ADS)
Pu, Cunlai; Li, Siyuan; Yang, XianXia; Xu, Zhongqi; Ji, Zexuan; Yang, Jian
2016-03-01
We study SIR epidemic spreading in networks driven by traffic dynamics, which are in turn governed by static routing protocols. We obtain the maximum instantaneous population of infected nodes and the maximum population of ever-infected nodes through simulation. We find that, generally, a more balanced load distribution leads to a more intense and widespread epidemic in networks. Increasing either the average node degree or the homogeneity of the degree distribution will facilitate epidemic spreading. When the packet generation rate ρ is small, increasing ρ favors epidemic spreading. However, when ρ is large enough, traffic congestion appears, which inhibits epidemic spreading.
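The contagion step underlying this model can be sketched as a plain discrete-time SIR process on a graph; the traffic-driven variant studied here replaces "contact along every edge" with "contact along edges that actually carry packets", which this toy version omits:

```python
import random

def sir_outbreak(adj, beta, gamma, patient_zero=0, seed=0):
    """Discrete-time SIR on a graph given as {node: [neighbors]}. Each
    step, every infected node transmits along each incident edge with
    probability beta, then recovers with probability gamma. Returns the
    total number of nodes ever infected."""
    rng = random.Random(seed)
    state = {v: 'S' for v in adj}
    state[patient_zero] = 'I'
    infected = {patient_zero}
    ever = 1
    while infected:
        new_inf, recovered = set(), set()
        for v in infected:
            for u in adj[v]:                       # attempted transmissions
                if state[u] == 'S' and rng.random() < beta:
                    new_inf.add(u)
            if rng.random() < gamma:               # recovery attempt
                recovered.add(v)
        for u in new_inf:
            state[u] = 'I'
        for v in recovered:
            state[v] = 'R'
        ever += len(new_inf)
        infected = (infected - recovered) | new_inf
    return ever
```

On a dense graph with transmissibility well above the epidemic threshold nearly every node is eventually infected, while far below threshold the outbreak dies out almost immediately; the paper's ρ-dependence comes from modulating how many of these contacts actually occur via packet traffic.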
Preliminary Benefits Assessment of Traffic Aware Strategic Aircrew Requests (TASAR)
NASA Technical Reports Server (NTRS)
Henderson, Jeff; Idris, Husni; Wing, David J.
2012-01-01
While en route, aircrews submit trajectory change requests to air traffic control (ATC) to better meet their objectives, including reduced delays, reduced fuel burn, and passenger comfort. Aircrew requests are currently made with limited to no information on surrounding traffic. Consequently, these requests are uninformed about a key ATC objective, ensuring traffic separation, and are therefore less likely to be accepted than requests that are informed by surrounding traffic and avoid creating conflicts. This paper studies the benefits of providing aircrews with on-board decision support to generate optimized trajectory requests that are probed and cleared of known separation violations prior to issuing the request to ATC. These informed requests are referred to as traffic aware strategic aircrew requests (TASAR) and leverage traffic surveillance information available through Automatic Dependent Surveillance Broadcast (ADS-B) In capability. Preliminary fast-time simulation results show increased benefits with longer stage lengths, since beneficial trajectory changes can be applied over a longer distance. Also, larger benefits were experienced between large hub airports as compared to other airport sizes. On average, an aircraft equipped with TASAR reduced its travel time by about one to four minutes per operation and fuel burn by about 50 to 550 lbs per operation, depending on the objective of the aircrew (time, fuel, or a weighted combination of time and fuel), class of airspace user, and aircraft type. These preliminary results are based on analysis of approximately one week of traffic in July 2012, and additional analysis is planned on a larger data set to confirm these initial findings.
Deterministic and Nondeterministic Behavior of Earthquakes and Hazard Mitigation Strategy
NASA Astrophysics Data System (ADS)
Kanamori, H.
2014-12-01
Earthquakes exhibit both deterministic and nondeterministic behavior. Deterministic behavior is controlled by length and time scales such as the dimension of seismogenic zones and plate-motion speed. Nondeterministic behavior is controlled by the interaction of many elements, such as asperities, in the system. Some subduction zones have strong deterministic elements which allow forecasts of future seismicity. For example, the forecasts of the 2010 Mw=8.8 Maule, Chile, earthquake and the 2012 Mw=7.6, Costa Rica, earthquake are good examples in which useful forecasts were made within a solid scientific framework using GPS. However, even in these cases, because of the nondeterministic elements uncertainties are difficult to quantify. In some subduction zones, nondeterministic behavior dominates because of complex plate boundary structures and defies useful forecasts. The 2011 Mw=9.0 Tohoku-Oki earthquake may be an example in which the physical framework was reasonably well understood, but complex interactions of asperities and insufficient knowledge about the subduction-zone structures led to the unexpected tragic consequence. Despite these difficulties, broadband seismology, GPS, and rapid data processing-telemetry technology can contribute to effective hazard mitigation through scenario earthquake approach and real-time warning. A scale-independent relation between M0 (seismic moment) and the source duration, t, can be used for the design of average scenario earthquakes. However, outliers caused by the variation of stress drop, radiation efficiency, and aspect ratio of the rupture plane are often the most hazardous and need to be included in scenario earthquakes. The recent development in real-time technology would help seismologists to cope with, and prepare for, devastating tsunamis and earthquakes. Combining a better understanding of earthquake diversity and modern technology is the key to effective and comprehensive hazard mitigation practices.
Spatial continuity measures for probabilistic and deterministic geostatistics
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram σ(h), is appropriate for each framework. Although C(h) and σ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and σ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Levy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
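The two spatial-continuity measures compared here are straightforward to compute from a 1-D sample. Under second-order stationarity they satisfy σ(h) = C(0) − C(h) (using the abstract's σ(h) notation for the variogram), which the sketch below checks on a synthetic AR(1) series; the code is illustrative, not from the paper:

```python
import numpy as np

def sample_covariance(z, h):
    """Classical sample covariance at lag h, using one global mean."""
    z = np.asarray(z, dtype=float)
    m = z.mean()
    a, b = (z, z) if h == 0 else (z[:-h], z[h:])
    return np.mean((a - m) * (b - m))

def sample_variogram(z, h):
    """Sample (semi)variogram at lag h >= 1: half the mean squared
    increment; no mean estimate is needed."""
    z = np.asarray(z, dtype=float)
    return 0.5 * np.mean((z[h:] - z[:-h]) ** 2)
```

On stationary data the two estimates agree up to edge effects; the paper's point is that on non-stationary (deterministic-framework) data the variogram and covariance can behave quite differently, since only the variogram avoids estimating a mean.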
Deterministic side-branching during thermal dendritic growth
NASA Astrophysics Data System (ADS)
Mullis, Andrew M.
2015-06-01
The accepted view on dendritic side-branching is that side-branches grow as the result of selective amplification of thermal noise and that in the absence of such noise dendrites would grow without the development of side-arms. However, recently there has been renewed speculation about dendrites displaying deterministic side-branching [see e.g. ME Glicksman, Metall. Mater. Trans A 43 (2012) 391]. Generally, numerical models of dendritic growth, such as phase-field simulation, have tended to display behaviour which is commensurate with the former view, in that simulated dendrites do not develop side-branches unless noise is introduced into the simulation. However, here we present simulations at high undercooling that show that under certain conditions deterministic side-branching may occur. We use a model formulated in the thin interface limit and a range of advanced numerical techniques to minimise the numerical noise introduced into the solution, including a multigrid solver. Not only are multigrid solvers one of the most efficient means of inverting the large, but sparse, system of equations that results from implicit time-stepping, they are also very effective at smoothing noise at all wavelengths. This is in contrast to most Jacobi or Gauss-Seidel iterative schemes which are effective at removing noise with wavelengths comparable to the mesh size but tend to leave noise at longer wavelengths largely undamped. From an analysis of the tangential thermal gradients on the solid-liquid interface the mechanism for side-branching appears to be consistent with the deterministic model proposed by Glicksman.
Hidden geometry of traffic jamming
NASA Astrophysics Data System (ADS)
Andjelković, Miroslav; Gupte, Neelima; Tadić, Bosiljka
2015-05-01
We introduce an approach based on algebraic topological methods that allow an accurate characterization of jamming in dynamical systems with queues. As a prototype system, we analyze the traffic of information packets with navigation and queuing at nodes on a network substrate in distinct dynamical regimes. A temporal sequence of traffic density fluctuations is mapped onto a mathematical graph in which each vertex denotes one dynamical state of the system. The coupling complexity between these states is revealed by classifying agglomerates of high-dimensional cliques that are intermingled at different topological levels and quantified by a set of geometrical and entropy measures. The free-flow, jamming, and congested traffic regimes result in graphs of different structure, while the largest geometrical complexity and minimum entropy mark the edge of the jamming region.
Ideal state reconstructor for deterministic digital control systems
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1989-01-01
A state reconstructor for deterministic digital systems is presented which is ideal in the following sense: if the plant parameters are known exactly, the output of the state reconstructor will exactly equal the true state of the plant, not just approximate it. Furthermore, this ideal state reconstructor adds no additional states or eigenvalues to the system. Nor does it affect the plant equation for the system in any way; it affects only the measurement equation. While there are countless ways of choosing the ideal state reconstructor parameters, two distinct methods are described here. An example is presented which illustrates the procedures to completely design the ideal state reconstructor using both methods.
A deterministic global optimization using smooth diagonal auxiliary functions
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems, the functions involved in the optimization process are black-box, with unknown analytical representations, and are hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
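The abstract's method uses diagonal partitions of a hyperinterval and smooth auxiliary functions built from gradient values; as a much simpler illustration of the underlying "divide the best" idea, here is a one-dimensional Piyavskii-Shubert sketch that assumes a Lipschitz constant K for f itself (an assumption for illustration, not the paper's algorithm):

```python
import heapq

def shubert_minimize(f, a, b, K, iters=60):
    """Piyavskii-Shubert global minimization of a K-Lipschitz f on [a, b]:
    repeatedly split the subinterval with the lowest piecewise-linear lower bound."""
    fa, fb = f(a), f(b)
    best = min((fa, a), (fb, b))

    def lower_bound(x0, f0, x1, f1):
        # minimum of the two K-cones erected over the interval endpoints
        return (f0 + f1 - K * (x1 - x0)) / 2.0

    heap = [(lower_bound(a, fa, b, fb), a, fa, b, fb)]
    for _ in range(iters):
        _, x0, f0, x1, f1 = heapq.heappop(heap)
        # point where the two cone bounds intersect (the lower-bound minimizer)
        m = 0.5 * (x0 + x1) + (f0 - f1) / (2.0 * K)
        fm = f(m)
        best = min(best, (fm, m))
        heapq.heappush(heap, (lower_bound(x0, f0, m, fm), x0, f0, m, fm))
        heapq.heappush(heap, (lower_bound(m, fm, x1, f1), m, fm, x1, f1))
    return best  # (best value, best point)
```

The "Divide-the-Best" family generalizes this pattern to diagonal partitions of multidimensional hyperintervals with smooth auxiliary functions in place of the piecewise-linear cones.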
Deterministic versus stochastic aspects of superexponential population growth models
NASA Astrophysics Data System (ADS)
Grosjean, Nicolas; Huillet, Thierry
2016-08-01
Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
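The deterministic baseline can be made concrete. For a power-law rate dN/dt = r N^a, the separable ODE has the closed form N(t) = [N0^(1-a) + r(1-a)t]^(1/(1-a)) for a ≠ 1: algebraic growth for a < 1, exponential for a = 1, and hyperexponential with a finite-time explosion at t* = N0^(1-a)/(r(a-1)) for a > 1. A small sketch (notation assumed for illustration, not taken from the paper):

```python
import math

def powerlaw_growth(N0, r, a, t):
    """Closed-form N(t) solving dN/dt = r * N**a with N(0) = N0."""
    if a == 1.0:                       # plain exponential growth
        return N0 * math.exp(r * t)
    base = N0 ** (1.0 - a) + r * (1.0 - a) * t
    return base ** (1.0 / (1.0 - a))

def blowup_time(N0, r, a):
    """Finite-time explosion (a > 1): N(t) diverges as t approaches t*."""
    assert a > 1.0
    return N0 ** (1.0 - a) / (r * (a - 1.0))
```

For example, with N0 = r = 1 and a = 2 the solution is N(t) = 1/(1 - t), which explodes at t* = 1; the stochastic constructions in the paper enrich exactly this classification.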
A Deterministic Transport Code for Space Environment Electrons
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-01-01
A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.
The deterministic optical alignment of the HERMES spectrograph
NASA Astrophysics Data System (ADS)
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies whose construction and commissioning have now been completed at the Anglo-Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles around which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Deterministic Smoluchowski-Feynman ratchets driven by chaotic noise.
Chew, Lock Yue
2012-01-01
We have elucidated the effect of statistical asymmetry on the directed current in Smoluchowski-Feynman ratchets driven by chaotic noise. Based on the inhomogeneous Smoluchowski equation and its generalized version, we arrive at analytical expressions for the directed current that include a source term. The source term indicates that statistical asymmetry can drive the system further away from thermodynamic equilibrium, as exemplified by the constant flashing, the state-dependent, and the tilted deterministic Smoluchowski-Feynman ratchets, with the consequence of an enhancement in the directed current.
Non-deterministic analysis of ocean environment loads
Fang Huacan; Xu Fayan; Gao Guohua; Xu Xingping
1995-12-31
Ocean environmental loads consist of wind force, sea wave force, etc. Sea wave force not only has randomness but also fuzziness. Hence, a non-deterministic description of the wave environment must be used when designing an offshore structure or evaluating the safety of offshore structural members in service. To account for the randomness of sea waves, a wind-speed single-parameter sea wave spectrum is proposed in this paper. A new fuzzy grading statistical method for treating the fuzziness of sea wave height H and period T is also given. Finally, the principle and procedure for calculating the fuzzy random sea wave spectrum are presented.
CALTRANS: A parallel, deterministic, 3D neutronics code
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
Spontaneous density fluctuations in granular flow and traffic
NASA Astrophysics Data System (ADS)
Herrmann, Hans J.
It is known that spontaneous density waves appear in granular material flowing through pipes or hoppers. A similar phenomenon is known from traffic jams on highways. Using numerical simulations we show that several types of waves exist and find that the density fluctuations follow a power-law spectrum. We also investigate one-dimensional traffic models. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car. Lattice gas and lattice Boltzmann models reproduce the experimentally observed effects. Density waves are spontaneously generated when the viscosity has a non-linear dependence on density or shear rate, as is the case in traffic or granular flow.
Modeling the Environmental Impact of Air Traffic Operations
NASA Technical Reports Server (NTRS)
Chen, Neil
2011-01-01
There is increased interest in understanding and mitigating the impacts of air traffic on the climate, since greenhouse gases, nitrogen oxides, and contrails generated by air traffic can have adverse climate impacts. The models described in this presentation are useful for quantifying these impacts and for studying alternative environmentally aware operational concepts. These models have been developed by leveraging and building upon existing simulation and optimization techniques developed for the design of efficient traffic flow management strategies. Specific enhancements to the existing simulation and optimization techniques include new models that simulate aircraft fuel flow, emissions, and contrails. To ensure that these new models are beneficial to the larger climate research community, their outputs are compatible with existing global climate modeling tools such as the FAA's Aviation Environmental Design Tool.
A radome for air traffic control SSR radar systems
NASA Astrophysics Data System (ADS)
A new generation of monopulse and discrete interrogation systems has evolved for air traffic control applications that presents significant challenges to total system design and performance. Reliable operation of the antenna system is essential in today's ever increasing air traffic congestion. An important component of the total system is a radome to protect the antenna from the environment and to enable consistent, reliable electromagnetic performance. The various types of radomes that have been employed over the years to protect antennas are discussed and evaluated relative to the air traffic control radar application. The sandwich radome is selected as the best option and a detailed design analysis is presented which considers the vital characteristics of transmissivity, boresight error, and sidelobe perturbations.
Traffic Flow Management and Optimization
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio
2014-01-01
This talk will present an overview of Traffic Flow Management (TFM) research at NASA Ames Research Center. Dr. Rios will focus on his work developing a large-scale, parallel approach to solving traffic flow management problems in the national airspace. In support of this talk, Dr. Rios will provide some background on operational aspects of TFM as well as a discussion of some of the tools needed to perform such work, including a high-fidelity airspace simulator. Current, ongoing research related to TFM data services in the national airspace system and general aviation will also be presented.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Latanision, R.M.
1990-12-01
Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems.
Study of CANDU thorium-based fuel cycles by deterministic and Monte Carlo methods
Nuttin, A.; Guillemin, P.; Courau, T.; Marleau, G.; Meplan, O.; David, S.; Michel-Sendis, F.; Wilson, J. N.
2006-07-01
In the framework of the Generation IV forum, there is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten Salt Reactors [1, 2] or High Temperature Reactors [3, 4]. Precise evaluations of the U-233 production potential relying on existing reactors such as PWRs [5] or CANDUs [6] are hence necessary. As a consequence of its design (online refueling and D₂O moderator in a thermal spectrum), the CANDU reactor moreover has an excellent neutron economy and consequently a high fissile conversion ratio [7]. For these reasons, we try here, with a shorter-term view, to re-evaluate the economic competitiveness of once-through thorium-based fuel cycles in CANDU [8]. Two simulation tools are used: the deterministic Canadian cell code DRAGON [9] and MURE [10], a C++ tool for reactor evolution calculations based on the Monte Carlo code MCNP [11].
BTM: A Single-Key, Inverse-Cipher-Free Mode for Deterministic Authenticated Encryption
NASA Astrophysics Data System (ADS)
Iwata, Tetsu; Yasuda, Kan
We present a new blockcipher mode of operation named BTM, which stands for Bivariate Tag Mixing. BTM falls into the category of Deterministic Authenticated Encryption, which we call DAE for short. BTM makes all-around improvements over the previous two DAE constructions, SIV (Eurocrypt 2006) and HBS (FSE 2009). Specifically, our BTM requires just one blockcipher key, whereas SIV requires two. Our BTM does not require the decryption algorithm of the underlying blockcipher, whereas HBS does. The BTM mode utilizes bivariate polynomial hashing for authentication, which enables us to handle vectorial inputs of dynamic dimensions. BTM then generates an initial value for its counter mode of encryption by mixing the resulting tag with one of the two variables (hash keys), which avoids the need for an implementation of the inverse cipher.
Impact of traffic-related air pollution on health.
Jakubiak-Lasocka, J; Lasocki, J; Siekmeier, R; Chłopek, Z
2015-01-01
Road transport contributes significantly to air quality problems through vehicle emissions, which have various detrimental impacts on public health and the environment. The aim of this study was to assess the impact of traffic-related air pollution on the health of Warsaw citizens, following the basics of the Health Impact Assessment (HIA) method, and to evaluate its social cost. PM10 was chosen as an indicator of traffic-related air pollution. Exposure-response functions between air pollution and health impacts were employed. The value of statistical life (VSL) approach was used to estimate the cost of mortality attributable to traffic-related air pollution. Costs of hospitalizations and restricted activity days were assessed based on the cost of illness (COI) method. According to the calculations, about 827 Warsaw citizens die each year as a result of traffic-related air pollution. About 566 and 250 hospital admissions due to cardiovascular and respiratory diseases, respectively, and more than 128,453 restricted activity days can also be attributed to traffic emissions. From the social perspective, these losses generate a cost of 1,604 million PLN (1 EUR = approx. 4.2 PLN). This cost is very high; therefore, more attention should be paid to an integrated environmental health policy.
Evolutionary Concepts for Decentralized Air Traffic Flow Management
NASA Technical Reports Server (NTRS)
Adams, Milton; Kolitz, Stephan; Milner, Joseph; Odoni, Amedeo
1997-01-01
Alternative concepts for modifying the policies and procedures under which the air traffic flow management system operates are described, and an approach to the evaluation of those concepts is discussed. Here, air traffic flow management includes all activities related to the management of the flow of aircraft and related system resources from 'block to block.' The alternative concepts represent stages in the evolution from the current system, in which air traffic management decision making is largely centralized within the FAA, to a more decentralized approach wherein the airlines and other airspace users collaborate in air traffic management decision making with the FAA. The emphasis in the discussion is on a viable medium-term partially decentralized scenario representing a phase of this evolution that is consistent with the decision-making approaches embodied in proposed Free Flight concepts for air traffic management. System-level metrics for analyzing and evaluating the various alternatives are defined, and a simulation testbed developed to generate values for those metrics is described. The fundamental issue of modeling airline behavior in decentralized environments is also raised, and an example of such a model, which deals with the preservation of flight bank integrity in hub airports, is presented.
Management of heterogeneous traffic loading in DBS networks
NASA Astrophysics Data System (ADS)
Vojcic, Branimir; Alagoz, Fatih; Al-Rustamani, Amina; Pickholtz, Raymond L.; Walters, David H.
1999-07-01
In this paper we present the Adaptive Resource Allocation and Management (ARAM) algorithms developed to manage a Direct Broadcast Satellite (DBS) system supporting heterogeneous traffic mixes and operating under dynamic channel conditions. This traffic mix includes both (i) data traffic that operates as an available bit rate flow and (ii) video traffic that generates a variable bit rate flow. Both types of traffic use the Internet Protocol (IP), so they can be efficiently multiplexed on the same link. The dynamic channel conditions reflect time-varying error rates due to external effects such as rain or jamming. ARAM attempts to maximize the utilization of the available capacity on the forward DBS link while maintaining Quality of Service (QoS) in the presence of congestion in the network and channel degradation effects. To achieve these ends, it utilizes adaptive control of video compression rates, data transmission rates, and channel forward error correction rates. One of the major features of ARAM is the admission control algorithm used to determine the number of variable bit rate flows admitted for service. To maximize resource utilization, assignment of the variable bit rate services based on their peak rate is avoided. Instead, a flexible utilization of the bandwidth, requiring the estimation of the statistical multiplexing gain, is used, enabling more services to share the DBS link. In this paper, therefore, we focus on the ARAM admission control algorithm and assess its impact on QoS and DBS link utilization.
Dynamic Density: An Air Traffic Management Metric
NASA Technical Reports Server (NTRS)
Laudeman, I. V.; Shelden, S. G.; Branstrom, R.; Brasil, C. L.
1998-01-01
The definition of a metric of air traffic controller workload based on air traffic characteristics is essential to the development of both air traffic management automation and air traffic procedures. Dynamic density is a proposed concept for a metric that includes both traffic density (a count of aircraft in a volume of airspace) and traffic complexity (a measure of the complexity of the air traffic in a volume of airspace). It was hypothesized that a metric that includes terms that capture air traffic complexity will be a better measure of air traffic controller workload than current measures based only on traffic density. A weighted linear dynamic density function was developed and validated operationally. The proposed dynamic density function includes a traffic density term and eight traffic complexity terms. A unit-weighted dynamic density function was able to account for an average of 22% of the variance in observed controller activity not accounted for by traffic density alone. A comparative analysis of unit weights, subjective weights, and regression weights for the terms in the dynamic density equation was conducted. The best predictor of controller activity was the dynamic density equation with regression-weighted complexity terms.
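The weighted linear form described above, one density term plus a weighted sum of complexity terms, is straightforward to express; the sketch below shows only the functional form, with placeholder weights and complexity values rather than the study's eight validated terms and regression weights:

```python
def dynamic_density(aircraft_count, complexity_terms, weights):
    """Weighted linear dynamic density: traffic density (aircraft count in a
    volume of airspace) plus a weighted sum of traffic-complexity terms.
    Term values and weights here are illustrative placeholders."""
    if len(complexity_terms) != len(weights):
        raise ValueError("one weight per complexity term")
    return aircraft_count + sum(w * c for w, c in zip(weights, complexity_terms))
```

For example, `dynamic_density(12, [3, 1], [0.5, 2.0])` evaluates to 15.5: twelve aircraft plus two weighted complexity contributions. In the study, replacing unit weights with regression weights for the complexity terms gave the best prediction of controller activity.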
Automatic drawing for traffic marking with MMS LIDAR intensity
NASA Astrophysics Data System (ADS)
Takahashi, G.; Takeda, H.; Shimano, Y.
2014-05-01
Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework of this database. Therefore, road inventory mapping work has to be accurate and free of the variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea in this method is extracting lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines. Note also that this method processes every traffic marking. In this paper, we discuss a highly accurate and non-operator-dependent method that applies the following steps: (1) binarizing LIDAR points by intensity and extracting higher-intensity points; (2) generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) deleting arcs by length and generating outline polygons on the TIN; (4) generating buffers from the outline polygons; (5) extracting points from the buffers using the original LIDAR points; (6) extracting local-intensity-changing points along scan lines from the extracted points; (7) extracting lines from the intensity-changing points through a Hough transform; and (8) connecting lines to generate automated traffic marking mapping data.
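Steps (6) and (7) can be illustrated with a minimal sketch, assuming intensity samples along a single scan line and a brute-force Hough accumulator; this is illustrative only, not the production drawing pipeline:

```python
import numpy as np

def intensity_change_points(scan, threshold):
    """Step (6): indices along one scan line where reflection intensity jumps,
    i.e. candidate edges of painted traffic markings."""
    jumps = np.abs(np.diff(scan.astype(float)))
    return np.nonzero(jumps > threshold)[0] + 1

def hough_dominant_line(points, n_theta=180, n_rho=200):
    """Step (7): a minimal Hough transform returning the best (rho, theta)
    for the line rho = x*cos(theta) + y*sin(theta) through 2-D points."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho of every point at every angle, shape (n_points, n_theta)
    rhos = pts[:, 0][:, None] * np.cos(thetas) + pts[:, 1][:, None] * np.sin(thetas)
    r_max = np.abs(rhos).max() + 1e-9
    bins = ((rhos + r_max) / (2 * r_max) * (n_rho - 1)).round().astype(int)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for col in range(n_theta):            # vote into the accumulator
        np.add.at(acc[:, col], bins[:, col], 1)
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    rho = i / (n_rho - 1) * 2 * r_max - r_max
    return rho, thetas[j]
```

Feeding the change points from many scan lines into the accumulator recovers the straight edges of lane markings; the paper's contribution is focusing the voting on intensity changes rather than raw high-intensity points.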
Stochastic and deterministic causes of streamer branching in liquid dielectrics
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-08-14
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching, such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations, is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make branching inevitable depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head, even when propagating in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the newly formed branches agree qualitatively with experimental images of streamer branching.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
NASA Astrophysics Data System (ADS)
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. The distribution of translocation times of a given monomer is therefore controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation; we refer to this concept as "fingerprinting". The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt_m ~ m^1.5, is stronger than the thermal broadening, δt_m ~ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
On the deterministic and stochastic use of hydrologic models
NASA Astrophysics Data System (ADS)
Farmer, William H.; Vogel, Richard M.
2016-07-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
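The under-dispersion the authors describe can be demonstrated with a toy calibration (synthetic data and a hypothetical linear precipitation-runoff relation, not the paper's models): the deterministic simulation under-disperses relative to observations, and resampling the calibration residuals back into the output restores the observed variability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" runoff: a linear response to precipitation plus noise.
precip = rng.gamma(2.0, 5.0, size=2000)
observed = 0.6 * precip + rng.normal(0.0, 3.0, size=2000)

# Calibrate a simple linear model by least squares.
slope, intercept = np.polyfit(precip, observed, 1)
simulated = slope * precip + intercept            # deterministic use
residuals = observed - simulated

# Stochastic use: systematically reintroduce resampled residuals.
stochastic = simulated + rng.choice(residuals, size=simulated.size, replace=True)
```

Because least squares discards the residual variance, `simulated` has a smaller spread than `observed`; the bootstrap-style reintroduction of residuals brings the spread of `stochastic` back in line with the observations, which is the paper's argument in miniature.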
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is an incrementally updated Lagrangian formulation. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load, and the loading rate are the dominant uncertainties, in that order.
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
Goluoglu, S.; Bentley, C.; Demeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H. L.
1998-01-14
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.
A deterministic method for transient, three-dimensional neutron transport
Goluoglu, S.; Bentley, C.; DeMeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H.L.
1998-05-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multi-dimensional neutronic systems.
Integrability of a deterministic cellular automaton driven by stochastic boundaries
NASA Astrophysics Data System (ADS)
Prosen, Tomaž; Mejía-Monasterio, Carlos
2016-05-01
We propose an interacting many-body space-time-discrete Markov chain model, which is composed of an integrable deterministic and reversible cellular automaton (rule 54 of Bobenko et al 1993 Commun. Math. Phys. 158 127) on a finite one-dimensional lattice $(\mathbb{Z}_2)^{\times n}$, and local stochastic Markov chains at the two lattice boundaries which provide chemical baths for absorbing or emitting the solitons. Ergodicity and mixing of this many-body Markov chain is proven for generic values of bath parameters, implying the existence of a unique nonequilibrium steady state. The latter is constructed exactly and explicitly in terms of a particularly simple form of matrix product ansatz which is termed a patch ansatz. This gives rise to an explicit computation of observables and k-point correlations in the steady state as well as the construction of a nontrivial set of local conservation laws. The feasibility of an exact solution for the full spectrum and eigenvectors (decay modes) of the Markov matrix is suggested as well. We conjecture that our ideas can pave the road towards a theory of integrability of boundary driven classical deterministic lattice systems.
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-01
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to the establishment of pluripotency with unprecedented flexibility and resolution.
A survey of deterministic solvers for rarefied flows (Invited)
NASA Astrophysics Data System (ADS)
Mieussens, Luc
2014-12-01
Numerical simulations of rarefied gas flows are generally made with DSMC methods. Until recently, deterministic numerical methods based on a discretization of the Boltzmann equation were restricted to simple problems (1D, linearized flows, or simple geometries, for instance). In the last decade, several deterministic solvers have been developed by different teams to tackle more complex problems like 2D and 3D flows. Some of them are based on the full Boltzmann equation. Solving this equation numerically is still very challenging, and 3D solvers are still restricted to monatomic gases, even if recent works have shown it is possible to simulate simple flows for polyatomic gases. Other solvers are based on simpler BGK-like models: they allow for much more intensive simulations of 3D flows in realistic geometries, but treating complex gases requires extended BGK models that are still under development. In this paper, we discuss the main features of these existing solvers, and we focus on their strengths and weaknesses. We also review some recent results that show how these solvers can be improved: higher accuracy (higher order finite volume methods, discontinuous Galerkin approaches); lower memory and CPU costs with special velocity discretizations (adaptive grids, spectral methods); multi-scale simulations using hybrid and asymptotic preserving schemes; and efficient implementation on high performance computers (parallel computing, hybrid parallelization). Finally, we propose some perspectives to make these solvers more efficient and more popular.
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Strongly Deterministic Population Dynamics in Closed Microbial Communities
NASA Astrophysics Data System (ADS)
Frentz, Zak; Kuehn, Seppe; Leibler, Stanislas
2015-10-01
Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.
Shock-induced explosive chemistry in a deterministic sample configuration.
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
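The core idea above, time-averaging products of unsteady fluctuations to form source terms for a steady solver, can be illustrated with a minimal sketch. This is a generic demonstration of a deterministic stress of the form <u'u'>, not the abstract's actual LDS code; all names and the toy field are hypothetical.

```python
import numpy as np

def lumped_deterministic_stress(u_snapshots):
    """Given unsteady snapshots u(t, x), form the deterministic stress
    <u'u'>: the time average of products of fluctuations about the
    time-mean field (a source term for the steady equations)."""
    u_bar = u_snapshots.mean(axis=0)          # time-averaged field
    u_prime = u_snapshots - u_bar             # unsteady fluctuation
    return (u_prime * u_prime).mean(axis=0)

# Toy periodic unsteadiness: u(t, x) = U(x) + A(x) sin(t)
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x = np.linspace(0.0, 1.0, 50)
U = 1.0 + x                                   # steady base field
A = 0.3 * np.ones_like(x)                     # fluctuation amplitude
u = U[None, :] + A[None, :] * np.sin(t)[:, None]

lds = lumped_deterministic_stress(u)
# For a pure sinusoid, <A^2 sin^2> = A^2 / 2
print(np.allclose(lds, A**2 / 2, atol=1e-3))
```

The steady solver would then carry `lds` as a frozen source term, reproducing the time-mean effect of the unsteadiness without time-marching.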
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
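The reintroduction of residuals described above can be sketched with a simple bootstrap: resample the calibration residuals and add them back to the deterministic simulation to produce a stochastic ensemble. A minimal illustration with made-up data; the function name and resampling scheme are assumptions, not the authors' procedure.

```python
import numpy as np

def stochastic_simulation(simulated, residuals, n_draws=1000, seed=0):
    """Reintroduce calibration residuals into a deterministic simulation
    by resampling them with replacement (a simple i.i.d. bootstrap)."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(residuals, size=(n_draws, len(simulated)),
                       replace=True)
    return simulated + draws  # each row is one stochastic realization

# Toy calibration data: observed = simulated + error
observed = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
simulated = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
residuals = observed - simulated

ensemble = stochastic_simulation(simulated, residuals)
print(ensemble.shape)  # (1000, 5)
```

The distributional properties of the ensemble (variance, quantiles) then reflect the model error that a purely deterministic use of the simulation would ignore; serially correlated residuals would call for a block or model-based resampler instead.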
Predictability of normal heart rhythms and deterministic chaos
NASA Astrophysics Data System (ADS)
Lefebvre, J. H.; Goodings, D. A.; Kamath, M. V.; Fallen, E. L.
1993-04-01
The evidence for deterministic chaos in normal heart rhythms is examined. Electrocardiograms were recorded of 29 subjects falling into four groups—a young healthy group, an older healthy group, and two groups of patients who had recently suffered an acute myocardial infarction. From the measured R-R intervals, a time series of 1000 first differences was constructed for each subject. The correlation integral of Grassberger and Procaccia was calculated for several subjects using these relatively short time series. No evidence was found for the existence of an attractor having a dimension less than about 4. However, a prediction method recently proposed by Sugihara and May and an autoregressive linear predictor both show that there is a measure of short-term predictability in the differenced R-R intervals. Further analysis revealed that the short-term predictability calculated by the Sugihara-May method is not consistent with the null hypothesis of a Gaussian random process. The evidence for a small amount of nonlinear dynamical behavior together with the short-term predictability suggests that there is an element of deterministic chaos in normal heart rhythms, although it is not strong or persistent. Finally, two useful parameters of the predictability curves are identified, namely, the "first step predictability" and the "predictability decay rate," neither of which appears to be significantly correlated with the standard deviation of the R-R intervals.
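The Grassberger-Procaccia correlation integral used above counts the fraction of pairs of delay-embedded points closer than a radius r; its scaling with r estimates the attractor dimension. A minimal sketch on synthetic R-R first differences (the data are made up; unit delay and the embedding dimension are assumptions):

```python
import numpy as np

def correlation_integral(x, m, r):
    """Grassberger-Procaccia correlation integral C(r) for a scalar
    series x embedded in m dimensions with unit delay."""
    n = len(x) - m + 1
    emb = np.column_stack([x[i:i + n] for i in range(m)])  # delay vectors
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)          # distinct pairs only
    return np.mean(d[iu] < r)

# Synthetic R-R intervals (ms) and their first differences
rng = np.random.default_rng(1)
rr = 800 + 50 * rng.standard_normal(500)
drr = np.diff(rr)

c = correlation_integral(drr, m=4, r=np.std(drr))
print(0.0 <= c <= 1.0)
```

Estimating a dimension would repeat this over a range of r and fit the slope of log C(r) versus log r; for the short series used in the paper that slope saturating near 4 is what ruled out a low-dimensional attractor.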
Deterministic doping and the exploration of spin qubits
Schenkel, T.; Weis, C. D.; Persaud, A.; Lo, C. C.; Chakarov, I.; Schneider, D. H.; Bokor, J.
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
NASA Technical Reports Server (NTRS)
Huber, Hans
2006-01-01
Air transport forms complex networks that can be measured in order to understand its structural characteristics and functional properties. Recent models for network growth (i.e., preferential attachment, etc.) remain stochastic and do not seek to understand other network-specific mechanisms that may account for their development in a more microscopic way. Air traffic is made up of many constituent airlines that are either privately or publicly owned and that operate their own networks. They follow more or less similar business policies each. The way these airline networks organize among themselves into distinct traffic distributions reveals complex interaction among them, which in turn can be aggregated into larger (macro-) traffic distributions. Our approach allows for a more deterministic methodology that will assess the impact of airline strategies on the distinct distributions for air traffic, particularly inside Europe. One key question this paper is seeking to answer is whether there are distinct patterns of preferential attachment for given classes of airline networks to distinct types of European airports. Conclusions about the advancing degree of concentration in this industry and the airline operators that accelerate this process can be drawn.
Time Relevance of Convective Weather Forecast for Air Traffic Automation
NASA Technical Reports Server (NTRS)
Chan, William N.
2006-01-01
The Federal Aviation Administration (FAA) is handling nearly 120,000 flights a day through its Air Traffic Management (ATM) system, and air traffic congestion is expected to increase substantially over the next 20 years. Weather-induced impacts to throughput and efficiency are the leading cause of flight delays, accounting for 70% of all delays, with convective weather accounting for 60% of all weather-related delays. To support the Next Generation Air Traffic System goal of operating at 3X current capacity in the NAS, ATC decision support tools are being developed to create advisories that assist controllers under all weather constraints. Initial development of these decision support tools did not integrate information regarding weather constraints such as thunderstorms and relied on an additional system to provide that information. Future decision support tools should move towards an integrated system where weather constraints are factored into the advisory of a Decision Support Tool (DST). Several groups, such as NASA Ames, Lincoln Laboratory, and MITRE, are integrating convective weather data with DSTs. A survey of current convective weather forecast and observation data shows they span a wide range of temporal and spatial resolutions. Short-range convective observations can be obtained every 5 minutes, with longer-range forecasts out to several days updated every 6 hours. Today, short-range forecasts of less than 2 hours have a temporal resolution of 5 minutes. Beyond 2 hours, forecasts have a much lower temporal resolution, typically 1 hour. Spatial resolutions vary from 1 km for short-range to 40 km for longer-range forecasts. Improving the accuracy of long-range convective forecasts is a major challenge. A report published by the National Research Council states that improvements in convective forecasts for the 2 to 6 hour time frame will only be achieved for a limited set of convective phenomena in the next 5 to 10 years. Improved longer-range forecasts will be probabilistic.
De Franceschi, Nicola; Hamidi, Hellyeh; Alanko, Jonna; Sahgal, Pranshu; Ivaska, Johanna
2015-01-01
ABSTRACT Integrins are a family of transmembrane cell surface molecules that constitute the principal adhesion receptors for the extracellular matrix (ECM) and are indispensable for the existence of multicellular organisms. In vertebrates, 24 different integrin heterodimers exist with differing substrate specificity and tissue expression. Integrin–extracellular-ligand interaction provides a physical anchor for the cell and triggers a vast array of intracellular signalling events that determine cell fate. Dynamic remodelling of adhesions, through rapid endocytic and exocytic trafficking of integrin receptors, is an important mechanism employed by cells to regulate integrin–ECM interactions, and thus cellular signalling, during processes such as cell migration, invasion and cytokinesis. The initial concept of integrin traffic as a means to translocate adhesion receptors within the cell has now been expanded with the growing appreciation that traffic is intimately linked to the cell signalling apparatus. Furthermore, endosomal pathways are emerging as crucial regulators of integrin stability and expression in cells. Thus, integrin traffic is relevant in a number of pathological conditions, especially in cancer. Nearly a decade ago we wrote a Commentary in Journal of Cell Science entitled ‘Integrin traffic’. With the advances in the field, we felt it would be appropriate to provide the growing number of researchers interested in integrin traffic with an update. PMID:25663697
Broadcast control of air traffic
NASA Technical Reports Server (NTRS)
Litchford, G. B.
1972-01-01
The development of a system of broadcast control for improved flight safety and air traffic control is discussed. The system provides a balance of equality between improved cockpit guidance and control capability and ground control in order to provide the pilot with a greater degree of participation. The manner in which the system is operated and the equipment required for safe operation are examined.
Overview. Traffic Safety Facts, 2000.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on U.S. motor vehicle and traffic safety. Data include: (1) motor vehicle occupants and non-occupants killed and injured, 1990-2000; (2) persons killed and injured, and fatality and injury rates, 1990-2000; (3) restraint use rates for passenger car occupants in fatal crashes, 1990 and 2000; (4)…
Traffic Safety Facts, 2001. Overview.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on U.S. motor vehicle and traffic safety. Data include: (1) motor vehicle occupants and non-occupants killed and injured, 1991-2001; (2) persons killed and injured, and fatality and injury rates, 1991-2001; (3) restraint use rates for passenger car occupants in fatal crashes, 1991 and 2001; (4)…
Traffic Aware Strategic Aircrew Requests (TASAR)
NASA Technical Reports Server (NTRS)
Wing, David J.
2014-01-01
The Traffic Aware Strategic Aircrew Request (TASAR) concept offers onboard automation for the purpose of advising the pilot of traffic-compatible trajectory changes that would be beneficial to the flight. A fast-time simulation study was conducted to assess the benefits of TASAR to Alaska Airlines. The simulation compares historical trajectories without TASAR to trajectories developed with TASAR and evaluated by controllers against their objectives. It was estimated that between 8,000 and 12,000 gallons of fuel and 900 to 1,300 minutes could be saved annually per aircraft. These savings were applied fleet-wide to produce an estimated annual cost savings to Alaska Airlines in excess of $5 million due to fuel, maintenance, and depreciation cost savings. Switching to a more wind-optimal trajectory was found to be the use case that generated the highest benefits out of the three TASAR use cases analyzed. Alaska TASAR requests peaked at four to eight requests per hour in high-altitude Seattle center sectors south of Seattle-Tacoma airport.
77 FR 49859 - Proposed Traffic Records Program Assessment Advisory
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-17
... National Highway Traffic Safety Administration Proposed Traffic Records Program Assessment Advisory AGENCY: National Highway Traffic Safety Administration (NHTSA), Department of Transportation (DOT). ACTION: Notice. SUMMARY: This notice announces the publication of the Traffic Records Program Assessment Advisory, DOT...
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y.; Yun, S.
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. A hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated using MCNP5, and the k_eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k_eff, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Traffic fatalities and economic growth.
Kopits, Elizabeth; Cropper, Maureen
2005-01-01
This paper examines the relationship between traffic fatality risk and per capita income and uses it to forecast traffic fatalities by geographic region. Equations for the road death rate (fatalities/population) and its components--the rate of motorization (vehicles/population) and fatalities per vehicle (F/V)--are estimated using panel data from 1963 to 1999 for 88 countries. The natural logarithms of F/P, V/P, and F/V are expressed as spline (piecewise linear) functions of the logarithm of real per capita GDP (measured in 1985 international prices). Region-specific time trends during the period 1963-1999 are modeled in linear and log-linear form. These models are used to project traffic fatalities and the stock of motor vehicles to 2020. The per capita income at which traffic fatality risk (fatalities/population) begins to decline is 8600 US dollars (1985 international dollars) when separate time trends are used for each geographic region. This turning point is driven by the rate of decline in fatalities/vehicles as income rises, since vehicles/population, while increasing with income at a decreasing rate, never declines with economic growth. Projections of future traffic fatalities suggest that the global road death toll will grow by approximately 66% over the next twenty years. This number, however, reflects divergent rates of change in different parts of the world: a decline in fatalities in high-income countries of approximately 28% versus an increase in fatalities of almost 92% in China and 147% in India. The road death rate is projected to rise to approximately 2 per 10,000 persons in developing countries by 2020, while it will fall to less than 1 per 10,000 in high-income countries.
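A piecewise linear (one-knot) spline in log income, of the kind estimated above, can be fit by ordinary least squares using a hinge term. This is a synthetic illustration, not the paper's panel regression; the coefficients, knot, and data are made up, with the knot placed at the paper's 8600-dollar turning point for flavor.

```python
import numpy as np

def spline_basis(log_gdp, knot):
    """Design matrix for a one-knot linear spline in log income:
    intercept, log GDP, and the hinge term (log GDP - knot)_+ ."""
    hinge = np.maximum(log_gdp - knot, 0.0)
    return np.column_stack([np.ones_like(log_gdp), log_gdp, hinge])

# Synthetic example: log fatality risk rises, then declines past the knot
rng = np.random.default_rng(0)
log_gdp = np.linspace(6.0, 10.5, 200)
knot = np.log(8600.0)
true = 1.0 + 0.8 * log_gdp - 1.5 * np.maximum(log_gdp - knot, 0.0)
y = true + 0.05 * rng.standard_normal(log_gdp.size)

X = spline_basis(log_gdp, knot)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))
```

Beyond the knot the fitted slope is `beta[1] + beta[2]`; a negative sum is what a declining fatality risk at high income looks like in this parameterization.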
Predicting Information Flows in Network Traffic.
ERIC Educational Resources Information Center
Hinich, Melvin J.; Molyneux, Robert E.
2003-01-01
Discusses information flow in networks and predicting network traffic and describes a study that uses time series analysis on a day's worth of Internet log data. Examines nonlinearity and traffic invariants, and suggests that prediction of network traffic may not be possible with current techniques. (Author/LRW)
14 CFR 25 - Traffic and Capacity Elements
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Editorial Note: For Federal Register citations affecting part 241, section 25, see the List of CFR Sections... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Traffic and Capacity Elements Section 25... Traffic Reporting Requirements Section 25 Traffic and Capacity Elements General Instructions. (a)...
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of...
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of...
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of...
49 CFR 236.769 - Locking, traffic.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Locking, traffic. 236.769 Section 236.769... Locking, traffic. Electric locking which prevents the manipulation of levers or other devices for changing the direction of traffic on a section of track while that section is occupied or while a...
36 CFR 1004.13 - Obstructing traffic.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13 Obstructing traffic. The following are prohibited: (a) Stopping or parking a vehicle upon a Presidio...
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of...
49 CFR 236.769 - Locking, traffic.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Locking, traffic. 236.769 Section 236.769... Locking, traffic. Electric locking which prevents the manipulation of levers or other devices for changing the direction of traffic on a section of track while that section is occupied or while a...
49 CFR 236.381 - Traffic locking.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Traffic locking. 236.381 Section 236.381 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... and Tests § 236.381 Traffic locking. Traffic locking shall be tested when placed in service...
15 CFR 265.22 - Bicycle traffic.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Bicycle traffic. 265.22 Section 265.22... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE REGULATIONS GOVERNING TRAFFIC AND CONDUCT REGULATIONS GOVERNING TRAFFIC AND CONDUCT ON THE GROUNDS OF THE NATIONAL INSTITUTE OF STANDARDS &...
15 CFR 265.22 - Bicycle traffic.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 1 2011-01-01 2011-01-01 false Bicycle traffic. 265.22 Section 265.22... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE REGULATIONS GOVERNING TRAFFIC AND CONDUCT REGULATIONS GOVERNING TRAFFIC AND CONDUCT ON THE GROUNDS OF THE NATIONAL INSTITUTE OF STANDARDS &...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of...
49 CFR 236.381 - Traffic locking.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Traffic locking. 236.381 Section 236.381 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... and Tests § 236.381 Traffic locking. Traffic locking shall be tested when placed in service...
36 CFR 1004.13 - Obstructing traffic.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13 Obstructing traffic. The following are prohibited: (a) Stopping or parking a vehicle upon a Presidio...
A macro traffic flow model accounting for real-time traffic state
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Chen, Liang; Wu, Yong-Hong; Caccetta, Lou
2015-11-01
In this paper, we propose a traffic flow model to study the effects of the real-time traffic state on traffic flow. The numerical results show that the proposed model can describe traffic oscillations and stop-and-go traffic, and that the resulting speed-density relationship is in qualitative agreement with empirical data from the Weizikeng segment of the Badaling freeway in Beijing. This indicates that the proposed model can qualitatively reproduce some complex traffic phenomena associated with the real-time traffic state.
Traffic Network Aided Plan and Road Line Optimization in Intelligent Traffic System
NASA Astrophysics Data System (ADS)
Liu, Bin; Qin, Guofeng
2008-11-01
In an ITS (intelligent traffic system), traffic network planning is important. The public traffic network is a basic part of contemporary intelligent traffic and a foundation of municipal infrastructure construction. To support computer-aided planning of the public traffic network, two problems are studied: how to plan traffic road lines so that they cover the traffic districts, and how to choose the best route from a start point to an end point. For the first, a traffic road line aided-planning algorithm is put forward; for the second, a road line optimization algorithm is designed. The latter uses topology theory to analyze the spatial character of the public traffic network and selects the best route to meet the user's requirements. Both algorithms are implemented and validated by a case study in the graphical interface of a GIS (Geographic Information System), including a simulation of the rationalization of the public traffic network.
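The best-route problem described in this abstract is commonly solved with a shortest-path search over the stop network. The following is a minimal sketch using Dijkstra's algorithm on a hypothetical stop graph; the paper's own topology-based method is not specified in enough detail to reproduce.

```python
import heapq

def dijkstra(graph, start, goal):
    """Best-route choice on a weighted public-transport graph.
    graph: {node: [(neighbor, travel_time), ...]}"""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the route by walking predecessors back to the start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical stop network with travel times in minutes
net = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)], "C": [("B", 1), ("D", 8)], "D": []}
route, minutes = dijkstra(net, "A", "D")
```

Here the optimizer prefers the transfer A→C→B→D (8 minutes) over the direct-looking A→B→D (9 minutes), the kind of trade-off a route-choice module must resolve.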
Evolution of Traffic Jam in Traffic Flow Model
NASA Astrophysics Data System (ADS)
Fukui, Minoru; Ishibashi, Yoshihiro
1993-11-01
Traffic flow is simulated in a three-state cellular automaton model. In a two-dimensional cell without a crashed car, the ensemble average of the velocity of the cars is enhanced by the self-organization in the low-density phase of cars. In the high-density phase above p{=}0.5 of car density, the velocity is decreased and the system then degenerates into a global jamming phase in which all cars are stopped. A crashed car provides the seed of a jamming cluster, which grows into a global traffic jam even in the low-density phase. The growth of the jamming cluster is studied, and the time dependence of the number of jamming cars and the scaling law for the cell sizes are discussed.
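As a simplified, one-dimensional illustration of the jam-seeding mechanism (not the authors' two-dimensional three-state model), a deterministic rule-184-style automaton with one permanently stopped "crashed" car shows a jam cluster absorbing the entire low-density flow on a ring:

```python
import numpy as np

def step(road, crashed):
    """One parallel update: a car advances one cell iff the cell ahead is
    empty; the car occupying cell `crashed` never moves (crashed car)."""
    n = road.size
    new = np.zeros_like(road)
    for i in range(n):
        if road[i]:
            ahead = (i + 1) % n
            if i != crashed and road[ahead] == 0:
                new[ahead] = 1            # car advances
            else:
                new[i] = 1                # blocked (or crashed): car stays
    return new

rng = np.random.default_rng(1)
n, density = 200, 0.3                     # low-density phase: free flow if no crash
road = (rng.random(n) < density).astype(int)
crashed = int(np.flatnonzero(road)[0])    # this car never moves

stopped = []                              # cars occupying the same cell twice in a row
for _ in range(600):
    nxt = step(road, crashed)
    stopped.append(int(np.sum(road & nxt)))
    road = nxt
```

Because the automaton is deterministic and the crashed car never releases its cell, the fixed point is a single queue behind the crash: eventually every car is stopped, mirroring the global jam growth discussed in the abstract.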
Simulation Study of Traffic Accidents in Bidirectional Traffic Models
NASA Astrophysics Data System (ADS)
Moussa, Najem
Conditions for the occurrence of bidirectional collisions are developed based on the Simon-Gutowitz bidirectional traffic model. Three types of dangerous situations can occur in this model. We analyze those corresponding to head-on collision; rear-end collision and lane-changing collision. Using Monte Carlo simulations, we compute the probability of the occurrence of these collisions for different values of the oncoming cars' density. It is found that the risk of collisions is important when the density of cars in one lane is small and that of the other lane is high enough. The influence of different proportions of heavy vehicles is also studied. We found that heavy vehicles cause an important reduction of traffic flow on the home lane and provoke an increase of the risk of car accidents.
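The oncoming-density dependence of collision risk can be illustrated with a toy Monte Carlo estimate: the probability that at least one oncoming car occupies a fixed window of cells on the opposing lane. The geometry below is an assumption for illustration, not the Simon-Gutowitz rules, and it has a closed form to check the simulation against.

```python
import numpy as np

def encounter_prob_mc(rho_opp, window, trials=200_000, seed=3):
    """Monte Carlo estimate of the chance that at least one oncoming car
    occupies the next `window` cells of the opposing lane, a toy proxy
    for a head-on-risk situation in a bidirectional lattice model."""
    rng = np.random.default_rng(seed)
    occupied = rng.random((trials, window)) < rho_opp   # independent cells
    return occupied.any(axis=1).mean()

p_mc = encounter_prob_mc(rho_opp=0.1, window=5)
p_exact = 1 - (1 - 0.1) ** 5          # analytic check: 1 - (1 - rho)^window
```

The estimate rises steeply with the oncoming density, consistent with the abstract's finding that risk is greatest when the opposing lane is dense.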
Traffic flow theory and traffic flow simulation models. Transportation research record
1996-12-31
Contents: Comparison of Simulation Modules of TRANSYT and INTEGRATION Models; Evaluation of SCATSIM-RTA Adaptive Traffic Network Simulation Model; Comparison of NETSIM, NETFLO I, and NETFLO II Traffic Simulation Models for Fixed-Time Signal Control; Traffic Flow Simulation Through Parallel Processing; Cluster Analysis as Tool in Traffic Engineering; Traffic Platoon Dispersion Modeling on Arterial Streets; Hybrid Model for Estimating Permitted Left-Turn Saturation Flow Rate; and Passing Sight Distance and Overtaking Dilemma on Two-Lane Roads.
M Dzhambov, Angel; D Dimitrova, Donka; H Turnovska, Tanya
2014-09-01
Noise pollution is one of the four major pollutions in the world. In order to implement adequate strategies for noise control, assessment of traffic-generated noise is essential in city planning and management. The aim of this study was to determine whether space syntax could improve the predictive power of noise simulation. This paper reports a record linkage study which combined a documentary method with space syntax analysis. It analyses data about traffic flow as well as field-measured and computer-simulated traffic noise in two Bulgarian agglomerations. Our findings suggest that space syntax may have potential for predicting traffic noise exposure by improving models for noise simulation that use specialised software or actual traffic counts. Further scientific attention should be directed towards space syntax in order to study its application in current models and algorithms for noise prediction. PMID:25222575
[Reduction of automobile traffic: urgent health promotion policy].
Tapia Granados, J A
1998-03-01
During the last few decades, traffic injuries have become one of the leading causes of death and disability in the world. In urban areas, traffic congestion, noise, and emissions from motor vehicles produce subjective disturbances and detectable pathological effects. More than one billion people are exposed to harmful levels of environmental pollution. Because its combustion engine generates carbon dioxide (CO2), the automobile is one of the chief sources of the gases that are causing the greenhouse effect. The latter has already caused a rise in the average ambient temperature, and over the next decades it will predictably cause significant climatic changes whose consequences, though uncertain, are likely to be harmful and possibly catastrophic. Aside from the greenhouse effect, the relentless growth of parking zones, traffic, and the roadway infrastructure in urban and rural areas is currently one of the leading causes of environmental degradation. Urban development, which is nearly always "planned" around traffic instead of people, leads to a significant deterioration in the quality of life, while it also destroys the social fabric. Unlike the private automobile, public transportation, bicycles, and walking help reduce pollution, congestion, and traffic volume, as well as the morbidity and mortality resulting from injuries and ailments related to pollution. Non-automobile transportation also encourages physical activity--with its positive effect on general health--and helps reduce the greenhouse effect. The drop in traffic volume and the increased use of alternate means of transportation thus constitute an integrated health promotion policy which should become an inherent part of the movement for the promotion of healthy cities and of transportation policies and economic policy in general. PMID:9567647
Lipo, Carl P; Madsen, Mark E; Dunnell, Robert C
2015-01-01
Frequency seriation played a key role in the formation of archaeology as a discipline due to its ability to generate chronologies. Interest in its utility for exploring issues of contemporary interest beyond chronology, however, has been limited. This limitation is partly due to a lack of quantitative algorithms that can be used to build deterministic seriation solutions. When the number of assemblages becomes greater than just a handful, the resources required for evaluation of possible permutations easily outstrip available computing capacity. On the other hand, probabilistic approaches to creating seriations offer a computationally manageable alternative but rely upon a compressed description of the data to order assemblages. This compression removes the ability to use all of the features of our data to fit to the seriation model, obscuring violations of the model, and thus lessens our ability to understand the degree to which the resulting order is chronological, spatial, or a mixture. Recently, frequency seriation has been reconceived as a general method for studying the structure of cultural transmission through time and across space. The use of an evolution-based framework renews the potential for seriation but also calls for a computationally feasible algorithm that is capable of producing solutions under varying configurations, without manual trial and error fitting. Here, we introduce the Iterative Deterministic Seriation Solution (IDSS) for constructing frequency seriations, an algorithm that dramatically constrains the search for potential valid orders of assemblages. Our initial implementation of IDSS does not solve all the problems of seriation, but begins to move towards a resolution of a long-standing problem in archaeology while opening up new avenues of research into the study of cultural relatedness. We demonstrate the utility of IDSS using late prehistoric decorated ceramics from the Mississippi River Valley. The results compare favorably to
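The combinatorial core of deterministic frequency seriation can be shown with a deliberately tiny brute-force version (far simpler than IDSS, and with hypothetical frequency data): enumerate every ordering of assemblages and keep those in which each type's frequency column is unimodal, the classic "battleship" criterion.

```python
from itertools import permutations

def unimodal(seq):
    """True if seq rises (weakly) to one peak and then falls (weakly)."""
    peak = seq.index(max(seq))
    return all(seq[i] <= seq[i + 1] for i in range(peak)) and \
           all(seq[i] >= seq[i + 1] for i in range(peak, len(seq) - 1))

def valid_seriations(freqs):
    """Brute-force search: orderings of assemblages in which every
    type's frequency column is unimodal ('battleship-shaped')."""
    names = list(freqs)
    n_types = len(next(iter(freqs.values())))
    out = []
    for order in permutations(names):
        cols = [[freqs[a][t] for a in order] for t in range(n_types)]
        if all(unimodal(c) for c in cols):
            out.append(order)
    return out

# Hypothetical type frequencies (rows: assemblages, cols: ceramic types)
data = {"A1": [0.7, 0.3, 0.0],
        "A2": [0.4, 0.5, 0.1],
        "A3": [0.1, 0.4, 0.5]}
orders = valid_seriations(data)
```

Only the order A1-A2-A3 and its reversal survive, illustrating why seriation yields an order but not its direction, and why the factorial search space motivates algorithms like IDSS.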
Capillary-mediated interface perturbations: Deterministic pattern formation
NASA Astrophysics Data System (ADS)
Glicksman, Martin E.
2016-09-01
Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.
Deterministic Impulsive Vacuum Foundations for Quantum-Mechanical Wavefunctions
NASA Astrophysics Data System (ADS)
Valentine, John S.
2013-09-01
By assuming that a fermion de-constitutes immediately at source, that its constituents, as bosons, propagate uniformly as scalar vacuum terms with phase (radial) symmetry, and that fermions are unique solutions for specific phase conditions, we find a model that self-quantizes matter from continuous waves, unifying boson and fermion ontologies in a single basis, in a constitution-invariant process. Vacuum energy has a wavefunction context, as a mass-energy term that enables wave collapse and increases its amplitude, with gravitational field as the gradient of the flux density. Gravitational and charge-based force effects emerge as statistics without special treatment. Confinement, entanglement, vacuum statistics, forces, and wavefunction terms emerge from the model's deterministic foundations.
Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations
Leininger, L D
2004-10-26
This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high fidelity finite element and discrete element codes on the massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability of damage curves are computed and represented that account for uncertainty within the sample and enable the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the highest prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
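A minimal sketch of peak-based embedding in the spirit of this abstract: quantization index modulation (QIM) on the largest FFT magnitude peak of each frame. The frame size, quantization step, and the QIM rule are assumptions for illustration, not the paper's SMS-based scheme.

```python
import numpy as np

def embed_bits(signal, bits, frame=256, delta=0.8):
    """Quantize the dominant non-DC FFT peak magnitude of each frame to an
    even (bit 0) or odd (bit 1) multiple of delta (a simple QIM rule)."""
    out = signal.astype(float).copy()
    for k, bit in enumerate(bits):
        spec = np.fft.rfft(out[k * frame:(k + 1) * frame])
        i = int(np.argmax(np.abs(spec[1:])) + 1)        # dominant non-DC peak
        mag, ph = np.abs(spec[i]), np.angle(spec[i])
        m = int(max(np.round((mag / delta - bit) / 2), 0))
        spec[i] = (2 * m + bit) * delta * np.exp(1j * ph)
        out[k * frame:(k + 1) * frame] = np.fft.irfft(spec, n=frame)
    return out

def extract_bits(signal, nbits, frame=256, delta=0.8):
    """Recover each bit from the parity of the quantized peak magnitude."""
    bits = []
    for k in range(nbits):
        spec = np.fft.rfft(signal[k * frame:(k + 1) * frame])
        i = int(np.argmax(np.abs(spec[1:])) + 1)
        bits.append(int(round(np.abs(spec[i]) / delta)) % 2)
    return bits

t = np.arange(1024)
sig = np.sin(2 * np.pi * 8 * t / 256)   # dominant peak at bin 8 of each frame
wm = embed_bits(sig, [1, 0, 1, 1])
```

Because only the magnitude of an already-prominent peak is nudged by at most delta, the perturbation stays small relative to the peak, which is the intuition behind embedding at prominent spectral peaks.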
Location deterministic biosensing from quantum-dot-nanowire assemblies.
Liu, Chao; Kim, Kwanoh; Fan, D L
2014-08-25
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to predictable positioning. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices. PMID:25316926
Deterministic secure communications using two-mode squeezed states
Marino, Alberto M.; Stroud, C. R. Jr.
2006-08-15
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state.
Fast deterministic ptychographic imaging using X-rays.
Yan, Ada W C; D'Alfonso, Adrian J; Morgan, Andrew J; Putkunz, Corey T; Allen, Leslie J
2014-08-01
We present a deterministic approach to the ptychographic retrieval of the wave at the exit surface of a specimen of condensed matter illuminated by X-rays. The method is based on the solution of an overdetermined set of linear equations, and is robust to measurement noise. The set of linear equations is efficiently solved using the conjugate gradient least-squares method implemented using fast Fourier transforms. The method is demonstrated using a data set obtained from a gold-chromium nanostructured test object. It is shown that the transmission function retrieved by this linear method is quantitatively comparable with established methods of ptychography, with a large decrease in computational time, and is thus a good candidate for real-time reconstruction.
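The conjugate gradient least-squares (CGLS) iteration mentioned in this abstract has a compact generic form. The sketch below solves a random dense overdetermined system rather than a ptychographic one, and omits the FFT acceleration the authors use; it is an illustration of the solver, not their implementation.

```python
import numpy as np

def cgls(A, b, iters=60, tol=1e-24):
    """CGLS: minimizes ||Ax - b||^2, equivalent to running CG on the
    normal equations A^T A x = A^T b without forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = b.copy()                  # residual b - A x (x starts at zero)
    s = A.T @ r                   # gradient direction
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        if gamma < tol:           # gradient vanished: converged
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(120, 30))                 # overdetermined set of equations
x_true = rng.normal(size=30)
b = A @ x_true + 1e-8 * rng.normal(size=120)   # small measurement noise
x = cgls(A, b)
```

Only matrix-vector products with A and A^T are needed, which is exactly why FFT-based operators make the method fast in the ptychographic setting.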
Reinforcement learning output feedback NN control using deterministic learning technique.
Xu, Bin; Yang, Chenguang; Shi, Zhongke
2014-03-01
In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs, whereas the critic NN is used to approximate the strategic utility function, and the action NN is employed to minimize both the strategic utility function and the tracking error. A deterministic learning technique has been employed to guarantee that the partial persistent excitation condition of internal states is satisfied during tracking control to a periodic reference orbit. The uniformly ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control. PMID:24807456
YALINA analytical benchmark analyses using the deterministic ERANOS code system.
Gohar, Y.; Aliberti, G.; Nuclear Engineering Division
2009-08-31
The growing stockpile of nuclear waste constitutes a severe challenge for mankind for more than one hundred thousand years. To reduce the radiotoxicity of the nuclear waste, the Accelerator Driven System (ADS) has been proposed. One of the most important issues of ADS technology is the choice of the appropriate neutron spectrum for the transmutation of Minor Actinides (MA) and Long Lived Fission Products (LLFP). This report presents the analytical analyses obtained with the deterministic ERANOS code system for the YALINA facility within: (a) the collaboration between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research (JIPNR) Sosny of Belarus; and (b) the IAEA coordinated research projects for accelerator driven systems (ADS). This activity is conducted as a part of the Russian Research Reactor Fuel Return (RRRFR) Program and the Global Threat Reduction Initiative (GTRI) of DOE/NNSA.
Validation of a Deterministic Vibroacoustic Response Prediction Model
NASA Technical Reports Server (NTRS)
Caimi, Raoul E.; Margasahayam, Ravi
1997-01-01
This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Statistical Energy Analysis (SEA) methods are not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.
Deterministic spin-wave interferometer based on the Rydberg blockade
Wei Ran; Deng Youjin; Pan Jianwei; Zhao Bo; Chen Yuao
2011-06-15
The spin-wave (SW) N-particle path-entangled |N,0>+|0,N> (NOON) state is an N-particle Fock state with two atomic spin-wave modes maximally entangled. Attributed to the property that the phase is sensitive to collective atomic motion, the SW NOON state can be utilized as an atomic interferometer and has promising application in quantum enhanced measurement. In this paper we propose an efficient protocol to deterministically produce the atomic SW NOON state by employing the Rydberg blockade. Possible errors in practical manipulations are analyzed. A feasible experimental scheme is suggested. Our scheme is far more efficient than the recent experimentally demonstrated one, which only creates a heralded second-order SW NOON state.
Scattering of electromagnetic light waves from a deterministic anisotropic medium
NASA Astrophysics Data System (ADS)
Li, Jia; Chang, Liping; Wu, Pinghui
2015-11-01
Based on the weak-scattering theory of electromagnetic waves, analytical expressions are derived for the spectral density and degree of polarization of an electromagnetic plane wave scattered from a deterministic anisotropic medium. It is shown that the normalized spectral density of the scattered field depends strongly on the scattering angle and on the degree of polarization of the incident plane wave, and the degree of polarization of the scattered field is likewise subject to variations of these parameters. In addition, the anisotropic effective radii of the dielectric susceptibility exert an essential influence on both the spectral density and the degree of polarization of the scattered field. The obtained results may be applicable to determining the anisotropic parameters of a medium by quantitatively measuring the statistics of the far-zone scattered field.
A deterministic global approach for mixed-discrete structural optimization
NASA Astrophysics Data System (ADS)
Lin, Ming-Hua; Tsai, Jung-Fa
2014-07-01
This study proposes a novel approach for finding the exact global optimum of a mixed-discrete structural optimization problem. Although many approaches have been developed to solve the mixed-discrete structural optimization problem, they cannot guarantee finding a global solution or they adopt too many extra binary variables and constraints in reformulating the problem. The proposed deterministic method uses convexification strategies and linearization techniques to convert a structural optimization problem into a convex mixed-integer nonlinear programming problem solvable to obtain a global optimum. To enhance the computational efficiency in treating complicated problems, the range reduction technique is also applied to tighten variable bounds. Several numerical experiments drawn from practical structural design problems are presented to demonstrate the effectiveness of the proposed method.
A Deterministic Computational Procedure for Space Environment Electron Transport
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.
2010-01-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.
Deterministic processes vary during community assembly for ecologically dissimilar taxa
Powell, Jeff R.; Karunaratne, Senani; Campbell, Colin D.; Yao, Huaiying; Robinson, Lucinda; Singh, Brajesh K.
2015-01-01
The continuum hypothesis states that both deterministic and stochastic processes contribute to the assembly of ecological communities. However, the contextual dependency of these processes remains an open question that imposes strong limitations on predictions of community responses to environmental change. Here we measure community and habitat turnover across multiple vertical soil horizons at 183 sites across Scotland for bacteria and fungi, both dominant and functionally vital components of all soils but which differ substantially in their growth habit and dispersal capability. We find that habitat turnover is the primary driver of bacterial community turnover in general, although its importance decreases with increasing isolation and disturbance. Fungal communities, however, exhibit a highly stochastic assembly process, both neutral and non-neutral in nature, largely independent of disturbance. These findings suggest that increased focus on dispersal limitation and biotic interactions are necessary to manage and conserve the key ecosystem services provided by these assemblages. PMID:26436640
Deterministic simulation of thermal neutron radiography and tomography
NASA Astrophysics Data System (ADS)
Pal Chowdhury, Rajarshi; Liu, Xin
2016-05-01
In recent years, thermal neutron radiography and tomography have gained much attention as nondestructive testing methods. However, their application is hindered by technical complexity, radiation shielding, and time-consuming data collection processes. Monte Carlo simulations have been developed in the past to improve neutron imaging facilities' capabilities. In this paper, a new deterministic simulation approach is proposed and demonstrated that simulates neutron radiographs numerically using a ray tracing algorithm. This approach makes the simulation of neutron radiographs much faster than the previously used stochastic methods (i.e., Monte Carlo methods). The major problem in simulating neutron radiography and tomography is finding a suitable scatter model. In this paper, an analytic scatter model is proposed and validated by a Monte Carlo simulation.
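The deterministic core of such a ray-tracing simulation is Beer-Lambert attenuation of the uncollided beam along straight rays; a scatter model is then added on top. A minimal sketch under an assumed slab geometry with made-up attenuation coefficients (not the paper's code):

```python
import numpy as np

def simulate_radiograph(mu, dx, i0=1.0):
    """Deterministic radiograph of a voxelized object (illustrative sketch).

    mu : 2D array of attenuation coefficients (1/cm), beam along axis 0.
    dx : path length through each voxel (cm).

    Each detector pixel j receives the uncollided beam attenuated by
    Beer-Lambert along the straight ray through column j:
        I_j = I0 * exp(-sum_i mu[i, j] * dx)
    """
    optical_depth = mu.sum(axis=0) * dx
    return i0 * np.exp(-optical_depth)

# A 3-voxel-thick object with a strongly absorbing inclusion in the
# middle column: that column shows up dark on the radiograph.
mu = np.array([[0.1, 0.1, 0.1],
               [0.1, 2.0, 0.1],
               [0.1, 0.1, 0.1]])
image = simulate_radiograph(mu, dx=1.0)
```

Tracing every detector pixel this way is one deterministic pass, which is why the approach is so much faster than sampling individual neutron histories.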
Automated mixed traffic transit vehicle microprocessor controller
NASA Technical Reports Server (NTRS)
Marks, R. A.; Cassell, P.; Johnston, A. R.
1981-01-01
An improved Automated Mixed Traffic Vehicle (AMTV) speed control system employing a microprocessor and transistor chopper motor current controller is described and its performance is presented in terms of velocity versus time curves. The on-board computer hardware and software systems are described, as is the software development system. All of the programming used in this controller was implemented using FORTRAN. This microprocessor controller made possible a number of safety features and improved the comfort associated with starting and stopping. In addition, most of the vehicle's performance characteristics can be altered by simple program parameter changes. A failure analysis of the microprocessor controller was generated and the results are included. Flow diagrams for the speed control algorithms and complete FORTRAN code listings are also included.
Deterministic Earthquake Hazard Assessment by Public Agencies in California
NASA Astrophysics Data System (ADS)
Mualchin, L.
2005-12-01
Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results, such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation's (Caltrans') fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed study of the same faults applies. Originally, hospitals, dams, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and quantifying "uncertainties" by procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach. Some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and the use of the method is now becoming a focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. The reason for preferring the DSH method is that Caltrans believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for ensuring public safety. Its time-invariant values help to produce robust design criteria that are soundly based on physical evidence. And it is the method for which there is the least opportunity for unwelcome surprises.
Traffic Aware Planner (TAP) Flight Evaluation
NASA Technical Reports Server (NTRS)
Maris, John M.; Haynes, Mark A.; Wing, David J.; Burke, Kelly A.; Henderson, Jeff; Woods, Sharon E.
2014-01-01
NASA's Traffic Aware Planner (TAP) is a cockpit decision support tool that has the potential to achieve significant fuel and time savings when it is embedded in the data-rich Next Generation Air Transportation System (NextGen) airspace. To address a key step towards the operational deployment of TAP and the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR), a system evaluation was conducted in a representative flight environment in November 2013. Numerous challenges were overcome to achieve this goal, including the porting of the foundational Autonomous Operations Planner (AOP) software from its original simulation-based, avionics-embedded environment to an Electronic Flight Bag (EFB) platform. A flight-test aircraft was modified to host the EFB, the TAP application, an Automatic Dependent Surveillance Broadcast (ADS-B) processor, and a satellite broadband datalink. Nine evaluation pilots conducted 26 hours of TAP assessments using four route profiles in the complex eastern and north-eastern United States airspace. Extensive avionics and video data were collected, supplemented by comprehensive inflight and post-flight questionnaires. TAP was verified to function properly in the live avionics and ADS-B environment, characterized by recorded data dropouts, latency, and ADS-B message fluctuations. Twelve TAP-generated optimization requests were submitted to ATC, of which nine were approved, and all of which resulted in fuel and/or time savings. Analysis of subjective workload data indicated that pilot interaction with TAP during flight operations did not induce additional cognitive loading. Additionally, analyses of post-flight questionnaire data showed that the pilots perceived TAP to be useful, understandable, intuitive, and easy to use. All program objectives were met, and the next phase of TAP development and evaluations with partner airlines is in planning for 2015.
Cellular automata model for urban road traffic flow considering pedestrian crossing street
NASA Astrophysics Data System (ADS)
Zhao, Han-Tao; Yang, Shuo; Chen, Xiao-Xu
2016-11-01
In order to analyze the effect of pedestrians crossing the street on vehicle flows, we investigated the traffic characteristics of vehicles and pedestrians. Based on that, rules for lane changing, acceleration, deceleration, randomization, and update are modified. We then established two urban two-lane cellular automata models of traffic flow: one for sections with a non-signalized crosswalk, the other for uncontrolled sections where pedestrians cross the street at random. MATLAB is used for numerical simulation of the different traffic conditions; space-time diagrams and relational graphs of traffic flow parameters are generated and then comparatively analyzed. Simulation results indicate that when vehicle density is lower than around 25 vehs/(km lane), pedestrians have a modest impact on traffic flow, whereas when vehicle density is higher than about 60 vehs/(km lane), traffic speed and volume decrease significantly, especially on sections with a non-signal-controlled crosswalk. The results illustrate that the proposed models reproduce the characteristics of traffic flow in situations where pedestrians cross the street and can provide some practical reference for urban traffic management.
Air traffic management evaluation tool
NASA Technical Reports Server (NTRS)
Sridhar, Banavar (Inventor); Sheth, Kapil S. (Inventor); Chatterji, Gano Broto (Inventor); Bilimoria, Karl D. (Inventor); Grabbe, Shon (Inventor); Schipper, John F. (Inventor)
2010-01-01
Method and system for evaluating and implementing air traffic management tools and approaches for managing and avoiding an air traffic incident before the incident occurs. The invention provides flight plan routing and direct routing or wind optimal routing, using great circle navigation and spherical Earth geometry. The invention provides for aircraft dynamics effects, such as wind effects at each altitude, altitude changes, airspeed changes and aircraft turns to provide predictions of aircraft trajectory (and, optionally, aircraft fuel use). A second system provides several aviation applications using the first system. These applications include conflict detection and resolution, miles-in-trail or minutes-in-trail aircraft separation, flight arrival management, flight re-routing, weather prediction and analysis and interpolation of weather variables based upon sparse measurements.
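The great-circle navigation on a spherical Earth mentioned above reduces to the standard haversine formula; a small sketch with illustrative airport coordinates (the tool's actual routines are not shown here):

```python
import math

def great_circle_nm(lat1, lon1, lat2, lon2, radius_nm=3440.065):
    """Great-circle distance on a spherical Earth (haversine form).

    Inputs are in degrees; the radius defaults to the mean Earth radius
    in nautical miles. This illustrates the spherical geometry the
    abstract refers to, not the patent's implementation.
    """
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_nm * math.asin(math.sqrt(a))

# New York (JFK) to Los Angeles (LAX): roughly 2,100-2,200 nm along
# the great circle.
d = great_circle_nm(40.64, -73.78, 33.94, -118.41)
```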
Crowding Effects in Vehicular Traffic
Combinido, Jay Samuel L.; Lim, May T.
2012-01-01
While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems. PMID:23139762
NASA Astrophysics Data System (ADS)
Nicolay, S.; Brodie of Brodie, E. B.; Touchon, M.; d'Aubenton-Carafa, Y.; Thermes, C.; Arneodo, A.
2004-10-01
We use the continuous wavelet transform to perform a space-scale analysis of the AT and GC skews (strand asymmetries) in human genomic sequences, which have been shown to correlate with gene transcription. This study reveals the existence of a characteristic scale ℓc ≃ 25 ± 10 kb that separates a monofractal long-range correlated noisy regime at small scales (ℓ < ℓc) from relaxational oscillatory behavior at large scales (ℓ > ℓc). We show that these large-scale nonlinear oscillations reveal an organization of the human genome into adjacent domains (≈400 kb) with preferential gene orientation. Using classical techniques from dynamical systems theory, we demonstrate that these relaxational oscillations display all the characteristic properties of the chaotic strange attractor behavior observed near homoclinic orbits of Shil'nikov type. We discuss the possibility that replication and gene regulation processes are governed by a low-dimensional dynamical system that displays deterministic chaos.
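The AT and GC skews that feed the wavelet analysis are simple base-count asymmetries computed in windows along the sequence; a minimal single-window sketch (toy sequence, not genomic data):

```python
def strand_skews(seq):
    """AT and GC skews of a DNA string: (A-T)/(A+T) and (G-C)/(G+C).

    The study computes these in windows along the genome before the
    wavelet analysis; a single window is shown here for illustration.
    """
    counts = {base: seq.upper().count(base) for base in "ATGC"}
    at = (counts["A"] - counts["T"]) / (counts["A"] + counts["T"])
    gc = (counts["G"] - counts["C"]) / (counts["G"] + counts["C"])
    return at, gc

# Toy window: 2 A, 1 T, 4 G, 1 C.
at, gc = strand_skews("AATGCGGG")
```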
Transition characteristic analysis of traffic evolution process for urban traffic network.
Wang, Longfei; Chen, Hong; Li, Yang
2014-01-01
The characterization of the dynamics of traffic states remains fundamental to solving diverse traffic problems. To gain more insight into traffic dynamics in the temporal domain, this paper explores the temporal characteristics and distinct regularities of the traffic evolution process in an urban traffic network. We define traffic state patterns by clustering multidimensional traffic time series using self-organizing maps and construct a pattern transition network model that is appropriate for representing and analyzing the evolution process. The methodology is illustrated by an application to flow-rate data for multiple road sections from the network of Shenzhen's Nanshan District, China. Analysis and numerical results demonstrate that the methodology permits extracting many useful traffic transition characteristics, including stability, preference, activity, and attractiveness. In addition, more information about the relationships between these characteristics is extracted, which should be helpful in understanding the complex behavior of the temporal evolution of traffic patterns.
Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto
2005-08-01
Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule of going to the nearest point which has not been visited in the preceding μ steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S^{(N)}_{μ,d}(t,p) is the relevant quantity, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S^{(∞)}_{1,d}(t,p) = [Γ(1+I_d^{-1})(t+I_d^{-1})/Γ(t+p+I_d^{-1})] δ_{p,2}, where t = 0, 1, 2, …, Γ(z) is the gamma function and δ_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d → ∞, and the random map model which, even for μ = 0, presents a nontrivial cycle distribution [S^{(N)}_{0,rm}(p) ∝ p^{-1}]: S^{(N)}_{0,rm}(t,p) = Γ(N)/{Γ[N+1-(t+p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t+p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly, and they have been validated by numerical experiments.
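The walk itself is easy to reproduce numerically: the sketch below implements the nearest-point-not-visited-in-the-last-μ-steps rule and reads off the transient t and period p from the first repeated dynamical state (illustrative point set and memory convention, not the paper's experiments):

```python
import numpy as np

def tourist_walk(points, start=0, mu=1, max_steps=10_000):
    """Deterministic tourist walk over a point set (illustrative sketch).

    At each step the walker moves to the nearest point not among the
    last mu visited points. The full dynamical state is the current
    position plus that memory, so the first repeated state marks entry
    into the periodic attractor: returns (t, p) = (transient, period).
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is never its own neighbour
    path, first_seen = [start], {}
    for time in range(max_steps):
        state = tuple(path[-(mu + 1):])
        if state in first_seen:
            t = first_seen[state]
            return t, time - t
        first_seen[state] = time
        dists = d[path[-1]].copy()
        if mu > 0:
            dists[path[-mu:]] = np.inf  # forbid the recent memory
        path.append(int(np.argmin(dists)))
    raise RuntimeError("no attractor found within max_steps")

# Four points on a line: 0 and 1 are mutual nearest neighbours, so a
# walk with mu = 1 falls into the predicted period-2 attractor (p = 2).
pts = np.array([[0.0], [1.0], [3.0], [6.0]])
t, p = tourist_walk(pts, start=3, mu=1)
```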
Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System
ERIC Educational Resources Information Center
Maiti, Alakes; Samanta, G. P.
2005-01-01
This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
Stochastic Generator of Chemical Structure. 3. Reaction Network Generation
FAULON,JEAN-LOUP; SAULT,ALLEN G.
2000-07-15
A new method to generate chemical reaction networks is proposed. A distinctive feature of the method is that network generation and mechanism reduction are performed simultaneously using sampling techniques. The method is tested on hydrocarbon thermal cracking. Results and theoretical arguments demonstrate that the method scales in polynomial time, while other deterministic network generators scale in exponential time. This finding offers the possibility of investigating complex reacting systems such as those studied in petroleum refining and combustion.
Understanding traffic dynamics at a backbone POP
NASA Astrophysics Data System (ADS)
Taft, Nina; Bhattacharyya, Supratik; Jetcheva, Jorjeta; Diot, Christophe
2001-07-01
Spatial and temporal information about traffic dynamics is central to the design of effective traffic engineering practices for IP backbones. In this paper we study backbone traffic dynamics using data collected at a major POP on a tier-1 IP backbone. We develop a methodology that combines packet-level traces from access links in the POP and BGP routing information to build components of POP-to-POP traffic matrices. Our results show that there is wide disparity in the volume of traffic headed towards different egress POPs. At the same time, we find that current routing practices in the backbone tend to constrain traffic between ingress-egress POP pairs to a small number of paths. As a result, there is a wide variation in the utilization level of links in the backbone. Frequent capacity upgrades of the heavily used links are expensive; the need for such upgrades can be reduced by designing load balancing policies that will route more traffic over less utilized links. We identify traffic aggregates based on destination address prefixes and find that this set of criteria isolates a few aggregates that account for an overwhelmingly large portion of inter-POP traffic. We also demonstrate that these aggregates exhibit stability throughout the day on per-hour time scales, and thus they form a natural basis for splitting traffic over multiple paths in order to improve load balancing.
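The aggregate-identification step can be illustrated with a toy version: group traffic volume by destination prefix and keep the smallest set of prefixes covering most of the bytes (made-up flow records, not the POP traces):

```python
from collections import Counter

def top_aggregates(flows, frac=0.9):
    """Return the smallest set of destination prefixes accounting for at
    least `frac` of total bytes.

    `flows` is an iterable of (dest_prefix, bytes) pairs -- a stand-in
    for the packet traces analyzed in the paper.
    """
    volume = Counter()
    for prefix, nbytes in flows:
        volume[prefix] += nbytes
    total = sum(volume.values())
    picked, acc = [], 0
    for prefix, nbytes in volume.most_common():  # heaviest first
        picked.append(prefix)
        acc += nbytes
        if acc >= frac * total:
            break
    return picked

# Two prefixes carry 95% of the bytes, mirroring the paper's finding
# that a few aggregates dominate inter-POP traffic.
flows = [("10.0.0.0/8", 700), ("172.16.0.0/12", 250),
         ("192.168.0.0/16", 30), ("203.0.113.0/24", 20)]
heavy = top_aggregates(flows)
```

If such heavy aggregates are stable over hours, as the paper reports, they are natural units for splitting load over multiple paths.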
Kinetic energy management in road traffic injury prevention: a call for action.
Khorasani-Zavareh, Davoud; Bigdeli, Maryam; Saadat, Soheil; Mohammadi, Reza
2015-01-01
By virtue of their variability, mass and speed play important roles in the energy transferred during a crash (kinetic energy). The amount of kinetic energy, equal to one half of the vehicle mass multiplied by the square of the vehicle speed, is important in determining injury severity. To meet the Vision Zero policy (a traffic safety policy), prevention activities should be focused on vehicle speed management. Understanding the role of kinetic energy will help to develop measures to reduce the generation, distribution, and effects of this energy during a road traffic crash. Road traffic injury prevention activities necessitate kinetic energy management to improve road user safety.
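The quadratic dependence on speed is the whole argument for speed management; a small worked example (hypothetical vehicle mass):

```python
def kinetic_energy_kj(mass_kg, speed_kmh):
    """Kinetic energy of a vehicle in kilojoules: E = m * v^2 / 2,
    with speed converted from km/h to m/s."""
    v = speed_kmh / 3.6
    return 0.5 * mass_kg * v ** 2 / 1000.0

# A 1,500 kg car: doubling speed from 50 to 100 km/h quadruples the
# energy that must be dissipated in a crash, which is why speed
# management dominates kinetic energy control.
e50 = kinetic_energy_kj(1500, 50)
e100 = kinetic_energy_kj(1500, 100)
```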
Hybrid deterministic-stochastic modeling of x-ray beam bowtie filter scatter on a CT system.
Liu, Xin; Hsieh, Jiang
2015-01-01
Knowledge of the scatter generated by the bowtie filter (i.e., x-ray beam compensator) is crucial for providing artifact-free images on CT scanners. Our approach is to use a hybrid deterministic-stochastic simulation to estimate the scatter level generated by a bowtie filter made of a material with low atomic number. First, the major components of the CT system, such as the source, flat filter, bowtie filter, and body phantom, are built into a 3D model. The scattered photon fluence and the primary transmitted photon fluence are simulated by MCNP, a Monte Carlo simulation toolkit. The rejection of scattered photons by the post-patient collimator (anti-scatter grid) is simulated with an analytical formula. A biased sinogram is created by superimposing the scatter signal generated by the simulation onto the primary x-ray beam signal. Finally, images with artifacts are reconstructed from the biased signal. The effect of anti-scatter grid height on scatter rejection is also discussed and demonstrated.
Conflict-free trajectory planning for air traffic control automation
NASA Technical Reports Server (NTRS)
Slattery, Rhonda; Green, Steve
1994-01-01
As the traffic demand continues to grow within the National Airspace System (NAS), the need for long-range planning (30 minutes plus) of arrival traffic increases greatly. Research into air traffic control (ATC) automation at ARC has led to the development of the Center-TRACON Automation System (CTAS). CTAS determines optimum landing schedules for arrival traffic and assists controllers in meeting those schedules safely and efficiently. One crucial element in the development of CTAS is the capability to perform long-range (20 minutes) and short-range (5 minutes) conflict prediction and resolution once landing schedules are determined. The determination of conflict-free trajectories within the Center airspace is particularly difficult because of large variations in speed and altitude. The paper describes the current design and implementation of the conflict prediction and resolution tools used to generate CTAS advisories in Center airspace. Conflict criteria (separation requirements) are defined and the process of separation prediction is described. The major portion of the paper will describe the current implementation of CTAS conflict resolution algorithms in terms of the degrees of freedom for resolutions as well as resolution search techniques. The tools described in this paper have been implemented in a research system designed to rapidly develop and evaluate prototype concepts and will form the basis for an operational ATC automation system.
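Conflict prediction at its simplest is a pairwise separation check over predicted trajectories; the sketch below uses assumed separation criteria and synthetic trajectories, not CTAS's actual algorithms:

```python
def detect_conflicts(traj_a, traj_b, lateral_nm=5.0, vertical_ft=1000.0):
    """Pairwise separation check over two synchronized predicted
    trajectories (illustrative sketch of conflict *prediction* only;
    resolution search is a separate step).

    Each trajectory is a list of (time_s, x_nm, y_nm, alt_ft) samples
    at common timestamps. A conflict exists at any sample where both
    the lateral and vertical separation minima are violated.
    """
    conflicts = []
    for (t, xa, ya, za), (_, xb, yb, zb) in zip(traj_a, traj_b):
        lateral = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        vertical = abs(za - zb)
        if lateral < lateral_nm and vertical < vertical_ft:
            conflicts.append(t)
    return conflicts

# Co-altitude aircraft converging head-on: separated at t=0, but only
# 4 nm apart (below the 5 nm minimum) at t=60 s.
a = [(0, 0.0, 0.0, 35000), (60, 8.0, 0.0, 35000)]
b = [(0, 20.0, 0.0, 35000), (60, 12.0, 0.0, 35000)]
times = detect_conflicts(a, b)
```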
Pseudo-Random Number Generators
NASA Technical Reports Server (NTRS)
Howell, L. W.; Rheinfurth, M. H.
1984-01-01
Package features a comprehensive selection of probabilistic distributions. Monte Carlo simulations are resorted to whenever the systems studied are not amenable to deterministic analyses or when direct experimentation is not feasible. Random numbers having specified distribution characteristics are an integral part of such simulations. Package consists of a collection of "pseudorandom" number generators for use in Monte Carlo simulations.
D. Scott Lucas; D. S. Lucas
2005-09-01
An LDRD (Laboratory Directed Research and Development) project is underway at the Idaho National Laboratory (INL) to apply the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the development of Attila models for ATR, capabilities of Attila, the generation and use of different cross-section libraries, and comparisons to ATR data, MCNP, MCNPX and future applications.
Deterministic entanglement of two transmon qubits by parity measurement and digital feedback
NASA Astrophysics Data System (ADS)
Ristè, Diego; Dukalski, Marcin; Watson, Christopher; de Lange, Gijs; Tiggelman, Marijn; Blanter, Yaroslav; Lehnert, Konrad; Schouten, Raymond; Dicarlo, Leonardo
2014-03-01
While quantum measurement typically collapses quantum superpositions into a basis state, a special type of joint measurement, detecting the parity of qubit excitations, can create entanglement. Building on recent developments in quantum nondemolition measurement and feedback control in circuit QED, we realize a continuous-time parity meter for two 3D-transmon qubits using a dispersively coupled cavity and Josephson parametric amplification. Starting from a maximal superposition, we first generate entanglement with up to 88 % fidelity to the closest Bell state by postselecting on the odd-parity result. The infidelity is due to measurement-induced dephasing, arising from imperfect cavity resonance matching in the odd-parity subspace and finite transmission in the even. We then incorporate the parity meter into a digital qubit feedback loop to turn the generation of entanglement from probabilistic to deterministic, achieving 66 % fidelity to the targeted Bell state. This combination of parity measurement and conditional qubit control is at the basis of modern error correction protocols. Research funded by FOM, NWO, and the European projects SOLID and SCALEQIT.
The deterministic prediction of failure of low pressure steam turbine disks
Liu, Chun; Macdonald, D.D.
1993-05-01
Localized corrosion phenomena, including pitting corrosion, stress corrosion cracking, and corrosion fatigue, are the principal causes of corrosion-induced damage in electric power generating facilities and typically result in more than 50% of the unscheduled outages. Prediction of damage, so that repairs and inspections can be made during scheduled outages, could have an enormous impact on the economics of electric power generation. To date, prediction of corrosion damage has been made on the basis of empirical/statistical methods that have proven to be insufficiently robust and accurate to form the basis for the desired inspection/repair protocol. In this paper, we describe a deterministic method for predicting localized corrosion damage. We have used the method to illustrate how pitting corrosion initiates stress corrosion cracking (SCC) for low pressure steam turbine disks downstream of the Wilson line, where a thin condensed liquid layer exists on the steel disk surfaces. Our calculations show that the SCC initiation and propagation are sensitive to the oxygen content of the steam, the environment in the thin liquid condensed layer, and the stresses that the disk experiences in service.
Investigation of the effects of oil field traffic on low volume roadways
Mason, J.M. Jr.
1981-01-01
The farm to market roads in Texas are designed to provide service for relatively low traffic volumes and infrequent heavy vehicles. Efforts to increase domestic oil production have increased the demand placed on the rural highway system. These roads were not initially constructed to endure the impact of oil field traffic. This dissertation identifies oil field traffic and provides an estimate of annual cost associated with a reduced pavement life. Identification of oil field traffic through site specific observation provides the basis for the investigation. The study includes a description of traffic during the development of an oil well, an estimation of reduction in pavement life under these operating conditions, a description of associated roadway damage, and an estimation of increased annual pavement cost due to oil well traffic. Three main components of the analysis procedure include a pavement analysis, traffic analysis, and an estimate of the potential traffic generated by an oil well and placed on a section of F.M. roadway. A resurfacing interval for a bituminous surface treated pavement is then determined by comparing the estimated cumulative traffic demand with the terminal structural capability of the intended use pavement section. Comparison of the resurfacing intervals demonstrates the reduction in pavement life; a further comparison is made of the respective annual cost per mile of roadway. The difference between the estimated annual costs constitutes a unit capital loss due to increased traffic. A computational example of the analysis procedure is provided. Specific assumptions and limitations are also discussed. The results of the analysis are summarized in a chart and table format.
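The resurfacing-interval comparison can be sketched as cumulative traffic loading accumulating against a terminal structural capacity; all numbers below are hypothetical, not the dissertation's data:

```python
def resurfacing_interval(annual_esal, capacity_esal):
    """Years until cumulative traffic loading reaches the pavement's
    terminal structural capacity (constant annual loading assumed)."""
    years, cumulative = 0, 0.0
    while cumulative < capacity_esal:
        cumulative += annual_esal
        years += 1
    return years

# Hypothetical numbers: a surface-treated F.M. section good for 60,000
# 18-kip equivalent single-axle loads (ESALs). Tripling the annual
# loading with oil field traffic cuts the resurfacing interval from
# 15 years to 5; the cost difference is the annual capital loss.
base = resurfacing_interval(annual_esal=4_000, capacity_esal=60_000)
with_oil = resurfacing_interval(annual_esal=12_000, capacity_esal=60_000)
```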
Nextgen Technologies for Mid-Term and Far-Term Air Traffic Control Operations
NASA Technical Reports Server (NTRS)
Prevot, Thomas
2009-01-01
This paper describes technologies for mid-term and far-term air traffic control operations in the Next Generation Air Transportation System (NextGen). The technologies were developed and evaluated with human-in-the-loop simulations in the Airspace Operations Laboratory (AOL) at the NASA Ames Research Center. The simulations were funded by several research focus areas within NASA's Airspace Systems program and some were co-funded by the FAA's Air Traffic Organization for Planning, Research and Technology.
A System for Traffic Violation Detection
Aliane, Nourdine; Fernandez, Javier; Mata, Mario; Bemposta, Sergio
2014-01-01
This paper describes the framework and components of an experimental platform for an advanced driver assistance system (ADAS) aimed at providing drivers with feedback about traffic violations they have committed during their driving. The system is able to detect some specific traffic violations, record data associated with these faults in a local database, and allow visualization of the spatial and temporal information of these traffic violations on a geographical map using the standard Google Earth tool. The test-bed is mainly composed of two parts: a computer vision subsystem for traffic sign detection and recognition which operates during both day and nighttime, and an event data recorder (EDR) for recording data related to some specific traffic violations. The paper first describes the hardware architecture and then presents the policies used for handling traffic violations. PMID:25421737
Autosolitons in applied physics and traffic flow
Kerner, B.S.
1996-06-01
A review of investigations of autosolitons in nonlinear systems which are of interest for applied physics and for transportation research is presented. Autosolitons are solitary intrinsic states which can be formed in a broad class of physical, chemical, and biological dissipative distributed media and in traffic flow. Properties of autosolitons which are general to physical systems and to traffic flow are discussed. Based on results of recent investigations of traffic jams, a comparison of the nonlinear characteristics of traffic jams with the nonlinear properties of autosolitons that can be formed in active systems with diffusion is given. Forms, properties, and processes of evolution of autosolitons in traffic flow, in semiconductors, and in gas discharge plasma are considered. © 1996 American Institute of Physics.
Traffic Flow Management Wrap-Up
NASA Technical Reports Server (NTRS)
Grabbe, Shon
2011-01-01
Traffic Flow Management involves the scheduling and routing of air traffic subject to airport and airspace capacity constraints, and the efficient use of available airspace. Significant challenges in this area include: (1) weather integration and forecasting, (2) accounting for user preferences in the Traffic Flow Management decision making process, and (3) understanding and mitigating the impacts of air traffic on the environment. To address these challenges, researchers in the Traffic Flow Management area are developing modeling, simulation and optimization techniques to route and schedule air traffic flights and flows while accommodating user preferences, accounting for system uncertainties and considering the environmental impacts of aviation. This presentation will highlight some of the major challenges facing researchers in this domain, while also showcasing recent innovations designed to address these challenges.
Analytical Solution of Traffic Cellular Automata Model
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Hsu, Chia-Hung
2009-08-01
Complex traffic systems seem to be simulated successfully by cellular automaton (CA) models. Various models have been developed to understand single-lane traffic, multilane traffic, lane-changing behavior, and network traffic situations. However, the result of a CA simulation can only be obtained after massive microscopic computation. Although mean field theory (MFT) has been studied as an approximation of CA models, MFT can only be applied to simple CA rules or small parameter values. In this study, we simulate traffic flow with the NaSch model under different combinations of parameters: maximal speed, dawdling probability, and density. After that, the position of the critical density and the slopes of the free-flow and congested regimes are observed and modeled from the simulated data. Finally, the coefficients of the model are calibrated against the simulated data and an analytical solution of the traffic CA is obtained.
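For reference, the NaSch update rules (accelerate toward the maximal speed, brake to the gap ahead, dawdle with some probability, then move) can be sketched in a few lines; this is a minimal illustration on a circular road, not the study's simulation code:

```python
import random

def nasch_step(road, length, vmax=5, p_dawdle=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg cellular automaton
    on a circular road of `length` cells.

    `road` maps cell index -> velocity for occupied cells. Per vehicle:
    accelerate toward vmax, brake to the gap ahead, dawdle with
    probability p_dawdle, then move.
    """
    cells = sorted(road)
    new_road = {}
    for i, x in enumerate(cells):
        gap = (cells[(i + 1) % len(cells)] - x - 1) % length
        v = min(road[x] + 1, vmax, gap)          # accelerate, then brake
        if v > 0 and rng.random() < p_dawdle:    # random dawdling
            v -= 1
        new_road[(x + v) % length] = v           # move (v <= gap: no collision)
    return new_road

# With dawdling switched off the model is deterministic: two cars with
# large gaps simply accelerate by one cell per step up to vmax.
road = {0: 0, 10: 0}
for _ in range(5):
    road = nasch_step(road, length=100, p_dawdle=0.0)
```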
Air traffic management evaluation tool
NASA Technical Reports Server (NTRS)
Sridhar, Banavar (Inventor); Sheth, Kapil S. (Inventor); Chatterji, Gano Broto (Inventor); Bilimoria, Karl D. (Inventor); Grabbe, Shon (Inventor); Schipper, John F. (Inventor)
2012-01-01
Methods for evaluating and implementing air traffic management tools and approaches for managing and avoiding an air traffic incident before the incident occurs. A first system receives parameters for flight plan configurations (e.g., initial fuel carried, flight route, flight route segments followed, flight altitude for a given flight route segment, aircraft velocity for each flight route segment, flight route ascent rate, flight route descent rate, flight departure site, flight departure time, flight arrival time, flight destination site and/or alternate flight destination site), flight plan schedule, expected weather along each flight route segment, aircraft specifics, airspace (altitude) bounds for each flight route segment, and available navigational aids. The invention provides flight plan routing and direct routing or wind-optimal routing, using great circle navigation and spherical Earth geometry. The invention provides for aircraft dynamics effects, such as wind effects at each altitude, altitude changes, airspeed changes and aircraft turns, to provide predictions of aircraft trajectory (and, optionally, aircraft fuel use). A second system provides several aviation applications using the first system. Several classes of potential incidents are analyzed and averted by appropriate change en route of one or more parameters in the flight plan configuration, as provided by a conflict detection and resolution module and/or traffic flow management modules. These applications include conflict detection and resolution, miles-in-trail or minutes-in-trail aircraft separation, flight arrival management, flight re-routing, weather prediction and analysis, and interpolation of weather variables based upon sparse measurements. The invention combines these features to provide an aircraft monitoring system and an aircraft user system that interact and negotiate changes with each other.
Network traffic analysis using dispersion patterns
Khan, F. N.
2010-03-15
The Verilog code is used to map a measurement solution onto an FPGA to analyze network traffic. It realizes a set of Bloom filters and counters, along with associated control logic, that can quickly measure statistics such as InDegree, OutDegree, and Depth in the context of Traffic Dispersion Graphs. Such patterns are helpful in classifying network activity in the traffic, such as peer-to-peer usage and port scanning.
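The hardware design itself is in Verilog, but the measurement idea — a Bloom filter deduplicating (src, dst) edges while per-host counters accumulate degrees in the Traffic Dispersion Graph — can be sketched in software. The filter size, hash construction, and flow format below are illustrative assumptions, not the FPGA implementation:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for approximate membership tests on edge tuples."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        """Insert item; return True if it was (possibly) already present."""
        seen = True
        for pos in self._positions(item):
            byte, bit = divmod(pos, 8)
            if not (self.bits[byte] >> bit) & 1:
                seen = False
                self.bits[byte] |= 1 << bit
        return seen

def out_degrees(flows):
    """Approximate OutDegree per source host: each distinct (src, dst)
    edge is counted once, deduplicated by the Bloom filter."""
    bf = BloomFilter()
    deg = {}
    for src, dst in flows:
        if not bf.add((src, dst)):      # first sighting of this edge
            deg[src] = deg.get(src, 0) + 1
    return deg
```

A host contacting many distinct destinations (high OutDegree) is the kind of signature that flags port scanning, while balanced in/out degrees across many hosts suggests peer-to-peer activity.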
Empirical synchronized flow in oversaturated city traffic
NASA Astrophysics Data System (ADS)
Kerner, Boris S.; Hemmerle, Peter; Koller, Micha; Hermanns, Gerhard; Klenov, Sergey L.; Rehborn, Hubert; Schreckenberg, Michael
2014-09-01
Based on a study of anonymized GPS probe vehicle traces measured by personal navigation devices in vehicles randomly distributed in city traffic, empirical synchronized flow in oversaturated city traffic has been revealed. It turns out that real oversaturated city traffic resulting from speed breakdown can in most cases be considered as random spatiotemporal alternations between sequences of moving queues and synchronized flow patterns in which moving queues do not occur.
Traffic Management for Satellite-ATM Networks
NASA Technical Reports Server (NTRS)
Goyal, Rohit; Jain, Raj; Fahmy, Sonia; Vandalore, Bobby; Goyal, Mukul
1998-01-01
Various issues associated with "Traffic Management for Satellite-ATM Networks" are presented in viewgraph form. Specific topics include: 1) Traffic management issues for TCP/IP based data services over satellite-ATM networks; 2) Design issues for TCP/IP over ATM; 3) Optimization of the performance of TCP/IP over ATM for long delay networks; and 4) Evaluation of ATM service categories for TCP/IP traffic.
STOL Traffic environment and operational procedures
NASA Technical Reports Server (NTRS)
Schlundt, R. W.; Dewolf, R. W.; Ausrotas, R. A.; Curry, R. E.; Demaio, D.; Keene, D. W.; Speyer, J. L.; Weinreich, M.; Zeldin, S.
1972-01-01
The expected traffic environment for an intercity STOL transportation system is examined, and operational procedures are discussed in order to identify problem areas which impact STOL avionics requirements. Factors considered include: traffic densities, STOL/CTOL/VTOL traffic mix, the expected ATC environment, aircraft noise models and community noise impact, flight paths for noise abatement, wind considerations affecting landing, approach and landing considerations, STOLport site selection, runway capacity, and STOL operations at jetports, suburban airports, and separate STOLports.
Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms
Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui
2013-01-01
Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for creating surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning the inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than DSCs fabricated with Pt. This strategy provides a rational design route for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the transparent conductive oxide (TCO) commonly used in DSCs can be turned into a counter electrode material means that, in addition to decreasing cost, the device structure and processing techniques of DSCs can be simplified in the future. PMID:24173503
Non-deterministic modelling of food-web dynamics.
Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam
2014-01-01
A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as 'null models of food-webs' as originally advocated. PMID:25299245
Automated optimum design of wing structures. Deterministic and probabilistic approaches
NASA Technical Reports Server (NTRS)
Rao, S. S.
1982-01-01
The automated optimum design of airplane wing structures subjected to multiple behavior constraints is described. The structural mass of the wing is considered the objective function. The maximum stress, wing tip deflection, root angle of attack, and flutter velocity during the pull up maneuver (static load), the natural frequencies of the wing structure, and the stresses induced in the wing structure due to landing and gust loads are suitably constrained. Both deterministic and probabilistic approaches are used for finding the stresses induced in the airplane wing structure due to landing and gust loads. A wing design is represented by a uniform beam with a cross section in the form of a hollow symmetric double wedge. The airfoil thickness and chord length are the design variables, and a graphical procedure is used to find the optimum solutions. A supersonic wing design is represented by finite elements. The thicknesses of the skin and the web and the cross sectional areas of the flanges are the design variables, and nonlinear programming techniques are used to find the optimum solution.
Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.
Kurhekar, Manish; Deshpande, Umesh
2016-01-01
Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of the bone marrow (which hosts HSCs) that is consistent with several qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that a sufficient number of differentiated cells (RBCs, WBCs, etc.) is produced. We prove that our model of the bone marrow is biologically consistent and overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the proposed bone marrow model and include the parameter details and the results obtained from the simulation. The program for the agent-based simulation of the proposed model is made available on a publicly accessible website. PMID:27340402
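As a rough illustration of the kind of agent-based rule described (apoptosis after a fixed number of divisions, which bounds population growth), here is a toy sketch; it is not the authors' bone marrow model, and the division probability, division limit, and bookkeeping are all assumptions:

```python
import random

def simulate_hsc(max_divisions=10, p_divide=0.6, steps=30, seed=7):
    """Toy agent-based sketch: start from one stem cell; each cell divides
    with probability p_divide per step, and a cell attempting its
    max_divisions-th division undergoes apoptosis instead of producing
    daughters. Returns the final population size."""
    rng = random.Random(seed)
    cells = [0]                       # each entry = divisions completed so far
    for _ in range(steps):
        nxt = []
        for d in cells:
            if rng.random() < p_divide:
                if d + 1 >= max_divisions:
                    continue          # apoptosis at the division limit
                nxt.extend([d + 1, d + 1])   # two daughters inherit the count
            else:
                nxt.append(d)
        cells = nxt
        if not cells:
            break
    return len(cells)
```

Because every lineage from the single founder is cut off after `max_divisions` divisions, the population is bounded (at most 2 to the power max_divisions − 1 cells can be alive at once), mirroring the feasibility constraint the model enforces.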
Mesoscopic quantum emitters from deterministic aggregates of conjugated polymers
Stangl, Thomas; Wilhelm, Philipp; Remmerssen, Klaas; Höger, Sigurd; Vogelsang, Jan; Lupton, John M.
2015-01-01
An appealing definition of the term “molecule” arises from consideration of the nature of fluorescence, with discrete molecular entities emitting a stream of single photons. We address the question of how large a molecular object may become by growing deterministic aggregates from single conjugated polymer chains. Even particles containing dozens of individual chains still behave as single quantum emitters due to efficient excitation energy transfer, whereas the brightness is raised due to the increased absorption cross-section of the suprastructure. Excitation energy can delocalize between individual polymer chromophores in these aggregates by both coherent and incoherent coupling, which are differentiated by their distinct spectroscopic fingerprints. Coherent coupling is identified by a 10-fold increase in excited-state lifetime and a corresponding spectral red shift. Exciton quenching due to incoherent FRET becomes more significant as aggregate size increases, resulting in single-aggregate emission characterized by strong blinking. This mesoscale approach allows us to identify intermolecular interactions which do not exist in isolated chains and are inaccessible in bulk films where they are present but masked by disorder. PMID:26417079
Is there a sharp phase transition for deterministic cellular automata?
NASA Astrophysics Data System (ADS)
Wootters, William K.; Langton, Chris G.
1990-09-01
Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider.
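For this class of automata, the parameter usually explored is Langton's λ, the fraction of rule-table entries mapping to a non-quiescent symbol, and the mean-field approximation treats cells as statistically independent. A minimal sketch for binary, neighborhood-3 rules (elementary rule 110 shown as an assumed example) might look like:

```python
from itertools import product

def lam(rule_table, quiescent=0):
    """Langton's lambda: fraction of rule-table entries whose output
    is a non-quiescent symbol."""
    outs = list(rule_table.values())
    return sum(o != quiescent for o in outs) / len(outs)

def mean_field_map(rule_table, p, n=3):
    """One step of the mean-field density map for a binary CA with
    neighborhood size n: cells are treated as independent with density p,
    so the next density is the probability that a random neighborhood
    maps to 1."""
    q = 0.0
    for nb in product((0, 1), repeat=n):
        if rule_table[nb] == 1:
            w = 1.0
            for s in nb:
                w *= p if s == 1 else (1 - p)
            q += w
    return q

# Elementary rule 110 as a rule table: neighborhood (a, b, c) -> output bit
rule110 = {nb: (110 >> (nb[0] * 4 + nb[1] * 2 + nb[2])) & 1
           for nb in product((0, 1), repeat=3)}
```

Iterating `mean_field_map` to a fixed point gives the mean-field prediction of the asymptotic density, which is the statistical property the authors compare against the automata's actual behavior.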
Particle separation using virtual deterministic lateral displacement (vDLD).
Collins, David J; Alan, Tuncay; Neild, Adrian
2014-05-01
We present a method for sensitive and tunable particle sorting that we term virtual deterministic lateral displacement (vDLD). The vDLD system is composed of a set of interdigital transducers (IDTs) within a microfluidic chamber that produce a force field at an angle to the flow direction. Particles above a critical diameter, a function of the force induced by viscous drag and the force field, are displaced laterally along the minimum force potential lines, while smaller particles continue in the direction of the fluid flow without substantial perturbations. We demonstrate the effective separation of particles in a continuous-flow system with size sensitivity comparable or better than other previously reported microfluidic separation techniques. Separation of 5.0 μm from 6.6 μm, 6.6 μm from 7.0 μm and 300 nm from 500 nm particles are all achieved using the same device architecture. With the high sensitivity and flexibility vDLD affords we expect to find application in a wide variety of microfluidic platforms. PMID:24638896
Deterministic ripple-spreading model for complex networks
NASA Astrophysics Data System (ADS)
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S.; Hines, Evor L.; di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
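The abstract does not reproduce the paper's update equations, but the core idea — randomness enters only through the initial ripple-spreading parameters, after which the topology is uniquely determined — can be sketched as a toy generator. The energy function, threshold, and epicenter scheme below are illustrative assumptions, not the authors' model:

```python
import math
import random

def ripple_spreading_network(n_nodes, n_epicenters=3, speed=1.0,
                             threshold=0.6, seed=42):
    """Toy sketch of a deterministic ripple-spreading generator.
    Nodes are random points in the unit square; ripples expand from a few
    epicenter nodes, with energy decaying with radius, and a node links to
    an epicenter when the ripple energy at its location exceeds a
    threshold. Randomness only enters through the initial placement;
    given that, the topology is uniquely determined."""
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(n_nodes)]
    epicenters = rng.sample(range(n_nodes), n_epicenters)
    edges = set()
    for e in epicenters:
        center = nodes[e]
        for i in range(n_nodes):
            if i == e:
                continue
            r = math.dist(nodes[i], center)
            energy = 1.0 / (1.0 + speed * r)   # assumed decaying ripple energy
            if energy >= threshold:
                edges.add((min(e, i), max(e, i)))
    return nodes, edges
```

Note the memory argument from the abstract: the handful of ripple parameters (epicenters, speed, threshold) fully describes the topology, in contrast to storing an adjacency matrix.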
Self-Organization in 2D Traffic Flow Model with Jam-Avoiding Drive
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-04-01
A stochastic cellular automaton (CA) model is presented to investigate traffic jams formed by self-organization in two-dimensional (2D) traffic flow. The CA model is an extended version of the 2D asymmetric exclusion model that takes jam-avoiding drive into account. Each site contains either a car moving up, a car moving right, or is empty. An up car can shift right with probability p_ja if it is blocked ahead by other cars. It is shown that three phases (the low-density phase, the intermediate-density phase, and the high-density phase) appear in the traffic flow. The intermediate-density phase is characterized by the rightward motion of up cars. The jamming transition to the high-density jamming phase occurs at a higher car density than without jam-avoiding drive. The jamming transition point p_2c increases with the shifting probability p_ja. In the deterministic limit p_ja = 1, a new jamming transition is found to occur from the low-density synchronized-shifting phase to the high-density moving phase with increasing car density. In the synchronized-shifting phase, up cars do not move up but shift right, synchronizing with the motion of right cars. We show that the jam-avoiding drive has an important effect on the dynamical jamming transition.
On the reproducibility of spatiotemporal traffic dynamics with microscopic traffic models
NASA Astrophysics Data System (ADS)
Knorr, Florian; Schreckenberg, Michael
2012-10-01
Traffic flow is a very prominent example of a driven non-equilibrium system. A characteristic phenomenon of traffic dynamics is the spontaneous and abrupt drop of the average velocity on a stretch of road leading to congestion. Such a traffic breakdown corresponds to a boundary-induced phase transition from free flow to congested traffic. In this paper, we study the ability of selected microscopic traffic models to reproduce a traffic breakdown, and we investigate its spatiotemporal dynamics. For our analysis, we use empirical traffic data from stationary loop detectors on a German Autobahn showing a spontaneous breakdown. We then present several methods to assess the results and compare the models with each other. In addition, we will also discuss some important modeling aspects and their impact on the resulting spatiotemporal pattern. The investigation of different downstream boundary conditions, for example, shows that the physical origin of the traffic breakdown may be artificially induced by the setup of the boundaries.
49 CFR 1139.2 - Traffic study.
Code of Federal Regulations, 2014 CFR
2014-10-01
... procedure and disclosure of sampling errors for derived characteristics), quality control aspects involved... one of the motor carrier industry's Continuous Traffic Studies, and which derive either $1 million...
49 CFR 1139.2 - Traffic study.
Code of Federal Regulations, 2012 CFR
2012-10-01
... procedure and disclosure of sampling errors for derived characteristics), quality control aspects involved... one of the motor carrier industry's Continuous Traffic Studies, and which derive either $1 million...
49 CFR 1139.2 - Traffic study.
Code of Federal Regulations, 2013 CFR
2013-10-01
... procedure and disclosure of sampling errors for derived characteristics), quality control aspects involved... one of the motor carrier industry's Continuous Traffic Studies, and which derive either $1 million...
Temporal Statistic of Traffic Accidents in Turkey
NASA Astrophysics Data System (ADS)
Erdogan, S.; Yalcin, M.; Yilmaz, M.; Korkmaz Takim, A.
2015-10-01
Traffic accidents form clusters in geographic space and over time, and these clusters exhibit distinct spatial and temporal patterns. There is an imperative need to understand how, where, and when traffic accidents occur in order to develop appropriate accident reduction strategies. An improved understanding of the location, time, and causes of traffic accidents contributes significantly to preventing them. Traffic accident occurrence has been extensively studied from different spatial and temporal points of view using a variety of methodological approaches, but less research has been dedicated to the temporal patterns of traffic accidents. In this paper, the numbers of traffic accidents are normalized by traffic volume, and the distribution and fluctuation of these accidents are examined in terms of Islamic time intervals. The daily activities and worship of Muslims are arranged according to these time intervals, which are spaced throughout the day according to the position of the sun. Islamic time intervals have not previously been used to identify the critical hour for traffic accidents. The results show that sunrise is the critical time that acts as a threshold in the rate of traffic accidents throughout Turkey in terms of Islamic time intervals.
Synchronized flow in oversaturated city traffic
NASA Astrophysics Data System (ADS)
Kerner, Boris S.; Klenov, Sergey L.; Hermanns, Gerhard; Hemmerle, Peter; Rehborn, Hubert; Schreckenberg, Michael
2013-11-01
Based on numerical simulations with a stochastic three-phase traffic flow model, we reveal that moving queues (moving jams) in oversaturated city traffic dissolve at some distance upstream of the traffic signal while transforming into synchronized flow. It is found that, as in highway traffic [Kerner, Phys. Rev. E 85, 036110 (2012)], such a jam-absorption effect in city traffic is explained by a strong driver's speed adaptation: Time headways (space gaps) between vehicles increase upstream of a moving queue (moving jam), resulting in moving queue dissolution. It turns out that at given traffic signal parameters, the stronger the speed adaptation effect, the shorter the mean distance between the signal location and the road location at which moving queues dissolve fully and oversaturated traffic consists of synchronized flow only. A comparison of the synchronized flow in city traffic found in this Brief Report with synchronized flow in highway traffic is made.
Air Traffic Management Research at NASA Ames
NASA Technical Reports Server (NTRS)
Davis, Thomas J.
2012-01-01
The Aviation Systems Division at the NASA Ames Research Center conducts leading edge research in air traffic management concepts and technologies. This overview will present concepts and simulation results for research in traffic flow management, safe and efficient airport surface operations, super density terminal area operations, separation assurance and system wide modeling and simulation. A brief review of the ongoing air traffic management technology demonstration (ATD-1) will also be presented. A panel discussion, with Mr. Davis serving as a panelist, on air traffic research will follow the briefing.
Kim, Y.; Shim, H. J.; Noh, T.
2006-07-01
To resolve the double-heterogeneity (DH) problem arising from the TRISO fuel of high-temperature gas-cooled reactors (HTGRs), a synergistic combination of a deterministic method and the Monte Carlo method has been proposed. As the deterministic approach, the RPT (Reactivity-equivalent Physical Transformation) method is adopted. In the combined methodology, a reference k-infinity value is obtained by the Monte Carlo method for the initial state of a problem and is used by the RPT method to transform the original DH problem into a conventional single-heterogeneity one; the transformed problem is then analyzed by conventional deterministic methods. The combined methodology has been applied to the depletion analysis of typical HTGR fuels, including both prismatic block and pebble designs. The reference solution is obtained using the Monte Carlo code MCCARD, and the accuracy of the deterministic-only and combined methods is evaluated. For the deterministic solution, the DRAGON and HELIOS codes were used. It has been shown that the combined method provides an accurate solution, whereas the deterministic-only solution shows noticeable errors. For the pebble, the two deterministic codes cannot handle the DH problem. Nevertheless, we have shown that the solution of the DRAGON-MCCARD combined approach agrees well with the reference. (authors)
Timing of traffic lights and phase separation in two-dimensional traffic flow
NASA Astrophysics Data System (ADS)
Sun, Duo; Jiang, Rui; Wang, Bing-Hong
2010-02-01
In this paper, we study the effects of the traffic light period in the two-dimensional Biham-Middleton-Levine (BML) traffic flow model. It is found that a phase separation phenomenon, in which the system separates into coexisting free flow and jam, can be observed in the intermediate vehicle density range when the traffic light period T ⩾ 4. We explain why phase separation occurs and investigate its behavior for different traffic light periods.
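A minimal sketch of a BML-type model with a global traffic light of period T (right-movers get T consecutive green steps, then up-movers, alternating) might look like the following; the grid size, density, and light scheme are illustrative assumptions rather than the exact setup of the study:

```python
import random

def bml_step(grid, kind):
    """Move all cars of one kind in parallel on a periodic N x N grid:
    'R' cars move east, 'U' cars move north. A car advances only if its
    target cell was empty before the step (standard parallel update)."""
    N = len(grid)
    new = [row[:] for row in grid]
    for y in range(N):
        for x in range(N):
            if grid[y][x] != kind:
                continue
            ty, tx = ((y - 1) % N, x) if kind == 'U' else (y, (x + 1) % N)
            if grid[ty][tx] is None:
                new[ty][tx] = kind
                new[y][x] = None
    return new

def run_bml(N=32, density=0.2, T=4, steps=200, seed=1):
    """BML with a global traffic light of period T: right-movers get the
    green light for T consecutive steps, then up-movers, and so on."""
    rng = random.Random(seed)
    cells = [None] * (N * N)
    for i in range(int(density * N * N)):
        cells[i] = 'R' if rng.random() < 0.5 else 'U'
    rng.shuffle(cells)
    grid = [cells[y * N:(y + 1) * N] for y in range(N)]
    for t in range(steps):
        kind = 'R' if (t // T) % 2 == 0 else 'U'
        grid = bml_step(grid, kind)
    return grid
```

Measuring the mean velocity (fraction of cars that moved per green step) over a density sweep is the kind of diagnostic used to locate the coexistence of free flow and jam at intermediate densities.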
Modeling self-consistent multi-class dynamic traffic flow
NASA Astrophysics Data System (ADS)
Cho, Hsun-Jung; Lo, Shih-Ching
2002-09-01
In this study, we present a systematic self-consistent multiclass multilane traffic model derived from the vehicular Boltzmann equation and the traffic dispersion model. The multilane domain is considered as a two-dimensional space, and the interaction among vehicles in the domain is described by a dispersion model. The reason we consider a multilane domain as a two-dimensional space is that the driving behavior of road users may not be restricted by lanes, especially for motorcyclists. The dispersion model, which is a nonlinear Poisson equation, is derived from car-following theory and the equilibrium assumption. Under the concept that all kinds of users share the finite road section, the density is distributed over the road by the dispersion model. In addition, the dynamic evolution of the traffic flow is determined by the systematic gas-kinetic model derived from the Boltzmann equation. Multiplying the Boltzmann equation by the zeroth-, first-, and second-order moment functions, integrating both sides of the equation, and using the chain rule, we can derive the continuity, motion, and variance equations, respectively. However, the second-order moment function, which is the square of the individual velocity and was employed in previous research, does not have physical meaning in traffic flow. Although the second-order expansion results in the velocity variance equation, additional terms may be generated. The velocity variance equation we propose is derived by multiplying the Boltzmann equation by the individual velocity variance. It modifies the previous model and presents a new gas-kinetic traffic flow model. By coupling the gas-kinetic model and the dispersion model, a self-consistent system is presented.
32 CFR 634.26 - Traffic law enforcement principles.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 4 2011-07-01 2011-07-01 false Traffic law enforcement principles. 634.26... ENFORCEMENT AND CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Traffic Supervision § 634.26 Traffic law enforcement principles. (a) Traffic law enforcement should motivate drivers to operate...
14 CFR 93.123 - High density traffic airports.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false High density traffic airports. 93.123... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93.123 High density traffic airports. (a) Each of the following airports is designated as a...
32 CFR 634.44 - The traffic point system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true The traffic point system. 634.44 Section 634.44... CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Driving Records and the Traffic Point System § 634.44 The traffic point system. The traffic point system provides a uniform administrative device...
32 CFR 634.44 - The traffic point system.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 4 2011-07-01 2011-07-01 false The traffic point system. 634.44 Section 634.44... CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Driving Records and the Traffic Point System § 634.44 The traffic point system. The traffic point system provides a uniform administrative device...
14 CFR 93.123 - High density traffic airports.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false High density traffic airports. 93.123... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93.123 High density traffic airports. (a) Each of the following airports is designated as a...
32 CFR 634.26 - Traffic law enforcement principles.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true Traffic law enforcement principles. 634.26... ENFORCEMENT AND CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Traffic Supervision § 634.26 Traffic law enforcement principles. (a) Traffic law enforcement should motivate drivers to operate...
32 CFR 634.29 - Traffic accident investigation reports.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true Traffic accident investigation reports. 634.29... ENFORCEMENT AND CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Traffic Supervision § 634.29 Traffic... records. Installation law enforcement officials will record traffic accident investigations on...
41 CFR 109-40.301 - Traffic management functions administration.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Traffic management... AVIATION, TRANSPORTATION, AND MOTOR VEHICLES 40-TRANSPORTATION AND TRAFFIC MANAGEMENT 40.3-Traffic Management § 109-40.301 Traffic management functions administration. The DOE traffic management functions...
14 CFR 93.123 - High density traffic airports.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 2 2013-01-01 2013-01-01 false High density traffic airports. 93.123... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93.123 High density traffic airports. (a) Each of the following airports is designated as a...
14 CFR 93.123 - High density traffic airports.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 2 2014-01-01 2014-01-01 false High density traffic airports. 93.123... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93.123 High density traffic airports. (a) Each of the following airports is designated as a...
14 CFR 93.123 - High density traffic airports.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 2 2012-01-01 2012-01-01 false High density traffic airports. 93.123... (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES SPECIAL AIR TRAFFIC RULES High Density Traffic Airports § 93.123 High density traffic airports. (a) Each of the following airports is designated as a...
Tractor accidents in Swedish traffic.
Pinzke, Stefan; Nilsson, Kerstin; Lundqvist, Peter
2012-01-01
The objective of this study is to reach a better understanding of accidents on Swedish roads involving tractors and to suggest ways of preventing them. In an earlier study we analyzed police-reported fatal accidents and accidents that led to physical injuries from 1992 to 2005. During each year of this period, tractors were involved in 128 traffic accidents on average; an average of 7 people were killed, 44 sustained serious injuries, and 143 sustained slight injuries. Fatalities in these tractor accidents amounted to about 1.3% of all deaths in traffic accidents in Sweden. Cars were most often involved in the tractor accidents (58%), and 15% were single-vehicle accidents. The mean age of the tractor drivers involved was 39.8 years, and young drivers (15-24 years) were overrepresented (30%). We are now extending the data set with the years 2006-2010 in order to study changes in the number of accidents. Special attention will be given to younger drivers and to single-vehicle accidents. Based on the results, we aim to develop suggestions for reducing road accidents, including measures for making farm vehicles more visible and improving the training provided at driving schools. PMID:22317543
Stochastic model of tumor-induced angiogenesis: Ensemble averages and deterministic equations
NASA Astrophysics Data System (ADS)
Terragni, F.; Carretero, M.; Capasso, V.; Bonilla, L. L.
2016-02-01
A recent conceptual model of tumor-driven angiogenesis including branching, elongation, and anastomosis of blood vessels captures some of the intrinsic multiscale structures of this complex system, yet allowing one to extract a deterministic integro-partial-differential description of the vessel tip density [Phys. Rev. E 90, 062716 (2014), 10.1103/PhysRevE.90.062716]. Here we solve the stochastic model, show that ensemble averages over many realizations correspond to the deterministic equations, and fit the anastomosis rate coefficient so that the total number of vessel tips evolves similarly in the deterministic and ensemble-averaged stochastic descriptions.
Bulgakov, N G; Maksimov, V N
2005-01-01
The application of deterministic analysis to the interdependencies among components of a natural biocenosis was illustrated by the example of fish production and the biomass of phyto- and zooplankton. Deterministic analysis confirms the theoretical assumptions on the food preferences of herbivorous fish: both silver and bighead carps avoided feeding on cyanobacteria. Being a facultative phytoplankton feeder, silver carp preferred microalgae to zooplankton. Deterministic analysis allowed us to demonstrate the interdependence of the mean biomasses of phyto- and zooplankton during both the whole fish production cycle and its individual periods. PMID:16004266
NASA Astrophysics Data System (ADS)
Leitao, P.; Francisco, R.; Llopart, X.; Tavernier, F.; Baron, S.; Bonacini, S.; Moreira, P.
2015-03-01
A radiation-tolerant CDR/PLL ASIC has been developed for the upcoming LHC upgrades, featuring clock Frequency Multiplication (FM) and Clock and Data Recovery (CDR), showing deterministic phase and low jitter. Two FM modes have been implemented: either generating 40, 60, 120 and 240 MHz clock outputs for GBT-FPGA applications or providing 40, 80, 160 and 320 MHz clocks for TTC and e-link applications. The CDR operates with 40, 80, 160 or 320 Mbit/s data rates while always generating clocks at 40, 80, 160 and 320 MHz, regardless of the data rate. All the outputs are phase programmable with a resolution of 195 ps or 260 ps, depending on the selected mode. The ASIC has been designed using radiation-tolerant techniques in a 130 nm CMOS technology and operates at a 1.2 V supply voltage.
Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System
Cetiner, Sacit M; Kisner, Roger A; Muhlheim, Michael David; Fugate, David L
2015-07-01
Decision-making is the process of identifying and choosing alternatives where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and margin within the controllable domain using utility functions to evaluate current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc.
Development of a Deterministic Ethernet Building blocks for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Jakovljevic, Mirko
2015-09-01
The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reductions in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability with developments ongoing in the commercial world. The deterministic Ethernet technology TTEthernet [1] deployed on the NASA Orion spacecraft demonstrated the use of TTEthernet for a safety-critical human space flight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission but did not lead to broader use in space applications or an international space standard. TTTech has therefore developed a new version that allows the technology to be scaled for different applications, not only high-end missions, decreasing the size of the building blocks and thereby reducing size, weight and power, which enables use in smaller applications. TTTech is currently developing a full space product offering for its TTEthernet technology to allow its use in space applications beyond launchers and human spaceflight. A broad space market assessment and the current ESA TRP7594 led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development towards a space-qualified network component allowing future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.
"Eztrack": A single-vehicle deterministic tracking algorithm
Carrano, C J
2007-12-20
A variety of surveillance operations require the ability to track vehicles over a long period of time using sequences of images taken from a camera mounted on an airborne or similar platform. In order to be able to see and track a vehicle for any length of time, either a persistent surveillance imager is needed that can image wide fields of view over a long time-span, or a highly maneuverable smaller field-of-view imager is needed that can follow the vehicle of interest. The algorithm described here was designed for the persistent surveillance case. It turns out that most vehicle tracking algorithms described in the literature [1,2,3,4] are designed for higher frame rates (>5 FPS) and relatively short ground sampling distances (GSD) and resolutions (≈ a few cm to a couple tens of cm). But for our datasets, we are restricted to lower resolutions and GSDs (≥0.5 m) and limited frame rates (≤2.0 Hz). As a consequence, we designed our own simple approach in IDL, which is a deterministic, motion-guided object tracker. The object tracking relies both on object features and path dynamics. The algorithm certainly has room for future improvements, but we have found it to be a useful tool in evaluating the effects of frame rate, resolution/GSD, and spectral content (e.g., grayscale vs. color imaging). A block diagram of the tracking approach is given in Figure 1. We describe each of the blocks of the diagram in the upcoming sections.
Accurate deterministic solutions for the classic Boltzmann shock profile
NASA Astrophysics Data System (ADS)
Yue, Yubei
The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
NASA Astrophysics Data System (ADS)
Demirciler, Kemal; Ortega, Antonio
2005-12-01
This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16,483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
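For orientation, a minimal sketch of the standard DA baseline that the paper accelerates: codevectors are updated from Gibbs (soft) assignment probabilities while a temperature parameter is slowly lowered. The schedule values (T0, cooling factor, iteration counts) below are illustrative assumptions, not the paper's optimal schedules.

```python
import numpy as np

def da_vq(x, k, T0=10.0, Tmin=1e-3, cool=0.8, iters=30, seed=0):
    """Standard deterministic-annealing VQ design with full Gibbs
    assignments (the baseline; not the paper's reduced-complexity
    soft measures).  x: (n, d) training vectors, k: codebook size."""
    rng = np.random.default_rng(seed)
    # start all codevectors at the data mean, slightly perturbed
    c = x.mean(axis=0) + 1e-3 * rng.standard_normal((k, x.shape[1]))
    T = T0
    while T > Tmin:
        for _ in range(iters):
            d2 = ((x[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
            # Gibbs association probabilities at temperature T
            p = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)
            p /= p.sum(axis=1, keepdims=True)
            w = p.sum(axis=0)
            c = (p.T @ x) / w[:, None]   # soft centroid update
        T *= cool                        # cooling schedule
    return c
```

As T decreases, the codevectors split at phase transitions and the assignments harden toward a conventional nearest-neighbor quantizer, which is how DA avoids the local minima that trap the generalized Lloyd algorithm.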
Merging deterministic and probabilistic approaches to forecast volcanic scenarios
NASA Astrophysics Data System (ADS)
Peruzzo, E.; Bisconti, L.; Barsanti, M.; Flandoli, F.; Papale, P.
2009-04-01
Volcanoes are extremely complex systems largely inaccessible to direct observation. As a consequence, many quantities which are relevant in determining the physical and chemical processes occurring at volcanoes are largely uncertain. On the other hand, the demand for eruption scenario forecasts at many hazardous volcanoes in the world is pressing, which is reflected in the development and use of increasingly complex physical models and numerical codes. Such codes are capable of accounting for the extremely complex, non-linear behaviour of volcanic processes, and for the roles of several quantities in determining volcanic scenarios and hazards. However, they often require enormous computer resources and imply long (order of days to weeks) CPU times even on the most advanced parallel computation systems available to date. As a consequence, they can hardly be used to reasonably cover the spectrum of possible conditions expected at a given volcano. For this purpose, we have started the development of a mixed deterministic-probabilistic approach with the aim of substantially reducing (from order 10000 to 10) the number of simulations needed to adequately represent possible scenarios and their probability of occurrence, corresponding to a given set of probability distributions for the initial/boundary conditions characterizing the system. The core of the problem is to find a "best" discretization of the continuous density function describing the random variables input to the model. This is done through stochastic quantization theory (Graf and Luschgy, 2000). The application of this theory to volcanic scenario forecasting has been tested through both an oversimplified analytical model and a more complex numerical model for magma flow in volcanic conduits, the latter still running in relatively short times to allow comparison with Monte Carlo simulations. The final aim is to define proper strategies and paradigms for application to more complex, time-demanding codes
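The core quantization step can be illustrated in the simplest one-dimensional setting with Lloyd's fixed-point iteration, which reduces a sampled density to a few representative points plus weights; the expensive simulator is then run only at those points. This is a toy stand-in for the multivariate optimal quantization of Graf and Luschgy used by the authors; the point count, initialization, and iteration budget are illustrative assumptions.

```python
import numpy as np

def lloyd_quantizer(samples, n_points=10, iters=50):
    """One-dimensional Lloyd iteration: find n_points representatives
    minimizing mean squared quantization error for the sampled density,
    and return them with their cell probabilities."""
    q = np.quantile(samples, np.linspace(0.05, 0.95, n_points))  # init
    for _ in range(iters):
        # nearest-representative partition (Voronoi cells on the line)
        idx = np.argmin(np.abs(samples[:, None] - q[None, :]), axis=1)
        for j in range(n_points):
            cell = samples[idx == j]
            if cell.size:
                q[j] = cell.mean()   # centroid update
    weights = np.bincount(idx, minlength=n_points) / samples.size
    return q, weights
```

A scenario forecast would then run the physical model once per representative input and combine the outputs using the returned weights, instead of averaging over thousands of Monte Carlo draws.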
Concepts and algorithms for terminal-area traffic management
NASA Technical Reports Server (NTRS)
Erzberger, H.; Chapel, J. D.
1984-01-01
The nation's air-traffic-control system is the subject of an extensive modernization program, including the planned introduction of advanced automation techniques. This paper gives an overview of a concept for automating terminal-area traffic management. Four-dimensional (4D) guidance techniques, which play an essential role in the automated system, are reviewed. One technique, intended for on-board computer implementation, is based on application of optimal control theory. The second technique is a simplified approach to 4D guidance intended for ground computer implementation. It generates advisory messages to help the controller maintain scheduled landing times of aircraft not equipped with on-board 4D guidance systems. An operational system for the second technique, recently evaluated in a simulation, is also described.
NASA Astrophysics Data System (ADS)
Gill, Alexander T.
Every day, millions of cubic feet of natural gas are transported through interstate pipelines and consumed by customers all over the United States of America. Gas distributors, responsible for sending natural gas to individual customers, are eager for an estimate of how much natural gas will be used in the near future. GasHour(TM) software, a reliable forecasting tool from the Marquette University GasDay(TM) lab, has been providing highly accurate hourly forecasts over the past few years. Our goal is to improve current GasHour forecasts, and my thesis presents an approach to achieve that using a blending technique. This thesis includes detailed explanations of the multi-horizon forecasting technique employed by GasHour models. Several graphs are displayed to reveal the structure of hourly forecasts from GasHour. We present SMHF (Smoothing Multi-horizon Forecasts), a step-by-step method showing how a polynomial smoothing technique is applied to current GasHour predictions. A slightly different approach to smoothing has also been introduced. We compare the RMSEs of both GasHour forecasts and smoothed ones. Comparisons for several different situations are demonstrated as well, and several conclusions are reached. Based on the results, blending techniques can improve current GasHour forecasts. We look forward to applying this blending technique to other fields of forecasting.
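The abstract does not spell out the SMHF recipe, but the generic idea of polynomial smoothing across forecast horizons can be sketched as a least-squares fit along the horizon axis. The function name, polynomial degree, and 24-hour horizon below are illustrative assumptions, not the thesis's actual procedure.

```python
import numpy as np

def smooth_horizons(forecasts, degree=3):
    """Smooth a multi-horizon forecast (one value per lead hour) by a
    least-squares polynomial fit across the horizon axis.  A generic
    stand-in for an SMHF-style smoother."""
    h = np.arange(len(forecasts))
    coeffs = np.polyfit(h, forecasts, degree)
    return np.polyval(coeffs, h)
```

If the hour-to-hour roughness of the raw multi-horizon forecast is mostly noise rather than signal, replacing it with the fitted values reduces RMSE; that is the premise such a blending step tests.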
Jaramillo-Villegas, Jose A; Xue, Xiaoxiao; Wang, Pei-Hsun; Leaird, Daniel E; Weiner, Andrew M
2015-04-20
A path within the parameter space of detuning and pump power is demonstrated in order to obtain a single cavity soliton (CS) with certainty in SiN microring resonators in the anomalous dispersion regime. Once the single CS state is reached, it is possible to continue a path to compress it, broadening the corresponding single free spectral range (FSR) Kerr frequency comb. The first step to achieve this goal is to identify the stable regions in the parameter space via numerical simulations of the Lugiato-Lefever equation (LLE). Later, using this identification, we define a path from the stable modulation instability (SMI) region to the stable cavity solitons (SCS) region avoiding the chaotic and unstable regions. PMID:25968998
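Parameter-space paths like the one described are typically explored with split-step Fourier integration of the LLE. The sketch below integrates a normalized LLE with detuning `delta` and pump `pump` under anomalous dispersion; the normalization, grid size, and step sizes are illustrative assumptions and do not reproduce the paper's scaling or its SMI-to-SCS path.

```python
import numpy as np

def lle_run(delta, pump, n=256, dt=0.005, steps=20_000, seed=0):
    """Split-step integration of a normalized Lugiato-Lefever equation,
        dE/dt = -(1 + i*delta) E + i |E|^2 E + i d^2E/dphi^2 + pump,
    starting from weak noise.  Illustrative normalization only."""
    rng = np.random.default_rng(seed)
    phi = np.linspace(-np.pi, np.pi, n, endpoint=False)
    k = np.fft.fftfreq(n, d=phi[1] - phi[0]) * 2 * np.pi   # mode numbers
    lin = np.exp((-(1 + 1j * delta) - 1j * k**2) * dt)     # exact linear step
    E = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    for _ in range(steps):
        E = E * np.exp(1j * np.abs(E)**2 * dt) + pump * dt  # Kerr + pump
        E = np.fft.ifft(lin * np.fft.fft(E))                # loss/detuning/dispersion
    return phi, E
```

A detuning-power path would wrap this in an outer loop that slowly steps `delta` (and possibly `pump`) while reusing the final field as the next initial condition, which is how one steers from the modulation-instability region toward soliton states.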
Method for generating all uniform π-pulse sequences used in deterministic dynamical decoupling
NASA Astrophysics Data System (ADS)
Qi, Haoyu; Dowling, Jonathan P.
2015-09-01
Dynamical decoupling has been actively investigated since Viola first suggested using a pulse sequence to protect a qubit from decoherence. Since then, many schemes of dynamical decoupling have been proposed to achieve high-order suppression, both analytically and numerically. However, hitherto, there has not been a systematic framework to understand all existing uniform π-pulse dynamical decoupling schemes. In this report, we use projection pulse sequences as basic building blocks and concatenation as a way to combine them. We derive concatenated-projection dynamical decoupling, a framework in which we can systematically construct pulse sequences that achieve arbitrarily high suppression order. All previously known uniform dynamical decoupling sequences using π pulses can be fit into this framework. Understanding uniform dynamical decoupling as successive projections on the Hamiltonian also gives insight into how to invent new ways to construct better pulse sequences.
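The concatenation idea can be made concrete with the simplest single-axis case: represent a sequence as a list of free-evolution slots 'f' and π pulses 'X', and build level n by nesting level n-1 inside the basic two-pulse block, cancelling adjacent pulses since X² = I. This toy follows the standard concatenated-DD recursion for pure dephasing, not the paper's general projection framework.

```python
def concatenate(level):
    """Concatenated single-axis decoupling sequence:
        C0 = ['f']  (one free-evolution interval),
        Cn = C(n-1) + ['X'] + C(n-1) + ['X'],
    with adjacent pi-pulse pairs cancelled (X^2 = identity)."""
    seq = ['f']
    for _ in range(level):
        seq = seq + ['X'] + seq + ['X']
    out = []
    for s in seq:                       # cleanup pass: X X -> (nothing)
        if s == 'X' and out and out[-1] == 'X':
            out.pop()
        else:
            out.append(s)
    return out
```

Level 1 reproduces periodic DD (f X f X), and level 2 collapses to f X f f X f, i.e. the familiar CPMG-style τ-X-2τ-X-τ spacing, showing how concatenation recovers known uniform sequences as special cases.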
Highway Traffic Safety Manpower Functions Guide.
ERIC Educational Resources Information Center
Daugherty, Ronald D.; And Others
The purpose of the project, "Revision and Update of Traffic Safety Manpower Training Program Development Guide," was to develop the HIGHWAY TRAFFIC SAFETY MANPOWER FUNCTIONS GUIDE. This document provides an organizational schema illustrating the essential functions to be performed and the interrelationships of these functions in carrying out highway…
A Serious Game for Traffic Accident Investigators
ERIC Educational Resources Information Center
Binsubaih, Ahmed; Maddock, Steve; Romano, Daniela
2006-01-01
In Dubai, traffic accidents kill one person every 37 hours and injure one person every 3 hours. Novice traffic accident investigators in the Dubai police force are expected to "learn by doing" in this intense environment. Currently, they use no alternative to the real world in order to practice. This paper argues for the use of an alternative…
Traffic Light Detection Using Conic Section Geometry
NASA Astrophysics Data System (ADS)
Hosseinyalmdary, S.; Yilmaz, A.
2016-06-01
Traffic light detection and state recognition is a crucial task that autonomous vehicles must reliably fulfill. Despite scientific endeavors, it is still an open problem due to the variations of traffic lights and their perception in image form. Unlike previous studies, this paper investigates the use of inaccurate and publicly available GIS databases such as OpenStreetMap. In addition, we are the first to exploit conic section geometry to improve the shape cue of the traffic lights in images. Conic section geometry also enables us to estimate the pose of the traffic lights with respect to the camera. Our approach can detect multiple traffic lights in the scene; it is also able to detect traffic lights in the absence of prior knowledge, and to detect traffic lights at distances of up to 70 meters. The proposed approach has been evaluated for different scenarios and the results show that the use of stereo cameras significantly improves the accuracy of traffic light detection and pose estimation.
Automatic speech recognition in air traffic control
NASA Technical Reports Server (NTRS)
Karlsson, Joakim
1990-01-01
Automatic Speech Recognition (ASR) technology and its application to the Air Traffic Control system are described. The advantages of applying ASR to Air Traffic Control, as well as criteria for choosing a suitable ASR system are presented. Results from previous research and directions for future work at the Flight Transportation Laboratory are outlined.
Collegiate Aviation and FAA Air Traffic Control.
ERIC Educational Resources Information Center
Ruiz, Jose R.; Ruiz, Lorelei E.
2003-01-01
Based on a literature review this article describes the Air Traffic-Collegiate Training Initiative (AT-CTI) program, including objectives, the process by which postsecondary institutes become affiliated, advantages of affiliation, and the recruitment and employment of air traffic control graduates by the Federal Aviation Administration. (Contains…
California Guide to Traffic Safety Education.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento.
The guide proposes an elementary through high school program encompassing many aspects of traffic safety. Chapter 1 presents definitions, instructional goals, behavioral objectives, and K-6 traffic safety concepts coupled with student performance indicators. Various elements of program administration are covered in Chapter 2. Chapter 3 includes…
Neural network system for traffic flow management
NASA Astrophysics Data System (ADS)
Gilmore, John F.; Elibiary, Khalid J.; Petersson, L. E. Rickard
1992-09-01
Atlanta will be the home of several special events during the next five years, ranging from the 1996 Olympics to the 1994 Super Bowl. When combined with the existing special events (Braves, Falcons, and Hawks games, concerts, festivals, etc.), the need to effectively manage traffic flow from surface streets to interstate highways is apparent. This paper describes a system for Traffic Event Response and Management for Intelligent Navigation Utilizing Signals (TERMINUS) developed at Georgia Tech for adaptively managing special event traffic flows in the Atlanta, Georgia area. TERMINUS (the original name given to Atlanta, Georgia, based upon its role as a rail line terminating center) is an intelligent surface street signal control system designed to manage traffic flow in Metro Atlanta. The system consists of three components. The first is a traffic simulation of the downtown Atlanta area around Fulton County Stadium that models the flow of traffic when a stadium event lets out. Parameters for the surrounding area include modeling for events during various times of day (such as rush hour). The second component is a computer graphics interface with the simulation that shows the traffic flows achieved based upon intelligent control system execution. The final component is the intelligent control system that manages surface street light signals based upon feedback from control sensors that dynamically adapt the intelligent controller's decision-making process. The intelligent controller is a neural network model that allows TERMINUS to control the configuration of surface street signals to optimize the flow of traffic away from special events.
Seven Traffic Signals in Two Minutes
2011-01-01
Topeka, KS has activated the first of three key traffic corridors to receive a "green light tunnel," a real-time adaptive traffic signal system that synchronizes signals to create a series of green lights for motorists. The result is fewer stops, less travel time and -- most importantly -- a lot of saved gasoline.
Minimizing the Delay at Traffic Lights
ERIC Educational Resources Information Center
Van Hecke, Tanja
2009-01-01
Vehicles waiting at traffic lights present a typical queuing problem. At crossings the vehicles experience delay in both directions. Longer periods with green lights in one direction are disadvantageous for the vehicles coming from the other direction. The total delay for getting through the traffic point is what counts. This article presents an…
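The trade-off described (more green for one direction means more red for the other) can be quantified with the deterministic uniform-delay term of Webster's formula for a pretimed signal, which treats arrivals as a steady D/D/1 stream. This is a standard textbook expression offered as an illustration; the article's own analysis may differ.

```python
def uniform_delay(cycle, green, arrival_rate, sat_flow):
    """Average uniform delay per vehicle (seconds) on one approach of a
    pretimed signal: Webster's first (deterministic, D/D/1) term,
        d1 = 0.5 * C * (1 - g/C)^2 / (1 - min(1, X) * g/C),
    with cycle C and effective green g in seconds, and arrival_rate /
    sat_flow in vehicles per second."""
    g_ratio = green / cycle
    x = min(1.0, arrival_rate / (sat_flow * g_ratio))  # degree of saturation
    return 0.5 * cycle * (1 - g_ratio) ** 2 / (1 - x * g_ratio)
```

Evaluating both approaches of a crossing with this formula, while their green times share one cycle, makes the total-delay minimization the article alludes to a one-variable optimization over the green split.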
Bastián-Monarca, Nicolás A; Suárez, Enrique; Arenas, Jorge P
2016-04-15
In many countries, such as Chile, there is scarce official information for generating accurate noise maps. Therefore, specific simplification methods are becoming a real need for the acoustic community in developing countries. Thus, the main purpose of this work was to evaluate and apply simplified methods to generate a cost-effective traffic noise map of a small city in Chile. The experimental design involved the simplification of the cartographic information on buildings by clustering the households within a block, and the classification of vehicular traffic flows into categories to generate an inexpensive noise map. The streets have been classified according to the official road classification of the country. Vehicles are segregated into light vehicles, heavy vehicles and motorbikes to account for traffic flow. In addition, a number of road traffic noise models were compared with noise measurements, and consequently the road traffic model RLS-90 was chosen to generate the noise map of the city using the Computer Aided Noise Abatement (CadnaA) software. A direct dependence was observed between noise levels and traffic flow for each category of street. The methodology developed in this study appears convenient for developing countries as a way to obtain accurate approximations and develop inexpensive traffic noise maps. PMID:26845180
Wei, Kun; Gao, Shilong; Zhong, Suchuan; Ma, Hong
2012-01-01
In dynamical systems theory, a system which can be described by differential equations is called a continuous dynamical system. In studies of genetic oscillation, most early-stage deterministic models are built on ordinary differential equations (ODE). Therefore, gene transcription, which is a vital part of genetic oscillation, is presupposed to be a continuous dynamical system by default. However, recent studies argued that discontinuous transcription might be more common than continuous transcription. In this paper, by appending the inserted silent interval lying between two neighboring transcriptional events to the end of the preceding event, we established that the running time for an intact transcriptional event increases and gene transcription thus shows slow dynamics. By globally replacing the original time increment for each state increment by a larger one, we introduced fractional differential equations (FDE) to describe such globally slow transcription. The impact of fractionalization on genetic oscillation was then studied in two early-stage models, the Goodwin oscillator and the Rössler oscillator. By constructing a "dual memory" oscillator, the fractional delay Goodwin oscillator, we suggested that the four general requirements for generating genetic oscillation should be revised to be negative feedback, sufficient nonlinearity, sufficient memory and proper balancing of timescales. The numerical study of the fractional Rössler oscillator implied that globally slow transcription tends to lower the chance of a coupled or more complex nonlinear genetic oscillatory system behaving chaotically.
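For reference, the classic integer-order Goodwin oscillator that the fractional model generalizes is a three-stage negative-feedback loop (mRNA x, protein y, repressor z) with a Hill-type repression term; sustained oscillation requires a sufficiently steep Hill coefficient. The parameter values below are illustrative choices that oscillate, not those of the paper, and the fractional (Caputo-derivative) version is not implemented here.

```python
import numpy as np

def goodwin_rhs(s, a=10.0, n=12, b=0.5, c=0.5, d=0.5):
    """Classic Goodwin loop: z represses production of x via a Hill
    term; x drives y, y drives z, each stage decays linearly."""
    x, y, z = s
    return np.array([a / (1.0 + z**n) - b * x,
                     x - c * y,
                     y - d * z])

def rk4(f, s0, dt, steps):
    """Fixed-step 4th-order Runge-Kutta integration, returning the
    trajectory as a (steps, len(s0)) array."""
    s, traj = np.asarray(s0, dtype=float), []
    for _ in range(steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj.append(s.copy())
    return np.array(traj)
```

With equal stage decay rates, the loop oscillates only when the effective feedback gain exceeds the secant-condition threshold (Hill coefficient above 8), which is why the chosen n = 12 yields a limit cycle while small n relaxes to a fixed point.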
A three-variable model of deterministic chaos in the Belousov-Zhabotinsky reaction
NASA Astrophysics Data System (ADS)
Györgyi, László; Field, Richard J.
1992-02-01
Chaos is exhibited by a wide variety of systems governed by nonlinear dynamic laws [1-3]. Its most striking feature is an apparent randomness which seems to contradict its deterministic origin. The best-studied chaotic chemical system is the Belousov-Zhabotinsky (BZ) reaction [4-6] in a continuous-flow stirred-tank reactor (CSTR). Here we present a simple mechanism for the BZ reaction which allows us to develop a description in terms of a set of differential equations containing only three variables, the minimum number required to generate chaos in a continuous (non-iterative) dynamical system [2]. In common with experiments, our model shows aperiodicity and transitions between periodicity and chaos near bifurcations between oscillatory and steady-state behaviour, which occur at both low and high CSTR flow rates. While remaining closely related to a real chaotic chemical system, our model is sufficiently simple to allow detailed mathematical analysis. It also reproduces many other features of the BZ reaction better than does the simple Oregonator [7] (which cannot produce chaos).
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited for ecological and water quality modelling when definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.
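The underlying idea, deterministic particle spreading instead of random walks, can be shown in a one-dimensional toy: advect particles with the flow, then push each particle away from the cloud centroid at the Gaussian gradient-flux velocity D(x - mu)/sigma^2, so the cloud variance grows at the Fickian rate 2D per unit time. This is only an illustration of separation-regulated diffusion; it is not the published NEST algorithm, whose separation rules are derived in the paper.

```python
import numpy as np

def step(x, velocity, D, dt):
    """One deterministic transport step for a 1-D particle cloud:
    advection moves particles with the local flow; diffusion is
    mimicked by spreading the cloud about its centroid so that its
    variance grows at approximately 2*D per unit time (exact for a
    Gaussian cloud, to first order in dt)."""
    x = x + velocity(x) * dt                 # advection
    mu, var = x.mean(), x.var()
    return x + D * (x - mu) / var * dt       # deterministic spreading
```

Unlike a random-walk step, repeating this map with the same initial cloud always yields the same answer, which is the property that lets a separation-based scheme get by with far fewer particles.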
Inverting for a deterministic surface gravity wave using the sensitivity-kernel approach.
Roux, Philippe; Nicolas, Barbara
2014-04-01
The dynamic imaging of a deterministic gravity wave propagating at an air-water interface requires continuous sampling of every point at this interface. This sampling can be done acoustically using waves that propagate in the water column but have specular reflection points that fully scan the air-water interface. This study aims to perform this complex task experimentally, with identical ultrasonic source and receiver arrays that face each other in a 1-m-long, 5-cm-deep fluid waveguide, and with frequencies in the MHz range. The waveguide transfer matrix is recorded 100 times per second between the source-receiver arrays, while a gravity wave is generated at the air-water interface. Through the beamforming process, a large set of acoustic multi-reverberated beams are isolated and identified that interact with the air-water interface. The travel-time and amplitude modulations of each eigenbeam are measured when the surface gravity wave travels through the source-receiver plane. Linear inversion of the travel-time and amplitude perturbations is performed from a few thousand eigenbeams using diffraction-based sensitivity kernels. Inversion results using travel times, amplitudes, or these two observables together lead to accurate spatial-temporal patterns of the surface deformation. The advantages and limitations of the method are discussed. PMID:25234978
Efficiency of transport in periodic potentials: dichotomous noise contra deterministic force
NASA Astrophysics Data System (ADS)
Spiechowicz, J.; Łuczka, J.; Machura, L.
2016-05-01
We study the transport of an inertial Brownian particle moving in a symmetric and periodic one-dimensional potential, and subjected to both a symmetric, unbiased external harmonic force and biased dichotomous noise η(t), also known as a random telegraph signal or a two-state continuous-time Markov process. In doing so, we concentrate on the previously reported regime (Spiechowicz et al 2014 Phys. Rev. E 90 032104) for which non-negative biased noise η(t) in the form of generalized white Poissonian noise can induce anomalous transport processes similar to those generated by a deterministic constant force F = ⟨η(t)⟩ but significantly more effective than F, i.e. the particle moves much faster, the velocity fluctuations are noticeably reduced and the transport efficiency is enhanced several times. Here, we confirm this result for the case of dichotomous fluctuations which, in contrast to white Poissonian noise, can assume positive as well as negative values, and examine the role of thermal noise in the observed phenomenon. We focus our attention on the impact of the bidirectionality of dichotomous fluctuations and reveal that the effect of nonequilibrium-noise-enhanced efficiency is still detectable. This result may explain transport phenomena occurring in strongly fluctuating environments of both physical and biological origin. Our predictions can be corroborated experimentally using a setup that consists of a resistively and capacitively shunted Josephson junction.
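The driven dynamics can be sketched with a minimal Euler integration. All parameter values below are illustrative choices, not those of the paper, and thermal noise is omitted for simplicity, so the only stochastic element is the telegraph switching.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def telegraph(n, dt, eta_plus, eta_minus, mu_plus, mu_minus):
    """Biased dichotomous (random telegraph) noise: the signal jumps
    between eta_plus and eta_minus with switching rates mu_plus/mu_minus."""
    eta = np.empty(n)
    state = eta_plus
    for i in range(n):
        eta[i] = state
        rate = mu_plus if state == eta_plus else mu_minus
        if rng.random() < rate * dt:         # Poissonian switching events
            state = eta_minus if state == eta_plus else eta_plus
    return eta

# inertial particle in the washboard potential V(x) = sin(x), driven by
# unbiased harmonic forcing plus biased dichotomous noise
dt, n = 1e-3, 200_000
eta = telegraph(n, dt, 1.0, -0.5, 0.5, 1.0)  # stationary mean <eta> = 0.5
gamma, a, omega = 0.9, 4.2, 4.9              # illustrative parameters
x = v = t = 0.0
xs = np.empty(n)
for i in range(n):
    force = -math.cos(x) - gamma * v + a * math.cos(omega * t) + eta[i]
    v += force * dt                          # Euler step for the velocity
    x += v * dt
    t += dt
    xs[i] = x
```

Comparing the long-time drift of `xs` against a run with the constant force F = ⟨η(t)⟩ substituted for `eta` reproduces the kind of comparison the abstract describes.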
78 FR 2711 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-14
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for standardization, revision, clarification,...
77 FR 56698 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-13
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for standardization, revision, clarification,...
77 FR 2603 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-18
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for standardization, revision, clarification,...
76 FR 59481 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-26
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for standardization, revision, clarification,...
77 FR 27835 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for standardization, revision, clarification,...
78 FR 66098 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-04
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... that a meeting of the Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for...
75 FR 22892 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-30
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for standardization, revision, clarification,...
75 FR 63255 - Air Traffic Procedures Advisory Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-14
... Federal Aviation Administration Air Traffic Procedures Advisory Committee Meeting AGENCY: Federal Aviation... Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for standardization, revision, clarification,...
76 FR 27168 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-10
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC) will be held to review present air traffic control procedures and practices for standardization, revision, clarification,...
75 FR 68022 - Air Traffic Procedures Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-04
... Federal Aviation Administration Air Traffic Procedures Advisory Committee AGENCY: Federal Aviation... been issued for the Federal Aviation Administration Air Traffic Procedures Advisory Committee (ATPAC... Washington, DC, on October 29, 2010. Elizabeth Ray, Executive Director, Air Traffic Procedures...
Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view
NASA Astrophysics Data System (ADS)
Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.
2015-12-01
Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform the analyses, scenes of study are generated from ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen to be chaotic in order to ensure sensitivity to initial conditions, a fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are considered: the chaotic oscillators composing the scene of study are taken either as independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E, 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons
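A scene of independent elementary oscillators and its aggregation can be sketched directly. The Rössler parameters below are the standard ones (a = b = 0.2, c = 5.7); the ensemble size, step size, and random initial conditions are illustrative choices.

```python
import numpy as np

def rossler(s, a=0.2, b=0.2, c=5.7):
    """Standard Rössler vector field, the chaotic elementary system."""
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4(s, dt, f):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(2)
n_osc, dt, n_steps = 16, 0.01, 5000
states = rng.uniform(-1.0, 1.0, (n_osc, 3))   # independent initial conditions
xs = np.empty((n_steps, n_osc))
for t in range(n_steps):
    for i in range(n_osc):
        states[i] = rk4(states[i], dt, rossler)
    xs[t] = states[:, 0]

aggregated = xs.mean(axis=1)   # "aggregation": spatial average of the scene
```

For independent (desynchronized) oscillators the spatial average partially cancels, so the aggregated signal has a smaller amplitude than any single oscillator; under phase synchronization, by contrast, the average retains the elementary dynamics, which is the regime the abstract analyzes.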
NASA Astrophysics Data System (ADS)
Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry
2013-07-01
The effective delayed neutron fraction β plays an important role in the kinetic and static analysis of reactor physics experiments. It is used as the reactivity unit referred to as the "dollar". Usually, it is obtained by computer simulation due to the difficulty in measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction β. This method requires calculation of the adjoint neutron flux as a weighting function of the phase space inner products and is easy to implement by deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated β as the ratio between the delayed and total multiplication factors (therefore the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied by Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections. In recent years Meulekamp and van der Marck in 2006 and Nauchi and Kameyama
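Bretscher's k-ratio estimate reduces to one line once the two multiplication factors are in hand. The numbers below are illustrative of a typical 235U-fueled thermal core, not values from the paper.

```python
def beta_eff_k_ratio(k_total, k_prompt):
    """Bretscher's k-ratio estimate of the effective delayed neutron
    fraction: beta_eff = (k_total - k_prompt) / k_total, i.e. the share
    of the multiplication carried by delayed neutrons."""
    return 1.0 - k_prompt / k_total

# illustrative multiplication factors (not from the paper): a critical
# core whose prompt-only multiplication falls 650 pcm short
beta = beta_eff_k_ratio(1.00000, 0.99350)
```

In practice `k_total` and `k_prompt` come from two Monte Carlo runs, with and without the delayed-neutron data, so the statistical uncertainties of both eigenvalue calculations propagate into the estimate of β.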
Urquidi-Macdonald, M.; Van Voorhis, D.; Macdonald, D.D.
1995-12-01
Erosion-corrosion is flow-assisted corrosion that can cause wall thinning in fluid piping systems. Several key parameters, such as pH, temperature, flow rate, mass transfer coefficient (which is a function of the geometry and pipe configuration), and materials determine the rate at which damage develops. In this study, the authors generated an experimental data base from the open literature which they used to train and to test an Artificial Neural Network (ANN). They also developed a deterministic model which they used to make predictions. The predictions from the deterministic model and from the ANN were compared to the experimental data collected, and the results are reviewed and discussed. The Artificial Neural Network was designed to learn from about 60% of the experimental data collected. The data contained, as variables, experimental single-phase erosion-corrosion rates (mm/yr) for several configurations of mild steel piping under various environmental and mechanical conditions, including pH, temperature, flow rate, mass transfer coefficient, and oxygen concentration. However, most of the data collected contain no information on the oxygen concentration in the solution, the hydrodynamic numbers characterizing the geometry, or the flow velocity. Instead of the hydrodynamic characteristics, the mass transfer coefficient was given (the mass transfer coefficient accounts for geometry and flow-velocity effects). The experimental information usually does not contain detailed information on the material composition, or on the chemical composition of the solution. Accordingly, the number of variables used to train the ANN was limited.
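The ANN setup can be sketched with a toy stand-in. The data below are synthetic (the paper's literature database is not reproduced), the network size and learning rate are arbitrary choices, and the target function is an invented smooth surrogate for the erosion-corrosion rate as a function of normalized pH, temperature, and mass transfer coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in for the erosion-corrosion database: three normalized
# inputs (pH, temperature, mass transfer coefficient) and an invented
# smooth rate surrogate -- purely illustrative
n = 400
X = rng.uniform(0.0, 1.0, (n, 3))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 0] * X[:, 2])[:, None]

# one-hidden-layer tanh network trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, losses = 0.1, []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # backpropagation of the mean-squared-error gradient
    gW2 = h.T @ err / n; gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / n; gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The 60/40 train/test split described in the abstract would simply hold out a slice of `X` and `y` before the loop.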
Cortes, Jesus M.; Desroches, Mathieu; Rodrigues, Serafim; Veltz, Romain; Muñoz, Miguel A.; Sejnowski, Terrence J.
2013-01-01
Short-term synaptic plasticity strongly affects the neural dynamics of cortical networks. The Tsodyks and Markram (TM) model for short-term synaptic plasticity accurately accounts for a wide range of physiological responses at different types of cortical synapses. Here, we report a route to chaotic behavior via a Shilnikov homoclinic bifurcation that dynamically organizes some of the responses in the TM model. In particular, the presence of such a homoclinic bifurcation strongly affects the shape of the trajectories in the phase space and induces highly irregular transient dynamics; indeed, in the vicinity of the Shilnikov homoclinic bifurcation, the number of population spikes and their precise timing are unpredictable and highly sensitive to the initial conditions. Such an irregular deterministic dynamics has its counterpart in stochastic/network versions of the TM model: The existence of the Shilnikov homoclinic bifurcation generates complex and irregular spiking patterns and—acting as a sort of springboard—facilitates transitions between the down-state and unstable periodic orbits. The interplay between the (deterministic) homoclinic bifurcation and stochastic effects may give rise to some of the complex dynamics observed in neural systems. PMID:24062464
Design of automation tools for management of descent traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1988-01-01
The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display consisting of a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system consisting of descent advisor algorithm, a library of aircraft performance models, national airspace system data bases, and interactive display software has been implemented on a workstation made by Sun Microsystems, Inc. It is planned to use this configuration in operational
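The time-line scheduling step can be illustrated with a minimal greedy sketch. This is an assumption for illustration only (the actual Descent Advisor scheduling logic is more elaborate): estimated times of arrival are pushed back just enough to honor a minimum separation at the feeder gate.

```python
def schedule_arrivals(etas, min_sep):
    """Greedy first-come-first-served scheduling: each aircraft keeps its
    estimated time of arrival (ETA) unless that would violate the minimum
    separation behind the previously scheduled arrival."""
    scheduled = []
    for eta in sorted(etas):
        t = eta if not scheduled else max(eta, scheduled[-1] + min_sep)
        scheduled.append(t)
    return scheduled

# three aircraft estimated at the gate 100 s, 102 s and 110 s out, with a
# 5 s minimum separation (illustrative numbers): the middle aircraft is
# delayed to 105 s, the third is unaffected
stas = schedule_arrivals([100.0, 102.0, 110.0], 5.0)
```

The scheduled times then drive the speed and descent advisories that close the gap between each aircraft's ETA and its slot.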
A Distributed Trajectory-Oriented Approach to Managing Traffic Complexity
NASA Technical Reports Server (NTRS)
Idris, Husni; Wing, David J.; Vivona, Robert; Garcia-Chico, Jose-Luis
2007-01-01
In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. While its architecture becomes more distributed, the goal of the Air Traffic Management (ATM) system remains to achieve objectives such as maintaining safety and efficiency. It is, therefore, critical to design appropriate control elements to ensure that aircraft and groundbased actions result in achieving these objectives without unduly restricting user-preferred trajectories. This paper presents a trajectory-oriented approach containing two such elements. One is a trajectory flexibility preservation function, by which aircraft plan their trajectories to preserve flexibility to accommodate unforeseen events. And the other is a trajectory constraint minimization function by which ground-based agents, in collaboration with air-based agents, impose just-enough restrictions on trajectories to achieve ATM objectives, such as separation assurance and flow management. The underlying hypothesis is that preserving trajectory flexibility of each individual aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by minimizing constraints without jeopardizing the intended ATM objectives. The paper presents conceptually how the two functions operate in a distributed control architecture that includes self separation. The paper illustrates the concept through hypothetical scenarios involving conflict resolution and flow management. It presents a functional analysis of the interaction and information flow between the functions. It also presents an analytical framework for defining metrics and developing methods to preserve trajectory flexibility and
Zhang, Binbin; Chen, Jun; Jin, Long; Deng, Weili; Zhang, Lei; Zhang, Haitao; Zhu, Minhao; Yang, Weiqing; Wang, Zhong Lin
2016-06-28
Wireless traffic volume detectors play a critical role in measuring traffic flow in real time for current Intelligent Traffic Systems. However, as these are battery-operated electronic devices, regularly replacing batteries remains a great challenge, especially given their wide distribution and deployment in remote areas. Here, we report a self-powered active wireless traffic volume sensor using a rotating-disk-based hybridized nanogenerator, combining a triboelectric nanogenerator and an electromagnetic generator, as the sustainable power source. Operated at a rotating rate of 1000 rpm, the device delivered an output power of 17.5 mW, corresponding to a volume power density of 55.7 W/m(3) (Pd = P/V; see Supporting Information for the detailed calculation) at a loading resistance of 700 Ω. The hybridized nanogenerator was demonstrated to effectively harvest energy from the wind generated by a moving vehicle through a tunnel. The delivered power is capable of triggering a counter via a wireless transmitter for real-time monitoring of the traffic volume in the tunnel. This study further expands the applications of triboelectric nanogenerators for high-performance ambient mechanical energy harvesting and as sustainable power sources for driving wireless traffic volume sensors.
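The reported power density is simply output power divided by device volume, so the implied volume can be sanity-checked in two lines (the volume itself is inferred from the abstract's numbers, not stated in it):

```python
P = 17.5e-3    # reported output power at 1000 rpm, W
Pd = 55.7      # reported volume power density Pd = P/V, W/m^3
V = P / Pd     # implied device volume, m^3 (about 3.1e-4 m^3, ~310 cm^3)
```

A sub-liter device volume is consistent with a small rotating-disk harvester mounted in a tunnel.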
Acoustical and perceptual assessment of water sounds and their use over road traffic noise.
Galbrun, Laurent; Ali, Tahrir T
2013-01-01
This paper examines physical and perceptual properties of water sounds generated by small to medium-sized water features that have applications for road traffic noise masking. A large variety of water sounds were produced in the laboratory by varying design parameters. Analysis showed that estimations can be made on how these parameters affect sound pressure levels, frequency content, and psychoacoustic properties. Comparisons with road traffic noise showed that there is a mismatch between the frequency responses of traffic noise and water sounds, with the exception of waterfalls with high flow rates, which can generate large low frequency levels comparable to traffic noise. Perceptual assessments were carried out in the context of peacefulness and relaxation, where both water sounds and noise from dense road traffic were audible. Results showed that water sound levels should be similar to, or no more than 3 dB below, the road traffic noise level (confirming previous research), and that stream sounds tend to be preferred to fountain sounds, which are in turn preferred to waterfall sounds. Analysis made on groups of sounds also indicated that low sharpness and large temporal variations were preferred on average, although no acoustical or psychoacoustical parameter correlated well with the individual sound preferences. PMID:23297897
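The "no more than 3 dB below" criterion involves ordinary level arithmetic. A sketch of the standard energetic (incoherent) summation of two sources, with illustrative levels:

```python
import math

def combined_level(levels_db):
    """Energetic (incoherent) sum of sound pressure levels in dB:
    convert to intensity ratios, add, convert back."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# a water feature 3 dB below a 60 dB road-traffic level raises the overall
# level by under 2 dB, yet can still alter the perceived soundscape
total = combined_level([60.0, 57.0])
```

This is why the masking strategy is viable: the water sound changes the character of the soundscape far more than it changes the total level.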
Data traffic reduction schemes for Cholesky factorization on asynchronous multiprocessor systems
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1989-01-01
Communication requirements of Cholesky factorization of dense and sparse symmetric, positive definite matrices are analyzed. The communication requirement is characterized by the data traffic generated on multiprocessor systems with local and shared memory. Lower bound proofs are given to show that when the load is uniformly distributed, the data traffic associated with factoring an n × n dense matrix using n^α (α ≤ 2) processors is Ω(n^(2+α/2)). For n × n sparse matrices representing a √n × √n regular grid graph, the data traffic is shown to be Ω(n^(1+α/2)), α ≤ 1. Partitioning schemes that are variations of the block assignment scheme are described, and it is shown that the data traffic generated by these schemes is asymptotically optimal. The schemes allow efficient use of up to O(n^2) processors in the dense case and up to O(n) processors in the sparse case before the total data traffic reaches the maximum values of O(n^3) and O(n^(3/2)), respectively. It is shown that the block-based partitioning schemes allow better utilization of the data accessed from shared memory and thus generate less data traffic than schemes based on column-wise wrap-around assignment.
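The scaling claims are just asymptotic bookkeeping, which can be written down directly (constant factors dropped; the functions encode only the exponents stated in the abstract):

```python
def dense_traffic(n, alpha):
    """Leading-order data traffic, Omega(n^(2 + alpha/2)), for Cholesky
    factorization of an n x n dense matrix on n^alpha processors
    (alpha <= 2). Constant factors are dropped."""
    assert alpha <= 2
    return n ** (2 + alpha / 2)

def sparse_traffic(n, alpha):
    """Leading-order traffic, Omega(n^(1 + alpha/2)), for the sparse
    sqrt(n) x sqrt(n) regular-grid case (alpha <= 1)."""
    assert alpha <= 1
    return n ** (1 + alpha / 2)

n = 4096  # illustrative problem size (a power of two keeps the powers exact)
```

At α = 2 (dense) and α = 1 (sparse) the bounds meet the stated maxima of O(n^3) and O(n^(3/2)), which is why adding processors beyond O(n^2), respectively O(n), no longer pays.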
Effect of Traffic Position Accuracy for Conducting Safe Airport Surface Operations
NASA Technical Reports Server (NTRS)
Jones, Denise R.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Barnes, James R.
2014-01-01
The Next Generation Air Transportation System (NextGen) concept proposes many revolutionary operational concepts and technologies, such as display of traffic information and movements, airport moving maps (AMM), and proactive alerts of runway incursions and surface traffic conflicts, to deliver an overall increase in system capacity and safety. A piloted simulation study was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center to evaluate the ability to conduct safe and efficient airport surface operations while utilizing an AMM displaying traffic of various position accuracies, as well as the effect of traffic position accuracy on airport conflict detection and resolution (CD&R) capability. Nominal scenarios and off-nominal conflict scenarios were conducted using 12 airline crews operating in a simulated Memphis International Airport terminal environment. The data suggest that all traffic should be shown on the airport moving map, whether qualified or unqualified, and that conflict detection and resolution technologies provide significant safety benefits. Despite the presence of traffic information on the map, collisions or near collisions still occurred; when indications or alerts were generated in these same scenarios, the incidents were averted.
Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
Recent Achievements of the Neo-Deterministic Seismic Hazard Assessment in the CEI Region
Panza, G. F.; Kouteva, M.; Vaccari, F.; Peresan, A.; Romanelli, F.; Cioflan, C. O.; Radulian, M.; Marmureanu, G.; Paskaleva, I.; Gribovszki, K.; Varga, P.; Herak, M.; Zaichenco, A.; Zivcic, M.
2008-07-08
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on the deterministic seismic wave propagation modelling at different scales--regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contribution of the earthquake source and seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on CEI region concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas are shown.
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; Falcao Salles, Joana
2015-03-17
Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about the mechanisms shifting their relative importance. To better understand the underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.
A deterministic particle method for one-dimensional reaction-diffusion equations
NASA Technical Reports Server (NTRS)
Mascagni, Michael
1995-01-01
We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider explicit and implicit time discretizations for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
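The implicit treatment with Newton iteration can be illustrated on a scalar reaction term. This is only the scalar skeleton of the idea, with a logistic stand-in for the reaction; the paper's system couples many particle positions.

```python
import math

def f(y):
    return y * (1.0 - y)        # logistic reaction term (illustrative stand-in)

def df(y):
    return 1.0 - 2.0 * y        # its derivative, needed for Newton's method

def implicit_euler(y0, dt, n_steps, tol=1e-12):
    """Backward-Euler time stepping; each step solves the implicit
    equation r(z) = z - y_old - dt*f(z) = 0 by Newton iteration."""
    y = y0
    for _ in range(n_steps):
        z = y                                   # initial Newton guess
        for _ in range(50):
            r = z - y - dt * f(z)               # residual of the implicit step
            step = r / (1.0 - dt * df(z))       # Newton update
            z -= step
            if abs(step) < tol:
                break
        y = z
    return y

y_num = implicit_euler(0.1, 1e-3, 1000)               # integrate to t = 1
y_exact = 1.0 / (1.0 + (1.0 / 0.1 - 1.0) * math.exp(-1.0))
```

Replacing the Newton update with the fixed-point map z ← y + dt·f(z) gives the Picard iteration also discussed in the abstract; Newton converges in far fewer sweeps when dt·f′ is not small.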
Enhancing traffic capacity of scale-free networks by link-directed strategy
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Han, Weizhan; Guo, Qing; Zhang, Shuai
2016-08-01
The transport efficiency of a network is strongly related to the underlying structure. In this paper, we propose an efficient strategy named high-betweenness-first (HBF) for improving the traffic handling capacity of scale-free networks by restricting a fraction of undirected links to be unidirectional, selected according to the links’ betweenness. Compared with the high-degree-first (HDF) strategy, the traffic capacity can be more significantly enhanced under the proposed link-directed strategy with the shortest-path (SP) routing protocol. Simulation results in the Barabási-Albert (BA) model for scale-free networks show that the critical generating rate Rc, which measures the overall traffic capacity of a network system, is larger after applying the HBF strategy, especially with nonrandom direction-determining rules. Because of the strongly improved traffic capacity, this work is helpful for designing and optimizing modern communication networks such as software-defined networks.
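The selection step of such a strategy (not the routing simulation) can be sketched as: rank undirected links by betweenness and mark the top fraction for conversion to one-way links. The toy graph below and the crude single-path betweenness proxy (one BFS shortest path per node pair, rather than exact betweenness) are illustrative assumptions:

```python
from collections import deque, defaultdict
import itertools

def bfs_path(adj, s, t):
    """One shortest path from s to t by breadth-first search."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    if t not in prev:
        return None
    path, u = [], t
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def hbf_direct(adj, fraction):
    """Rank undirected links by a betweenness proxy (usage on one BFS
    shortest path per node pair) and return the top `fraction` of links
    as candidates for being made unidirectional."""
    usage = defaultdict(int)
    for s, t in itertools.combinations(list(adj), 2):
        p = bfs_path(adj, s, t)
        if p:
            for u, v in zip(p, p[1:]):
                usage[frozenset((u, v))] += 1
    ranked = sorted(usage, key=usage.get, reverse=True)
    k = int(fraction * len(ranked))
    return [tuple(sorted(e)) for e in ranked[:k]]

# toy graph: hub node 0 bridges two triangles -> the bridge links rank highest
adj = {0: [1, 4], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2],
       4: [0, 5, 6], 5: [4, 6], 6: [4, 5]}
top = hbf_direct(adj, 0.25)
```

On this graph the two hub links (0,1) and (0,4) carry every cross-triangle path and are selected first, which is the intuition behind betweenness-based ranking.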
[Clinical examinations for the traffic accident patients].
Hitosugi, Masahito
2008-11-30
Traffic accidents are a leading cause of unintentional death, and about six thousand people die in them annually in Japan. Since about one million people suffer traffic injuries, most of them seek medical attention. Medical staff therefore have to identify the injuries accurately and treat them immediately. Furthermore, the cause of the accident should also be considered: why did the accident occur, and was it due to driver error? Clinical examinations are needed to answer these questions. Medical staff have to understand the characteristics of traffic injuries: they are typically severe, multiple blunt injuries, and the likely injuries can be estimated by considering the pattern of the accident. Because some accidents occur while the driver is under the influence of alcohol or other drugs, screening for these substances should be performed. Because the public is largely unaware of the preventable nature of traffic injuries, medical staff have to engage in the primary prevention of traffic injuries in addition to accurate diagnosis and treatment.
Traffic congestion in interconnected complex networks
NASA Astrophysics Data System (ADS)
Tan, Fei; Wu, Jiajing; Xia, Yongxiang; Tse, Chi K.
2014-06-01
Traffic congestion in isolated complex networks has been investigated extensively over the last decade. Coupled network models have recently been developed to facilitate further understanding of real complex systems. Analysis of traffic congestion in coupled complex networks, however, is still relatively unexplored. In this paper, we explore the effect of interconnections on traffic congestion in interconnected Barabási-Albert scale-free networks. We find that assortative coupling can alleviate traffic congestion more readily than disassortative and random coupling when the node processing capacity is allocated based on node usage probability. Furthermore, an optimal coupling probability can be found for assortative coupling. However, the three types of coupling preference achieve similar traffic performance if all nodes share the same processing capacity. We analyze interconnected Internet autonomous-system-level graphs of South Korea and Japan and obtain similar results. Some practical suggestions are presented to optimize such real-world interconnected networks accordingly.
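The coupling preferences compared in the abstract can be illustrated by how interlinks are chosen. A minimal sketch, assuming degree sequences are given as dictionaries (the node names and degree values below are made up for illustration): assortative coupling pairs nodes across the two networks in descending degree order (hub to hub), while random coupling pairs them uniformly at random.

```python
import random

def assortative_coupling(deg_a, deg_b, p):
    """Couple a fraction p of node pairs across two networks,
    matching nodes in descending degree order (hub to hub)."""
    a = sorted(deg_a, key=deg_a.get, reverse=True)
    b = sorted(deg_b, key=deg_b.get, reverse=True)
    k = int(p * min(len(a), len(b)))
    return list(zip(a[:k], b[:k]))

def random_coupling(deg_a, deg_b, p, rng):
    """Couple the same fraction of node pairs uniformly at random."""
    a, b = list(deg_a), list(deg_b)
    rng.shuffle(a)
    rng.shuffle(b)
    k = int(p * min(len(a), len(b)))
    return list(zip(a[:k], b[:k]))

# toy degree sequences for two four-node networks
deg_a = {'a1': 5, 'a2': 3, 'a3': 2, 'a4': 1}
deg_b = {'b1': 6, 'b2': 4, 'b3': 2, 'b4': 1}
links = assortative_coupling(deg_a, deg_b, 0.5)  # hub-to-hub interlinks
```

Disassortative coupling would instead match descending order in one network against ascending order in the other (hub to leaf).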
Physics of traffic gridlock in a city
NASA Astrophysics Data System (ADS)
Kerner, Boris S.
2011-10-01
Based on simulations of stochastic three-phase and two-phase traffic flow models, we reveal that, at a signalized city intersection, spontaneous traffic breakdown with subsequent city gridlock occurs with some probability after a random time delay even under small link inflow rates at which a vehicle queue developed during the red phase of the light signal dissolves fully during the green phase, i.e., when no traffic gridlock should be expected. In most cases, this traffic breakdown is initiated by a phase transition from free flow to synchronized flow occurring upstream of the queue at the light signal. The probability of traffic breakdown at the light signal is an increasing function of the link inflow rate and of the duration of the red phase of the light signal.
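The premise that a queue "should" dissolve every green phase is a mean-flow argument. A much cruder toy model than Kerner's three-phase framework (all rates and parameters below are invented for illustration) already shows the role of randomness: with Poisson arrivals and deterministic discharge during green, residual queues can appear even when mean demand is below capacity, and grow without bound when it is above.

```python
import random

def count_arrivals(rate, duration, rng):
    """Number of Poisson arrivals at `rate` during `duration`."""
    n, t = 0, rng.expovariate(rate)
    while t < duration:
        n += 1
        t += rng.expovariate(rate)
    return n

def worst_queue(inflow, sat_rate, green, red, cycles, rng):
    """Largest end-of-cycle queue over many signal cycles: Poisson
    arrivals all cycle long, deterministic discharge during green only."""
    q, worst = 0, 0
    for _ in range(cycles):
        q += count_arrivals(inflow, red + green, rng)
        q = max(0, q - int(sat_rate * green))
        worst = max(worst, q)
    return worst

rng = random.Random(7)
# mean demand 12 veh/cycle vs capacity 15: stable on average,
# yet fluctuations can still leave residual queues at the end of green
stable = worst_queue(0.2, 0.5, green=30, red=30, cycles=50, rng=rng)
# mean demand 18 veh/cycle > capacity 15: the queue grows cycle by cycle
jammed = worst_queue(0.3, 0.5, green=30, red=30, cycles=50, rng=rng)
```

The breakdown mechanism in the abstract is richer than this M/D queue picture (it involves a free-flow to synchronized-flow phase transition upstream of the queue), but the sketch shows why a random time delay before breakdown is natural.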
Traffic jams, granular flow, and soliton selection
Kurtze, D.A.; Hong, D.C.
1995-07-01
The flow of traffic on a long section of road without entrances or exits can be modeled by continuum equations similar to those describing fluid flow. In a certain range of traffic density, steady flow becomes unstable against the growth of a cluster, or "phantom" traffic jam, which moves at a slower speed than the otherwise homogeneous flow. We show that near the onset of this instability, traffic flow is described by a perturbed Korteweg-de Vries (KdV) equation. The traffic jam can be identified with a soliton solution of the KdV equation. The perturbation terms select a unique member of the continuous family of KdV solitons. These results may also apply to the dynamics of granular relaxation.
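For reference, the unperturbed KdV equation in one standard normalization, together with its one-parameter soliton family; the paper's perturbation terms, which select a single member of this family, are model-specific and not reproduced in the abstract:

```latex
% standard KdV and its soliton family (normalization conventions vary)
u_t + 6\, u\, u_x + u_{xxx} = 0,
\qquad
u(x,t) = \frac{c}{2}\,
  \operatorname{sech}^2\!\Bigl(\frac{\sqrt{c}}{2}\,(x - c t)\Bigr),
\quad c > 0 .
```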
Distributed traffic signal control using fuzzy logic
NASA Technical Reports Server (NTRS)
Chiu, Stephen
1992-01-01
We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.
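One of the three adjustments described above can be sketched as a tiny fuzzy controller. Everything here is a hedged illustration, not Chiu's rule base: triangular membership functions classify the local queue length, the rules vote to shrink, keep, or grow the cycle time, and the change per cycle is capped at a small fraction for smooth transitions.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adjust_cycle(cycle, queue, max_change=0.1):
    """Fuzzy adjustment of cycle time from local queue length.
    Rules: short queue -> shrink, medium -> keep, long -> grow.
    The change is capped at max_change (10%) per cycle."""
    short = tri(queue, -1, 0, 10)    # breakpoints are illustrative
    medium = tri(queue, 5, 12, 20)
    long_ = tri(queue, 15, 25, 100)
    w = short + medium + long_
    # weighted average of the rule outputs (-1 = shrink, 0, +1 = grow)
    delta = (-short + long_) / w if w else 0.0
    return cycle * (1 + max_change * delta)
```

A real controller would apply analogous rule sets to phase split and offset, and would also take the neighboring intersections' timing parameters as inputs, as the abstract describes.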
Neo-Deterministic and Probabilistic Seismic Hazard Assessments: a Comparative Analysis
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Nekrasova, Anastasia; Kossobokov, Vladimir; Panza, Giuliano F.
2016-04-01
Objective testing is the key issue for any reliable seismic hazard assessment (SHA). Different earthquake hazard maps must demonstrate their capability of anticipating ground shaking from future strong earthquakes before being used for purposes such as engineering design, insurance, and emergency management. Quantitative assessment of map performance is also an essential step in the scientific process of their revision and possible improvement. Cross-checking of probabilistic models against available observations and independent physics-based models is recognized as a major validation procedure. The existing maps from classical probabilistic seismic hazard analysis (PSHA), as well as those from neo-deterministic analysis (NDSHA), which have already been developed for several regions worldwide (including Italy, India and North Africa), are considered to exemplify the possibilities of cross-comparative analysis in highlighting the limits and advantages of the different methods. Where the data permit, a comparative analysis against the documented seismic activity observed in reality is carried out, showing how available observations about past earthquakes can contribute to assessing the performance of the different methods. Neo-deterministic refers to a scenario-based approach, which allows for the consideration of a wide range of possible earthquake sources as the starting point for scenarios constructed via full waveform modeling. The method does not make use of empirical attenuation models (i.e., Ground Motion Prediction Equations, GMPE) and naturally supplies realistic time series of ground shaking (i.e., complete synthetic seismograms), readily applicable to engineering analysis and other mitigation actions. The standard NDSHA maps provide reliable envelope estimates of maximum seismic ground motion from a wide set of possible scenario earthquakes, including the largest deterministically or historically defined credible earthquake. In addition
Winning Strategies in Congested Traffic
NASA Astrophysics Data System (ADS)
Járai-Szabó, Ferenc; Néda, Zoltán
2012-09-01
One-directional traffic on two lanes is modeled in the framework of a spring-block type model. A fraction q of the cars are allowed to change lanes, following simple dynamical rules, while the other cars keep their initial lane. The advance of cars starting from equivalent positions and following the two driving strategies is studied and compared. As a function of the parameter q, the winning probability and the average gain in advancement for the lane-changing strategy are computed. An interesting phase-transition-like behavior is revealed, and conclusions are drawn regarding the conditions under which the lane-changing strategy is the better option for drivers.
Yildirim, Necmettin; Kazanci, Caner
2011-01-01
A brief introduction to mathematical modeling of biochemical regulatory reaction networks is presented. Both deterministic and stochastic modeling techniques are covered, with examples from enzyme kinetics and coupled reaction networks with oscillatory dynamics and bistability. The Yildirim-Mackey model for the lactose operon is used as an example to discuss and show how deterministic and stochastic methods can be used to investigate various aspects of this bacterial circuit. PMID:21187231
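The deterministic/stochastic contrast the tutorial covers is commonly demonstrated on a system far simpler than the lactose operon. As a hedged sketch (a generic birth-death process with made-up rates, not the Yildirim-Mackey model), the same two reactions can be treated as an ODE and as an exact Gillespie simulation:

```python
import random

# toy birth-death process: production at rate k, degradation at rate g*x;
# the deterministic steady state is k/g = 10
k, g = 10.0, 1.0

def deterministic(x0, t_end, dt=1e-3):
    """Euler integration of dx/dt = k - g*x."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (k - g * x)
        t += dt
    return x

def gillespie(x0, t_end, rng):
    """Exact stochastic simulation (SSA) of the same two reactions."""
    x, t = x0, 0.0
    while True:
        a1, a2 = k, g * x          # propensities at the current state
        t += rng.expovariate(a1 + a2)
        if t > t_end:
            return x
        x += 1 if rng.random() < a1 / (a1 + a2) else -1

rng = random.Random(1)
det = deterministic(0.0, 10.0)   # converges to the fixed point k/g
avg = sum(gillespie(0, 10.0, rng) for _ in range(200)) / 200
```

The ODE gives the mean behavior; the SSA trajectories fluctuate around it, and it is exactly this fluctuation that matters for bistable circuits like the lactose operon, where noise can switch the system between states.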
Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
Rice, A. F.; Roussin, R. W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Implementation of Gy-Eq for deterministic effects limitation in shield design.
Wilson, John W; Kim, Myung-Hee Y; De Angelis, Giovanni; Cucinotta, Francis A; Yoshizawa, Nobuaki; Badavi, Francis F
2002-12-01
The NCRP has recently defined RBE values and a new quantity (Gy-Eq) for use in estimation of deterministic effects in space shielding and operations. The NCRP's RBE for neutrons is left ambiguous and not fully defined. In the present report we will suggest a complete definition of neutron RBE consistent with the NCRP recommendations and evaluate attenuation properties of deterministic effects (Gy-Eq) in comparison with other dosimetric quantities. PMID:12793740
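The quantity itself is an RBE-weighted sum over absorbed-dose components. A minimal sketch follows; the RBE numbers and dose values below are placeholders chosen for illustration only, not the NCRP's recommended values, which should be taken from the NCRP report:

```python
def gray_equivalent(doses_gy, rbe):
    """Gy-Eq as the RBE-weighted sum of absorbed-dose components:
    Gy-Eq = sum_i RBE_i * D_i."""
    return sum(rbe[particle] * d for particle, d in doses_gy.items())

# placeholder RBE values for illustration only; consult the NCRP
# recommendations (and the paper's proposed neutron definition)
rbe = {'photon': 1.0, 'proton': 1.5, 'neutron': 6.0}
doses = {'photon': 0.10, 'proton': 0.05, 'neutron': 0.02}  # Gy
gy_eq = gray_equivalent(doses, rbe)
```

The ambiguity the paper addresses is precisely which RBE to assign the neutron component, since neutron RBE varies strongly with energy.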