Generalized Deterministic Traffic Rules
NASA Astrophysics Data System (ADS)
Fuks, Henryk; Boccara, Nino
We study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parameterized by the speed limit m and another parameter k that represents a "degree of aggressiveness" in driving, closely related to the distance between two consecutive cars. We compare two driving strategies with identical maximum throughput: "conservative" driving with a high speed limit and "aggressive" driving with a low speed limit. These two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of the maximum achievable throughput. Possible modifications of the model are considered.
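The rule-184 dynamics that this family generalizes can be sketched in a few lines: a car (a 1-cell) advances one cell per step if and only if the cell ahead is empty. The function name and road layout below are illustrative, not from the paper.

```python
def step_rule184(road):
    """One synchronous update of CA rule 184 on a circular road:
    each car moves one cell forward if the next cell is empty."""
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i] == 1 and road[(i + 1) % n] == 0:
            new[(i + 1) % n] = 1   # car advances
        elif road[i] == 1:
            new[i] = 1             # blocked car stays put
    return new

road = [1, 1, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    road = step_rule184(road)
assert sum(road) == 3  # car number is conserved
```

At low density every car moves every step (free flow); above density 1/2 jams form, which is the phase structure the fundamental diagrams in these papers describe.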
Deterministic models for traffic jams
NASA Astrophysics Data System (ADS)
Nagel, Kai; Herrmann, Hans J.
1993-10-01
We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
Traffic chaotic dynamics modeling and analysis of deterministic network
NASA Astrophysics Data System (ADS)
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and direct factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have contributed greatly to the analysis of nondeterministic networks. Previous research held that chaotic behavior was caused by random factors, and that deterministic networks, lacking such factors, would not exhibit it. In this paper, we first applied chaos theory to traffic data collected from a testbed for a typical deterministic network, avionics full-duplex switched Ethernet (AFDX), and found that chaotic behavior also exists in deterministic networks. Then, to explore the mechanism generating the chaos, we applied mean-field theory to construct a traffic dynamics equation (TDE) that models deterministic network traffic without any random network factors. By studying the derived TDE, we propose that chaotic dynamics is an intrinsic property of network traffic and can be viewed as the effect of the TDE control parameters. A network simulation was performed, and the results verified that network congestion produces the chaotic dynamics in a deterministic network, consistent with the expectation from the TDE. Our research should help in analyzing the complicated dynamic behavior of traffic in deterministic networks and contribute to network reliability design and analysis.
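The paper's TDE is not reproduced here. As a generic illustration of how a fully deterministic recurrence can produce the exponential divergence of nearby trajectories that such chaos analyses test for, consider the logistic map (a standard textbook stand-in, not the authors' equation):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a deterministic chaotic map at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9     # two nearly identical initial conditions
diffs = []
for _ in range(40):
    x, y = logistic(x), logistic(y)
    diffs.append(abs(x - y))

# the 1e-9 perturbation grows exponentially and saturates at order one
assert max(diffs) > 1e-3
```

Detecting this kind of divergence rate (a positive largest Lyapunov exponent) in measured traffic series is the usual empirical signature of chaotic dynamics.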
Deterministic generation of remote entanglement with active quantum feedback
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-10
We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
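For reference, the Wiseman-Milburn feedback master equation invoked above has the standard textbook form shown below, for a generic measurement operator c and feedback Hamiltonian F at unit measurement efficiency; these symbols are generic placeholders, not the paper's specific operators.

```latex
\dot{\rho} \;=\; -\,i\!\left[H + \tfrac{1}{2}\!\left(c^{\dagger}F + F c\right),\, \rho\right]
\;+\; \mathcal{D}\!\left[c - iF\right]\rho ,
\qquad
\mathcal{D}[a]\rho \;\equiv\; a\rho a^{\dagger} - \tfrac{1}{2}\!\left(a^{\dagger}a\,\rho + \rho\,a^{\dagger}a\right).
```

The feedback operator F enters both a Hamiltonian correction and the modified collapse operator c - iF, which is what makes an analytic solution tractable in the unit-efficiency limit mentioned in the abstract.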
Deterministic entanglement generation from driving through quantum phase transitions
NASA Astrophysics Data System (ADS)
Luo, Xin-Yu; Zou, Yi-Quan; Wu, Ling-Na; Liu, Qi; Han, Ming-Fei; Tey, Meng Khoon; You, Li
2017-02-01
Many-body entanglement is often created through the system evolution, aided by nonlinear interactions between the constituting particles. These very dynamics, however, can also lead to fluctuations and degradation of the entanglement if the interactions cannot be controlled. Here, we demonstrate near-deterministic generation of an entangled twin-Fock condensate of ~11,000 atoms by driving a rubidium-87 Bose-Einstein condensate undergoing spin mixing through two consecutive quantum phase transitions (QPTs). We directly observe number squeezing of 10.7 ± 0.6 decibels and normalized collective spin length of 0.99 ± 0.01. Together, these observations allow us to infer an entanglement-enhanced phase sensitivity of ~6 decibels beyond the standard quantum limit and an entanglement breadth of ~910 atoms. Our work highlights the power of generating large-scale useful entanglement by taking advantage of the different entanglement landscapes separated by QPTs.
Deterministic generation of a cluster state of entangled photons
NASA Astrophysics Data System (ADS)
Schwartz, I.; Cogan, D.; Schmidgall, E. R.; Don, Y.; Gantz, L.; Kenneth, O.; Lindner, N. H.; Gershoni, D.
2016-10-01
Photonic cluster states are a resource for quantum computation based solely on single-photon measurements. We use semiconductor quantum dots to deterministically generate long strings of polarization-entangled photons in a cluster state by periodic timed excitation of a precessing matter qubit. In each period, an entangled photon is added to the cluster state formed by the matter qubit and the previously emitted photons. In our prototype device, the qubit is the confined dark exciton, and it produces strings of hundreds of photons in which the entanglement persists over five sequential photons. The measured process map characterizing the device has a fidelity of 0.81 with that of an ideal device. Further feasible improvements of this device may reduce the resources needed for optical quantum information processing.
Deterministic generation of remote entanglement with active quantum feedback
NASA Astrophysics Data System (ADS)
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-01
We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Finally, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Deterministic photonic cluster state generation from quantum dot molecules
NASA Astrophysics Data System (ADS)
Economou, Sophia; Gimeno-Segovia, Mercedes; Rudolph, Terry
2014-03-01
Currently, the most promising approach for photon-based quantum information processing is measurement-based, or one-way, quantum computing. In this scheme, a large entangled state of photons is prepared upfront and the computation is implemented with single-qubit measurements alone. Available approaches to generating the cluster state are probabilistic, which makes scalability challenging. We propose to generate the cluster state using a quantum dot molecule with one electron spin per quantum dot. The two spins are coupled by exchange interaction and are periodically pulsed to produce photons. We show that the entanglement created by free evolution between the spins is transferred to the emitted photons, and thus a 2D photonic ladder can be created. Our scheme only utilizes single-spin gates and measurement, and is thus fully consistent with available technology.
Traffic scenario generation technique for piloted simulation studies
NASA Technical Reports Server (NTRS)
Williams, David H.; Wells, Douglas C.
1985-01-01
Piloted simulation studies of cockpit traffic display concepts require the development of representative traffic scenarios. With the exception of specific aircraft interaction issues, most research questions can be addressed using traffic scenarios consisting of prerecorded aircraft movements merged together to form a desired traffic pattern. Prerecorded traffic scenarios have distinct research advantages, allowing control of traffic encounters with repeatability of scenarios between different test subjects. A technique is described for generation of prerecorded jet transport traffic scenarios suitable for use in piloted simulation studies. Individual flight profiles for the aircraft in the scenario are created interactively with a computer program designed specifically for this purpose. The profiles are then time-correlated and merged into a complete scenario. This technique was used to create traffic scenarios for the Denver, Colorado area with operations centered at Stapleton International Airport. Traffic scenarios for other areas may also be created using this technique, with appropriate modifications made to the navigation fix locations contained in the flight profile generation program.
Mesh generation and energy group condensation studies for the jaguar deterministic transport code
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J.
2012-07-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single-assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined, and thermal-flux solutions are compared with those of the Monte Carlo code MC21.
All-electrical deterministic single domain wall generation for on-chip applications
Guite, Chinkhanlun; Kerk, I. S.; Sekhar, M. Chandra; Ramu, M.; Goolaup, S.; Lew, W. S.
2014-01-01
Controlling domain wall (DW) generation and dynamics in ferromagnetic nanowires is critical to the engineering of domain-wall-based non-volatile logic and magnetic memory devices. Previous research showed that DW generation suffers from a random, stochastic nature, which makes the realization of DW-based devices a challenging task. Conventionally, stabilizing a Néel DW requires a long current pulse and the assistance of an external magnetic field. Here, we demonstrate a method to deterministically produce a single DW without having to compromise on the pulse duration, and no external field is required to stabilize the DW. This is achieved by controlling the stray-field magnetostatic interaction between a DW generated by a current-carrying strip line and the edge of the nanowire. The natural edge-field-assisted DW generation process was found to be twice as fast as conventional methods and to require a lower current density. Such a deterministic DW generation method could bring DW device technology a step closer to on-chip application. PMID:25500734
Studies of Next Generation Air Traffic Control Specialists: Why Be an Air Traffic Controller?
2011-08-01
“Gen-X,” “Gen-Y,” “Baby Boomer,” “Millennial,” “The Greatest Generation”: labels … “Millennials” (Gimbel, 2007); descriptions of generational differences are a staple in the human resources (HR) trade press and corporate training. … controllers, recruited from Gen-X and Millennials, than to the “Post-Strike” generation (largely Baby Boomers), and non-material factors such as the …
Koay, Cheng Guan; Hurley, Samuel A.; Meyerand, M. Elizabeth
2011-01-01
Purpose: Diffusion MRI measurements are typically acquired sequentially with unit gradient directions that are distributed uniformly on the unit sphere. The ordering of the gradient directions has a significant effect on the quality of dMRI-derived quantities. Even though several methods have been proposed to generate optimal orderings of gradient directions, these methods are not widely used in clinical studies because of two major problems. The first problem is that the existing methods for generating highly uniform and antipodally symmetric gradient directions are inefficient. The second problem is that the existing methods for generating optimal orderings of gradient directions are also highly inefficient. In this work, the authors propose two extremely efficient and deterministic methods to solve these two problems. Methods: The method for generating a nearly uniform point set on the unit sphere (with antipodal symmetry) is based upon the notion that the spacing between two consecutive points on the same latitude should be equal to the spacing between two consecutive latitudes. The method for generating an optimal ordering of diffusion gradient directions is based on the idea that each subset of incremental sample size, derived from the prescribed full set of gradient directions, must be as uniform as possible in terms of a modified electrostatic energy designed for antipodally symmetric point sets. Results: The proposed method outperformed the state-of-the-art method in terms of computational efficiency by about six orders of magnitude. Conclusions: Two extremely efficient and deterministic methods have been developed for solving the problem of optimal ordering of diffusion gradient directions. The proposed strategy is also applicable to optimal view-ordering in three-dimensional radial MRI. PMID:21928652
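The latitude-based construction described above (arc spacing between points on a ring equal to the spacing between rings) can be sketched as follows. The spacing heuristic, ring placement, and function name are illustrative assumptions, not the authors' exact algorithm; antipodal partners of the hemisphere points supply the other half of the sphere.

```python
import math

def hemisphere_points(n_target):
    """Near-uniform points on the upper unit hemisphere: the arc
    spacing between points on each latitude ring is set equal to
    the spacing between consecutive rings (sketch only)."""
    # each point occupies roughly d*d of the hemisphere area 2*pi
    d = math.sqrt(2 * math.pi / n_target)
    n_rings = max(1, round((math.pi / 2) / d))
    pts = []
    for i in range(n_rings):
        theta = (i + 0.5) * (math.pi / 2) / n_rings   # polar angle
        ring_len = 2 * math.pi * math.sin(theta)
        n_on_ring = max(1, round(ring_len / d))
        for j in range(n_on_ring):
            phi = 2 * math.pi * j / n_on_ring
            pts.append((math.sin(theta) * math.cos(phi),
                        math.sin(theta) * math.sin(phi),
                        math.cos(theta)))
    return pts

pts = hemisphere_points(60)
assert all(abs(x*x + y*y + z*z - 1) < 1e-12 for x, y, z in pts)
```

The point count comes out close to, but not exactly, the target; the paper's ordering step would then sort such a set so every prefix is itself near-uniform.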
NASA Astrophysics Data System (ADS)
Qian, Jing; Zhang, Weiping
2017-03-01
We develop a scheme for deterministic generation of an entangled state between two atoms in different Rydberg states via a chirped adiabatic passage, which directly connects the initial ground and target entangled states and does not require the normally needed blockade effect. The occupancy of intermediate states is strongly suppressed via two pulses with proper time-dependent detunings and the electromagnetically-induced-transparency condition. By solving analytical expressions for the eigenvalues and eigenstates of the two-atom system, we investigate the optimal parameters for guaranteeing the adiabatic condition. We present a detailed study of the effects of pulse duration, chirp rate, and different Rydberg interactions on the fidelity of the prepared entangled state with experimentally feasible parameters, which reveals good agreement between the analytic and full numerical results.
Transforming the NAS: The Next Generation Air Traffic Control System
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2004-01-01
The next-generation air traffic control system must be designed to safely and efficiently accommodate the large growth of traffic expected in the near future. It should be sufficiently scalable to contend with the factor of 2 or more increase in demand expected by the year 2020. Analysis has shown that the current method of controlling air traffic cannot be scaled up to provide such levels of capacity. Therefore, to achieve a large increase in capacity while also giving pilots increased freedom to optimize their flight trajectories requires a fundamental change in the way air traffic is controlled. The key to achieving a factor of 2 or more increase in airspace capacity is to automate separation monitoring and control and to use an air-ground data link to send trajectories and clearances directly between ground-based and airborne systems. In addition to increasing capacity and offering greater flexibility in the selection of trajectories, this approach also has the potential to increase safety by reducing controller and pilot errors that occur in routine monitoring and voice communication tasks.
NASA Astrophysics Data System (ADS)
Wang, Xun; Liu, Zhirong; Huang, Kelin; Sun, Jingbo
2017-03-01
Within the theory of the first-order Born approximation, analytical expressions are derived for a Gaussian Schell-model array (GSMA) beam scattered by a deterministic medium in the far zone. Using the analytical formulas obtained, shifts of the GSMA beam's scattered spectrum are numerically investigated. Results show that the scattering directions sx and sy, the effective radius σ of the scattering medium, the initial beam transverse width σ0, the correlation widths δx and δy of the source, and the line width Γ0 of the incident spectrum strongly influence the distribution of the normalized scattered spectrum in the far zone. These features of the GSMA beam's scattered spectrum could be used to obtain information about the structure of a deterministic medium.
Network Traffic Generator for Low-rate Small Network Equipment Software
Lanzisera, Steven
2013-05-28
Application that uses the Python low-level socket interface to pass network traffic between devices on the local side of a NAT router and the WAN side of the NAT router. This application is designed to generate traffic that complies with the Energy Star Small Network Equipment Test Method.
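The application's own code is not shown in the record above. A minimal sketch of the same idea, pushing bytes through Python's low-level socket interface, is given below; `socketpair()` stands in for the two endpoints (the real tool's NAT-traversal setup and Energy Star traffic profile are not modeled here).

```python
import socket

def send_burst(sock, payload, count):
    """Write `count` copies of `payload`; return total bytes sent."""
    total = 0
    for _ in range(count):
        total += sock.send(payload)
    return total

# a connected pair of local sockets stands in for LAN/WAN endpoints
a, b = socket.socketpair()
sent = send_burst(a, b"x" * 64, 10)
received = b.recv(1024)   # read the first chunk of the burst
a.close()
b.close()
assert sent == 640
```

A real low-rate generator would additionally pace the bursts with a timer to hit a target bit rate, which is the part the test method constrains.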
Generation of deterministic tsunami hazard maps in the Bay of Cadiz, south-west Spain
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, J. A.; Otero, L.; Olabarrieta, M.; González, M.; Carreño, E.; Baptista, M. A.; Miranda, J. M.; Medina, R.; Lima, V.
2009-04-01
free surface elevation, maximum water depth, maximum current speed, maximum Froude number and maximum impact forces (hydrostatic and dynamic forces). The fault rupture and sea-bottom displacement have been computed by means of the Okada equations. As a result, a set of more than 100 deterministic thematic maps has been created in a GIS environment incorporating geographical data and high-resolution orthorectified satellite images. These thematic maps form an atlas of inundation maps that will be distributed to different government authorities and civil protection and emergency agencies. The authors gratefully acknowledge the financial support provided by the EU under the frame of the European Project TRANSFER (Tsunami Risk And Strategies For the European Region), 6th Framework Programme.
Traffic Generator (TrafficGen) Version 1.4.2: Users Guide
2016-06-01
Contents excerpts: TrafficGen Graphical User Interface (GUI); Anatomy of the User Interface; Scenario Configuration and MGEN Files; Working with …; Node Name Conflict Resolution; Import Existing MGEN Files; Edit an Existing Node; Edit an Existing Flow; … Save a Scenario; Save a Scenario to a Different Name; Export Scenario Data; Export SDT File; Export MGEN Timeline
MMPP Traffic Generator for the Testing of the SCAR 2 Fast Packet Switch
NASA Technical Reports Server (NTRS)
Chren, William A., Jr.
1995-01-01
A prototype MMPP Traffic Generator (TG) has been designed for testing of the COMSAT-supplied SCAR II Fast Packet Switch. By generating packets distributed according to a Markov-Modulated Poisson Process (MMPP) model, it allows assessment of the switch performance under traffic conditions that are more realistic than could be generated using the COMSAT-supplied Traffic Generator Module. The MMPP model is widely believed to accurately model real-world superimposed voice and data communications traffic. The TG was designed to be as close as possible to a "drop-in" replacement for the COMSAT Traffic Generator Module. The latter fit on two Altera EPM7256EGC 192-pin CPLDs and produced traffic for one switch input port. No board changes are necessary because the TG has been partitioned to use the existing board traces. The TG, consisting of the parts "TGDATPROC" and "TGRAMCTL", must merely be reprogrammed into the Altera devices of the same name. However, the 040 controller software must be modified to provide TG initialization data. This data is given in Section II.
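The MMPP idea itself is simple to sketch in software: a hidden Markov chain switches between states, and each state drives a Poisson packet-arrival process at its own rate. The rates, switching probabilities, and the simplification of switching only at arrival events (rather than via exponential sojourn times) are illustrative assumptions, not the SCAR II parameters.

```python
import random

def mmpp_arrivals(t_end, rates=(2.0, 20.0), switch=(0.1, 0.3), seed=1):
    """Packet arrival times in [0, t_end) from a two-state MMPP sketch.
    `rates` are the Poisson rates of the two modulating states;
    `switch` are per-arrival probabilities of changing state."""
    rng = random.Random(seed)
    t, state, times = 0.0, 0, []
    while True:
        t += rng.expovariate(rates[state])   # next Poisson inter-arrival
        if t >= t_end:
            return times
        times.append(t)
        if rng.random() < switch[state]:     # modulating chain may switch
            state = 1 - state

arrivals = mmpp_arrivals(100.0)
assert all(a < b for a, b in zip(arrivals, arrivals[1:]))
```

Because the low-rate and high-rate states alternate, the stream is bursty in a way a plain Poisson generator cannot reproduce, which is exactly why the MMPP replacement was worth building.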
NASA Astrophysics Data System (ADS)
Argollo de Menezes, Marcio; Brigatti, Edgardo; Schwämmle, Veit
2013-08-01
Microbiological systems evolve to fulfil their tasks with maximal efficiency. The immune system is a remarkable example, where the distinction between self and non-self is made by means of molecular interaction between self-proteins and antigens, triggering affinity-dependent systemic actions. Specificity of this binding and the infinitude of potential antigenic patterns call for novel mechanisms to generate antibody diversity. Inspired by this problem, we develop a genetic algorithm where agents evolve their strings in the presence of random antigenic strings and reproduce with affinity-dependent rates. We ask what is the best strategy to generate diversity if agents can rearrange their strings a finite number of times. We find that endowing each agent with an inheritable cellular automaton rule for performing rearrangements makes the system more efficient in pattern-matching than if transformations are totally random. In the former implementation, the population evolves to a stationary state where agents with different automata rules coexist.
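A toy version of the setup described above can be written as a small genetic algorithm: agents carry a bit string plus an inheritable "rearrangement rule", and reproduce in proportion to affinity with an antigen string. Here the rule is reduced to a single integer (how many random bit flips an offspring performs), a deliberate simplification of the paper's inheritable cellular automaton rules; all parameters are illustrative.

```python
import random

rng = random.Random(0)
ANTIGEN = [rng.randint(0, 1) for _ in range(20)]

def affinity(s):
    """Number of positions matching the antigen string."""
    return sum(a == b for a, b in zip(s, ANTIGEN))

def rearrange(s, rule):
    """The agent's inheritable rule fixes how many random bit flips
    it performs (a stand-in for the paper's CA rules)."""
    s = s[:]
    for _ in range(rule):
        i = rng.randrange(len(s))
        s[i] ^= 1
    return s

# population of (string, rule) agents; children inherit the rule
pop = [([rng.randint(0, 1) for _ in range(20)], rng.randint(1, 3))
       for _ in range(30)]

for _ in range(200):
    pop.sort(key=lambda agent: affinity(agent[0]), reverse=True)
    parents = pop[:15]                         # affinity-dependent survival
    children = [(rearrange(s, r), r) for s, r in parents]
    pop = parents + children

best = max(affinity(s) for s, _ in pop)
assert best >= 15   # well above the random-string expectation of 10
```

The paper's finding is that letting the rule itself evolve (rather than fixing purely random rearrangements) makes this kind of pattern-matching more efficient.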
Methodology for Generating Conflict Scenarios by Time Shifting Recorded Traffic Data
NASA Technical Reports Server (NTRS)
Paglione, Mike; Oaks, Robert; Bilimoria, Karl D.
2003-01-01
A methodology is presented for generating conflict scenarios that can be used as test cases to estimate the operational performance of a conflict probe. Recorded air traffic data is time shifted to create traffic scenarios featuring conflicts with characteristic properties similar to those encountered in typical air traffic operations. First, a reference set of conflicts is obtained from trajectories that are computed using birth points and nominal flight plans extracted from recorded traffic data. Distributions are obtained for several primary properties (e.g., encounter angle) that are most likely to affect the performance of a conflict probe. A genetic algorithm is then utilized to determine the values of time shifts for the recorded track data so that the primary properties of conflicts generated by the time shifted data match those of the reference set. This methodology is successfully demonstrated using recorded traffic data for the Memphis Air Route Traffic Control Center; a key result is that the required time shifts are less than 5 min for 99% of the tracks. It is also observed that close matching of the primary properties used in this study additionally provides a good match for some other secondary properties.
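The core operation in this methodology, shifting one recorded track in time and checking the separation at closest approach, can be sketched as follows. Tracks are simplified here to (time, x, y) samples compared at matching timestamps; the function and track shapes are illustrative, not the paper's data.

```python
def min_separation(track_a, track_b, shift):
    """Minimum horizontal distance between track_a and a time-shifted
    track_b, evaluated at timestamps common to both after shifting."""
    b = {t + shift: (x, y) for t, x, y in track_b}
    dists = [((x - bx) ** 2 + (y - by) ** 2) ** 0.5
             for t, x, y in track_a if t in b
             for bx, by in [b[t]]]
    return min(dists) if dists else None

# two straight tracks whose paths cross at x == 5
a = [(t, float(t), 0.0) for t in range(10)]    # eastbound along y = 0
b = [(t, 5.0, t - 5.0) for t in range(10)]     # northbound along x = 5

assert min_separation(a, b, 0) == 0.0   # unshifted: conflict at t = 5
assert min_separation(a, b, 3) > 0.0    # a 3-step shift removes it
```

The genetic algorithm in the paper searches over exactly such per-track shifts so that the resulting set of conflicts matches the reference distributions of encounter properties.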
Yang, W. S.; Lee, C. H.
2008-05-16
Under the fast reactor simulation program launched in April 2007, development of an advanced multigroup cross section generation code was initiated in July 2007, in conjunction with the development of the high-fidelity deterministic neutron transport code UNIC. The general objectives are to simplify the existing multi-step schemes and to improve the resolved and unresolved resonance treatments. Based on the results of a review of current methods and the fact that they have been applied successfully to fast critical experiment analyses and fast reactor designs for the last three decades, the methodologies of the ETOE-2/MC²-2/SDX code system were selected as the starting set of methodologies for multigroup cross section generation for fast reactor analysis. As the first step toward coupling with the UNIC code and use in a parallel computing environment, the MC²-2 code was updated by modernizing the memory structure and replacing old data management package subroutines and functions with Fortran 90 based routines. Various modifications were also made in the ETOE-2 and MC²-2 codes to process the ENDF/B-VII.0 data properly. Using the updated ETOE-2/MC²-2 code system, the ENDF/B-VII.0 data were successfully processed for the major heavy and intermediate nuclides employed in sodium-cooled fast reactors. Initial verification tests of the MC²-2 libraries generated from ENDF/B-VII.0 data were performed by inter-comparison of twenty-one-group infinite-dilute total cross sections obtained from MC²-2, VIM, and NJOY. For almost all nuclides considered, MC²-2 cross sections agreed very well with those from VIM and NJOY. Preliminary validation tests of the ENDF/B-VII.0 libraries of MC²-2 were also performed using a set of sixteen fast critical benchmark problems. The deterministic results based on MC²-2/TWODANT calculations were in good agreement with MCNP solutions within ≈0.25% Δρ, except for a few small LANL fast assemblies
Lehr, D.; Dietrich, K.; Siefke, T.; Kley, E.-B.; Alaee, R.; Filter, R.; Lederer, F.; Rockstuhl, C.; Tünnermann, A.
2014-10-06
A double-patterning process for scalable, efficient, and deterministic nanoring array fabrication is presented. It enables gaps and features below a size of 20 nm. A writing time of 3 min/cm² makes this process extremely appealing for scientific and industrial applications. Numerical simulations are in agreement with experimentally measured optical spectra. Therefore, a platform and a design tool for upcoming next-generation plasmonic devices like hybrid plasmonic quantum systems are delivered.
Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei
2014-11-12
Single photons are attractive candidates of quantum bits (qubits) for quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuation of driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.
Trajectory Assessment and Modification Tools for Next Generation Air Traffic Management Operations
NASA Technical Reports Server (NTRS)
Brasil, Connie; Lee, Paul; Mainini, Matthew; Lee, Homola; Lee, Hwasoo; Prevot, Thomas; Smith, Nancy
2011-01-01
This paper reviews three Next Generation Air Transportation System (NextGen) based high-fidelity air traffic control human-in-the-loop (HITL) simulations, with a focus on the expected requirement of enhanced automated trajectory assessment and modification tools to support future air traffic flow management (ATFM) planning positions. The simulations were conducted at the National Aeronautics and Space Administration (NASA) Ames Research Center's Airspace Operations Laboratory (AOL) in 2009 and 2010. The test airspace for all three simulations assumed the mid-term NextGen En-Route high-altitude environment, utilizing high-altitude sectors from the Kansas City and Memphis Air Route Traffic Control Centers. Trajectory assessment, modification, and coordination decision support tools were developed at the AOL in order to perform future ATFM tasks. Overall tool usage results and user acceptability ratings were collected across three areas of NextGen operations to evaluate the tools. In addition to the usefulness and usability feedback, feasibility issues, benefits, and future requirements were also addressed. Overall, the tool sets were rated very useful and usable, and many elements of the tools received high scores and were used frequently and successfully. Tool utilization results in all three HITLs showed both user and system benefits, including better airspace throughput, reduced controller workload, and highly effective communication protocols in both full Data Comm and mixed-equipage environments.
Fast and optimized methodology to generate road traffic emission inventories and their uncertainties
NASA Astrophysics Data System (ADS)
Blond, N.; Ho, B. Q.; Clappier, A.
2012-04-01
Road traffic emissions are one of the main sources of air pollution in cities. They are also among the main sources of uncertainty in the numerical air quality models used to forecast pollution and define abatement strategies. Until now, the available models for generating road traffic emissions have required substantial effort, money, and time. This inhibits decisions to preserve air quality, especially in developing countries where road traffic emissions are changing very fast. In this research, we developed a new model designed to produce road traffic emission inventories quickly. This model, called EMISENS, combines the well-known top-down and bottom-up approaches and forces them to be coherent. A Monte Carlo methodology is included for computing emission uncertainties and the share of uncertainty due to each input parameter. This paper presents the EMISENS model and a demonstration of its capabilities through an application over the Strasbourg region (Alsace), France. The same input data as collected for the Circul'air model (a bottom-up approach applied for many years to forecast and study air pollution by the Alsatian air quality agency, ASPA) are used to evaluate the impact of several simplifications that a user could make. These experiments make it possible to review older methodologies and to evaluate EMISENS results when few input data are available to produce emission inventories, as in developing countries, and assumptions need to be made. We show that the same average fraction of mileage driven with a cold engine can be used for all the cells of the study domain, and that one emission factor could replace both cold and hot emission factors.
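The Monte Carlo uncertainty step can be sketched generically: total emissions are a sum over road links of traffic volume × emission factor × link length, and emission factors are sampled from their assumed distributions to obtain a spread on the total. The link data, Gaussian emission-factor model, and function name below are illustrative assumptions, not EMISENS internals.

```python
import random

def mc_emissions(links, n_draws=5000, seed=42):
    """Each link: (vehicles/day, mean EF in g/km, relative sd, km).
    Returns (mean, sd) of the total daily emissions over the draws."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_draws):
        total = 0.0
        for veh, ef, rel_sd, km in links:
            # sample an emission factor around its mean for this draw
            total += veh * rng.gauss(ef, ef * rel_sd) * km
        totals.append(total)
    mean = sum(totals) / n_draws
    var = sum((t - mean) ** 2 for t in totals) / n_draws
    return mean, var ** 0.5

links = [(10000, 0.2, 0.3, 1.5), (4000, 0.25, 0.3, 2.0)]
mean, sd = mc_emissions(links)
assert sd > 0
```

Repeating the run while perturbing one input at a time is how the contribution of each input parameter to the overall uncertainty can be attributed.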
NASA Technical Reports Server (NTRS)
Khambatta, Cyrus F.
2007-01-01
A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios, with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from data files from the McTMA system and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.
Scripted drives: A robust protocol for generating exposures to traffic-related air pollution
NASA Astrophysics Data System (ADS)
Patton, Allison P.; Laumbach, Robert; Ohman-Strickland, Pamela; Black, Kathy; Alimokhtari, Shahnaz; Lioy, Paul J.; Kipen, Howard M.
2016-10-01
Commuting in automobiles can contribute substantially to total traffic-related air pollution (TRAP) exposure, yet measuring commuting exposures for studies of health outcomes remains challenging. To estimate real-world TRAP exposures, we developed and evaluated the robustness of a scripted drive protocol on the NJ Turnpike and local roads between April 2007 and October 2014. Study participants were driven in a car with closed windows and open vents during morning rush hours on 190 days. Real-time measurements of PM2.5, PNC, CO, and BC, and integrated samples of NO2, were made in the car cabin. Exposure measures included in-vehicle concentrations on the NJ Turnpike and local roads and the differences and ratios of these concentrations. Median in-cabin concentrations were 11 μg/m3 PM2.5, 40 000 particles/cm3, 0.3 ppm CO, 4 μg/m3 BC, and 20.6 ppb NO2. In-cabin concentrations on the NJ Turnpike were higher than in-cabin concentrations on local roads by a factor of 1.4 for PM2.5, 3.5 for PNC, 1.0 for CO, and 4 for BC. Median concentrations of NO2 for full rides were 2.4 times higher than ambient concentrations. Results were generally robust relative to season, traffic congestion, ventilation setting, and study year, except for PNC and PM2.5, which had secular and seasonal trends. Ratios of concentrations were more stable than differences or absolute concentrations. Scripted drives can be used to generate reasonably consistent in-cabin increments of exposure to traffic-related air pollution.
Williams, K.A.; Delene, J.G.; Fuller, L.C.; Bowers, H.I.
1987-06-01
The total busbar electric generating costs were estimated for locations in ten regions of the United States for base-load nuclear and coal-fired power plants with a startup date of January 2000. For the Midwest region a complete data set that specifies each parameter used to obtain the comparative results is supplied. When based on the reference set of input variables, the comparison of power generation costs is found to favor nuclear in most regions of the country. Nuclear power is most favored in the northeast and western regions where coal must be transported over long distances; however, coal-fired generation is most competitive in the north central region where large reserves of cheaply mineable coal exist. In several regions small changes in the reference variables could cause either option to be preferred. The reference data set reflects the better of recent electric utility construction cost experience (best experience, BE) for nuclear plants. This study assumes as its reference case a stable regulatory environment and improved planning and construction practices, resulting in nuclear plants typically built at the present BE costs. Today's BE nuclear-plant capital investment cost model is thus used as a surrogate for projected costs for the next generation of light-water reactor plants. An alternative analysis based on today's median-experience (ME) nuclear-plant construction costs is also included. In this case, coal is favored in all ten regions, implying that typical nuclear capital investment costs must improve for nuclear to be competitive.
The spatial relationship between traffic-generated air pollution and noise in 2 US cities.
Allen, Ryan W; Davies, Hugh; Cohen, Martin A; Mallach, Gary; Kaufman, Joel D; Adar, Sara D
2009-04-01
Traffic-generated air pollution and noise have both been linked to cardiovascular morbidity. Since traffic is a shared source, there is potential for correlated exposures that may lead to confounding in epidemiologic studies. As part of the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air), 2-week NO and NO(2) concentrations were measured at up to 105 locations, selected primarily to characterize gradients near major roads, in each of 9 US communities. We measured 5-min A-weighted equivalent continuous sound pressure levels (L(eq)) and ultrafine particle (UFP) counts at a subset of these NO/NO(2) monitoring locations in Chicago, IL (N=69 in December 2006; N=36 in April 2007) and Riverside County, CA (N=46 in April 2007). L(eq) and UFP were measured during non-"rush hour" periods (10:00-16:00) to maximize comparability between measurements. We evaluated roadway proximity exposure surrogates in relation to the measured levels, estimated noise-air pollution correlation coefficients, and evaluated the impact of regional-scale pollution gradients, wind direction, and roadway proximity on the correlations. Five-minute L(eq) measurements in December 2006 and April 2007 were highly correlated (r=0.84), and measurements made at different times of day were similar (coefficients of variation: 0.5-13%), indicating that 5-min measurements are representative of long-term L(eq). Binary and continuous roadway proximity metrics characterized L(eq) as well or better than NO or NO(2). We found strong regional-scale gradients in NO and NO(2), particularly in Chicago, but only weak regional-scale gradients in L(eq) and UFP. L(eq) was most consistently correlated with NO, but the correlations were moderate (0.20-0.60). After removing the influence of regional-scale gradients the correlations generally increased (L(eq)-NO: r=0.49-0.62), and correlations downwind of major roads (L(eq)-NO: r=0.53-0.74) were consistently higher than those upwind (0.35-0.65). There was not a
Studies of uncontrolled air traffic patterns, phase 1
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.; Scharf, L. L.; Ruedger, W. H.; Modi, J. A.; Wheelock, S. L.; Davis, C. M.
1975-01-01
The general aviation air traffic flow patterns at uncontrolled airports are investigated and analyzed and traffic pattern concepts are developed to minimize the midair collision hazard in uncontrolled airspace. An analytical approach to evaluate midair collision hazard probability as a function of traffic densities is established which is basically independent of path structure. Two methods of generating space-time interrelationships between terminal area aircraft are presented; one is a deterministic model to generate pseudorandom aircraft tracks, the other is a statistical model in preliminary form. Some hazard measures are presented for selected traffic densities. It is concluded that the probability of encountering a hazard should be minimized independently of any other considerations and that the number of encounters involving visible-avoidable aircraft should be maximized at the expense of encounters in other categories.
NASA Astrophysics Data System (ADS)
Oh, Seok-Geun; Suh, Myoung-Seok
2016-03-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member for each category. However, their projection skills were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
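The skill-weighted averaging idea can be sketched as follows, using simple inverse-RMSE weights as a stand-in for the RAC and Taylor-score weights above; the member data are invented for illustration.

```python
def rmse(pred, truth):
    """Root mean square error between a member's series and the truth."""
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)) ** 0.5

def skill_weights(members, truth_train):
    """Weights proportional to inverse RMSE over the training period, normalized."""
    inv = [1.0 / rmse(m, truth_train) for m in members]
    total = sum(inv)
    return [w / total for w in inv]

def combine(members, weights):
    """Weighted ensemble average at each time step."""
    return [sum(w * m[i] for w, m in zip(weights, members))
            for i in range(len(members[0]))]

# Toy training data: one skilful member and one strongly biased member.
truth = [10.0, 12.0, 11.0, 13.0]
members = [
    [10.2, 12.1, 11.1, 12.9],   # small errors -> large weight
    [13.0, 15.0, 14.0, 16.0],   # systematic +3 bias -> small weight
]
w = skill_weights(members, truth)
proj = combine(members, w)
```

Because the weights down-weight the biased member, the weighted projection beats the equal-weight (EWA_NBC-style) average here, consistent with the behavior reported above for categories with systematic biases.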
Deterministic Walks with Choice
Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.
2014-01-10
This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
A queuing model for road traffic simulation
Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.
2015-03-10
We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme.
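A minimal sketch of the deterministic Godunov scheme that the queuing model builds on, using an assumed triangular density-flow fundamental diagram and demand/supply interface fluxes; the parameters and boundary values are illustrative, not taken from the paper.

```python
V, W, RHO_MAX = 1.0, 0.5, 1.0          # free speed, backward wave speed, jam density
RHO_C = W * RHO_MAX / (V + W)          # critical density of the triangular diagram
Q_MAX = V * RHO_C                      # capacity (peak of the fundamental diagram)

def demand(rho):
    """Flow a cell can send downstream."""
    return min(V * rho, Q_MAX)

def supply(rho):
    """Flow a cell can receive from upstream."""
    return min(W * (RHO_MAX - rho), Q_MAX)

def step(rho, dt=0.4, dx=1.0, inflow=0.2, out_supply=Q_MAX):
    """One Godunov update: interface flux = min(upstream demand, downstream supply)."""
    n = len(rho)
    flux = [0.0] * (n + 1)
    flux[0] = min(inflow, supply(rho[0]))            # upstream boundary demand
    for i in range(1, n):
        flux[i] = min(demand(rho[i - 1]), supply(rho[i]))
    flux[n] = min(demand(rho[-1]), out_supply)       # downstream boundary supply
    return [rho[i] + dt / dx * (flux[i] - flux[i + 1]) for i in range(n)]

rho = [0.1] * 10                       # initial density on a 10-cell road section
for _ in range(50):
    rho = step(rho)                    # relaxes toward the boundary demand of 0.2
```

The chosen time step satisfies the CFL condition (V * dt / dx <= 1), and the density relaxes to the value whose demand matches the upstream inflow.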
Deterministic Entangled Nanosource
2008-08-01
Final report covering September 2005 to September 2008, performed under contract FA9550-05-1-0455 (principal investigator: Galina Khitrova).
Creutz, M.
1986-03-01
A deterministic cellular automaton rule is presented which simulates the Ising model. Each cell carries, in addition to an Ising spin, a space-time parity bit and a variable playing the role of a momentum conjugate to the spin. The procedure permits study of nonequilibrium phenomena, heat flow, mixing, and time correlations. The algorithm can make full use of multispin coding, thus permitting fast programs involving parallel processing on serial machines.
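A minimal one-dimensional sketch of such demon-based microcanonical dynamics; the paper's automaton is two-dimensional and uses multispin coding, so the lattice, energy cap, and initial state below are purely illustrative.

```python
# Deterministic "demon" dynamics in the spirit of Creutz's Ising automaton.
L_SIZE = 32
E_MAX = 8          # cap on each cell's demon (momentum-like) energy

def total_energy(spins, demons):
    """Bond energy of the Ising chain plus the stored demon energy."""
    bond = -sum(spins[i] * spins[(i + 1) % L_SIZE] for i in range(L_SIZE))
    return bond + sum(demons)

def sweep(spins, demons, parity):
    """Update one space-time parity sublattice; every flip is paid for by the demon."""
    for i in range(parity, L_SIZE, 2):
        dE = 2 * spins[i] * (spins[(i - 1) % L_SIZE] + spins[(i + 1) % L_SIZE])
        if dE <= demons[i] and demons[i] - dE <= E_MAX:
            spins[i] = -spins[i]
            demons[i] -= dE        # exact integer bookkeeping: energy is conserved

spins = [1] * L_SIZE
demons = [4] * L_SIZE
e_start = total_energy(spins, demons)
for t in range(100):
    sweep(spins, demons, t % 2)    # alternate sublattices, as the parity bit dictates
e_end = total_energy(spins, demons)
```

Because flips are accepted only when the demon can pay (or absorb, up to its cap) the energy change, the total energy is conserved exactly, which is what makes the dynamics deterministic yet thermodynamically meaningful.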
Chengjiang Mao
1996-12-31
In typical AI systems, we employ so-called non-deterministic reasoning (NDR), which resorts to systematic search with backtracking in the search spaces defined by knowledge bases (KBs). A notable property of NDR is that it facilitates programming, especially for difficult AI problems such as natural language processing, for which it is hard to find algorithms that tell computers what to do at every step. However, the poor efficiency of NDR remains an open problem. Our work aims at overcoming this efficiency problem.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
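The DIB-style hard-assignment iteration can be sketched on a toy joint distribution. The update below (an argmax over clusters of log q(t) minus beta times a KL term) follows the general shape described above, but the initialization, toy data, and parameters are illustrative assumptions, not the authors' algorithm.

```python
import math

def kl(p, q):
    """KL divergence for small discrete distributions (assumes q > 0 where p > 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dib(p_xy, n_clusters, beta=5.0, iters=20):
    """Hard-assignment IB sketch: f(x) = argmax_t [log q(t) - beta * KL(p(y|x) || q(y|t))]."""
    nx, ny = len(p_xy), len(p_xy[0])
    p_x = [sum(row) for row in p_xy]
    p_y_x = [[v / p_x[i] for v in row] for i, row in enumerate(p_xy)]
    f = [min(i, n_clusters - 1) for i in range(nx)]   # arbitrary asymmetric start
    for _ in range(iters):
        # Re-estimate cluster marginals q(t) and predictive distributions q(y|t).
        q_t = [sum(p_x[i] for i in range(nx) if f[i] == t) for t in range(n_clusters)]
        q_y_t = [[sum(p_xy[i][j] for i in range(nx) if f[i] == t) / q_t[t]
                  if q_t[t] > 0 else 1.0 / ny for j in range(ny)]
                 for t in range(n_clusters)]
        # Deterministic reassignment: each x maps to exactly one cluster.
        f = [max(range(n_clusters),
                 key=lambda t: (math.log(q_t[t]) if q_t[t] > 0 else -1e9)
                               - beta * kl(p_y_x[i], q_y_t[t]))
             for i in range(nx)]
    return f

# Toy joint p(x, y): x0, x1 mostly emit y0; x2, x3 mostly emit y1.
p_xy = [[0.20, 0.05], [0.20, 0.05], [0.05, 0.20], [0.05, 0.20]]
f = dib(p_xy, n_clusters=2)
```

The encoder is deterministic by construction (each x gets one cluster), in contrast with the soft assignments of standard IB; on the toy data it groups the inputs by their predictive distribution over y.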
Basic model for traffic interweave
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2015-09-01
We propose a three-parameter traffic model. The system consists of a loop with two junctions. The three parameters control the inflow, the outflow (from the junctions), and the interweave (in the loop). The dynamics is deterministic; the boundary conditions are stochastic. We present preliminary results for a complete phase diagram and all possible phase transitions. We observe four distinct traffic phases: free flow, congestion, bottleneck, and gridlock. The proposed model economically presents a clear perspective on these four phases. Free flow and congestion are caused by the traffic conditions at the junctions. Both bottleneck and gridlock are caused by the traffic interweave in the loop. Rather than being directly related to conventional congestion, gridlock can be viewed as an extreme limit of the bottleneck phase. This model can be useful for clarifying the characteristics of traffic phases, and it can also be extended for practical applications.
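A loop with deterministic car motion and stochastic boundary junctions can be sketched as follows. This is an illustrative toy in the spirit of the model (rule-184 motion, one entry and one exit junction), not the paper's exact rule; the interweave mechanism is omitted.

```python
import random

def step(road, p_in, p_out, rng, entry=0, exit_cell=None):
    """One update: deterministic rule-184 motion on the loop, then stochastic
    injection/removal at the two junction cells (the boundary conditions)."""
    n = len(road)
    exit_cell = n // 2 if exit_cell is None else exit_cell
    new = [0] * n
    for i in range(n):
        j = (i + 1) % n
        if road[i] == 1:
            if road[j] == 0:
                new[j] = 1       # a car advances iff the cell ahead is empty
            else:
                new[i] = 1       # a blocked car stays put
    if new[exit_cell] == 1 and rng.random() < p_out:
        new[exit_cell] = 0       # car leaves the loop at the exit junction
    if new[entry] == 0 and rng.random() < p_in:
        new[entry] = 1           # car enters the loop at the entry junction
    return new

rng = random.Random(0)
road = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
closed = road[:]
for _ in range(30):
    closed = step(closed, p_in=0.0, p_out=0.0, rng=rng)   # closed loop: cars conserved
```

With inflow but no outflow the toy loop fills to complete gridlock, the extreme limit alluded to above; tuning the two junction probabilities moves the system between free flow and jammed regimes.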
Maritime Dynamic Traffic Generator. Volume III: Density Data on World Maps
1975-06-01
AD-A012 498. Maritime Dynamic Traffic Generator. Volume III: Density Data on World Maps. Franklin D. MacKenzie, Transportation Systems Center, Kendall Square, Cambridge, MA 02142. The work supports Transportation Systems Center efforts to define and analyze requirements for navigation and communication services through a satellite for commercial …
Wu, Jun; Ren, Cizao; Delfino, Ralph J.; Chung, Judith; Wilhelm, Michelle; Ritz, Beate
2009-01-01
Background Preeclampsia is a major complication of pregnancy that can lead to substantial maternal and perinatal morbidity, mortality, and preterm birth. Increasing evidence suggests that air pollution adversely affects pregnancy outcomes. Yet few studies have examined how local traffic-generated emissions affect preeclampsia in addition to preterm birth. Objectives We examined effects of residential exposure to local traffic-generated air pollution on preeclampsia and preterm delivery (PTD). Methods We identified 81,186 singleton birth records from four hospitals (1997–2006) in Los Angeles and Orange Counties, California (USA). We used a line-source dispersion model (CALINE4) to estimate individual exposure to local traffic-generated nitrogen oxides (NOx) and particulate matter < 2.5 μm in aerodynamic diameter (PM2.5) across the entire pregnancy. We used logistic regression to estimate effects of air pollution exposures on preeclampsia, PTD (gestational age < 37 weeks), moderate PTD (MPTD; gestational age < 35 weeks), and very PTD (VPTD; gestational age < 30 weeks). Results We observed elevated risks for preeclampsia and preterm birth from maternal exposure to local traffic-generated NOx and PM2.5. The risk of preeclampsia increased 33% [odds ratio (OR) = 1.33; 95% confidence interval (CI), 1.18–1.49] and 42% (OR = 1.42; 95% CI, 1.26–1.59) for the highest NOx and PM2.5 exposure quartiles, respectively. The risk of VPTD increased 128% (OR = 2.28; 95% CI, 2.15–2.42) and 81% (OR = 1.81; 95% CI, 1.71–1.92) for women in the highest NOx and PM2.5 exposure quartiles, respectively. Conclusion Exposure to local traffic-generated air pollution during pregnancy increases the risk of preeclampsia and preterm birth in Southern California women. These results provide further evidence that air pollution is associated with adverse reproductive outcomes. PMID:20049131
NASA Technical Reports Server (NTRS)
Roske-Hofstrand, Renate J.
1990-01-01
The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.
Deterministic patterns in cell motility
NASA Astrophysics Data System (ADS)
Lavi, Ido; Piel, Matthieu; Lennon-Duménil, Ana-Maria; Voituriez, Raphaël; Gov, Nir S.
2016-12-01
Cell migration paths are generally described as random walks, associated with both intrinsic and extrinsic noise. However, complex cell locomotion is not merely related to such fluctuations, but is often determined by the underlying machinery. Cell motility is driven mechanically by actin and myosin, two molecular components that generate contractile forces. Other cell functions make use of the same components and, therefore, will compete with the migratory apparatus. Here, we propose a physical model of such a competitive system, namely dendritic cells whose antigen capture function and migratory ability are coupled by myosin II. The model predicts that this coupling gives rise to a dynamic instability, whereby cells switch from persistent migration to unidirectional self-oscillation, through a Hopf bifurcation. Cells can then switch to periodic polarity reversals through a homoclinic bifurcation. These predicted dynamic regimes are characterized by robust features that we identify through in vitro trajectories of dendritic cells over long timescales and distances. We expect that competition for limited resources in other migrating cell types can lead to similar deterministic migration modes.
Near real-time traffic routing
NASA Technical Reports Server (NTRS)
Yang, Chaowei (Inventor); Cao, Ying (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor)
2012-01-01
A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data include static network characteristics, an origin-destination data table, dynamic traffic information data, and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.
Master equation analysis of deterministic chemical chaos
NASA Astrophysics Data System (ADS)
Wang, Hongli; Li, Qianshu
1998-05-01
The underlying microscopic dynamics of deterministic chemical chaos was investigated in this paper. We analyzed the master equation for the Williamowski-Rössler model by direct stochastic simulation as well as in the generating function representation. Simulation within an ensemble revealed that in the chaotic regime the deterministic mass-action kinetics is related neither to the ensemble mean nor to the most probable value within the ensemble. Cumulant expansion analysis of the master equation also showed that the molecular fluctuations do not remain bounded but grow linearly in time without limit, indicating the meaninglessness of the chaotic trajectories predicted by the phenomenological equations. These results suggest that the macroscopic description is no longer useful in the chaotic regime and that a more microscopic description is necessary in this circumstance.
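Direct stochastic simulation of a master equation, as used above, can be illustrated with a toy birth-death network, far simpler than the Williamowski-Rössler model; the rate constants and run lengths are arbitrary illustrative choices.

```python
import random

def gillespie(n0, b, d, t_end, rng):
    """Direct stochastic simulation (SSA) of a toy birth-death network:
    X -> 2X with propensity b*n, and 2X -> X with propensity d*n*(n-1)."""
    t, n = 0.0, n0
    while t < t_end:
        a1, a2 = b * n, d * n * (n - 1)
        a0 = a1 + a2
        if a0 == 0.0:
            break                         # absorbing state: no reactions possible
        t += rng.expovariate(a0)          # exponentially distributed waiting time
        if rng.random() * a0 < a1:
            n += 1                        # birth event
        else:
            n -= 1                        # death event
    return n

rng = random.Random(42)
b, d = 1.0, 0.01                          # deterministic steady state near b/d = 100
finals = [gillespie(50, b, d, 10.0, rng) for _ in range(100)]
ensemble_mean = sum(finals) / len(finals)
```

For this simple network the ensemble mean tracks the deterministic fixed point; the paper's point is precisely that in a chaotic regime this agreement between the mass-action trajectory and the ensemble statistics breaks down.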
Self-stabilizing Deterministic Gathering
NASA Astrophysics Data System (ADS)
Dieudonné, Yoann; Petit, Franck
In this paper, we investigate the possibility to deterministically solve the gathering problem (GP) with weak robots (anonymous, autonomous, disoriented, oblivious, deaf, and dumb). We introduce strong multiplicity detection as the ability for the robots to detect the exact number of robots located at a given position. We show that with strong multiplicity detection, there exists a deterministic self-stabilizing algorithm solving GP for n robots if, and only if, n is odd.
Multimedia traffic monitoring system
NASA Astrophysics Data System (ADS)
Al-Sayegh, Osamah A.; Dashti, Ali E.
2000-10-01
Increasing congestion on roads and highways and the problems associated with conventional traffic monitoring systems have generated interest in new traffic surveillance systems, such as video image processing. These systems are expected to be more effective and more economical than conventional surveillance systems. In this paper, we describe the design of a traffic surveillance system, called the Multimedia Traffic Monitoring System. The system is based on a client/server model, with the following main modules: 1) a video image capture module (VICM), 2) a video image processing module (VIPM), and 3) a database module (DBM). The VICM captures the live feed from a digital camera. Depending on the mode of operation, the VICM either: 1) sends the video images directly to the VIPM (on the same processing node), or 2) compresses the video images and sends them to the VIPM and/or the DBM on separate processing node(s). The main contribution of this paper is the design of a traffic monitoring system that uses image processing (the VIPM) to estimate traffic flow. In the current implementation, the VIPM estimates the number of vehicles per kilometer using sequences of nine images (at a rate of 4 frames per second). The VIPM algorithm generates a virtual grid and superimposes it on a part of the traffic scene. Motion and vehicle detection operators are applied within each cell of the grid, and the vehicle count is derived from the nine images of a sequence. The system is tested against a manual count of more than 40 image sequences (a total of more than 365 traffic images) covering various traffic situations. The results show that the system is able to determine the traffic flow with a precision of 1.5 vehicles per kilometer.
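The virtual-grid motion detection step can be sketched with simple frame differencing; the grid size, threshold, and toy frames below are hypothetical stand-ins for the system's actual operators.

```python
def grid_motion(prev, curr, cell=4, threshold=20):
    """Mark each virtual-grid cell as occupied when the mean absolute
    frame difference inside the cell exceeds a threshold."""
    h, w = len(curr), len(curr[0])
    occupied = []
    for r in range(0, h, cell):
        row = []
        for c in range(0, w, cell):
            diff = sum(abs(curr[y][x] - prev[y][x])
                       for y in range(r, min(r + cell, h))
                       for x in range(c, min(c + cell, w)))
            row.append(diff / float(cell * cell) > threshold)
        occupied.append(row)
    return occupied

# Toy 8x8 grayscale frames: a bright 4x4 "vehicle" appears in the lower-left cell.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
for y in range(4, 8):
    for x in range(4):
        curr[y][x] = 200
occ = grid_motion(prev, curr)
```

Counting the occupied cells across the nine frames of a sequence, and knowing the real-world length each grid cell covers, yields a vehicles-per-kilometer estimate of the kind described above.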
Mixed deterministic and probabilistic networks.
Mateescu, Robert; Dechter, Rina
2008-11-01
The paper introduces mixed networks, a new graphical model framework for expressing and reasoning with probabilistic and deterministic information. The motivation to develop mixed networks stems from the desire to fully exploit the deterministic information (constraints) that is often present in graphical models. Several concepts and algorithms specific to belief networks and constraint networks are combined, achieving computational efficiency, semantic coherence and user-interface convenience. We define the semantics and graphical representation of mixed networks, and discuss the two main types of algorithms for processing them: inference-based and search-based. A preliminary experimental evaluation shows the benefits of the new model.
Fluid turbulence - Deterministic or statistical
NASA Astrophysics Data System (ADS)
Cheng, Sin-I.
The deterministic view of turbulence suggests that the classical theory of fluid turbulence may be treating the wrong entity. The paper explores the physical implications of such an abstract mathematical result and provides a constructive computational demonstration of the deterministic and wave nature of fluid turbulence. The associated pressure disturbance for restoring a solenoidal velocity field is the primary agent, and its reflection from solid surfaces is the dominant mechanism of turbulence production. Statistical properties and their modeling must address the statistics of the uncertainties in the initial and boundary data of the ensemble.
Visualization of Traffic Accidents
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong; Khattak, Asad
2010-01-01
Traffic accidents have tremendous impact on society. Annually approximately 6.4 million vehicle accidents are reported by police in the US and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, has the capabilities to display events associated with a road network, such as accident locations, and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper aims to address this problem and proposes an improved method that utilizes all relevant information of traffic accidents, namely, route number, direction, and mile post, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shape file for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
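The linear-referencing computation at issue (mapping a route number, direction, and milepost to map coordinates, with one polyline per direction) can be sketched as follows; the route geometry is hypothetical.

```python
from bisect import bisect_right

def locate(route, milepost):
    """Interpolate an (x, y) event location along a route polyline whose
    vertices carry cumulative milepost values."""
    mps = [mp for mp, _, _ in route]
    i = bisect_right(mps, milepost) - 1          # segment containing the milepost
    i = max(0, min(i, len(route) - 2))           # clamp to the valid segment range
    mp0, x0, y0 = route[i]
    mp1, x1, y1 = route[i + 1]
    f = (milepost - mp0) / (mp1 - mp0)           # fractional position on the segment
    return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

# Hypothetical eastbound route: (milepost, x, y) vertices in route order.
# Each direction of travel gets its own polyline, so direction disambiguates
# which geometry a milepost is measured along.
route_EB = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.0, 1.0)]
x, y = locate(route_EB, 1.5)                     # halfway along the second segment
```

Using all three fields (route, direction, milepost) this way is exactly what prevents an accident on one carriageway from being plotted on the opposite one.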
Analysis of FBC deterministic chaos
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines are affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes are expected to provide important competitive advantages for U.S. industry.
NASA Astrophysics Data System (ADS)
Nuckelt, J.; Schack, M.; Kürner, T.
2011-08-01
This paper presents a physical (PHY) layer simulator of the IEEE 802.11p standard for Wireless Access in Vehicular Environments (WAVE). This simulator allows the emulation of data transmission via different radio channels as well as the analysis of the resulting system behavior. The PHY layer simulator is part of an integrated simulation platform including a traffic model to generate realistic mobility of vehicles and a 3D ray-optical model to calculate the multipath propagation channel between transmitter and receiver. Besides deterministic channel modeling by means of ray-optical modeling, the simulator can also be used with stochastic channel models of typical vehicular scenarios. With the aid of this PHY layer simulator and the integrated channel models, the resulting performance of the system in terms of bit and packet error rates of different receiver designs can be analyzed in order to achieve a robust data transmission.
NASA Astrophysics Data System (ADS)
Treiber, Martin; Kesting, Arne; Helbing, Dirk
2006-07-01
We investigate the adaptation of the time headways in car-following models as a function of the local velocity variance, which is a measure of the inhomogeneity of traffic flow. We apply this mechanism to several car-following models and simulate traffic breakdowns in open systems with an on-ramp as bottleneck and in a closed ring road. Single-vehicle data and one-minute aggregated data generated by several virtual detectors show a semiquantitative agreement with microscopic and flow-density data from the Dutch freeway A9. This includes the observed distributions of the net time headways for free and congested traffic, the velocity variance as a function of density, and the fundamental diagram. The modal value of the time headway distribution is shifted by a factor of about 2 under congested conditions. Macroscopically, this corresponds to the capacity drop at the transition from free to congested traffic. The simulated fundamental diagram shows free, synchronized, and jammed traffic, and a wide scattering in the congested traffic regime. We explain this by a self-organized variance-driven process that leads to the spontaneous formation and decay of long-lived platoons even for a deterministic dynamics on a single lane.
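The headway-adaptation mechanism can be sketched with the standard Intelligent Driver Model as the underlying car-following model; the adaptation law and all parameter values below are illustrative assumptions, not the authors' calibrated formulation.

```python
def idm_accel(v, gap, dv, T, v0=30.0, a=1.0, b=1.5, s0=2.0):
    """Intelligent Driver Model acceleration (standard form).
    v: own speed, gap: net distance to leader, dv: approaching rate, T: time headway."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * (a * b) ** 0.5))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

def adapted_headway(T0, variance, v_ref=30.0, alpha=1.0):
    """Illustrative adaptation: enlarge the time headway when the locally
    measured velocity variance (traffic inhomogeneity) is high."""
    return T0 * (1.0 + alpha * variance / v_ref ** 2)

# Same driving situation, low vs high local velocity variance:
a_free = idm_accel(v=20.0, gap=40.0, dv=0.0, T=adapted_headway(1.2, 0.0))
a_cong = idm_accel(v=20.0, gap=40.0, dv=0.0, T=adapted_headway(1.2, 100.0))
```

A larger effective headway in inhomogeneous traffic weakens the acceleration at a given gap, which is the microscopic counterpart of the roughly doubled modal headway and the capacity drop discussed above.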
NASA Technical Reports Server (NTRS)
1992-01-01
Mestech's X-15 "Eye in the Sky," a traffic monitoring system, incorporates NASA imaging and robotic vision technology. A camera or "sensor box" is mounted in a housing. The sensor detects vehicles approaching an intersection and sends the information to a computer, which controls the traffic light according to the traffic rate. Jet Propulsion Laboratory technical support packages aided in the company's development of the system. The X-15's "smart highway" can also be used to count vehicles on a highway and compute the number in each lane and their speeds, important information for freeway control engineers. Additional applications are in airport and railroad operations. The system is intended to replace loop-type traffic detectors.
Causse, Mickaël; Alonso, Roland; Vachon, François; Parise, Robert; Orliaguet, Jean-Pierre; Tremblay, Sébastien; Terrier, Patrice
2014-01-01
This study aims to determine whether an indirect touch device can be used to interact with graphical objects displayed on another screen in an air traffic control (ATC) context. The introduction of such a device likely requires an adaptation of the sensory-motor system. The operator has to simultaneously perform movements on the horizontal plane while assessing them on the vertical plane. Thirty-six right-handed participants performed movement training with either constant or variable practice and with or without visual feedback of the displacement of their actions. Participants then performed a test phase without visual feedback. Performance improved in both practice conditions, but accuracy was higher with visual feedback. During the test phase, movement time was longer for those who had practiced with feedback, suggesting an element of dependency. However, this 'cost' of feedback did not extend to movement accuracy. Finally, participants who had received variable training performed better in the test phase, but accuracy was still unsatisfactory. We conclude that continuous visual feedback on the stylus position is necessary if tablets are to be introduced in ATC.
Survivability of Deterministic Dynamical Systems
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-01-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states? We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
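Survivability as defined here lends itself to a direct Monte Carlo estimate. A minimal sketch for a damped oscillator, with the region of desirable states taken arbitrarily as |x| <= 2 (the system, bounds, and Euler integrator are all illustrative assumptions, not the paper's models):

```python
import numpy as np

def survives(x0, v0, bound=2.0, dt=0.01, t_max=10.0, gamma=0.5):
    """Integrate a damped oscillator x'' = -x - gamma*x' with Euler steps
    and report whether |x| stays inside the desirable region throughout."""
    x, v = x0, v0
    for _ in range(int(t_max / dt)):
        x, v = x + v * dt, v + (-x - gamma * v) * dt
        if abs(x) > bound:
            return False
    return True

def survivability(n=2000, seed=1):
    """Monte Carlo estimate: fraction of random initial conditions
    whose transient never leaves |x| <= bound."""
    rng = np.random.default_rng(seed)
    ics = rng.uniform(-3.0, 3.0, size=(n, 2))
    return np.mean([survives(x0, v0) for x0, v0 in ics])
```

Initial conditions that start, or transiently swing, outside the region count as non-surviving, so the estimate is sensitive to transients in a way that asymptotic stability is not.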
State Traffic Data: Traffic Safety Facts, 2001.
ERIC Educational Resources Information Center
National Center for Statistics and Analysis (NHTSA), Washington, DC.
This brief provides statistical information on U.S. traffic accidents delineated by state. A map details the 2001 traffic fatalities by state and the percent change from 2000. Data tables include: (1) traffic fatalities and fatality rates, 2001; (2) traffic fatalities and percent change, 1975-2001; (3) alcohol involvement in fatal traffic crashes,…
Traffic camera system development
NASA Astrophysics Data System (ADS)
Hori, Toshi
1997-04-01
The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light levels. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec. to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully, to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.
Deterministic weak localization in periodic structures.
Tian, C; Larkin, A
2005-12-09
In some perfect periodic structures classical motion exhibits deterministic diffusion. For such systems we present the weak localization theory. As a manifestation for the velocity autocorrelation function a universal power law decay is predicted to appear at four Ehrenfest times. This deterministic weak localization is robust against weak quenched disorders, which may be confirmed by coherent backscattering measurements of periodic photonic crystals.
Simulating synchronized traffic flow and wide moving jam based on the brake light rule
NASA Astrophysics Data System (ADS)
Xiang, Zheng-Tao; Li, Yu-Jin; Chen, Yu-Feng; Xiong, Li
2013-11-01
A new cellular automaton (CA) model based on brake light rules is proposed, which considers the influence of deterministic deceleration on the randomization probability and the extent of deceleration. To describe the synchronized flow phase of Kerner's three-phase theory in accordance with empirical data, we have changed some rules of vehicle motion with the aim of improving the speed and acceleration behavior in synchronized flow simulated with earlier cellular automaton models with brake lights. The fundamental diagrams and spatiotemporal diagrams are analyzed, as well as the complexity of the traffic evolution and the emergence process of wide moving jams. Simulation results show that our new model can reproduce the three traffic phases: free flow, synchronized flow and wide moving jam. In addition, our new model describes the complexity of traffic evolution well: (1) with an initially homogeneous distribution and large densities, the traffic evolves into multiple steady states, in which the number of wide moving jams is not fixed; (2) with an initially homogeneous distribution in the middle density range, wide moving jams emerge stochastically; (3) with an initial mega-jam distribution at a density close to the low critical value, the initial mega-jam disappears stochastically; (4) for cases with multiple wide moving jams, we analyze the process by which narrow moving jams generated by the "pinch effect" grow into wide moving jams.
Criticality in dynamic arrest: correspondence between glasses and traffic.
de Wijn, A S; Miedema, D M; Nienhuis, B; Schall, P
2012-11-30
Dynamic arrest is a general phenomenon across a wide range of dynamic systems including glasses, traffic flow, and dynamics in cells, but the universality of dynamic arrest phenomena remains unclear. We connect the emergence of traffic jams in a simple traffic flow model directly to the dynamic slowing down in kinetically constrained models for glasses. In kinetically constrained models, the formation of glass becomes a true (singular) phase transition in the limit T→0. Similarly, using the Nagel-Schreckenberg model to simulate traffic flow, we show that the emergence of jammed traffic acquires the signature of a sharp transition in the deterministic limit p→1, corresponding to overcautious driving. We identify a true dynamic critical point marking the onset of coexistence between free flowing and jammed traffic, and demonstrate its analogy to the kinetically constrained glass models. We find diverging correlations analogous to those at a critical point of thermodynamic phase transitions.
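For context, the Nagel-Schreckenberg update referred to above consists of only four rules. A standard formulation follows (lattice size and parameters are illustrative; p -> 1 is the overcautious deterministic limit discussed in the abstract):

```python
import numpy as np

def nasch_step(pos, vel, p, v_max=5, L=100, rng=None):
    """One update of the Nagel-Schreckenberg cellular automaton on a ring.
    pos: integer cell positions; vel: integer speeds; p: random-slowdown
    probability."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(pos)                     # keep arrays in ring order
    pos, vel = pos[order], vel[order]
    gap = (np.roll(pos, -1) - pos) % L - 1      # empty cells ahead
    vel = np.minimum(vel + 1, v_max)            # 1. accelerate
    vel = np.minimum(vel, gap)                  # 2. avoid collisions
    slow = rng.random(len(vel)) < p             # 3. random slowdown
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)
    pos = (pos + vel) % L                       # 4. move
    return pos, vel
```

Because step 2 caps each speed at the gap ahead, no two vehicles can ever occupy the same cell, which the test below checks.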
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus that causes the Lassa fever is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
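The kind of sensitivity analysis described here, normalized forward sensitivity indices of the basic reproduction number, can be sketched generically. The SIR-style R0 below is an illustrative stand-in, not the paper's five-compartment Lassa model; all parameter names are assumptions:

```python
def r0(beta, Lam, mu, gamma):
    """Basic reproduction number of an illustrative SIR-type model with
    recruitment/immigration Lam, contact rate beta, natural death rate mu,
    and recovery rate gamma (assumed form, for demonstration only)."""
    return beta * Lam / (mu * (mu + gamma))

def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (p / f) * df/dp,
    approximated with a central finite difference."""
    p = dict(params)
    base = f(**p)
    up, dn = dict(p), dict(p)
    up[name] *= 1 + h
    dn[name] *= 1 - h
    deriv = (f(**up) - f(**dn)) / (2 * h * p[name])
    return p[name] * deriv / base
```

An index of +1 means a 10% increase in the parameter raises R0 by 10%; a negative index (as for the recovery rate) means increasing the parameter reduces transmission.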
Deterministic quantum teleportation with atoms.
Riebe, M; Häffner, H; Roos, C F; Hänsel, W; Benhelm, J; Lancaster, G P T; Körber, T W; Becher, C; Schmidt-Kaler, F; James, D F V; Blatt, R
2004-06-17
Teleportation of a quantum state encompasses the complete transfer of information from one particle to another. The complete specification of the quantum state of a system generally requires an infinite amount of information, even for simple two-level systems (qubits). Moreover, the principles of quantum mechanics dictate that any measurement on a system immediately alters its state, while yielding at most one bit of information. The transfer of a state from one system to another (by performing measurements on the first and operations on the second) might therefore appear impossible. However, it has been shown that the entangling properties of quantum mechanics, in combination with classical communication, allow quantum-state teleportation to be performed. Teleportation using pairs of entangled photons has been demonstrated, but such techniques are probabilistic, requiring post-selection of measured photons. Here, we report deterministic quantum-state teleportation between a pair of trapped calcium ions. Following closely the original proposal, we create a highly entangled pair of ions and perform a complete Bell-state measurement involving one ion from this pair and a third source ion. State reconstruction conditioned on this measurement is then performed on the other half of the entangled pair. The measured fidelity is 75%, demonstrating unequivocally the quantum nature of the process.
Universality classes for deterministic surface growth
NASA Technical Reports Server (NTRS)
Krug, J.; Spohn, H.
1988-01-01
A scaling theory for the generalized deterministic Kardar-Parisi-Zhang (1986) equation with beta greater than 1, is developed to study the growth of a surface through deterministic local rules. A one-dimensional surface model corresponding to beta = 1 is presented and solved exactly. The model can be studied as a limiting case of ballistic deposition, or as the deterministic limit of the Eden (1961) model. The scaling exponents, the correlation functions, and the skewness of the surface are determined. The results are compared with those of Burgers' (1974) equation for the case of beta = 2.
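As background for the ballistic-deposition limit mentioned above, here is a minimal sketch of (stochastic) one-dimensional ballistic deposition; the paper studies a deterministic limit of this kind of growth. Lattice size and particle count are arbitrary choices:

```python
import numpy as np

def ballistic_deposition(L=200, n_particles=20000, seed=0):
    """Grow a 1D surface by ballistic deposition on a ring: each particle
    falls on a random column and sticks at the greater of (its column's
    height + 1) and the heights of the two neighbouring columns."""
    rng = np.random.default_rng(seed)
    h = np.zeros(L, dtype=int)
    for i in rng.integers(0, L, n_particles):
        h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
    return h
```

Sticking to tall neighbours creates lateral correlations, which is what produces KPZ-class roughening rather than simple random stacking.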
Connecting deterministic and stochastic metapopulation models.
Barbour, A D; McVinish, R; Pollett, P K
2015-12-01
In this paper, we study the relationship between certain stochastic and deterministic versions of Hanski's incidence function model and the spatially realistic Levins model. We show that the stochastic version can be well approximated in a certain sense by the deterministic version when the number of habitat patches is large, provided that the presence or absence of individuals in a given patch is influenced by a large number of other patches. Explicit bounds on the deviation between the stochastic and deterministic models are given.
Classification of Automated Search Traffic
NASA Astrophysics Data System (ADS)
Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.
As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third party systems interact with search engines for a variety of reasons, such as monitoring a web site's rank, augmenting online games, or maliciously altering click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information, and those generated by automated processes. We categorize these features into two classes: interpretations of the physical model of human interactions, or behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.
Chaotic Ising-like dynamics in traffic signals
Suzuki, Hideyuki; Imura, Jun-ichi; Aihara, Kazuyuki
2013-01-01
The green and red lights of a traffic signal can be viewed as the up and down states of an Ising spin. Moreover, traffic signals in a city interact with each other, if they are controlled in a decentralised way. In this paper, a simple model of such interacting signals on a finite-size two-dimensional lattice is shown to have Ising-like dynamics that undergoes a ferromagnetic phase transition. Probabilistic behaviour of the model is realised by chaotic billiard dynamics that arises from coupled non-chaotic elements. This purely deterministic model is expected to serve as a starting point for considering statistical mechanics of traffic signals. PMID:23350034
Deterministic algorithm with agglomerative heuristic for location problems
NASA Astrophysics Data System (ADS)
Kazakovtsev, L.; Stupina, A.
2015-10-01
The authors consider the clustering problem solved with the k-means method and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are among the most popular models of location theory. They are used for clustering and for many practically important logistics problems such as the optimal location of factories or warehouses, oil or gas wells, offshore drilling sites, and steam generators in heavy oil fields. The authors propose a new deterministic heuristic algorithm based on ideas from Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are given in comparison with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while achieving comparable precision.
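The k-means special case referred to above is Lloyd's algorithm; a baseline sketch follows. The paper's contribution, the deterministic agglomerative/greedy heuristic, builds on top of such a baseline and is not reproduced here:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means: the special case of the p-median problem
    with squared Euclidean distance and free (non-data-point) centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                      # assign to nearest center
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):             # converged
            break
        centers = new
    return centers, labels
```

Lloyd's algorithm only finds a local optimum, which is exactly why initialization and agglomerative heuristics of the kind the paper proposes matter in practice.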
Towards a quasi-deterministic single-photon source
NASA Astrophysics Data System (ADS)
Peters, N. A.; Arnold, K. J.; VanDevender, A. P.; Jeffrey, E. R.; Rangarajan, R.; Hosten, O.; Barreiro, J. T.; Altepeter, J. B.; Kwiat, P. G.
2006-08-01
A source of single photons allows secure quantum key distribution, in addition to being a critical resource for linear optics quantum computing. We describe our progress on deterministically creating single photons from spontaneous parametric downconversion, an extension of the Pittman, Jacobs and Franson scheme [Phys. Rev. A, v66, 042303 (2002)]. Their idea was to conditionally prepare single photons by measuring one member of a spontaneously emitted photon pair and storing the remaining conditionally prepared photon until a predetermined time, when it would be "deterministically" released from storage. Our approach attempts to improve upon this by recycling the pump pulse in order to decrease the possibility of multiple-pair generation, while maintaining a high probability of producing a single pair. Many of the challenges we discuss are central to other quantum information technologies, including the need for low-loss optical storage, switching and detection, and fast feed-forward control.
Deterministic noiseless amplification of coherent states
NASA Astrophysics Data System (ADS)
Hu, Meng-Jun; Zhang, Yong-Sheng
2015-08-01
A universal deterministic noiseless quantum amplifier has been shown to be impossible. However, probabilistic noiseless amplification of a certain set of states is physically permissible. Regarding quantum state amplification as quantum state transformation, we show that deterministic noiseless amplification of coherent states chosen from a proper set is attainable. The relation between the input coherent states and the gain of amplification for deterministic noiseless amplification is thus derived. Furthermore, we extend our result to a more general situation and show that deterministic noiseless amplification of Gaussian states is also possible. As an example of application, we find that our amplification model can obtain better performance in homodyne detection to measure the phase of a state selected from a certain set. Other possible applications are also discussed.
Deterministic Execution of Ptides Programs
2013-05-15
… are developed in Ptolemy, a design and simulation environment for heterogeneous systems. This framework also contains a code generation framework, which is leveraged to … Code generation is implemented in Ptolemy II [4], an academic tool for designing and experimenting with heterogeneous system models. The first section of …
Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.
Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh
2011-01-01
We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input.
Deterministic phase retrieval employing spherical illumination
NASA Astrophysics Data System (ADS)
Martínez-Carranza, J.; Falaggis, K.; Kozacki, T.
2015-05-01
Deterministic Phase Retrieval techniques (DPRTs) employ a series of paraxial beam intensities in order to recover the phase of a complex field. These paraxial intensities are usually generated in systems that employ plane-wave illumination. This type of illumination allows a direct processing of the captured intensities with DPRTs for recovering the phase. Furthermore, it has been shown that intensities for DPRTs can be acquired from systems that use spherical illumination as well. However, this type of illumination presents a major setback for DPRTs: the captured intensities change their size for each position of the detector on the propagation axis. In order to apply the DPRTs, rescaling of the captured intensities has to be applied. This condition can increase the error sensitivity of the final phase result if it is not carried out properly. In this work, we introduce a novel system based on a Phase Light Modulator (PLM) for capturing the intensities when employing spherical illumination. The proposed optical system enables us to capture the diffraction pattern of under-, in-, and over-focus intensities. The employment of the PLM allows capturing the corresponding intensities without displacing the detector. Moreover, with the proposed optical system we can accurately control the magnification of the captured intensities. Thus, the stack of captured intensities can be used in DPRTs, overcoming the problems related to the resizing of the images. To support these claims, the corresponding numerical experiments are carried out. These simulations show that the phases retrieved with spherical illumination are accurate and comparable with those obtained using plane-wave illumination. We demonstrate that with the employment of the PLM the proposed optical system has several advantages: it is compact, the beam size on the detector plane is controlled accurately, and errors arising from mechanical motion can easily be suppressed.
Effect of Uncertainty on Deterministic Runway Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. The modeling of uncertainty is done in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
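The freeze-and-adjust step described here, keeping a sequence and delaying each aircraft just enough to restore separation, is a one-pass computation. A hedged sketch in which a single `sep` value stands in for the paper's full set of separation criteria:

```python
def schedule(etas, sequence, sep=90.0):
    """Assign runway times to aircraft in the given sequence, delaying each
    one just enough to respect a minimum separation (seconds)."""
    t_prev, times = float("-inf"), {}
    for i in sequence:
        t_prev = max(etas[i], t_prev + sep)   # no earlier than ETA or separation
        times[i] = t_prev
    return times

def fcfs(etas, sep=90.0):
    """First-come-first-serve: the sequence is simply sorted by ETA."""
    order = sorted(range(len(etas)), key=lambda i: etas[i])
    return schedule(etas, order, sep)
```

A deterministic scheduler would supply an optimized `sequence` and "freeze" it; `schedule` then plays the role of the time-adjustment step evaluated in the paper.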
Optimal partial deterministic quantum teleportation of qubits
Mista, Ladislav Jr.; Filip, Radim
2005-02-01
We propose a protocol implementing optimal partial deterministic quantum teleportation for qubits. This is a teleportation scheme realizing deterministically an optimal 1→2 asymmetric universal cloning where one imperfect copy of the input state emerges at the sender's station while the other copy emerges at the receiver's possibly distant station. The optimality means that the fidelities of the copies saturate the asymmetric cloning inequality. The performance of the protocol relies on the partial deterministic nondemolition Bell measurement that allows us to continuously control the flow of information among the outgoing qubits. We also demonstrate that the measurement is an optimal two-qubit operation in the sense of the trade-off between the state disturbance and the information gain.
Real-Time Surface Traffic Adviser
NASA Technical Reports Server (NTRS)
Glass, Brian J. (Inventor); Spirkovska, Liljana (Inventor); McDermott, William J. (Inventor); Reisman, Ronald J. (Inventor); Gibson, James (Inventor); Iverson, David L. (Inventor)
2001-01-01
A real-time data management system that uses data generated at different rates by multiple heterogeneous, incompatible data sources is presented. In one embodiment, the invention is an airport surface traffic data management system (traffic adviser) that electronically interconnects air traffic control, airline, and airport operations user communities to facilitate information sharing and improve taxi queuing. The system uses an expert system to fuse data from a variety of airline, airport operations, ramp control, and air traffic control sources, in order to establish, predict, and update reference data values for every aircraft surface operation.
Jamitons: Phantom Traffic Jams
ERIC Educational Resources Information Center
Kowszun, Jorj
2013-01-01
Traffic on motorways can slow down for no apparent reason. Sudden changes in speed by one or two drivers can create a chain reaction that causes a traffic jam for the vehicles that are following. This kind of phantom traffic jam is called a "jamiton" and the article discusses some of the ways in which traffic engineers produce…
A deterministic discrete ordinates transport proxy application
2014-06-03
Kripke is a simple 3D deterministic discrete ordinates (Sn) particle transport code that maintains the computational load and communications pattern of a real transport code. It is intended to be a research tool to explore different data layouts, new programming paradigms and computer architectures.
Deterministic Quantization by Dynamical Boundary Conditions
Dolce, Donatello
2010-06-15
We propose an unexplored quantization method. It is based on the assumption of dynamical space-time intrinsic periodicities for relativistic fields, which in turn can be regarded as dual to extra-dimensional fields. As a consequence we obtain a unified and consistent interpretation of Special Relativity and Quantum Mechanics in terms of Deterministic Geometrodynamics.
Deterministic geologic processes and stochastic modeling
Rautman, C.A.; Flint, A.L.
1991-12-31
Recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial distribution of measured values and geostatistical measures of spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. These deterministic features have their origin in the complex, yet logical, interplay of a number of deterministic geologic processes, including magmatic evolution; volcanic eruption, transport, and emplacement; post-emplacement cooling and alteration; and late-stage (diagenetic) alteration. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling. It is unlikely that any single representation of physical properties at the site will be suitable for all modeling purposes. Instead, the same underlying physical reality will need to be described many times, each in a manner conducive to assessing specific performance issues.
Deterministic nanoassembly: Neutral or plasma route?
NASA Astrophysics Data System (ADS)
Levchenko, I.; Ostrikov, K.; Keidar, M.; Xu, S.
2006-07-01
It is shown that, owing to selective delivery of ionic and neutral building blocks directly from the ionized gas phase and via surface migration, plasma environments offer a greater degree of deterministic synthesis of ordered nanoassemblies than thermal chemical vapor deposition. The results of hybrid Monte Carlo (gas phase) and adatom self-organization (surface) simulation suggest that higher aspect ratios and better size and pattern uniformity of carbon nanotip microemitters can be achieved via the plasma route.
Working Memory and Its Relation to Deterministic Sequence Learning
Martini, Markus; Furtner, Marco R.; Sachse, Pierre
2013-01-01
Is there a relation between working memory (WM) and incidental sequence learning? Nearly all of the earlier investigations into the role of WM capacity (WMC) in sequence learning suggest no correlations under incidental learning conditions. However, the theoretical view of WM and the operationalization of WMC have made strong progress in recent years. The current study related performance in a coordination and transformation task to sequence knowledge in a four-choice incidental deterministic serial reaction time (SRT) task and a subsequent free generation task. The response-to-stimulus interval (RSI) was varied between 0 ms and 300 ms. Our results show correlations between WMC and error rates in condition RSI 0 ms. For condition RSI 300 ms we found relations between WMC and sequence knowledge in the SRT task as well as between WMC and generation task performance. Theoretical implications of these findings for ongoing processes during sequence learning and retrieval of sequence knowledge are discussed. PMID:23409148
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d < 2κ. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. Lastly, this is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
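The standard (stochastic) EnKF that the deterministic mean-field approximation is compared against can be sketched in a few lines. This is a generic perturbed-observation analysis step, not the authors' DMFEnKF; the state dimension, ensemble size, and observation model below are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, R, rng):
    """Perturbed-observation EnKF analysis step for a linear observation
    operator H and observation noise covariance R."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # ensemble anomalies
    P = X.T @ X / (N - 1)                           # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Each member assimilates its own perturbed copy of the observation
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(500, 2))         # forecast ensemble
H = np.array([[1.0, 0.0]])                          # observe the first component
R = np.array([[0.01]])                              # accurate observation
posterior = enkf_analysis(prior, np.array([0.5]), H, R, rng)
```

With an accurate observation, the posterior ensemble contracts toward the observed value in the observed component, while the unobserved component is updated only through sample correlations.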
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
NASA Astrophysics Data System (ADS)
Samson, E. C.; Wilson, K. E.; Newman, Z. L.; Anderson, B. P.
2016-02-01
We experimentally and numerically demonstrate deterministic creation and manipulation of a pair of oppositely charged singly quantized vortices in a highly oblate Bose-Einstein condensate (BEC). Two identical blue-detuned, focused Gaussian laser beams that pierce the BEC serve as repulsive obstacles for the superfluid atomic gas; by controlling the positions of the beams within the plane of the BEC, superfluid flow is deterministically established around each beam such that two vortices of opposite circulation are generated by the motion of the beams, with each vortex pinned to the in situ position of a laser beam. We study the vortex creation process, and show that the vortices can be moved about within the BEC by translating the positions of the laser beams. This technique can serve as a building block in future experimental techniques to create, on-demand, deterministic arrangements of few or many vortices within a BEC for precise studies of vortex dynamics and vortex interactions.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology
Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.
2016-01-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
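The flavor of such a hybrid update can be conveyed with a deliberately small sketch: a continuous, abundant species evolves deterministically while a rare, discrete species changes through sampled stochastic events that feed back into the continuous part. The reaction scheme and rate constants below are invented for illustration and are not the paper's calcium-spark or cell-polarity models.

```python
import numpy as np

def hybrid_step(c, n, dt, rng, k_deg=0.1, k_burst=0.05, burst_size=20):
    """One hybrid update: continuous species c follows a deterministic rate
    equation; discrete species n changes through sampled stochastic events."""
    c = c + dt * (n - k_deg * c)                 # deterministic: production by n, decay
    if rng.random() < k_burst * dt:              # stochastic: rare burst adds molecules
        n += burst_size
    n -= rng.binomial(n, min(k_deg * dt, 1.0))   # stochastic first-order decay of n
    return c, n

rng = np.random.default_rng(4)
c, n = 0.0, 10
for _ in range(1000):
    c, n = hybrid_step(c, n, 0.01, rng)
```

A production solver would adaptively partition reactions between the deterministic and stochastic regimes; here the partition is fixed by hand to keep the sketch short.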
McQuinn, Ian H; Lesage, Véronique; Carrier, Dominic; Larrivée, Geneviève; Samson, Yves; Chartrand, Sylvain; Michaud, Robert; Theriault, James
2011-12-01
The threatened resident beluga population of the St. Lawrence Estuary shares the Saguenay-St. Lawrence Marine Park with significant anthropogenic noise sources, including marine commercial traffic and a well-established, vessel-based whale-watching industry. Frequency-dependent (FD) weighting was used to approximate beluga hearing sensitivity to determine how noise exposure varied in time and space at six sites of high beluga summer residency. The relative contribution of each source to acoustic habitat degradation was estimated by measuring noise levels throughout the summer and noise signatures of typical vessel classes with respect to traffic volume and sound propagation characteristics. Rigid-hulled inflatable boats were the dominant noise source with respect to estimated beluga hearing sensitivity in the studied habitats due to their high occurrence and proximity, high correlation with site-specific FD-weighted sound levels, and the dominance of mid-frequencies (0.3-23 kHz) in their noise signatures. Median C-weighted sound pressure level (SPL(RMS)) had a range of 19 dB re 1 μPa between the noisiest and quietest sites. Broadband SPL(RMS) exceeded 120 dB re 1 μPa 8-32% of the time depending on the site. Impacts of these noise levels on St. Lawrence beluga will depend on exposure recurrence and individual responsiveness.
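The broadband levels quoted above follow the standard underwater-acoustics convention of RMS sound pressure level referenced to 1 μPa. A minimal sketch of that computation, applied to a synthetic tone rather than habitat recordings:

```python
import numpy as np

def spl_rms(pressure_upa):
    """Broadband RMS sound pressure level in dB re 1 uPa.

    pressure_upa : acoustic pressure samples in micropascals.
    """
    p_rms = np.sqrt(np.mean(np.square(pressure_upa)))
    return 20.0 * np.log10(p_rms / 1.0)   # reference pressure is 1 uPa

# A 1 kHz tone with RMS pressure 1e6 uPa corresponds to 120 dB re 1 uPa
fs = 48_000
t = np.arange(fs) / fs
tone = np.sqrt(2) * 1e6 * np.sin(2 * np.pi * 1000 * t)
level = spl_rms(tone)
```

In practice a calibrated hydrophone sensitivity would first convert recorded voltages to micropascals, and frequency-dependent weighting would be applied before the level computation.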
Effects of changing orders in the update rules on traffic flow.
Xue, Yu; Dong, Li-Yun; Li, Lei; Dai, Shi-Qiang
2005-02-01
Based on the Nagel-Schreckenberg (NaSch) model of traffic flow, we study the effects of the order of the update rules on traffic flow. Simulations show that the cellular automaton (CA) traffic model depends sensitively on the order of the update rules. Changing the order of the evolution steps of the NaSch model yields two modified models, called the SDNaSch model and the noise-first model, with different fundamental diagrams and jamming states. We analyze the mechanisms of these two traffic models and the corresponding traffic behaviors in detail and compare the two modified models with the NaSch model. It is concluded that the order in which stochastic delay and deterministic deceleration are applied indeed has remarkable effects on traffic flow.
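A minimal ring-road NaSch implementation with a switchable rule order makes the comparison concrete. The `noise_first` flag applies the stochastic delay before the deterministic deceleration, one of the reorderings discussed above; lattice size, density, and parameters are illustrative choices, not the paper's.

```python
import numpy as np

def nasch_step(pos, vel, L, vmax, p, rng, noise_first=False):
    """One parallel update of the Nagel-Schreckenberg ring.

    Standard order: accelerate -> decelerate to gap -> random slowdown -> move.
    noise_first=True applies the stochastic delay before the deterministic
    deceleration (one of the modified orderings discussed above).
    """
    n = len(pos)
    order = np.argsort(pos)                  # cyclic order of cars on the ring
    gaps = np.empty(n, dtype=int)
    for i, j in enumerate(order):
        ahead = order[(i + 1) % n]
        gaps[j] = (pos[ahead] - pos[j] - 1) % L
    vel = np.minimum(vel + 1, vmax)          # acceleration
    slow = rng.random(n) < p
    if noise_first:
        vel = np.where(slow, np.maximum(vel - 1, 0), vel)  # stochastic delay
        vel = np.minimum(vel, gaps)                        # deterministic deceleration
    else:
        vel = np.minimum(vel, gaps)
        vel = np.where(slow, np.maximum(vel - 1, 0), vel)
    return (pos + vel) % L, vel

rng = np.random.default_rng(1)
L, ncars = 100, 20
pos = np.sort(rng.choice(L, size=ncars, replace=False))
vel = np.zeros(ncars, dtype=int)
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L, vmax=5, p=0.3, rng=rng)
mean_speed = vel.mean()
```

In both orderings the final speed never exceeds the gap to the car ahead, so the dynamics stay collision-free; what changes is how often the stochastic delay bites, which is what shifts the fundamental diagram.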
CHAOS AND STOCHASTICITY IN DETERMINISTICALLY GENERATED MULTIFRACTAL MEASURES. (R824780)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Deterministic Folding in Stiff Elastic Membranes
NASA Astrophysics Data System (ADS)
Tallinen, T.; Åström, J. A.; Timonen, J.
2008-09-01
Crumpled membranes have been found to be characterized by complex patterns of spatially seemingly random facets separated by narrow ridges of high elastic energy. We demonstrate by numerical simulations that compression of stiff elastic membranes with small randomness in their initial configurations leads to either random ridge configurations (high entropy) or nearly deterministic folds (low elastic energy). For folding with symmetric ridge configurations to appear in part of the crumpling processes, the crumpling rate must be slow enough. Folding stops when the thickness of the folded structure becomes important, and crumpling continues thereafter as a random process.
Deterministic quantum computation with one photonic qubit
NASA Astrophysics Data System (ADS)
Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.
2015-07-01
We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimension matrix, limited only by the resolution of the modulator panel and technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080×1920, which is able to represent a system with approximately 21 qubits.
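The quantity a DQC1 circuit estimates is the normalized trace Tr(U)/N, and Deutsch-Jozsa reduces to it: for the phase oracle U_f = diag((-1)^f(x)), a constant f gives |Tr(U_f)|/N = 1 while a balanced f gives 0. A direct numerical check of that reduction (computing the trace classically, not simulating the optical setup):

```python
import numpy as np

def normalized_trace(U):
    """Normalized trace Tr(U)/dim(U), the quantity a DQC1 circuit estimates."""
    return np.trace(U) / U.shape[0]

N = 8
constant_oracle = np.eye(N)                                 # f(x) = 0 everywhere
balanced_oracle = np.diag([1.0, -1, 1, -1, 1, -1, 1, -1])   # f balanced
tc = abs(normalized_trace(constant_oracle))
tb = abs(normalized_trace(balanced_oracle))
```

Any intermediate |Tr(U_f)|/N would indicate a function that is neither constant nor balanced, which the Deutsch-Jozsa promise excludes.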
Additivity Principle in High-Dimensional Deterministic Systems
NASA Astrophysics Data System (ADS)
Saito, Keiji; Dhar, Abhishek
2011-12-01
The additivity principle (AP), conjectured by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)PRLTAO0031-900710.1103/PhysRevLett.92.180601], is discussed for the case of heat conduction in three-dimensional disordered harmonic lattices to consider the effects of deterministic dynamics, higher dimensionality, and different transport regimes, i.e., ballistic, diffusive, and anomalous transport. The cumulant generating function (CGF) for heat transfer is accurately calculated and compared with the one given by the AP. In the diffusive regime, we find a clear agreement with the conjecture even if the system is high dimensional. Surprisingly, even in the anomalous regime the CGF is also well fitted by the AP. Lower-dimensional systems are also studied and the importance of three dimensionality for the validity is stressed.
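The cumulant generating function compared against the AP prediction is, for heat transfer Q over a long interval, μ(λ) = log⟨exp(λQ)⟩. An empirical estimate from samples can be sketched as follows; the Gaussian test case is illustrative and stands in for the harmonic-lattice data.

```python
import numpy as np

def cgf(samples, lam):
    """Empirical cumulant generating function mu(lam) = log E[exp(lam * Q)]."""
    lam = np.asarray(lam, dtype=float)
    # log-mean-exp, stabilized to avoid overflow for large lam * Q
    z = np.outer(lam, samples)
    zmax = z.max(axis=1, keepdims=True)
    return (zmax + np.log(np.mean(np.exp(z - zmax), axis=1, keepdims=True))).ravel()

# For Gaussian Q ~ N(m, s^2) the exact CGF is m*lam + s^2 * lam^2 / 2
rng = np.random.default_rng(2)
q = rng.normal(1.0, 0.5, size=200_000)
lams = np.array([-0.5, 0.0, 0.5])
est = cgf(q, lams)
exact = 1.0 * lams + 0.25 * lams**2 / 2
```

Sampling error in the tails grows quickly with |λ|, which is why the paper's accurate CGF computation is the nontrivial part of testing the conjecture.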
Integrating Clonal Selection and Deterministic Sampling for Efficient Associative Classification
Elsayed, Samir A. Mohamed; Rajasekaran, Sanguthevar; Ammar, Reda A.
2013-01-01
Traditional Associative Classification (AC) algorithms typically search for all possible association rules to find a representative subset of those rules. Since the search space of such rules may grow exponentially as the support threshold decreases, the rule discovery process can be computationally expensive. One effective way to tackle this problem is to directly find a set of high-stakes association rules that potentially builds a highly accurate classifier. This paper introduces AC-CS, an AC algorithm that integrates the clonal selection of the immune system along with deterministic data sampling. Upon picking a representative sample of the original data, it proceeds in an evolutionary fashion to populate only rules that are likely to yield good classification accuracy. Empirical results on several real datasets show that the approach generates dramatically fewer rules than traditional AC algorithms. In addition, the proposed approach is significantly more efficient than traditional AC algorithms while achieving competitive accuracy. PMID:24500504
More on exact state reconstruction in deterministic digital control systems
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1988-01-01
Presented is a special form of the Ideal State Reconstructor for deterministic digital control systems which is simpler to implement than the most general form. The Ideal State Reconstructor is so named because, if the plant parameters are known exactly, its output will exactly equal, not just approximate, the true state of the plant and accomplish this without any knowledge of the plant's initial state. Besides this, it adds no new states or eigenvalues to the system. Nor does it affect the plant equation for the system in any way; it affects the measurement equation only. It is characterized by the fact that discrete measurements are generated every T/N seconds and input into a multi-input/multi-output moving-average (MA) process. The output of this process is sampled every T seconds and utilized in reconstructing the state of the system.
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete-time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solutions can be obtained by standard techniques. A comprehensive algorithm and some state-space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete-time models.
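The geometric approximation of exponential firing times mentioned above is elementary: on a discrete clock of step Δt, a transition fires in each tick with probability p = 1 − exp(−λΔt), so the number of ticks to firing is geometric with mean 1/p, and the mean firing time approaches 1/λ as Δt → 0. A quick numerical check (the rate and step are illustrative):

```python
import numpy as np

def geometric_firing_probability(lam, dt):
    """Per-tick firing probability that makes the geometric tick count
    approximate an exponential firing time with rate lam."""
    return 1.0 - np.exp(-lam * dt)

lam, dt = 2.0, 0.01
p = geometric_firing_probability(lam, dt)
mean_ticks = 1.0 / p            # mean of the geometric distribution
mean_time = mean_ticks * dt     # approximates 1/lam as dt -> 0
```

Deterministic firing in k ticks is recovered as the degenerate case p = 1 applied after a fixed delay, which is why both timing types fit in one DTMC framework.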
Deterministic prediction of surface wind speed variations
NASA Astrophysics Data System (ADS)
Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variation. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distributions of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
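The deterministic forecasting idea, predicting from analogues in a delay-embedded phase space, can be sketched with a nearest-neighbour predictor. The chaotic logistic map stands in for a wind-speed record here; the embedding dimension and series length are illustrative, and this is a generic analogue method rather than the authors' exact algorithm.

```python
import numpy as np

def nn_forecast_nrmse(series, m, horizon):
    """Normalised RMSE of one-step analogue forecasts from a delay embedding.

    For each time t, find the nearest neighbour among earlier embedded states
    and predict by following that neighbour's recorded future."""
    emb = np.column_stack([series[i:len(series) - m + i + 1] for i in range(m)])
    preds, truth = [], []
    for t in range(len(emb) - horizon - 1):
        past = emb[:t]                      # only states observed before t
        if len(past) < 10:
            continue
        j = int(np.argmin(np.linalg.norm(past - emb[t], axis=1)))
        preds.append(series[j + m - 1 + horizon])   # neighbour's future
        truth.append(series[t + m - 1 + horizon])   # actual future
    preds, truth = np.array(preds), np.array(truth)
    rmse = np.sqrt(np.mean((preds - truth) ** 2))
    return rmse / (truth.max() - truth.min())

# Chaotic logistic map as a stand-in for a deterministic wind record
x = np.empty(2000)
x[0] = 0.3
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
err = nn_forecast_nrmse(x, m=3, horizon=1)
```

Because the underlying dynamics are deterministic, short-horizon analogue forecasts are far better than chance; the error grows with horizon at a rate set by the largest Lyapunov exponent.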
Deterministic Creation of Macroscopic Cat States
Lombardo, Daniel; Twamley, Jason
2015-01-01
Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical membrane-in-the-middle model and show that by controlling the membrane's opacity, and through careful choice of the optical cavity's initial state, we can deterministically create and grow the spatial extent of the membrane's position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane, high-fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157
Deterministic forward scatter from surface gravity waves.
Deane, Grant B; Preisig, James C; Tindle, Chris T; Lavery, Andone; Stokes, M Dale
2012-12-01
Deterministic structures in sound reflected by gravity waves, such as focused arrivals and Doppler shifts, have implications for underwater acoustics and sonar, and the performance of underwater acoustic communications systems. A stationary phase analysis of the Helmholtz-Kirchhoff scattering integral yields the trajectory of focused arrivals and their relationship to the curvature of the surface wave field. Deterministic effects along paths up to 70 water depths long are observed in shallow water measurements of surface-scattered sound at the Martha's Vineyard Coastal Observatory. The arrival time and amplitude of surface-scattered pulses are reconciled with model calculations using measurements of surface waves made with an upward-looking sonar mounted mid-way along the propagation path. The root mean square difference between the modeled and observed pulse arrival amplitude and delay, respectively, normalized by the maximum range of amplitudes and delays, is found to be 0.2 or less for the observation periods analyzed. Cross-correlation coefficients for modeled and observed pulse arrival delays varied from 0.83 to 0.16 depending on surface conditions. Cross-correlation coefficients for normalized pulse energy for the same conditions were small and varied from 0.16 to 0.06. In contrast, the modeled and observed pulse arrival delay and amplitude statistics were in good agreement.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
Operational Global Deterministic and Ensemble Wave Prediction Systems at Environment Canada
NASA Astrophysics Data System (ADS)
Bernier, Natacha; Peel, Syd; Bélanger, Jean-Marc; Roch, Michel; Lépine, Mario; Pellerin, Pierre; Henrique Alves, José; Tolman, Hendrik
2015-04-01
Canada's new global deterministic and ensemble wave prediction systems are presented together with an evaluation of their performance over a 5-month hindcast. Particular attention is paid to the Arctic Ocean, where accurate forecasts are crucial for maintaining safe activities such as drilling and vessel operation. The wave prediction systems are based on WAVEWATCH III and are operated at grid spacings of 1/4° (deterministic) and 1/2° (ensemble). Both systems are run twice daily, with lead times of 120 h (5 days) for the deterministic system and 240 h (10 days) for the ensemble system. The wave prediction systems are shown to have skill in forecasting significant wave height and peak period over the next several days. Beyond lead times of 120 h, deterministic forecasts are extended using ensembles of wave forecasts to generate probabilistic forecasts for long-range events. New displays are used to summarize the wealth of information generated by ensembles into depictions that can help support early warning systems.
An efficient method to detect periodic behavior in botnet traffic by analyzing control plane traffic
AsSadhan, Basil; Moura, José M.F.
2013-01-01
Botnets are large networks of bots (compromised machines) that are under the control of a small number of bot masters. They pose a significant threat to Internet’s communications and applications. A botnet relies on command and control (C2) communications channels traffic between its members for its attack execution. C2 traffic occurs prior to any attack; hence, the detection of botnet’s C2 traffic enables the detection of members of the botnet before any real harm happens. We analyze C2 traffic and find that it exhibits a periodic behavior. This is due to the pre-programmed behavior of bots that check for updates to download them every T seconds. We exploit this periodic behavior to detect C2 traffic. The detection involves evaluating the periodogram of the monitored traffic. Then applying Walker’s large sample test to the periodogram’s maximum ordinate in order to determine if it is due to a periodic component or not. If the periodogram of the monitored traffic contains a periodic component, then it is highly likely that it is due to a bot’s C2 traffic. The test looks only at aggregate control plane traffic behavior, which makes it more scalable than techniques that involve deep packet inspection (DPI) or tracking the communication flows of different hosts. We apply the test to two types of botnet, tinyP2P and IRC that are generated by SLINGbot. We verify the periodic behavior of their C2 traffic and compare it to the results we get on real traffic that is obtained from a secured enterprise network. We further study the characteristics of the test in the presence of injected HTTP background traffic and the effect of the duty cycle on the periodic behavior. PMID:25685512
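The detection step above, testing the periodogram's maximum ordinate against a white-noise null, can be sketched as follows. Normalizing the maximum ordinate by the mean ordinate and using the large-sample approximation 1 − (1 − exp(−z))^m for its p-value is a standard formulation of this family of tests (closely related to Fisher's g-test); the traffic traces here are synthetic, not botnet captures.

```python
import numpy as np

def walker_test(x, alpha=0.01):
    """Large-sample test on the maximum periodogram ordinate: is it too big
    to have come from white noise? Returns (periodic?, approximate p-value)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / n
    I = spec[1:(n + 1) // 2]            # ordinates at positive Fourier frequencies
    m = len(I)
    z = I.max() / I.mean()              # max ordinate scaled by the noise estimate
    pvalue = 1.0 - (1.0 - np.exp(-z)) ** m
    return pvalue < alpha, pvalue

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n)
beacon = np.sin(2 * np.pi * 50 * t / n) + rng.normal(0, 0.5, n)  # periodic C2-like trace
noise = rng.normal(0, 1.0, n)                                    # plain background
hit, p_beacon = walker_test(beacon)
miss, p_noise = walker_test(noise)
```

Because the statistic uses only aggregate per-interval traffic counts, the test scales to high-volume links where per-flow tracking or DPI would be impractical.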
Deterministic polishing from theory to practice
NASA Astrophysics Data System (ADS)
Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary
2015-10-01
Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework - the share of production that does not meet specification in the first pass through the polishing equipment. Rework substantially adds to a part's processing and labor costs and creates bottlenecks in production lines and frustration for managers, operators and customers. The polishing process involves several interacting variables, including glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Bringing every variable under control while operating in a robust space not only provides a deterministic polishing process that improves profitability but also produces a higher-quality optic.
Inertia and scaling in deterministic lateral displacement.
Bowman, Timothy J; Drazer, German; Frechette, Joelle
2013-01-01
The ability to separate and analyze chemical species with high resolution, sensitivity, and throughput is central to the development of microfluidic systems. Deterministic lateral displacement (DLD) is a continuous separation method based on the transport of species through an array of obstacles. In the case of force-driven DLD (f-DLD), size-based separation can be modelled effectively using a simple particle-obstacle collision model. We use a macroscopic model to study f-DLD and demonstrate, via a simple scaling, that the method is indeed a predominantly size-based phenomenon at low Reynolds numbers. More importantly, we demonstrate that inertial effects provide the additional capability to separate particles of the same size but different densities and could enhance separation at high-throughput conditions. We also show that a direct conversion of macroscopic results to microfluidic settings is possible with a simple scaling based on the size of the obstacles, which results in a universal curve.
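The particle-obstacle collision model behind the size-based picture is geometric: a particle is displaced when its incoming centerline passes within the combined particle and obstacle radii. The sketch below also illustrates the scaling claim, that the outcome is unchanged when every length is rescaled by the same factor; the specific dimensions are invented for illustration.

```python
def displaced(particle_radius, obstacle_radius, impact_offset):
    """Hard-core collision model for force-driven DLD: the particle is pushed
    around the obstacle when its centerline passes closer than the sum of radii."""
    return impact_offset < particle_radius + obstacle_radius

# Rescaling all lengths by a common factor leaves the outcome unchanged,
# which is the basis for converting macroscopic results to microfluidic scales.
macro = displaced(1.0, 5.0, 5.5)      # centimetre-scale experiment
micro = displaced(1e-3, 5e-3, 5.5e-3) # the same geometry shrunk 1000x
```

At low Reynolds number this purely geometric criterion suffices; the inertial effects studied in the paper add a density dependence that the hard-core model alone does not capture.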
Deterministic phase slips in mesoscopic superconducting rings
NASA Astrophysics Data System (ADS)
Petković, I.; Lollo, A.; Glazman, L. I.; Harris, J. G. E.
2016-11-01
The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg-Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.
Deterministic multi-zone ice accretion modeling
NASA Technical Reports Server (NTRS)
Yamaguchi, K.; Hansman, R. John, Jr.; Kazmierczak, Michael
1991-01-01
The focus here is on a deterministic model of the surface roughness transition behavior of glaze ice. The initial smooth/rough transition location, bead formation, and the propagation of the transition location are analyzed. Based on the hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.
Deterministic remote preparation via the Brown state
NASA Astrophysics Data System (ADS)
Ma, Song-Ya; Gao, Cong; Zhang, Pei; Qu, Zhi-Guo
2017-04-01
We propose two deterministic remote state preparation (DRSP) schemes using the Brown state as the entangled channel. First, the remote preparation of an arbitrary two-qubit state is considered; the construction of the measurement bases plays a key role in our scheme. Then, the remote preparation of an arbitrary three-qubit state is investigated. The proposed schemes can be extended to controlled remote state preparation (CRSP) with unit success probability. At variance with the existing CRSP schemes via the Brown state, the derived schemes place no restriction on the coefficients while their success probabilities reach 100%, a substantial improvement. Moreover, we study the DRSP in noisy environments under two important decoherence models, the amplitude-damping noise and the phase-damping noise.
Block variables for deterministic aperiodic sequences
NASA Astrophysics Data System (ADS)
Hörnquist, Michael
1997-10-01
We use the concept of block variables to obtain a measure of order/disorder for some one-dimensional deterministic aperiodic sequences. For the Thue-Morse sequence, the Rudin-Shapiro sequence and the period-doubling sequence it is possible to obtain analytical expressions in the limit of infinite sequences. For the Fibonacci sequence, we present some analytical results which can be supported by numerical arguments. It turns out that the block variables show a wide range of different behaviour, some of it indicating that some of the considered sequences are more `random' than others. However, the method does not give any definite answer to the question of which sequence is more disordered than another and, in this sense, the results obtained are negative. We compare this with some other ways of measuring the amount of order/disorder in such systems, and there seems to be no direct correspondence between the measures.
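The block-variable idea can be illustrated numerically. This sketch (the function names are ours, not the paper's formalism) computes the variance of non-overlapping block sums for a ±1 Thue-Morse sequence:

```python
import numpy as np

def thue_morse(n_bits):
    # +/-1 Thue-Morse sequence: the sign is set by the parity of 1-bits
    # in the binary expansion of the index.
    return np.array([1 - 2 * (bin(i).count("1") % 2) for i in range(2 ** n_bits)])

def block_variance(seq, block_len):
    # Variance of non-overlapping block sums: a simple order/disorder measure.
    n_blocks = len(seq) // block_len
    sums = seq[: n_blocks * block_len].reshape(n_blocks, block_len).sum(axis=1)
    return sums.var()

s = thue_morse(10)
bv2 = block_variance(s, 2)  # aligned pairs t(2k), t(2k+1) cancel exactly -> 0.0
bv3 = block_variance(s, 3)  # length-3 blocks fluctuate -> positive variance
```

The vanishing variance at aligned even block lengths reflects the strong long-range order of the Thue-Morse sequence, exactly the kind of behaviour block variables are designed to expose.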
Deterministic approaches to coherent diffractive imaging
NASA Astrophysics Data System (ADS)
Allen, L. J.; D'Alfonso, A. J.; Martin, A. V.; Morgan, A. J.; Quiney, H. M.
2016-01-01
In this review we will consider the retrieval of the wave at the exit surface of an object illuminated by a coherent probe from one or more measured diffraction patterns. These patterns may be taken in the near-field (often referred to as images) or in the far field (the Fraunhofer diffraction pattern, where the wave is the Fourier transform of that at the exit surface). The retrieval of the exit surface wave from such data is an inverse scattering problem. This inverse problem has historically been solved using nonlinear iterative methods, which suffer from convergence and uniqueness issues. Here we review deterministic approaches to obtaining the exit surface wave which ameliorate those problems.
Deterministic-random separation in nonstationary regime
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
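In the stationary special case, the synchronous average reduces to folding the signal into whole cycles and averaging. The sketch below illustrates that baseline (the signal model and sample counts are our own choices, not the paper's data):

```python
import numpy as np

def synchronous_average(signal, samples_per_cycle):
    # Fold the signal into whole cycles and average across them: random
    # components average out while the periodic (deterministic) part survives.
    n_cycles = len(signal) // samples_per_cycle
    cycles = signal[: n_cycles * samples_per_cycle].reshape(n_cycles, samples_per_cycle)
    return cycles.mean(axis=0)

rng = np.random.default_rng(0)
n_per_cycle, n_cycles = 64, 100
t = np.arange(n_per_cycle * n_cycles)
periodic = np.sin(2 * np.pi * t / n_per_cycle)      # gear-mesh-like component
signal = periodic + rng.normal(0.0, 1.0, t.size)    # buried in unit-variance noise
avg = synchronous_average(signal, n_per_cycle)
residual = np.abs(avg - periodic[:n_per_cycle]).max()  # noise std cut ~10x over 100 cycles
```

The paper's GSA generalizes this averaging to speed-varying regimes, where the fold length itself follows the (assumed known) speed profile.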
NASA Technical Reports Server (NTRS)
1973-01-01
The traffic analyses and system requirements data generated in the study resulted in the development of two traffic models: the baseline traffic model and the new traffic model. The baseline traffic model provides traceability between the numbers and types of geosynchronous missions considered in the study and the entire spectrum of missions foreseen in the total national space program. The information presented pertaining to the baseline traffic model includes: (1) definition of the baseline traffic model, including identification of specific geosynchronous missions and their payload delivery schedules through 1990; (2) satellite location criteria, including the resulting distribution of the satellite population; (3) geosynchronous orbit saturation analyses, including the effects of satellite physical proximity and potential electromagnetic interference; and (4) platform system requirements analyses, including satellite and mission equipment descriptions, the options and limitations in grouping satellites, and on-orbit servicing criteria (both remotely controlled and man-attended).
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. Although probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
Non-Deterministic Context and Aspect Choice in Russian.
ERIC Educational Resources Information Center
Koubourlis, Demetrius J.
In any given context, a Russian verb form may be either perfective or imperfective. Perfective aspect signals the completion or result of an action, whereas imperfective does not. Aspect choice is a function of context, and two types of context are distinguished: deterministic and non-deterministic. This paper is part of a larger study whose aim…
Use of deterministic models in sports and exercise biomechanics research.
Chow, John W; Knudson, Duane V
2011-09-01
A deterministic model is a modeling paradigm that determines the relationships between a movement outcome measure and the biomechanical factors that produce such a measure. This review provides an overview of the use of deterministic models in biomechanics research, a historical summary of this research, and an analysis of the advantages and disadvantages of using deterministic models. The deterministic model approach has been utilized in technique analysis over the last three decades, especially in swimming, athletics field events, and gymnastics. In addition to their applications in sports and exercise biomechanics, deterministic models have been applied successfully in research on selected motor skills. The advantage of the deterministic model approach is that it helps to avoid selecting performance or injury variables arbitrarily and provides the necessary theoretical basis for examining the relative importance of the various factors that influence the outcome of a movement task. Several disadvantages of deterministic models, such as the use of subjective measures for the performance outcome, are also discussed. It is recommended that exercise and sports biomechanics scholars consider using deterministic models to help identify meaningful dependent variables in their studies.
Trafficability and workability of soils
Technology Transfer Automated Retrieval System (TEKTRAN)
Trafficability and workability are soil capabilities supporting operations of agricultural machinery. Trafficability is a soil's capability to support agricultural traffic without degrading soils and ecosystems. Workability is a soil capability supporting tillage. Agriculture is associated with mech...
ERIC Educational Resources Information Center
Roman, Harry T.
2014-01-01
Traffic lights are an important part of the transportation infrastructure, regulating traffic flow and maintaining safety when crossing busy streets. When they go awry or become nonfunctional, a great deal of havoc and danger can be present. During power outages, the street lights go out all over the affected area. It would be good to be able to…
Optimal Deterministic Ring Exploration with Oblivious Asynchronous Robots
NASA Astrophysics Data System (ADS)
Lamani, Anissa; Potop-Butucaru, Maria Gradinariu; Tixeuil, Sébastien
We consider the problem of exploring an anonymous unoriented ring of size n by k identical, oblivious, asynchronous mobile robots, that are unable to communicate, yet have the ability to sense their environment and take decisions based on their local view. Previous works in this weak scenario prove that k must not divide n for a deterministic solution to exist. Also, it is known that the minimum number of robots (either deterministic or probabilistic) to explore a ring of size n is 4. An upper bound of 17 robots holds in the deterministic case while 4 probabilistic robots are sufficient. In this paper, we close the complexity gap in the deterministic setting, by proving that no deterministic exploration is feasible with less than five robots, and that five robots are sufficient for any n that is coprime with five. Our protocol completes exploration in O(n) robot moves, which is also optimal.
Traffic model by braking capability and response time
NASA Astrophysics Data System (ADS)
Lee, Hyun Keun; Kim, Jeenu; Kim, Youngho; Lee, Choong-Ki
2015-06-01
We propose a microscopic traffic model where the velocity update is determined by the deceleration capability and the response time. It is found that there is a class of collisions that cannot be distinguished by simply comparing the stop positions. The model generates safe, comfortable, and efficient traffic flow in numerical simulations with reasonable parameter values, and this is analytically supported. Our approach provides a new perspective on modeling traffic-flow safety and hazardous situations such as lane changing.
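A safe-velocity update in this spirit can be written down directly. The closed form below is a generic Gipps-style condition chosen for illustration, not the paper's exact rule:

```python
import math

def safe_velocity(gap, v_leader, b_max, tau):
    # Largest speed v such that, after reacting for tau seconds and then
    # braking at b_max, the follower stops no further ahead than the
    # leader's stopping point:  v*tau + v^2/(2b) <= gap + v_leader^2/(2b).
    c = gap + v_leader ** 2 / (2.0 * b_max)
    disc = (b_max * tau) ** 2 + 2.0 * b_max * c
    return -b_max * tau + math.sqrt(disc)  # positive root of the quadratic
```

With a stopped leader and zero gap the safe speed is zero, and the update is collision-free by construction; comparing stop positions in this way is precisely the kind of criterion the paper shows to be insufficient for a whole class of collisions.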
Applications of the 3-D Deterministic Transport Code Attila® for Core Safety Analysis
Lucas, D.S.; Gougar, D.; Roth, P.A.; Wareing, T.; Failla, G.; McGhee, J.; Barnett, A.
2004-10-06
An LDRD (Laboratory Directed Research and Development) project is ongoing at the Idaho National Engineering and Environmental Laboratory (INEEL) for applying the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the model development, capabilities of Attila, generation of the cross-section libraries, comparisons to an ATR MCNP model, and future work.
Analysis of pinching in deterministic particle separation
NASA Astrophysics Data System (ADS)
Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German
2011-11-01
We investigate the problem of spherical particles settling vertically under gravity (parallel to the Y-axis) through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation. We approach the problem (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately; (2) computationally, using the lattice Boltzmann method for particulate systems; and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle centre and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence does. Experimentally, this is expected to result in an earlier onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.
3D deterministic lateral displacement separation systems
NASA Astrophysics Data System (ADS)
Du, Siqi; Drazer, German
2016-11-01
We present a simple modification to enhance the separation ability of deterministic lateral displacement (DLD) systems by expanding the two-dimensional nature of these devices and driving the particles into size-dependent, fully three-dimensional trajectories. Specifically, we drive the particles through an array of long cylindrical posts, such that they not only move parallel to the basal plane of the posts as in traditional two-dimensional DLD systems (in-plane motion), but also along the axial direction of the solid posts (out-of-plane motion). We show that the (projected) in-plane motion of the particles is completely analogous to that observed in 2D-DLD systems and the observed trajectories can be predicted based on a model developed in the 2D case. More importantly, we analyze the particles out-of-plane motion and observe significant differences in the net displacement depending on particle size. Therefore, taking advantage of both the in-plane and out-of-plane motion of the particles, it is possible to achieve the simultaneous fractionation of a polydisperse suspension into multiple streams. We also discuss other modifications to the obstacle array and driving forces that could enhance separation in microfluidic devices.
Deterministically Driven Avalanche Models of Solar Flares
NASA Astrophysics Data System (ADS)
Strugarek, Antoine; Charbonneau, Paul; Joseph, Richard; Pirot, Dorian
2014-08-01
We develop and discuss the properties of a new class of lattice-based avalanche models of solar flares. These models are readily amenable to a relatively unambiguous physical interpretation in terms of slow twisting of a coronal loop. They share similarities with other avalanche models, such as the classical stick-slip self-organized critical model of earthquakes, in that they are driven globally by a fully deterministic energy-loading process. The model design leads to a systematic deficit of small-scale avalanches. In some portions of model space, mid-size and large avalanching behavior is scale-free, being characterized by event size distributions that have the form of power-laws with index values, which, in some parameter regimes, compare favorably to those inferred from solar EUV and X-ray flare data. For models using conservative or near-conservative redistribution rules, a population of large, quasiperiodic avalanches can also appear. Although without direct counterparts in the observational global statistics of flare energy release, this latter behavior may be relevant to recurrent flaring in individual coronal loops. This class of models could provide a basis for the prediction of large solar flares.
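The flavor of a deterministically driven avalanche model is easy to capture in a toy one-dimensional sandpile. The sketch below (threshold, lattice size, and redistribution rule are arbitrary illustrative choices, not the paper's coronal-loop model) loads the same site every step and relaxes conservatively in the interior, dissipating only at the boundaries:

```python
def avalanche_sizes(n_sites=21, threshold=4, steps=500):
    # Deterministic driving: one unit is added at the centre every step.
    # Relaxation: a site at or above threshold sheds one unit to each
    # neighbour; units pushed past the boundary are dissipated.
    z = [0] * n_sites
    mid = n_sites // 2
    sizes = []
    for _ in range(steps):
        z[mid] += 1
        size = 0
        stack = [mid]
        while stack:
            i = stack.pop()
            if z[i] >= threshold:
                z[i] -= 2
                size += 1
                for j in (i - 1, i + 1):
                    if 0 <= j < n_sites:
                        z[j] += 1
                        stack.append(j)
                stack.append(i)  # the site itself may still be unstable
        sizes.append(size)
    return sizes

sizes = avalanche_sizes()
```

Even with fully deterministic driving, the event-size sequence in the steady state mixes many small relaxations with occasional large, system-spanning avalanches, the statistics that models of this class compare against flare data.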
Deterministic transfer function for transionospheric propagation
NASA Astrophysics Data System (ADS)
Roussel-Dupre, R.; Argo, P.
Recent interest in ground-to-satellite propagation of broadband signals has prompted investigation into the development of a transfer function for the ionosphere that includes effects such as dispersion, refraction, changes in polarization, reflection, absorption, and scattering. Depending on the application (e.g. geolocation), it may be necessary to incorporate all of these processes in order to extract the information of interest from the measured transionospheric signal. A transfer function for midlatitudes at VHF from 25-175 MHz is one of the goals of the BLACKBEARD program in characterizing propagation distortion. In support of this program we discuss in this paper an analytic model for the deterministic transfer function of the ionosphere that includes the effects of dispersion, refraction, and changes in polarization to second order in the parameter X = ω_pe²/ω², where X is assumed to be small compared to one, ω_pe is the peak plasma frequency of the ionosphere, and ω is the wave frequency. Analytic expressions for the total phase change, group delay, and polarization change in a spherical geometry assuming a radial electron density profile are presented. A computer code ITF (Ionospheric Transfer Function) that makes use of the ICED (Ionospheric Conductivity and Electron Density) model to generate electron density profiles was developed to calculate the ionospheric transfer function along a specified transmitter-to-receiver path. Details of this code are presented, as well as comparisons between ITF analytic results and ray-tracing calculations.
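The leading dispersive term is the familiar 1/f² group delay. The sketch below uses the standard first-order expression (40.3·TEC/(c·f²)), not the paper's second-order transfer function, and the total electron content value is illustrative:

```python
def ionospheric_group_delay(tec, freq_hz):
    # First-order dispersive group delay in seconds for a slant total
    # electron content `tec` (electrons/m^2): dt = 40.3 * TEC / (c * f^2).
    c = 2.998e8  # speed of light, m/s
    return 40.3 * tec / (c * freq_hz ** 2)

# Across a VHF band the delay varies strongly with frequency:
d50 = ionospheric_group_delay(1e17, 50e6)    # ~5 microseconds
d100 = ionospheric_group_delay(1e17, 100e6)  # 4x smaller at twice the frequency
```

It is this strong frequency dependence across a broadband signal that smears a transionospheric pulse and motivates building an explicit transfer function.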
Electromagnetic field enhancement and light localization in deterministic aperiodic nanostructures
NASA Astrophysics Data System (ADS)
Gopinath, Ashwin
The control of light-matter interaction in periodic and random media has been investigated in depth during the last few decades, yet structures with a controlled degree of disorder, such as Deterministic Aperiodic Nano Structures (DANS), have been relatively unexplored. DANS are characterized by non-periodic yet long-range correlated (deterministic) morphologies and can be generated by the mathematical rules of symbolic dynamics and number theory. In this thesis, I have experimentally investigated the unique light transport and localization properties in planar dielectric and metallic (plasmonic) DANS. In particular, I have focused on the design, nanofabrication and optical characterization of DANS formed by arranging metal/dielectric nanoparticles in an aperiodic lattice. This effort is directed towards the development of on-chip nanophotonic applications with emphasis on label-free bio-sensing and enhanced light emission. The DANS designed as Surface Enhanced Raman Scattering (SERS) substrates are composed of multi-scale aperiodic nanoparticle arrays fabricated by e-beam lithography and are capable of reproducibly demonstrating enhancement factors as high as ~10^7. Further improvement of SERS efficiency is achieved by combining DANS formed by a top-down approach with bottom-up reduction of gold nanoparticles to fabricate novel nanostructures called plasmonic "nano-galaxies", which increase the SERS enhancement factors by 2-3 orders of magnitude while preserving reproducibility. In this thesis, along with presenting details of the fabrication and SERS characterization of these "rationally designed" SERS substrates, I will also present results on using these substrates for the detection of DNA nucleobases, as well as reproducible label-free detection of pathogenic bacteria with species specificity. In addition to biochemical detection, the combination of broadband light scattering behavior and the ability to generate reproducible high fields in DANS make these
Stochastic and Deterministic Assembly Processes in Subsurface Microbial Communities
Stegen, James C.; Lin, Xueju; Konopka, Allan; Fredrickson, Jim K.
2012-03-29
A major goal of microbial community ecology is to understand the forces that structure community composition. Deterministic selection by specific environmental factors is sometimes important, but in other cases stochastic or ecologically neutral processes dominate. What is lacking is a unified conceptual framework for understanding why deterministic processes dominate in some contexts but not others. Here we work towards such a framework. By testing predictions derived from general ecological theory we aim to uncover factors that govern the relative influences of deterministic and stochastic processes. We couple spatiotemporal data on subsurface microbial communities and environmental parameters with metrics and null models of within- and between-community phylogenetic composition. Testing for phylogenetic signal in organismal niches showed that more closely related taxa have more similar habitat associations. Community phylogenetic analyses further showed that ecologically similar taxa coexist to a greater degree than expected by chance. Environmental filtering thus deterministically governs subsurface microbial community composition. More importantly, the influence of deterministic environmental filtering relative to stochastic factors was maximized at both ends of an environmental variation gradient. A stronger role of stochastic factors was, however, supported through analyses of phylogenetic temporal turnover. While phylogenetic turnover was on average faster than expected, most pairwise comparisons were not themselves significantly non-random. The relative influence of deterministic environmental filtering over community dynamics was elevated, however, in the most temporally and spatially variable environments. Our results point to general rules governing the relative influences of stochastic and deterministic processes across micro- and macro-organisms.
Software for Simulating Air Traffic
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Bilimoria, Karl; Grabbe, Shon; Chatterji, Gano; Sheth, Kapil; Mulfinger, Daniel
2006-01-01
Future Air Traffic Management Concepts Evaluation Tool (FACET) is a system of software for performing computational simulations for evaluating advanced concepts of air-traffic management. FACET includes a program that generates a graphical user interface plus programs and databases that implement computational models of weather, airspace, airports, navigation aids, aircraft performance, and aircraft trajectories. Examples of concepts studied by use of FACET include aircraft self-separation for free flight; prediction of air-traffic-controller workload; decision support for direct routing; integration of spacecraft-launch operations into the U.S. national airspace system; and traffic-flow management using rerouting, metering, and ground delays. Aircraft can be modeled as flying along either flight-plan routes or great-circle routes as they climb, cruise, and descend according to their individual performance models. The FACET software is modular and is written in the Java and C programming languages. The architecture of FACET strikes a balance between flexibility and fidelity; as a consequence, FACET can be used to model systemwide airspace operations over the contiguous U.S., involving as many as 10,000 aircraft, all on a single desktop or laptop computer running any of a variety of operating systems. Two notable applications of FACET include: (1) reroute conformance monitoring algorithms that have been implemented in one of the Federal Aviation Administration's nationally deployed, real-time, operational systems; and (2) the licensing and integration of FACET with the commercially available Flight Explorer, which is an Internet-based, real-time flight-tracking system.
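The great-circle routing mentioned above rests on standard spherical geometry. A haversine sketch (generic textbook formula, not FACET's internal code) gives the route length between two waypoints:

```python
import math

def great_circle_nm(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in nautical miles between two
    # (latitude, longitude) waypoints given in degrees.
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * r_nm * math.asin(math.sqrt(a))

# A quarter of the equator:
quarter_equator = great_circle_nm(0.0, 0.0, 0.0, 90.0)
```

Per-aircraft trajectory models then integrate climb, cruise, and descent performance along either this geometry or the filed flight-plan route.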
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E. , Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2011-05-17
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.
Surface plasmon field enhancements in deterministic aperiodic structures.
Shugayev, Roman
2010-11-22
In this paper we analyze the optical properties and plasmonic field enhancements of large aperiodic nanostructures. We introduce an extension of the Generalized Ohm's Law approach to estimate the electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to random morphologies while offering better understanding of field localization and improved substrate design controllability. Generalized Ohm's Law results for deterministic aperiodic structures are in good agreement with simulations obtained using the discrete dipole method.
Al-Shargabi, Mohammed A.; Ismail, Abdulsamad S.
2016-01-01
Optical burst switching (OBS) networks have been attracting much consideration as a promising approach to building the next-generation optical Internet. A solution that enhances the Quality of Service (QoS) of high-priority real-time traffic over OBS while preserving fairness among traffic types is absent from current OBS QoS schemes. In this paper we present a novel Real Time Quality of Service with Fairness Ratio (RT-QoSFR) scheme that adapts the burst assembly parameters to the traffic QoS needs in order to meet the real-time traffic QoS requirements and to ensure fairness for other traffic. The results show that the RT-QoSFR scheme is able to fulfill the real-time traffic requirements (end-to-end delay and loss rate) while ensuring fairness for other traffic types under various conditions, such as the type of real-time traffic and the traffic load. RT-QoSFR can guarantee that the delay of real-time traffic packets does not exceed the maximum packet transfer delay value. Furthermore, it can reduce real-time packet loss while guaranteeing fairness for non-real-time packets by setting the ratio of real-time traffic inside the burst to 50-60%, 30-40%, and 10-20% for high, normal, and low traffic loads, respectively. PMID:27583557
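The load-dependent ratio reported above maps directly onto a small lookup. The band midpoints below are illustrative choices within the ranges the abstract reports, not values from the scheme itself:

```python
def real_time_burst_ratio(load):
    # Fraction of real-time traffic assembled into a burst per load level
    # (midpoints of the 50-60%, 30-40% and 10-20% bands).
    bands = {"high": 0.55, "normal": 0.35, "low": 0.15}
    if load not in bands:
        raise ValueError(f"unknown load level: {load!r}")
    return bands[load]
```

Lowering the real-time share as load rises is what leaves room in each burst for other traffic types, which is how the scheme trades delay guarantees against fairness.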
Al-Shargabi, Mohammed A; Shaikh, Asadullah; Ismail, Abdulsamad S
2016-01-01
Optical burst switching (OBS) networks have been attracting much consideration as a promising approach to building the next-generation optical Internet. A solution for enhancing the Quality of Service (QoS) of high-priority real-time traffic over OBS, with fairness among the traffic types, is absent from current OBS QoS schemes. In this paper we present a novel Real Time Quality of Service with Fairness Ratio (RT-QoSFR) scheme that adapts the burst assembly parameters according to the traffic QoS needs in order to meet the real-time traffic QoS requirements and to ensure fairness for other traffic. The results show that the RT-QoSFR scheme is able to fulfill the real-time traffic requirements (end-to-end delay and loss rate) while ensuring fairness for other traffic types under various conditions, such as the type of real-time traffic and the traffic load. RT-QoSFR can guarantee that the delay of real-time traffic packets does not exceed the maximum packet transfer delay value. Furthermore, it can reduce real-time traffic packet loss while at the same time guaranteeing fairness for non-real-time traffic packets by setting the ratio of real-time traffic inside the burst to 50-60%, 30-40%, and 10-20% for high, normal, and low traffic loads, respectively.
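The load-dependent real-time ratio described in the abstract can be sketched as a simple burst-assembly rule. Everything here (the burst size, the greedy fill, the function names) is an illustrative assumption rather than the authors' algorithm; only the 50-60%/30-40%/10-20% bands come from the abstract:

```python
def realtime_ratio_band(load):
    """Traffic-load class -> (min, max) fraction of real-time packets in
    a burst, using the bands reported in the abstract."""
    return {"high": (0.50, 0.60), "normal": (0.30, 0.40), "low": (0.10, 0.20)}[load]

def assemble_burst(rt_packets, nrt_packets, load, burst_size=100):
    """Fill one burst, capping real-time content at the top of the
    load-dependent band so non-real-time packets keep their share.
    The burst size and greedy fill are assumptions for illustration."""
    lo, hi = realtime_ratio_band(load)
    n_rt = min(len(rt_packets), round(hi * burst_size))
    n_nrt = min(len(nrt_packets), burst_size - n_rt)
    return rt_packets[:n_rt], nrt_packets[:n_nrt]
```

Under a "normal" load this caps real-time packets at 40 of a 100-packet burst, leaving 60 slots for other traffic.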
Deterministic versus stochastic trends: Detection and challenges
NASA Astrophysics Data System (ADS)
Fatichi, S.; Barbosa, S. M.; Caporali, E.; Silva, M. E.
2009-09-01
The detection of a trend in a time series and the evaluation of its magnitude and statistical significance is an important task in geophysical research. This importance is amplified in climate change contexts, since trends are often used to characterize long-term climate variability and to quantify the magnitude and the statistical significance of changes in climate time series, at both global and local scales. Recent studies have demonstrated that the stochastic behavior of a time series can change the statistical significance of a trend, especially if the time series exhibits long-range dependence. The present study examines the trends in time series of daily average temperature recorded at 26 stations in the Tuscany region (Italy). In this study a new framework for trend detection is proposed. First, two parametric statistical tests, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, are applied in order to test for trend-stationary and difference-stationary behavior in the temperature time series. Then long-range dependence is assessed using different approaches, including wavelet analysis, heuristic methods, and the fitting of fractionally integrated autoregressive moving average models. The trend detection results are further compared with the results obtained using nonparametric trend detection methods: the Mann-Kendall, Cox-Stuart and Spearman's ρ tests. This study confirms an increase in uncertainty when pronounced stochastic behaviors are present in the data. Nevertheless, for approximately one third of the analyzed records, the stochastic behavior itself cannot explain the long-term features of the time series, and a deterministic positive trend is the most likely explanation.
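Of the nonparametric tests mentioned, the Mann-Kendall test is compact enough to state directly. A minimal sketch (two-sided, no tie correction; `mann_kendall` is our name, not from the paper):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (two-sided, no tie correction): returns
    the S statistic and its approximate normal z-score."""
    n = len(x)
    # S counts concordant minus discordant pairs (i < j)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

A strictly increasing series yields the maximum S = n(n-1)/2 and a large positive z; under long-range dependence the nominal variance formula understates var(S), which is exactly the inflation of significance the abstract warns about.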
Understanding Vertical Jump Potentiation: A Deterministic Model.
Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L
2016-06-01
This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. There are a number of factors that must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered when designing an SPPC focused on vertical jump potentiation include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should design and seek SPPCs that are practical in nature regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take the athlete's training time restrictions into account, as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that take away from valuable training time. Practitioners may decrease the time needed to realize potentiation by improving their subject's relative strength.
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is an increasing request for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems. The data sets were too small to obtain distribution parameters with sufficient accuracy and also too small to decide on the validity of the model. This holds especially for the low failure probability levels required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, that is, a minimum value for breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull approach.
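The role of the threshold parameter can be illustrated with the three-parameter Weibull CDF: below the threshold the failure probability is identically zero, which is what makes a deterministic minimum strength possible. The parameter values below are illustrative, not measured ZERODUR data:

```python
import math

def weibull3_failure_prob(stress, threshold, scale, shape):
    """Three-parameter Weibull failure probability:
    F(s) = 1 - exp(-((s - threshold)/scale)^shape) for s > threshold,
    and exactly 0 below the threshold (location) parameter -- the
    deterministic minimum breakage stress. Values are illustrative,
    not measured ZERODUR data."""
    if stress <= threshold:
        return 0.0
    return 1.0 - math.exp(-(((stress - threshold) / scale) ** shape))
```

In contrast, a two-parameter Weibull (threshold 0) assigns every stress level a nonzero failure probability, forcing the very low design strengths the abstract describes.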
Deterministic transfer function for transionospheric propagation
Roussel-Dupre, R.; Argo, P.
1992-01-01
Recent interest in ground-to-satellite propagation of broadband signals has prompted investigation into the development of a transfer function for the ionosphere that includes effects such as dispersion, refraction, changes in polarization, reflection, absorption, and scattering. Depending on the application (e.g., geolocation), it may be necessary to incorporate all of these processes in order to extract the information of interest from the measured transionospheric signal. A transfer function for midlatitudes at VHF from 25 to 175 MHz is one of the goals of the BLACKBEARD program in characterizing propagation distortion. In support of this program we discuss in this paper an analytic model for the deterministic transfer function of the ionosphere that includes the effects of dispersion, refraction, and changes in polarization to second order in the parameter X = ω_pe²/ω², where X is assumed to be small compared to one, ω_pe is the peak plasma frequency of the ionosphere, and ω is the wave frequency. Analytic expressions for the total phase change, group delay, and polarization change in a spherical geometry assuming a radial electron density profile are presented. A computer code, ITF (Ionospheric Transfer Function), that makes use of the ICED (Ionospheric Conductivity and Electron Density) model to generate electron density profiles was developed to calculate the ionospheric transfer function along a specified transmitter-to-receiver path. Details of this code are presented, as well as comparisons between ITF analytic results and ray-tracing calculations.
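The leading-order dispersion effect in such a transfer function is the familiar excess group delay proportional to total electron content (TEC) and 1/f². A sketch of just this first-order term (the paper's model carries the expansion to second order in X):

```python
# First-order (X << 1) ionospheric group delay: dt = 40.3 * TEC / (c * f^2),
# with TEC in electrons/m^2 and f in Hz.
C = 299_792_458.0   # speed of light, m/s
K = 40.3            # ionospheric dispersion constant, m^3 s^-2

def group_delay(tec, freq_hz):
    """Excess group delay (seconds) for total electron content `tec`
    at carrier frequency `freq_hz`; valid only when X << 1."""
    return K * tec / (C * freq_hz ** 2)
```

Across the 25-175 MHz BLACKBEARD band the delay varies by a factor of (175/25)² = 49, which is the dominant distortion a broadband pulse sees.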
Deterministic transfer function for transionospheric propagation
Roussel-Dupre, R.; Argo, P.
1992-09-01
Recent interest in ground-to-satellite propagation of broadband signals has prompted investigation into the development of a transfer function for the ionosphere that includes effects such as dispersion, refraction, changes in polarization, reflection, absorption, and scattering. Depending on the application (e.g., geolocation), it may be necessary to incorporate all of these processes in order to extract the information of interest from the measured transionospheric signal. A transfer function for midlatitudes at VHF from 25 to 175 MHz is one of the goals of the BLACKBEARD program in characterizing propagation distortion. In support of this program we discuss in this paper an analytic model for the deterministic transfer function of the ionosphere that includes the effects of dispersion, refraction, and changes in polarization to second order in the parameter X = ω_pe²/ω², where X is assumed to be small compared to one, ω_pe is the peak plasma frequency of the ionosphere, and ω is the wave frequency. Analytic expressions for the total phase change, group delay, and polarization change in a spherical geometry assuming a radial electron density profile are presented. A computer code, ITF (Ionospheric Transfer Function), that makes use of the ICED (Ionospheric Conductivity and Electron Density) model to generate electron density profiles was developed to calculate the ionospheric transfer function along a specified transmitter-to-receiver path. Details of this code are presented, as well as comparisons between ITF analytic results and ray-tracing calculations.
NASA Technical Reports Server (NTRS)
1997-01-01
The high-level requirement of the Air Traffic Network (ATN) project is to provide a mechanism for evaluating the impact of router scheduling modifications on a network's efficiency, without implementing the modifications in the live network.
The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...
Chang, T; Schiff, S J; Sauer, T; Gossard, J P; Burke, R E
1994-01-01
Long time series of monosynaptic Ia-afferent to alpha-motoneuron reflexes were recorded in the L7 or S1 ventral roots in the cat. Time series were collected before and after spinalization at T13 during constant amplitude stimulations of group Ia muscle afferents in the triceps surae muscle nerves. Using autocorrelation to analyze the linear correlation in the time series demonstrated oscillations in the decerebrate state (4/4) that were eliminated after spinalization (5/5). Three tests for determinism were applied to these series: 1) local flow, 2) local dispersion, and 3) nonlinear prediction. These algorithms were validated with time series generated from known deterministic equations. For each experimental and theoretical time series used, matched time-series of stochastic surrogate data were generated to serve as mathematical and statistical controls. Two of the time series collected in the decerebrate state (2/4) demonstrated evidence for deterministic structure. This structure could not be accounted for by the autocorrelation in the data, and was abolished following spinalization. None of the time series collected in the spinalized state (0/5) demonstrated evidence of determinism. Although monosynaptic reflex variability is generally stochastic in the spinalized state, this simple driven system may display deterministic behavior in the decerebrate state. PMID:7948680
Chang, T; Schiff, S J; Sauer, T; Gossard, J P; Burke, R E
1994-08-01
Long time series of monosynaptic Ia-afferent to alpha-motoneuron reflexes were recorded in the L7 or S1 ventral roots in the cat. Time series were collected before and after spinalization at T13 during constant amplitude stimulations of group Ia muscle afferents in the triceps surae muscle nerves. Using autocorrelation to analyze the linear correlation in the time series demonstrated oscillations in the decerebrate state (4/4) that were eliminated after spinalization (5/5). Three tests for determinism were applied to these series: 1) local flow, 2) local dispersion, and 3) nonlinear prediction. These algorithms were validated with time series generated from known deterministic equations. For each experimental and theoretical time series used, matched time-series of stochastic surrogate data were generated to serve as mathematical and statistical controls. Two of the time series collected in the decerebrate state (2/4) demonstrated evidence for deterministic structure. This structure could not be accounted for by the autocorrelation in the data, and was abolished following spinalization. None of the time series collected in the spinalized state (0/5) demonstrated evidence of determinism. Although monosynaptic reflex variability is generally stochastic in the spinalized state, this simple driven system may display deterministic behavior in the decerebrate state.
NASA Astrophysics Data System (ADS)
He, Chong; Chiam, Keng-Hwee; Chew, Lock Yue
2016-10-01
Ultradian cycles are frequently observed in biological systems. They serve important roles in regulating, for example, cell fate and the development of the organism. Many mathematical models have been developed to analyze their behavior. Generally, these models can be classified into two classes: deterministic models that generate oscillatory behavior by incorporating time delays or Hopf bifurcations, and stochastic models that generate oscillatory behavior by noise-driven resonance. However, it is still unclear which of these two mechanisms applies to cellular oscillations. In this paper, we show through theoretical analysis and numerical simulation that we can distinguish which of these two mechanisms governs cellular oscillations by measuring statistics of oscillation amplitudes for cells of different sizes. We found that, for oscillations driven deterministically, the normalized average amplitude is constant with respect to cell size, while the coefficient of variation of the amplitude scales with cell size with an exponent of -0.5. On the other hand, for oscillations driven stochastically, the coefficient of variation of the amplitude is constant with respect to cell size, while the normalized average amplitude scales with cell size with an exponent of -0.5. Our results provide a theoretical basis to discern whether a particular oscillatory behavior is governed by a deterministic or stochastic mechanism.
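The two scaling signatures can be illustrated with a deliberately crude toy model: deterministic limit cycles get an extensive amplitude (proportional to N) with √N demographic noise, while noise-driven oscillations get an amplitude of order √N. This caricature is our assumption, not the paper's model, but it reproduces the reported exponents:

```python
import random, statistics

random.seed(0)

def amplitude_stats(n_mol, mode, trials=4000):
    """Toy amplitude statistics for an oscillator in a cell of size n_mol.
    Deterministic limit cycle: extensive amplitude 0.3*N plus sqrt(N)
    demographic noise. Noise-driven: amplitude of order sqrt(N).
    A caricature chosen only to exhibit the two scaling laws."""
    amps = []
    for _ in range(trials):
        if mode == "deterministic":
            amps.append(0.3 * n_mol + random.gauss(0.0, 0.3 * n_mol ** 0.5))
        else:  # noise-driven ("stochastic")
            amps.append(abs(random.gauss(0.0, n_mol ** 0.5)))
    mean = statistics.fmean(amps)
    cv = statistics.stdev(amps) / mean
    return mean / n_mol, cv   # normalized average amplitude, CV
```

Growing the cell 100-fold leaves the deterministic normalized amplitude unchanged while shrinking its CV tenfold (exponent -0.5), and does exactly the reverse for the noise-driven case, matching the abstract's criterion.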
Rapid detection of small oscillation faults via deterministic learning.
Wang, Cong; Chen, Tianrui
2011-08-01
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing the set of estimators with the monitored system, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest-residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that modeling uncertainty and nonlinear fault functions are accurately approximated, and this knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
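The smallest-residual principle itself is simple to sketch: compare the monitored signal against each stored reference trajectory and pick the mode with the smallest average L1 residual. The plain lists below stand in for the constant-RBF-network outputs of the actual scheme:

```python
def l1_residual(signal, reference):
    """Average L1 norm of the residual against one stored reference."""
    return sum(abs(s - r) for s, r in zip(signal, reference)) / len(signal)

def detect_mode(signal, bank):
    """Smallest-residual principle: return the name of the stored mode
    whose reference trajectory best matches the monitored signal.
    `bank` maps mode names to reference trajectories (stand-ins here
    for the constant RBF neural network outputs)."""
    return min(bank, key=lambda name: l1_residual(signal, bank[name]))
```

Even a small amplitude fault (a few percent) shifts the residual ordering, which is what makes the scheme sensitive to small oscillation faults.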
Entrepreneurs, Chance, and the Deterministic Concentration of Wealth
Fargione, Joseph E.; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs–individuals with ownership in for-profit enterprises–comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
Estimating interdependences in networks of weakly coupled deterministic systems
NASA Astrophysics Data System (ADS)
de Feo, Oscar; Carmeli, Cristian
2008-02-01
The extraction of information from measured data about the interactions taking place in a network of systems is a key topic in modern applied sciences. This topic has traditionally been addressed by considering bivariate time series, providing methods which are sometimes difficult to extend to multivariate data, the limiting factor being computational complexity. Here, we present a computationally viable method based on black-box modeling which, while theoretically applicable only when a deterministic hypothesis about the processes behind the recordings is plausible, proves to work even when this assumption is severely violated. Conceptually, the method is very simple and is composed of three independent steps: in the first step a state-space reconstruction is performed separately on each measured signal; in the second step, a local model, i.e., a nonlinear dynamical system, is fitted separately on each (reconstructed) measured signal; afterward, a linear model of the dynamical interactions is obtained by cross-relating the (reconstructed) measured variables to the dynamics unexplained by the local models. The method is successfully validated on numerically generated data. An assessment of its sensitivity to data length, modeling noise, and measurement noise intensity, and of its applicability to large-scale systems, is also provided.
Entrepreneurs, chance, and the deterministic concentration of wealth.
Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels.
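The core mechanism, chance plus compounding, fits in a few lines. A toy simulation under assumed return values (equal-skill entrepreneurs, i.i.d. two-point returns; not the paper's calibration):

```python
import random

random.seed(42)

def top_share(n=1000, years=200, k=10):
    """Equal-skill entrepreneurs whose wealth compounds under i.i.d.
    two-point random returns (chance alone, no skill differences);
    returns the wealth share held by the richest k afterwards.
    The return values 0.8 and 1.3 are arbitrary illustrative choices."""
    wealth = [1.0] * n
    for _ in range(years):
        wealth = [w * random.choice([0.8, 1.3]) for w in wealth]
    wealth.sort(reverse=True)
    return sum(wealth[:k]) / sum(wealth)
```

Because log-wealth performs a random walk, the spread of the wealth distribution grows without bound and the top fraction's share keeps rising, which is the unlimited concentration the abstract describes.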
Benke, G. |; Brandt, J.; Chen, H.; Dastangoo, S.; Miller, G.J.
1996-05-01
Recent empirical studies of traffic measurements of packet-switched networks have demonstrated that actual network traffic is self-similar, or long-range dependent, in nature. That is, the measured traffic is bursty over a wide range of time intervals. Furthermore, the emergence of high-speed network backbones demands the study of accurate models of aggregated traffic to assess network performance. This paper provides a method for the generation of self-similar traffic, which can be used to drive network simulation models. The authors present the results of a simulation study of a two-node ATM network configuration that supports the ATM Forum's Available Bit Rate (ABR) service. In this study, the authors compare the state of the queue at the source router at the edge of the ATM network under both Poisson and self-similar traffic loading. These findings indicate an order of magnitude increase in queue length for self-similar traffic loading as compared to Poisson loading. Moreover, when background VBR traffic is present, self-similar ABR traffic causes more congestion at the ATM switches than does Poisson traffic.
A superstatistical model of vehicular traffic flow
NASA Astrophysics Data System (ADS)
Kosun, Caglar; Ozdemir, Serhan
2016-02-01
In the analysis of vehicular traffic flow, a myriad of techniques have been implemented. In this study, superstatistics is used in modeling the traffic flow on a highway segment. Traffic variables such as vehicular speeds, volume, and headway were collected for three days. For the superstatistical approach, at least two distinct time scales must exist, so that the assumption of a superposition of nonequilibrium systems can hold. When the slow dynamics of the vehicle speeds exhibit a Gaussian distribution amid the fluctuations of the system at large, one speaks of a relaxation to a local equilibrium. These Gaussian distributions are found with corresponding standard deviations 1/√β. This translates into a series of fluctuating beta values, hence the statistics of statistics, superstatistics. The traffic flow model has generated an inverse temperature (beta) distribution as well as the speed distribution. This beta distribution shows that the fluctuations in beta follow a chi-square distribution. It must be mentioned that two distinct Tsallis q values are specified: one time-dependent and the other time-independent. A ramification of these q values is that the highway segment and the traffic flow generate separate characteristics. The highway segment in question is not only nonadditive in nature but also a nonequilibrium driven system, with frequent relaxations to a Gaussian.
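The superstatistical construction can be sketched directly: draw an inverse temperature β from a chi-square distribution, then a Gaussian with standard deviation 1/√β; the marginal distribution is heavy-tailed even though every local patch is Gaussian. The degrees of freedom and sample sizes below are arbitrary choices for illustration, not fitted to the traffic data:

```python
import random, statistics

random.seed(1)

def superstatistical_sample(n=20000, df=10):
    """Superstatistical speed model: beta ~ chi-square(df), then a
    Gaussian with standard deviation 1/sqrt(beta). df=10 is an
    arbitrary illustrative choice; the paper fits beta's chi-square
    distribution to measured data."""
    out = []
    for _ in range(n):
        beta = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))  # chi^2_df
        out.append(random.gauss(0.0, 1.0 / beta ** 0.5))
    return out

def excess_kurtosis(xs):
    """Sample excess kurtosis: 0 for a Gaussian, > 0 for heavy tails."""
    m = statistics.fmean(xs)
    var = statistics.pvariance(xs, m)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * var ** 2) - 3.0
```

The mixture's positive excess kurtosis, absent from any single Gaussian patch, is the q-Gaussian-like signature that superstatistics attributes to the fluctuating β.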
Deterministic coupling of delta-doped nitrogen vacancy centers to a nanobeam photonic crystal cavity
Lee, Jonathan C.; Cui, Shanying; Zhang, Xingyu; Russell, Kasey J.; Magyar, Andrew P.; Hu, Evelyn L.; Bracher, David O.; Ohno, Kenichi; McLellan, Claire A.; Alemán, Benjamin; Bleszynski Jayich, Ania; Andrich, Paolo; Awschalom, David; Aharonovich, Igor
2014-12-29
The negatively charged nitrogen vacancy center (NV) in diamond has generated significant interest as a platform for quantum information processing and sensing in the solid state. For most applications, high quality optical cavities are required to enhance the NV zero-phonon line (ZPL) emission. An outstanding challenge in maximizing the degree of NV-cavity coupling is the deterministic placement of NVs within the cavity. Here, we report photonic crystal nanobeam cavities coupled to NVs incorporated by a delta-doping technique that allows nanometer-scale vertical positioning of the emitters. We demonstrate cavities with Q up to ∼24 000 and mode volume V ∼ 0.47(λ/n)³ as well as resonant enhancement of the ZPL of an NV ensemble with Purcell factor of ∼20. Our fabrication technique provides a first step towards deterministic NV-cavity coupling using spatial control of the emitters.
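For orientation, the ideal on-resonance Purcell factor implied by the quoted Q and mode volume follows from the standard formula F = (3/4π²)(λ/n)³Q/V. The measured ZPL enhancement of ∼20 is far below this ideal bound, plausibly because only a small fraction of NV emission falls in the ZPL and the ensemble emitters are not all optimally positioned and oriented:

```python
import math

def ideal_purcell(q, mode_volume_lam3):
    """Ideal on-resonance Purcell factor F = (3/(4*pi^2)) * Q / (V/(lambda/n)^3)
    for a dipole at the field maximum, aligned with the cavity field."""
    return 3.0 * q / (4.0 * math.pi ** 2 * mode_volume_lam3)
```

With Q = 24 000 and V = 0.47(λ/n)³ this upper bound is on the order of a few thousand.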
Deterministic and Advanced Statistical Modeling of Wind-Driven Sea
2015-07-06
Performance period: 01/09/2010 to 06/07/2015. Vladimir Zakharov, Andrei Pushkarev, Waves and Solitons LLC. Objective: development of accurate and fast advanced statistical and dynamical nonlinear models of ocean surface waves, based on first physical principles.
Structural deterministic safety factors selection criteria and verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in the probability distributions of resistive and applied stresses. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard-deviation multiplier for the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor, which ensures that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate to yield stress. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
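The link between a combined safety factor and reliability can be sketched with classical stress-strength interference under normal distributions (an illustration of the underlying concept, not the study's specific NASA criteria):

```python
import math

def safety_index(mean_resist, sd_resist, mean_stress, sd_stress):
    """Stress-strength interference for normal distributions: safety
    index beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2), with failure
    probability Phi(-beta). Illustrative of how a deterministic factor
    maps to reliability; the numbers are not from the paper."""
    beta = (mean_resist - mean_stress) / math.sqrt(sd_resist ** 2 + sd_stress ** 2)
    p_fail = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
    return beta, p_fail
```

Note that the same deterministic factor (here mean resistance / mean stress) can correspond to very different reliabilities depending on the scatter, which is the "not reliability sensitive" point the abstract makes.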
The recursive deterministic perceptron neural network.
Tajine, Mohamed; Elizondo, David
1998-12-01
We introduce a feedforward multilayer neural network which is a generalization of the single-layer perceptron topology (SLPT), called the recursive deterministic perceptron (RDP). This new model is capable of solving any two-class classification problem, as opposed to the single-layer perceptron, which can only solve classification problems dealing with linearly separable (LS) sets (two subsets X and Y of R(d) are said to be linearly separable if there exists a hyperplane such that the elements of X and Y lie on the two opposite sides of R(d) delimited by this hyperplane). We propose several growing methods for constructing an RDP. These growing methods build an RDP by successively adding intermediate neurons (INs) to the topology (an IN corresponds to a SLPT). Thus, as a result, we obtain a multilayer perceptron topology which, together with the weights, is determined automatically by the constructing algorithms. Each IN augments the affine dimension of the set of input vectors. This augmentation is done by adding the output of each of these INs, as a new component, to every input vector. The construction of a new IN is made by selecting a subset from the set of augmented input vectors which is LS from the rest of this set. This process ends with LS classes in at most n-1 steps, where n is the number of input vectors. For this construction, if we assume that the selected LS subsets are of maximum cardinality, the problem is proven to be NP-complete. We also introduce a generalization of the RDP model for classification into m classes (m>2), which always allows the m classes to be separated. This generalization is based on a new notion of linear separability for m classes, and it follows naturally from the RDP. This new model can be used to compute functions with a finite domain, and thus to approximate continuous functions. We have also compared, over several classification problems, the percentage of test data correctly classified, or the topology of the 2- and m-class RDPs, with that of
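The growing step is concrete enough to sketch end-to-end on XOR, the canonical non-linearly-separable problem. The LS subset below is chosen by hand for illustration; the paper's growing methods select such subsets automatically:

```python
def train_slpt(samples, epochs=200, lr=1.0):
    """Classic perceptron training for a single-layer perceptron topology
    (SLPT); converges when the samples are linearly separable (LS)."""
    w = [0.0] * (len(samples[0][0]) + 1)        # weights + bias
    for _ in range(epochs):
        for x, y in samples:
            xb = list(x) + [1.0]                # append bias input
            pred = 1 if sum(a * b for a, b in zip(w, xb)) > 0 else 0
            if pred != y:
                w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, xb)]
    return w

def slpt_out(w, x):
    xb = list(x) + [1.0]
    return 1 if sum(a * b for a, b in zip(w, xb)) > 0 else 0

# One RDP growing step on XOR, which no lone SLPT can solve:
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
# 1) choose an LS subset -- here {(1,1)} vs. the rest (our manual choice) --
#    and train an intermediate neuron (IN) to isolate it
in_w = train_slpt([(x, 1 if x == (1, 1) else 0) for x, _ in xor])
# 2) augment every input vector with the IN's output as a new component
aug = [((x[0], x[1], slpt_out(in_w, x)), y) for x, y in xor]
# 3) the augmented 3-D set is linearly separable, so a final SLPT solves XOR
out_w = train_slpt(aug)
```

Each IN raises the affine dimension of the input set by one, which is exactly how the construction escapes the linear separability limitation of the single layer.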
Single Ion Implantation and Deterministic Doping
Schenkel, Thomas
2010-06-11
The presence of single atoms, e.g. dopant atoms, in sub-100 nm scale electronic devices can affect the device characteristics, such as the threshold voltage of transistors, or the sub-threshold currents. Fluctuations of the number of dopant atoms thus pose a complication for transistor scaling. In a complementary view, new opportunities emerge when novel functionality can be implemented in devices deterministically doped with single atoms. The grand prize of the latter might be a large-scale quantum computer, where quantum bits (qubits) are encoded e.g. in the spin states of electrons and nuclei of single dopant atoms in silicon, or in color centers in diamond. Both the possible detrimental effects of dopant fluctuations and single atom device ideas motivate the development of reliable single atom doping techniques, which are the subject of this chapter. Single atom doping can be approached with top down and bottom up techniques. Top down refers to the placement of dopant atoms into a more or less structured matrix environment, like a transistor in silicon. Bottom up refers to approaches to introduce single dopant atoms during the growth of the host matrix, e.g. by directed self-assembly and scanning probe assisted lithography. Bottom up approaches are discussed in Chapter XYZ. Since the late 1960s, ion implantation has been a widely used technique to introduce dopant atoms into silicon and other materials in order to modify their electronic properties. It works particularly well in silicon since the damage to the crystal lattice that is induced by ion implantation can be repaired by thermal annealing. In addition, the introduced dopant atoms can be incorporated with high efficiency into lattice positions in the silicon host crystal, which makes them electrically active. This is not the case for e.g. diamond, which makes ion implantation doping to engineer the electrical properties of diamond, especially for n-type doping, much harder than for silicon.
Theory and Simulation for Traffic Characteristics on the Highway with a Slowdown Section.
Xu, Dejie; Mao, Baohua; Rong, Yaping; Wei, Wei
2015-01-01
We study the traffic characteristics on a single-lane highway with a slowdown section using a deterministic cellular automaton (CA) model. Based on theoretical analysis, the relationships among local mean densities, velocities, traffic fluxes, and global densities are derived. The results show that two critical densities exist in the evolution of the traffic state, and they are significant demarcation points for the traffic phase transition. Furthermore, how the two critical densities change with the length of the slowdown section is also investigated. It is shown that only one critical density appears if the highway has no slowdown section; as the slowdown section grows longer, this critical density separates into two; and if the entire highway is a slowdown section, they finally merge back into one. A comparative analysis shows that the analytical results are consistent with the numerical ones.
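A minimal sketch of a deterministic, rule-184-style CA with a reduced local speed limit, in the spirit of the model described above (the road length, densities, speed limits, and update rule here are illustrative assumptions, not the authors' exact model):

```python
ROAD = 100                      # cells on the ring road
V_MAX, V_SLOW = 5, 2            # global and slowdown-section speed limits
SLOW_ZONE = set(range(40, 60))  # cells forming the slowdown section

def step(cars):
    """cars: {position: velocity}; one deterministic parallel update."""
    occupied = set(cars)
    nxt = {}
    for pos, v in cars.items():
        limit = V_SLOW if pos in SLOW_ZONE else V_MAX
        v = min(v + 1, limit)       # deterministic acceleration to the local limit
        gap = 0                     # empty cells to the car ahead
        while gap < ROAD - 1 and (pos + gap + 1) % ROAD not in occupied:
            gap += 1
        v = min(v, gap)             # brake: never overrun the car ahead
        nxt[(pos + v) % ROAD] = v
    return nxt

cars = {10 * i: 0 for i in range(10)}  # global density 0.1, evenly spaced
for _ in range(200):
    cars = step(cars)
flux = sum(cars.values()) / ROAD       # mean flow per cell after the transient
```

Because each car moves at most into the gap ahead, the parallel update is collision-free; sweeping the global density traces out the fundamental diagram, and the slowdown section caps the maximum achievable flux.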
NASA Astrophysics Data System (ADS)
Schreckenberg, Michael
2002-03-01
In the past decade the investigation of the complex behaviour of traffic dynamics became an active field of (interdisciplinary) research. On the one hand this is due to the rapidly growing availability of 'experimental' data from measurements with various kinds of sensors, on the other hand due to an enormous improvement of the modelling techniques from statistical physics. This has led to the identification of several new phases of traffic flow and the characterization of the corresponding phase transitions between them. Nowadays many of the occurring dynamical phenomena are understood quite well, although a complete understanding, especially of the interrelation between the models on the different scales (micro-, meso-, macroscopic), is still missing. Whereas earlier attempts tried to describe traffic flow in a hydrodynamical formulation, the current microscopic models are able to take into account not only the physically correct motion of single cars but also certain aspects of the driver's behaviour. It turns out that simple car-following theories cannot explain the complex structures found, e.g., in synchronized traffic, a new state found only recently. Here a more detailed analysis is necessary which goes far beyond the pure modelling of the motion of the cars in analogy to granular media (grains of sand, pills, corn, etc.). Detailed knowledge of traffic dynamics is not only of scientific interest but also absolutely necessary for practical applications. With the help of online data from measurements of flows and speeds it is possible to construct a complete picture of the actual traffic state with real-time simulations. As a very efficient model ansatz, cellular automata have been shown to be a reasonable compromise between simulation speed and description accuracy. Beyond the reproduction of the actual state, a reliable traffic forecast should be possible, although the driver's reaction to the forecast still remains unclear.
Graphics development of DCOR: Deterministic combat model of Oak Ridge
Hunt, G.; Azmy, Y.Y.
1992-10-01
DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixelated) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
Virginia's traffic management system
Morris, J.; Marber, S.
1992-07-01
This paper reports that Northern Virginia, like most other urban areas, faces the challenge of moving more and more vehicles on roads that are already overloaded. Traffic in Northern Virginia is continually increasing, but the development surrounding Interstates 395, 495, and 66 leaves little room for roadway expansion. Even if land were unlimited, the strict requirements of the Clean Air Act make building roads difficult. Ensuring the most efficient use of the interstate highways is the goal of the Virginia Department of Transportation's (VDOT's) traffic management system (TMS). TMS is a computerized highway surveillance and control system that monitors 30 interstate miles on I-395, I-495, and I-66. The system helps squeeze the most use from these interstates by detecting and helping clear accidents or disabled vehicles and by smoothing traffic flow. TMS spots and helps clear an average of two incidents a day and prevents accidents caused by erratic traffic flow from ramps onto the main line. For motorists, these TMS functions translate into decreased travel time, vehicle operating costs, and air pollution. VDOT's TMS is the foundation for the intelligent vehicle-highway systems of tomorrow. It employs several elements that work together to improve traffic flow.
Modeling sulphur dioxide due to vehicular traffic using artificial neural network.
Singh, B K; Singh, A K; Prasad, S C
2009-10-01
The dispersion characteristics of vehicular exhaust are highly non-linear. Deterministic as well as numerical models are unable to predict these air pollutants precisely. An artificial neural network (ANN), having the capability to recognize the non-linearity present in noisy data, has been used in the present work to model the emission concentration of sulphur dioxide from vehicular sources in an urban area. The ANN model is developed with different combinations of traffic and meteorological parameters. The model prediction reveals that an artificial neural network trained with both traffic and meteorological parameters together shows better performance in predicting SO2 concentration.
Chambers, David W
2005-01-01
Groups naturally promote their strengths and prefer values and rules that give them an identity and an advantage. This shows up as generational tensions across cohorts who share common experiences, including common elders. Dramatic cultural events in America since 1925 can help create an understanding of the differing value structures of the Silents, the Boomers, Gen Xers, and the Millennials. Differences in how these generations see motivation and values, fundamental reality, relations with others, and work are presented, as are some applications of these differences to the dental profession.
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
A deterministic, gigabit serial timing, synchronization and data link for the RHIC LLRF
Hayes, T.; Smith, K.S.; Severino, F.
2011-03-28
A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.
Nagel, K.; Paczuski, M.
1995-04-01
We study a single-lane traffic model that is based on human driving behavior. The outflow from a traffic jam self-organizes to a critical state of maximum throughput. Small perturbations of the outflow far downstream create emergent traffic jams with a power-law distribution P(t) ~ t^(-3/2) of lifetimes t. On varying the vehicle density in a closed system, this critical state separates lamellar and jammed regimes and exhibits 1/f noise in the power spectrum. Using random walk arguments, in conjunction with a cascade equation, we develop a phenomenological theory that predicts the critical exponents for this transition and explains the self-organizing behavior. These predictions are consistent with all of our numerical results.
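The t^(-3/2) lifetime exponent is the classic first-return exponent of a one-dimensional random walk, which is the heart of the random-walk argument mentioned above. A small, self-contained illustration (sample sizes, cutoff, and the survival-based exponent estimate are all illustrative choices):

```python
import math
import random

def first_return_time(max_steps, rng):
    """Steps until a symmetric 1-D random walk first returns to the origin."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))
        if pos == 0:
            return t
    return None  # censored: no return within max_steps

rng = random.Random(1)
times = [first_return_time(1024, rng) for _ in range(20000)]

# Survival P(T > t) ~ t^(-1/2) is equivalent to a lifetime density ~ t^(-3/2)
s16 = sum(t is None or t > 16 for t in times) / len(times)
s256 = sum(t is None or t > 256 for t in times) / len(times)
exponent = -math.log(s256 / s16) / math.log(256 / 16)  # ≈ 0.5 for the survival law
```

The estimated survival exponent of about 1/2 corresponds to the density exponent 3/2 quoted in the abstract.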
Deterministic Modeling of the High Temperature Test Reactor
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's Function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
Parkinson's disease classification using gait analysis via deterministic learning.
Zeng, Wei; Liu, Fenglin; Wang, Qinghui; Wang, Ying; Ma, Limin; Zhang, Yu
2016-10-28
Gait analysis plays an important role in maintaining the well-being of human mobility and health care, and is a valuable tool for obtaining quantitative information on motor deficits in Parkinson's disease (PD). In this paper, we propose a method to classify (diagnose) patients with PD and healthy control subjects using gait analysis via deterministic learning theory. The classification approach consists of two phases: a training phase and a classification phase. In the training phase, gait characteristics represented by the gait dynamics are derived from the vertical ground reaction forces under the usual and self-selected paces of the subjects. The gait dynamics underlying gait patterns of healthy controls and PD patients are locally accurately approximated by radial basis function (RBF) neural networks. The obtained knowledge of approximated gait dynamics is stored in constant RBF networks. The gait patterns of healthy controls and PD patients constitute a training set. In the classification phase, a bank of dynamical estimators is constructed for all the training gait patterns. Prior knowledge of gait dynamics represented by the constant RBF networks is embedded in the estimators. By comparing the set of estimators with a test gait pattern of a certain PD patient to be classified (diagnosed), a set of classification errors are generated. The average L1 norms of the errors are taken as the classification measure between the dynamics of the training gait patterns and the dynamics of the test PD gait pattern according to the smallest error principle. When the gait patterns of 93 PD patients and 73 healthy controls are classified with five-fold cross-validation method, the accuracy, sensitivity and specificity of the results are 96.39%, 96.77% and 95.89%, respectively. Based on the results, it may be claimed that the features and the classifiers used in the present study could effectively separate the gait patterns between the groups of PD patients and healthy
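The two-phase scheme described above, fitting RBF models of each training pattern's dynamics and then classifying a test series by the smallest average L1 prediction error, can be sketched on toy 1-D dynamics (the logistic-map "gait dynamics", network sizes, and widths are illustrative assumptions, not the authors' setup or data):

```python
import numpy as np

def rbf(x, centers, width=0.1):
    """Gaussian RBF feature matrix for a 1-D state."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

def fit_dynamics(series, centers):
    """Least-squares RBF approximation of the one-step map x[t] -> x[t+1]."""
    w, *_ = np.linalg.lstsq(rbf(series[:-1], centers), series[1:], rcond=None)
    return w

def classify(series, models, centers):
    """Index of the stored model with the smallest average L1 prediction error."""
    phi = rbf(series[:-1], centers)
    errs = [np.mean(np.abs(series[1:] - phi @ w)) for w in models]
    return int(np.argmin(errs))

def logistic(r, x0, n):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

centers = np.linspace(0.0, 1.0, 25)
models = [fit_dynamics(logistic(3.5, 0.30, 300), centers),   # "class 0" dynamics
          fit_dynamics(logistic(3.9, 0.30, 300), centers)]   # "class 1" dynamics
print(classify(logistic(3.5, 0.41, 200), models, centers))   # → 0
```

A test series generated by the same dynamics as a stored model yields near-zero prediction error for that model, so the smallest-error principle recovers the generating class.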
2013 Traffic Safety Culture Index
2013 Traffic Safety Culture Index (January 2014). Sponsor: AAA Foundation for Traffic Safety.
Expanding Regional Airport Usage to Accommodate Increased Air Traffic Demand
NASA Technical Reports Server (NTRS)
Russell, Carl R.
2009-01-01
Small regional airports present an underutilized source of capacity in the national air transportation system. This study sought to determine whether a 50 percent increase in national operations could be achieved by limiting demand growth at large hub airports and instead growing traffic levels at the surrounding regional airports. This demand scenario for future air traffic in the United States was generated and used as input to a 24-hour simulation of the national airspace system. Results of the demand generation process and metrics predicting the simulation results are presented, in addition to the actual simulation results. The demand generation process showed that sufficient runway capacity exists at regional airports to offload a significant portion of traffic from hub airports. Predictive metrics forecast a large reduction of delays at most major airports when demand is shifted. The simulation results then show that offloading hub traffic can significantly reduce nationwide delays.
ERIC Educational Resources Information Center
Dickman, Frances Baker, Ed.
1988-01-01
Seven papers discuss current issues and applied social research concerning alcohol traffic safety. Prevention, policy input, methodology, planning strategies, anti-drinking/driving programs, social-programmatic orientations of Mothers Against Drunk Driving, Kansas Driving Under the Influence Law, New Jersey Driving While Impaired Programs,…
Surface Traffic Management Research
NASA Technical Reports Server (NTRS)
Jung, Yoo Chul
2012-01-01
This presentation discusses an overview of the surface traffic management research conducted by NASA Ames. The concept and human-in-the-loop simulation of the Spot and Runway Departure Advisor (SARDA), an integrated decision support tool for the tower controllers and airline ramp operators, is also discussed.
ERIC Educational Resources Information Center
Edwards, Arthur W.
1977-01-01
The importance of energy conservation is developed in this simulation. Children draw an automobile and then are asked to drive it through the classroom roadways. When a traffic jam results, students offer ways to eliminate it. The importance of mass transportation and car pools is stressed by the teacher. (MA)
Deterministic teleportation of electrons in a quantum dot nanostructure.
de Visser, R L; Blaauboer, M
2006-06-23
We present a proposal for deterministic quantum teleportation of electrons in a semiconductor nanostructure consisting of a single and a double quantum dot. The central issue addressed in this Letter is how to design and implement the most efficient--in terms of the required number of single and two-qubit operations--deterministic teleportation protocol for this system. Using a group-theoretical analysis, we show that deterministic teleportation requires a minimum of three single-qubit rotations and two entangling (square root SWAP) operations. These can be implemented for spin qubits in quantum dots using electron-spin resonance (for single-spin rotations) and exchange interaction (for square root SWAP operations).
Deterministic sensing matrices in compressive sensing: a survey.
Nguyen, Thu L N; Shin, Yoan
2013-01-01
Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. These matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of the random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
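One well-known deterministic construction of the kind such surveys cover is DeVore's binary matrix built from polynomials over a prime field: rows are indexed by points (x, y) of Z_p x Z_p, columns by polynomials of degree < r, with a 1 wherever the polynomial's graph passes through the point. A sketch (the parameters p = 5, r = 2 are an arbitrary small example):

```python
import itertools

def devore_matrix(p=5, r=2):
    """Columns of a p^2 x p^r binary sensing matrix; each column has p ones."""
    cols = []
    for coeffs in itertools.product(range(p), repeat=r):
        col = [0] * (p * p)
        for x in range(p):
            y = sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
            col[x * p + y] = 1
        cols.append(col)
    return cols

cols = devore_matrix()
# Two distinct polynomials of degree < r agree in at most r-1 points, so any
# two distinct columns overlap in at most r-1 positions; after normalizing
# columns by sqrt(p), the coherence is at most (r-1)/p.
```

The low, provable coherence is exactly the structural guarantee that makes deterministic matrices attractive compared to random ones.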
Estimating the epidemic threshold on networks by deterministic connections
Li, Kezan; Zhu, Guanghu; Fu, Xinchu; Small, Michael
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
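The spectral quantity underlying such threshold estimates is the largest eigenvalue of the (deterministic) adjacency matrix: in the standard SIS mean-field result the epidemic threshold is 1/λ_max. A stdlib-only sketch (the shift by the identity and the star-graph example are illustrative choices):

```python
def spectral_radius(adj, iters=500):
    """Power iteration on A + I (the shift avoids oscillation on bipartite graphs)."""
    n = len(adj)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)      # dominant eigenvalue estimate of A + I
        v = [x / lam for x in w]          # renormalize the iterate
    return lam - 1.0                      # undo the +I shift

# Star network: hub 0 linked to 4 leaves; largest adjacency eigenvalue is 2.
star = [[0, 1, 1, 1, 1],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0]]
print(round(spectral_radius(star), 6))  # → 2.0, i.e. mean-field threshold ~ 1/2
```

Bounding λ_max using only the deterministic part of the network, as the abstract describes, then brackets the threshold from above and below.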
Highway traffic noise prediction based on GIS
NASA Astrophysics Data System (ADS)
Zhao, Jianghua; Qin, Qiming
2014-05-01
Before building a new road, we need to predict the traffic noise generated by vehicles. Traditional traffic noise prediction methods are based on certain locations and they are not only time-consuming, high cost, but also cannot be visualized. Geographical Information System (GIS) can not only solve the problem of manual data processing, but also can get noise values at any point. The paper selected a road segment from Wenxi to Heyang. According to the geographical overview of the study area and the comparison between several models, we combine the JTG B03-2006 model and the HJ2.4-2009 model to predict the traffic noise depending on the circumstances. Finally, we interpolate the noise values at each prediction point and then generate contours of noise. By overlaying the village data on the noise contour layer, we can get the thematic maps. The use of GIS for road traffic noise prediction greatly facilitates the decision-makers because of GIS spatial analysis function and visualization capabilities. We can clearly see the districts where noise are excessive, and thus it becomes convenient to optimize the road line and take noise reduction measures such as installing sound barriers and relocating villages and so on.
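A generic sketch of the kind of computation done at each GIS prediction point: model the road as discrete point sources and sum their contributions energetically. This is plain geometric spreading, not the JTG B03-2006 or HJ2.4-2009 models used in the paper, and the source level L0 is a hypothetical parameter:

```python
import math

L0, R0 = 80.0, 1.0  # hypothetical source level (dB) at reference distance 1 m

def point_level(src, rcv):
    """Level from one point source with spherical spreading (-6 dB per doubling)."""
    r = max(math.dist(src, rcv), R0)
    return L0 - 20.0 * math.log10(r / R0)

def total_level(sources, rcv):
    """Energetic (incoherent) sum of the individual source contributions."""
    return 10.0 * math.log10(sum(10.0 ** (point_level(s, rcv) / 10.0)
                                 for s in sources))

road = [(float(x), 0.0) for x in range(-50, 51, 5)]  # road discretized along y = 0
print(round(total_level(road, (0.0, 30.0)), 1))      # receiver 30 m from the road
```

Evaluating `total_level` over a grid of receiver points yields the surface from which noise contours and thematic maps are interpolated.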
Onset of traffic congestion in complex networks.
Zhao, Liang; Lai, Ying-Cheng; Park, Kwangho; Ye, Nong
2005-02-01
Free traffic flow on a complex network is key to its normal and efficient functioning. Recent works indicate that many realistic networks possess connecting topologies with a scale-free feature: the probability distribution of the number of links at nodes, or the degree distribution, contains a power-law component. A natural question is then how the topology influences the dynamics of traffic flow on a complex network. Here we present two models to address this question, taking into account the network topology, the information-generating rate, and the information-processing capacity of individual nodes. For each model, we study four kinds of networks: scale-free, random, and regular networks and Cayley trees. In the first model, the capacity of packet delivery of each node is proportional to its number of links, while in the second model, it is proportional to the number of shortest paths passing through the node. We find, in both models, that there is a critical rate of information generation, below which the network traffic is free but above which traffic congestion occurs. Theoretical estimates are given for the critical point. For the first model, scale-free networks and random networks are found to be more tolerant to congestion. For the second model, the congestion condition is independent of network size and topology, suggesting that this model may be practically useful for designing communication protocols.
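A common mean-field estimate of the critical generation rate in such models is that congestion sets in when the busiest node saturates: with uniform delivery capacity C, R_c = C(N-1)/B_max, where B_max is the largest node betweenness. A stdlib sketch (this estimate and the unique-shortest-path assumption are simplifications for illustration, not the paper's exact derivation):

```python
from collections import deque

def bfs_parents(adj, s):
    """BFS tree (parent pointers) from s; assumes a connected graph."""
    parent, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def betweenness(adj):
    """Per node: ordered (s, t) pairs whose shortest path crosses it
    (assumes unique shortest paths, e.g. on a tree)."""
    b = {v: 0 for v in adj}
    for s in adj:
        parent = bfs_parents(adj, s)
        for t in adj:
            if t != s:
                u = parent[t]
                while u is not None and u != s:
                    b[u] += 1
                    u = parent[u]
    return b

def critical_rate(adj, capacity=1.0):
    """Congestion onset estimate: R_c = C * (N - 1) / B_max."""
    return capacity * (len(adj) - 1) / max(betweenness(adj).values())

star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(critical_rate(star))  # hub carries all 12 leaf-to-leaf pairs: rate = 4/12
```

The star example makes the topology dependence concrete: every leaf-to-leaf path crosses the hub, so the hub saturates first and sets the critical rate.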
Identifying MMORPG Bots: A Traffic Analysis Approach
NASA Astrophysics Data System (ADS)
Chen, Kuan-Ta; Jiang, Jhih-Wei; Huang, Polly; Chu, Hao-Hua; Lei, Chin-Laung; Chen, Wen-Chin
2008-12-01
Massively multiplayer online role playing games (MMORPGs) have become extremely popular among network gamers. Despite their success, one of MMORPG's greatest challenges is the increasing use of game bots, that is, autoplaying game clients. The use of game bots is considered unsportsmanlike and is therefore forbidden. To keep games in order, game police, played by actual human players, often patrol game zones and question suspicious players. This practice, however, is labor-intensive and ineffective. To address this problem, we analyze the traffic generated by human players versus game bots and propose general solutions to identify game bots. Taking Ragnarok Online as our subject, we study the traffic generated by human players and game bots. We find that their traffic is distinguishable by 1) the regularity in the release time of client commands, 2) the trend and magnitude of traffic burstiness in multiple time scales, and 3) the sensitivity to different network conditions. Based on these findings, we propose four strategies and two ensemble schemes to identify bots. Finally, we discuss the robustness of the proposed methods against countermeasures of bot developers, and consider a number of possible ways to manage the increasingly serious bot problem.
Traffic Safety Facts, 2001: Pedestrians.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on U.S. traffic accidents involving pedestrians. Data tables include: (1) trends in pedestrian and total traffic fatalities, 1991-2001; (2) pedestrians killed and injured, by age group, 2001; (3) non-occupant traffic fatalities, 1991-2001; (4) pedestrian fatalities, by time of day and day of week,…
Pedestrians. Traffic Safety Facts, 2000.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on U.S. traffic accidents involving pedestrians. Data tables include: (1) trends in pedestrian and total traffic fatalities, 1990-2000; (2) pedestrians killed and injured, by age group, 2000; (3) non-occupant traffic fatalities, 1990-2000; (4) pedestrian fatalities, by time of day and day of week,…
Traffic Safety Facts, 2001: Pedalcylists.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on traffic accidents involving U.S. bicyclists. Data include: (1) trends in pedalcyclist and total traffic fatalities, 1991-2001; (2) non-occupant traffic fatalities, 1991-2001; (3) pedalcyclists killed and injured, and fatality and injury rates, by age and sex, 2000 [2001 population data by age group…
Pedalcylists. Traffic Safety Facts, 2000.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on traffic accidents involving U.S. bicyclists. Data include: (1) trends in pedalcyclist and total traffic fatalities, 1990-2000; (2) non-occupant traffic fatalities, 1990-2000; (3) pedalcyclists killed and injured, and fatality and injury rates, by age and sex, 2000; and (4) pedalcyclist traffic…
The seismic traffic footprint: Tracking trains, aircraft, and cars seismically
NASA Astrophysics Data System (ADS)
Riahi, Nima; Gerstoft, Peter
2015-04-01
Although naturally occurring vibrations have proven useful to probe the subsurface, the vibrations caused by traffic have not been explored much. Such data, however, are less sensitive to weather and low visibility compared to some common out-of-road traffic sensing systems. We study traffic-generated seismic noise measured by an array of 5200 geophones that covered a 7 × 10 km area in Long Beach (California, USA) with a receiver spacing of 100 m. This allows us to look into urban vibrations below the resolution of a typical city block. The spatiotemporal structure of the anthropogenic seismic noise intensity reveals the Blue Line Metro train activity, departing and landing aircraft in Long Beach Airport and their acceleration, and gives clues about traffic movement along the I-405 highway at night. As low-cost, stand-alone seismic sensors are becoming more common, these findings indicate that seismic data may be useful for traffic monitoring.
Traffic congestion and the lifetime of networks with moving nodes
NASA Astrophysics Data System (ADS)
Yang, Xianxia; Li, Jie; Pu, Cunlai; Yan, Meichen; Sharafat, Rajput Ramiz; Yang, Jian; Gakis, Konstantinos; Pardalos, Panos M.
2017-01-01
For many power-limited networks, such as wireless sensor networks and mobile ad hoc networks, maximizing the network lifetime is a primary concern in design and maintenance. We study the network lifetime from the perspective of network science. In our model, nodes moving in a square area are initially assigned a fixed amount of energy and consume it when delivering packets. We obtain four different traffic regimes: no, slow, fast, and absolute congestion, depending essentially on the packet generation rate. We derive the network lifetime by considering the specific regime of the traffic flow. We find that traffic congestion adversely affects network lifetime: high traffic congestion results in short network lifetime. We also discuss the impacts of factors such as communication radius, node moving speed, and routing strategy on network lifetime and traffic congestion.
Complexity of Monte Carlo and deterministic dose-calculation methods.
Börgers, C
1998-03-01
Grid-based deterministic dose-calculation methods for radiotherapy planning require the use of six-dimensional phase space grids. Because of the large number of phase space dimensions, a growing number of medical physicists appear to believe that grid-based deterministic dose-calculation methods are not competitive with Monte Carlo methods. We argue that this conclusion may be premature. Our results do suggest, however, that finite difference or finite element schemes with orders of accuracy greater than one will probably be needed if such methods are to compete well with Monte Carlo methods for dose calculations.
Deterministic and efficient quantum cryptography based on Bell's theorem
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-05-15
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
Deterministic extinction by mixing in cyclically competing species
NASA Astrophysics Data System (ADS)
Feldager, Cilie W.; Mitarai, Namiko; Ohta, Hiroki
2017-03-01
We consider a cyclically competing species model on a ring with global mixing at finite rate, which corresponds to the well-known Lotka-Volterra equation in the limit of infinite mixing rate. Within a perturbation analysis of the model from the infinite mixing rate, we provide analytical evidence that extinction occurs deterministically at sufficiently large but finite values of the mixing rate for any species number N ≥3 . Further, by focusing on the cases of rather small species numbers, we discuss numerical results concerning the trajectories toward such deterministic extinction, including global bifurcations caused by changing the mixing rate.
Investigating the Use of 3-D Deterministic Transport for Core Safety Analysis
H. D. Gougar; D. Scott
2004-04-01
An LDRD (Laboratory Directed Research and Development) project is underway at the Idaho National Laboratory (INL) to demonstrate the feasibility of using a three-dimensional multi-group deterministic neutron transport code (Attila®) to perform global (core-wide) criticality, flux and depletion calculations for safety analysis of the Advanced Test Reactor (ATR). This paper discusses the ATR, model development, capabilities of Attila, generation of the cross-section libraries, comparisons to experimental results for Advanced Fuel Cycle (AFC) concepts, and future work planned with Attila.
Seismic hazard assessment of Western Coastal Province of Saudi Arabia: deterministic approach
NASA Astrophysics Data System (ADS)
Rehman, Faisal; El-Hady, Sherif M.; Atef, Ali H.; Harbi, Hussein M.
2016-10-01
Seismic hazard assessment is carried out using a deterministic approach to evaluate the maximum expected earthquake ground motions along the Western Coastal Province of Saudi Arabia. The analysis incorporates a seismotectonic source model, determination of the maximum earthquake magnitude (Mmax), a set of appropriate ground motion predictive equations (GMPEs), and a logic tree sequence. The logic tree is built to assign weights to the ground motion scaling relationships. Contour maps of ground acceleration are generated at different spectral periods. These maps show that the largest ground-motion values emerge in the northern and southern regions of the western coastal province, in comparison with the central region.
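The logic-tree step described above reduces to a weighted combination of GMPE predictions; the two lambda "models" and the 0.6/0.4 weights below are purely illustrative stand-ins, not the GMPEs or weights used in the study:

```python
def logic_tree_pga(magnitude, distance, gmpes):
    """Combine ground-motion predictive equations (GMPEs) through a
    logic tree: weighted sum of each model's prediction.
    gmpes: list of (weight, model) pairs whose weights sum to 1; each
    model maps (magnitude, distance) to a ground-motion value."""
    return sum(w * model(magnitude, distance) for w, model in gmpes)

# Two illustrative (not real) GMPEs with logic-tree weights 0.6 / 0.4:
gmpes = [
    (0.6, lambda m, r: 0.1 * m - 0.01 * r),
    (0.4, lambda m, r: 0.12 * m - 0.012 * r),
]
pga = logic_tree_pga(6.0, 10.0, gmpes)
```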
Risk estimates for deterministic health effects of inhaled weapons grade plutonium.
Scott, Bobby R; Peterson, Vern L
2003-09-01
Risk estimates for deterministic effects of inhaled weapons-grade plutonium (WG Pu) are needed to evaluate potential serious harm to (1) U.S. Department of Energy nuclear workers from accidental or other work-place releases of WG Pu; and (2) the public from terrorist actions resulting in the release of WG Pu to the environment. Deterministic health effects (the most serious radiobiological consequences to humans) can arise when large amounts of WG Pu are taken into the body. Inhalation is considered the most likely route of intake during work-place accidents or during a nuclear terrorism incident releasing WG Pu to the environment. Our current knowledge about radiation-related harm is insufficient for generating precise estimates of risk for a given WG Pu exposure scenario. This relates largely to uncertainties associated with currently available risk and dosimetry models. Thus, rather than generating point estimates of risk, distributions that account for variability/uncertainty are needed to properly characterize potential harm to humans from a given WG Pu exposure scenario. In this manuscript, we generate and summarize risk distributions for deterministic radiation effects in the lungs of nuclear workers from inhaled WG Pu particles (standard isotopic mix). These distributions were developed using NUREG/CR-4214 risk models and time-dependent, dose conversion factor data based on Publication 30 of the International Commission on Radiological Protection. Dose conversion factors based on ICRP Publication 30 are more relevant to deterministic effects than are the dose conversion factors based on ICRP Publication 66, which relate to targets for stochastic effects. Risk distributions that account for NUREG/CR-4214 parameter and model uncertainties were generated using the Monte Carlo method. Risks were evaluated for both lethality (from radiation pneumonitis) and morbidity (due to radiation-induced respiratory dysfunction) and were found to depend strongly on absorbed
Traffic and emission simulation in China based on statistical methodology
NASA Astrophysics Data System (ADS)
Liu, Huan; He, Kebin; Barth, Matthew
2011-02-01
To better understand how traffic control can affect vehicle emissions, a novel TRaffic And Vehicle Emission Linkage (TRAVEL) approach was developed based on local traffic activity and emission data. This approach consists of a two-stage mapping from general traffic information to traffic flow patterns, and then to aggregated emission rates. In total, 39 traffic flow patterns and corresponding emission rates for light-duty and heavy-duty vehicles, classified by emission standard, are generated. As a case study, vehicle activity and emissions during the Beijing Olympics were simulated and compared to a business-as-usual (BAU) scenario. Approximately 42-65% of the gaseous pollutants and 24% of the particle pollutants from cars, taxis, and buses were reduced. These results are validated by traffic and air quality monitoring data during the Olympics, as well as by other emission inventory studies. This approach improves the ability to rapidly predict emission variation resulting from traffic control measures in several typical Chinese cities. Comments on the application of this approach, covering both its advantages and limitations, are included.
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages relate to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, either of which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison of the capability of both Monte Carlo and deterministic methods in day-to-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
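The attenuation-plus-build-up calculation such deterministic codes perform can be sketched in a few lines; the source strength, attenuation coefficient, and Berger build-up coefficients below are illustrative values, not parameters taken from the study:

```python
import math

def shielded_flux(s_rate, mu, x, r, a=1.0, b=0.05):
    """Point-source flux behind a slab shield, using exponential
    attenuation and a simple Berger-form build-up factor
    B = 1 + a*mu*x*exp(b*mu*x).

    s_rate : source emission rate (photons/s)
    mu     : linear attenuation coefficient of the shield (1/cm)
    x      : shield thickness (cm)
    r      : source-to-detector distance (cm)
    a, b   : Berger coefficients (illustrative values)
    """
    mux = mu * x
    buildup = 1.0 + a * mux * math.exp(b * mux)
    uncollided = s_rate * math.exp(-mux) / (4.0 * math.pi * r**2)
    return buildup * uncollided

# Example: 1e9 photons/s source, 5 cm shield with mu = 0.5/cm, detector at 1 m
flux = shielded_flux(1e9, 0.5, 5.0, 100.0)
```

The build-up factor corrects the uncollided flux for scattered photons; extrapolating it outside its tabulated range is exactly the error source the abstract warns about.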
Risk-based versus deterministic explosives safety criteria
Wright, R.E.
1996-12-01
The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, "DoD Ammunition and Explosives Safety Standard." Risk-based criteria selected for comparison are those used by the government of Switzerland, "Technical Requirements for the Storage of Ammunition (TLM 75)." The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.
A Deterministic Annealing Approach to Clustering AIRS Data
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.
Deterministic dense coding and faithful teleportation with multipartite graph states
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-05-15
We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the sufficient and necessary condition of a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
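The invertibility condition on the reduced adjacency matrix can be checked mechanically; the sketch below assumes the natural setting of arithmetic over GF(2) for graph-state adjacency matrices (the abstract does not spell this out), using Gaussian elimination modulo 2:

```python
def invertible_gf2(matrix):
    """Check whether a square 0/1 matrix is invertible over GF(2)
    by Gauss-Jordan elimination modulo 2 (XOR row operations)."""
    m = [row[:] for row in matrix]  # work on a copy
    n = len(m)
    for col in range(n):
        # Find a pivot row with a 1 in this column.
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return False  # rank deficient: no pivot, not invertible
        m[col], m[pivot] = m[pivot], m[col]
        # Eliminate this column from all other rows (XOR = addition mod 2).
        for r in range(n):
            if r != col and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[col])]
    return True
```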
Deterministic retrieval of complex Green's functions using hard X rays.
Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E
2009-01-30
A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory.
A Unit on Deterministic Chaos for Student Teachers
ERIC Educational Resources Information Center
Stavrou, D.; Assimopoulos, S.; Skordoulis, C.
2013-01-01
A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…
NASA Astrophysics Data System (ADS)
Beckenbauer, Thomas
Road traffic is the most interfering noise source in developed countries. According to a publication of the European Union (EU) at the end of the twentieth century [1], about 40% of the population in 15 EU member states is exposed to road traffic noise at mean levels exceeding 55 dB(A). Nearly 80 million people, 20% of the population, are exposed to levels exceeding 65 dB(A) during daytime and more than 30% of the population is exposed to levels exceeding 55 dB(A) during night time. Such high noise levels cause health risks and social disorders (aggressiveness, protest, and helplessness), interference of communication and disturbance of sleep; the long- and short-term consequences cause adverse cardiovascular effects, detrimental hormonal responses (stress hormones), and possible disturbance of the human metabolism (nutrition) and the immune system. Even performance at work and school could be impaired.
NASA Astrophysics Data System (ADS)
Davis, L. C.
2015-03-01
The Texas A&M Transportation Institute estimated that traffic congestion cost the United States $121 billion in 2011 (the latest data available). The cost is due to wasted time and fuel. In addition to accidents and road construction, factors contributing to congestion include large demand, instability of high-density free flow and selfish behavior of drivers, which produces self-organized traffic bottlenecks. Extensive data collected on instrumented highways in various countries have led to a better understanding of traffic dynamics. From these measurements, Boris Kerner and colleagues developed a new theory called three-phase theory. They identified three major phases of flow observed in the data: free flow, synchronous flow and wide moving jams. The intermediate phase is called synchronous because vehicles in different lanes tend to have similar velocities. This congested phase, characterized by lower velocities yet modestly high throughput, frequently occurs near on-ramps and lane reductions. At present there are only two widely used methods of congestion mitigation: ramp metering and the display of current travel-time information to drivers. To find more effective methods to reduce congestion, researchers perform large-scale simulations using models based on the new theories. An algorithm has been proposed to realize Wardrop equilibria with real-time route information. Such equilibria have equal travel time on alternative routes between a given origin and destination. An active area of current research is the dynamics of connected vehicles, which communicate wirelessly with other vehicles and the surrounding infrastructure. These systems show great promise for improving traffic flow and safety.
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means our proposed model reduces the computational complexity. PMID:26180842
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods
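The weight-window mechanism the abstract describes can be sketched independently of any transport code; the window bounds and the choice to restore roulette survivors to the window bottom are common conventions, assumed here for illustration:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Enforce a weight window [w_low, w_high] on a Monte Carlo particle.
    Returns the list of surviving particle weights; the expected total
    weight is preserved in both branches."""
    if weight > w_high:
        # Splitting: replace the particle by n copies of equal weight.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_low,
        # restoring the survivor's weight to the window bottom.
        if rng.random() < weight / w_low:
            return [w_low]
        return []
    return [weight]
```

Because splitting conserves total weight exactly and roulette conserves it in expectation, the window shapes where particles go without biasing the answer, which is exactly why it can enforce a user-specified particle distribution.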
Traffic Games: Modeling Freeway Traffic with Game Theory
Cortés-Berrueco, Luis E.; Gershenson, Carlos; Stephens, Christopher R.
2016-01-01
We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers’ interactions. PMID:27855176
Vardoulakis, Sotiris; Chalabi, Zaid; Fletcher, Tony; Grundy, Chris; Leonardi, Giovanni S
2008-05-15
In urban areas, road traffic is a major source of carcinogenic polycyclic aromatic hydrocarbons (PAH), thus any changes in traffic patterns are expected to affect PAH concentrations in ambient air. Exposure to PAH and other traffic-related air pollutants has often been quantified in a deterministic manner that disregards the various sources of uncertainty in the modelling systems used. In this study, we developed a generic method for handling uncertainty in population exposure models. The method was applied to quantify the uncertainty in population exposure to benzo[a]pyrene (BaP) before and after the implementation of a traffic management intervention. This intervention would affect the movement of vehicles in the studied area and consequently alter traffic emissions, pollutant concentrations and population exposure. Several models, including an emission calculator, a dispersion model and a Geographic Information System, were used to quantify the impact of the traffic management intervention. We established four exposure zones defined by the distance of residence postcode centroids from a major road or intersection. A stochastic method was used to quantify the uncertainty in the population exposure model. The method characterises uncertainty using probability measures and propagates it by applying Monte Carlo analysis. The overall model predicted that the traffic management scheme would lead to a minor reduction in mean population exposure to BaP in the studied area. However, the uncertainty associated with the exposure estimates was much larger than this reduction. The proposed method is generic and provides realistic estimates of population exposure to traffic-related pollutants, as well as characterising the uncertainty in these estimates. This method can be used within a decision support tool to evaluate the impact of alternative traffic management policies.
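The Monte Carlo propagation step can be sketched as follows; the lognormal input factors and the simple multiplicative exposure model are illustrative assumptions, not the study's actual emission-dispersion-GIS chain:

```python
import random

def mc_exposure(n=10000, seed=42):
    """Propagate uncertainty in emission and dispersion factors to a
    population-exposure estimate by Monte Carlo sampling; returns the
    mean and a central 90% interval."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        emission = rng.lognormvariate(0.0, 0.3)    # relative emission factor
        dispersion = rng.lognormvariate(0.0, 0.2)  # relative dispersion factor
        baseline = 1.0                             # baseline BaP level (illustrative units)
        samples.append(baseline * emission * dispersion)
    samples.sort()
    mean = sum(samples) / n
    p5, p95 = samples[int(0.05 * n)], samples[int(0.95 * n)]
    return mean, (p5, p95)

mean, (p5, p95) = mc_exposure()
```

Comparing the width of the (p5, p95) interval against the predicted change in the mean is exactly the comparison that let the authors conclude the uncertainty exceeded the intervention's effect.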
Deterministically Polarized Fluorescence from Single Dye Molecules Aligned in Liquid Crystal Host
Lukishova, S.G.; Schmid, A.W.; Knox, R.; Freivald, P.; Boyd, R. W.; Stroud, Jr., C. R.; Marshall, K.L.
2005-09-30
We demonstrated, for the first time to our knowledge, deterministically polarized fluorescence from single dye molecules. Planar-aligned nematic liquid crystal hosts provide deterministic alignment of single dye molecules in a preferred direction.
Zhang, Jie; Draxl, Caroline; Hopson, Thomas; Monache, Luca Delle; Vanvyve, Emilie; Hodge, Bri-Mathias
2015-10-01
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
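The analog ensemble idea in (ii) can be sketched in a few lines; the scalar forecasts and single-variable distance metric below are simplifying assumptions (operational implementations match on several weather variables over a time window):

```python
def analog_ensemble(history, current_forecast, k=5):
    """Given past (forecast, observation) pairs, find the k historical
    forecasts closest to the current forecast and return the matching
    observations as an ensemble. The ensemble mean serves as a
    deterministic estimate; its spread carries probabilistic information."""
    ranked = sorted(history, key=lambda fo: abs(fo[0] - current_forecast))
    analogs = [obs for _, obs in ranked[:k]]
    return sum(analogs) / len(analogs), analogs
```

The computational cost is just a search through the historical archive, which is why the abstract describes it as negligible next to running a fine-resolution NWP model.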
Deterministic ion beam material adding technology for high-precision optical surfaces.
Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin
2013-02-20
Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.
Economical Video Monitoring of Traffic
NASA Technical Reports Server (NTRS)
Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.
1986-01-01
Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. The lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.
Realistic Data-Driven Traffic Flow Animation Using Texture Synthesis.
Chao, Qianwen; Deng, Zhigang; Ren, Jiaping; Ye, Qianqian; Jin, Xiaogang
2017-01-11
We present a novel data-driven approach to populate virtual road networks with realistic traffic flows. Specifically, given a limited set of vehicle trajectories as the input samples, our approach first synthesizes a large set of vehicle trajectories. By taking the spatio-temporal information of traffic flows as a 2D texture, the generation of new traffic flows can be formulated as a texture synthesis process, which is solved by minimizing a newly developed traffic texture energy. The synthesized output captures the spatio-temporal dynamics of the input traffic flows, and the vehicle interactions in it strictly follow traffic rules. After that, we position the synthesized vehicle trajectory data to virtual road networks using a cage-based registration scheme, where a few traffic-specific constraints are enforced to maintain each vehicle's original spatial location and synchronize its motion in concert with its neighboring vehicles. Our approach is intuitive to control and scalable to the complexity of virtual road networks. We validated our approach through many experiments and paired comparison user studies.
An intelligent traffic controller
Kagolanu, K.; Fink, R.; Smartt, H.; Powell, R.; Larsen, E.
1995-12-01
A controller with advanced control logic can significantly improve traffic flows at intersections. In this vein, this paper explores fuzzy rules and algorithms to improve the intersection operation by rationalizing phase changes and green times. The fuzzy logic for control is enhanced by the exploration of neural networks for families of membership functions and for ideal cost functions. The concepts of fuzzy logic control are carried forth into the controller architecture. Finally, the architecture and the modules are discussed. In essence, the control logic and architecture of an intelligent controller are explored.
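A minimal sketch of the kind of fuzzy green-time rule the abstract describes. The membership shapes, thresholds, and rule consequents below are illustrative assumptions, not the paper's actual controller logic.

```python
# Hypothetical fuzzy green-time extension rule: membership functions and
# rule weights are assumed for illustration, not taken from the paper.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_extension(queue_len, arrival_rate):
    """Fuzzy inference: IF queue is long AND arrivals are high THEN extend
    green; defuzzified by a weighted average of crisp consequents."""
    long_q  = tri(queue_len, 5, 15, 25)
    high_a  = tri(arrival_rate, 0.2, 0.6, 1.0)
    short_q = tri(queue_len, 0, 5, 10)
    low_a   = tri(arrival_rate, 0.0, 0.2, 0.5)
    # Rule strength = min (fuzzy AND); consequents in seconds of extension.
    rules = [(min(long_q, high_a), 20.0),   # long queue, heavy arrivals
             (min(short_q, low_a),  0.0)]   # short queue, light arrivals
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

A real controller of this kind would tune the membership families (the paper explores neural networks for that) rather than fixing them by hand as done here.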
Enhancing traffic performance in hierarchical DHT system by exploiting network proximity
NASA Astrophysics Data System (ADS)
Zhong, Haifeng; Wu, Wei; Pei, Canhao; Zhang, Chengfeng
2009-08-01
Nowadays P2P systems have become increasingly popular for object distribution and file sharing, and the majority of Internet traffic is generated by P2P file sharing applications. However, those applications usually ignore the underlying proximity of physical nodes and the regionalization of file accessing. As a result, they generate a large amount of unnecessary interdomain transit traffic and increase response latency. In this paper, we propose a new traffic control approach to enhance P2P traffic locality and reduce cross-group transfer. Using analysis, we show that the method substantially improves node transfer efficiency and significantly reduces file access latency compared with native P2P applications.
Approaches to implementing deterministic models in a probabilistic framework
Talbott, D.V.
1995-04-01
The increasing use of results from probabilistic risk assessments in the decision-making process makes it ever more important to eliminate simplifications in probabilistic models that might lead to conservative results. One area in which conservative simplifications are often made is modeling the physical interactions that occur during the progression of an accident sequence. This paper demonstrates and compares different approaches for incorporating deterministic models of physical parameters into probabilistic models: parameter range binning, response curves, and integral deterministic models. An example that combines all three approaches in a probabilistic model for the handling of an energetic material (e.g. high explosive, rocket propellant,...) is then presented using a directed graph model.
Deterministic control of ferroelastic switching in multiferroic materials.
Balke, N; Choudhury, S; Jesse, S; Huijben, M; Chu, Y H; Baddorf, A P; Chen, L Q; Ramesh, R; Kalinin, S V
2009-12-01
Multiferroic materials showing coupled electric, magnetic and elastic orderings provide a platform to explore complexity and new paradigms for memory and logic devices. Until now, the deterministic control of non-ferroelectric order parameters in multiferroics has been elusive. Here, we demonstrate deterministic ferroelastic switching in rhombohedral BiFeO3 by domain nucleation with a scanning probe. We are able to select among final states that have the same electrostatic energy, but differ dramatically in elastic or magnetic order, by applying voltage to the probe while it is in lateral motion. We also demonstrate the controlled creation of a ferrotoroidal order parameter. The ability to control local elastic, magnetic and toroidal order parameters with an electric field will make it possible to probe local strain and magnetic ordering, and engineer various magnetoelectric, domain-wall-based and strain-coupled devices.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement.
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-10
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, its low loss rate, and its unusual ability to teleport the state of a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the error rejection of the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication.
On the secure obfuscation of deterministic finite automata.
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
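For reference, the object being obfuscated above is a deterministic finite automaton; a minimal evaluator is shown below. This is our illustration of the underlying machine model only; the paper's contribution is the cryptographic construction wrapped around such a machine, which is not sketched here.

```python
# Minimal DFA evaluator (illustrative; not the paper's obfuscated construction).
def run_dfa(transitions, start, accepting, word):
    """transitions: dict mapping (state, symbol) -> next state."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]  # deterministic: exactly one successor
    return state in accepting

# Example DFA (assumed for the demo): accepts binary strings containing
# an even number of '1' symbols.
even_ones = (
    {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0},  # transitions
    0,      # start state
    {0},    # accepting states
)
```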
Deterministic remote two-qubit state preparation in dissipative environments
NASA Astrophysics Data System (ADS)
Li, Jin-Fang; Liu, Jin-Ming; Feng, Xun-Li; Oh, C. H.
2016-05-01
We propose a new scheme for efficient remote preparation of an arbitrary two-qubit state, introducing two auxiliary qubits and using two Einstein-Podolsky-Rosen (EPR) states as the quantum channel in a non-recursive way. At variance with all existing schemes, our scheme accomplishes deterministic remote state preparation (RSP) with only one sender and the simplest entangled resource (say, EPR pairs). We construct the corresponding quantum logic circuit using a unitary matrix decomposition procedure and analytically obtain the average fidelity of the deterministic RSP process for dissipative environments. Our studies show that, while the average fidelity gradually decreases to a stable value without any revival in the Markovian regime, it decreases to the same stable value with a dampened revival amplitude in the non-Markovian regime. We also find that the average fidelity's approximate maximal value can be preserved for a long time if the non-Markovian and the detuning conditions are satisfied simultaneously.
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Uniform Deterministic Discrete Method for three dimensional systems
NASA Astrophysics Data System (ADS)
Li, Ben-Wen; Tao, Wen-Quan; Nie, Yu-Hong
1997-06-01
For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical-surface dividing method for a sending area element and the regular-icosahedron method for a sending volume element can handle the direct exchange area computation for any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicated that the UDDM can give very high numerical accuracy.
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
NASA Astrophysics Data System (ADS)
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of drafting seismic codes, is based on a Poissonian description of the temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with a log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems from the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of the strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites that are liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions: the looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long since been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
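The Poissonian occurrence model underlying PSHA reduces, for a single ground-motion level, to a one-line exceedance probability. A minimal sketch (standard textbook relation, not taken from this abstract):

```python
import math

def exceedance_probability(lam, T):
    """Probability of at least one exceedance in exposure time T (years),
    given an annual exceedance rate lam, under Poissonian occurrence:
    P = 1 - exp(-lam * T)."""
    return 1.0 - math.exp(-lam * T)
```

As a conventional check, the design-code level of 10% probability of exceedance in 50 years corresponds to a return period of about 475 years.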
Pathological tremors: Deterministic chaos or nonlinear stochastic oscillators?
NASA Astrophysics Data System (ADS)
Timmer, Jens; Häußler, Siegfried; Lauk, Michael; Lücking, Carl
2000-02-01
Pathological tremors exhibit a nonlinear oscillation that is not strictly periodic. We investigate whether the deviation from periodicity is due to nonlinear deterministic chaotic dynamics or due to nonlinear stochastic dynamics. To do so, we apply methods from linear and nonlinear time series analysis to tremor time series. The results of the different methods suggest that the considered types of pathological tremors represent nonlinear stochastic second order processes.
The deterministic SIS epidemic model in a Markovian random environment.
Economou, Antonis; Lopez-Herrero, Maria Jesus
2016-07-01
We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population.
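A toy sketch of the model class described above: a deterministic SIS flow whose infection and recovery rates switch with a background two-state continuous-time Markov chain. The specific rates, switching intensities, and the Euler integration are our illustrative assumptions, not the authors' computational approach.

```python
import math
import random

def simulate(N=1000, I0=10, T=50.0, dt=0.01, q01=0.5, q10=0.5, seed=1):
    """Deterministic SIS dynamics dI/dt = beta*I*(N-I)/N - gamma*I, with
    (beta, gamma) set by an environmental Markov chain (states 0 and 1)."""
    rng = random.Random(seed)
    rates = {0: (0.6, 0.2), 1: (0.1, 0.3)}   # assumed (beta, gamma) per environment
    env, I, t = 0, float(I0), 0.0
    next_switch = rng.expovariate(q01)        # first environmental jump time
    while t < T:
        if t >= next_switch:                  # environment jumps to the other state
            env = 1 - env
            next_switch = t + rng.expovariate(q01 if env == 0 else q10)
        beta, gamma = rates[env]
        I += dt * (beta * I * (N - I) / N - gamma * I)   # Euler step of the flow
        t += dt
    return I
```

Between jumps the trajectory is fully deterministic; the randomness enters only through the environmental process, which is the structure the paper exploits.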
Deterministic chaos control in neural networks on various topologies
NASA Astrophysics Data System (ADS)
Neto, A. J. F.; Lima, F. W. S.
2017-01-01
Using numerical simulations, we study the control of deterministic chaos in neural networks on various topologies like Voronoi-Delaunay, Barabási-Albert, Small-World networks and Erdös-Rényi random graphs by "pinning" the state of a "special" neuron. We show that the chaotic activity of the networks or graphs, when control is on, can become constant or periodic.
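One way to implement the "pinning" update the abstract refers to, sketched on a ring of coupled logistic maps standing in for a neural network. The coupling form, parameters, and pinned value are assumptions for the demo, and this sketch does not by itself demonstrate that control succeeds; that depends on topology and parameters, which is the paper's subject.

```python
import random

def evolve(n=10, steps=400, eps=0.4, r=4.0, pin=True, seed=3):
    """Ring of diffusively coupled logistic maps; if pin is True, the state
    of the 'special' node 0 is clamped each step (the pinning control)."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    f = lambda z: r * z * (1.0 - z)           # chaotic local map for r = 4
    for _ in range(steps):
        y = [(1 - eps) * f(x[i])
             + eps * 0.5 * (f(x[(i - 1) % n]) + f(x[(i + 1) % n]))
             for i in range(n)]
        if pin:
            y[0] = 0.75                       # clamp the special node's state
        x = y
    return x
```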
Deterministic binary vectors for efficient automated indexing of MEDLINE/PubMed abstracts.
Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R; Bernstam, Elmer V; Cohen, Trevor
2012-01-01
The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.
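A hedged sketch of the core idea: term vectors generated deterministically from a hash (so no store of random term vectors is needed), superposed into binary document vectors, and compared by Hamming distance. The dimensionality and hashing scheme below are illustrative assumptions, not the paper's exact design.

```python
import hashlib

DIM = 256  # assumed vector dimensionality for the demo

def term_vector(term):
    """Deterministic +/-1 vector derived from a SHA-256 hash of the term,
    so the same term always regenerates the same vector."""
    digest = hashlib.sha256(term.encode("utf-8")).digest()
    bits = []
    for byte in digest[:DIM // 8]:
        for i in range(8):
            bits.append(1 if (byte >> i) & 1 else -1)
    return bits

def doc_vector(terms):
    """Binary document vector: elementwise sign of the summed term vectors."""
    totals = [0] * DIM
    for term in terms:
        for j, b in enumerate(term_vector(term)):
            totals[j] += b
    return [1 if s >= 0 else 0 for s in totals]

def hamming(u, v):
    """Hamming distance between two binary vectors."""
    return sum(a != b for a, b in zip(u, v))
```

Because the term vectors are recomputable on demand, nothing beyond the hash function has to be retained to index future documents, which is the scalability point the abstract makes.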
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower as compared with deterministic and other standard statistical techniques. PMID:25650281
Probabilistic vs deterministic views in facing natural hazards
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Coviello, Velio
2015-04-01
Natural hazards can be mitigated through active or passive measures. Among the latter countermeasures, Early Warning Systems (EWSs) are playing an increasingly significant role. In particular, a growing number of studies investigate the reliability of landslide EWSs, their comparability to alternative protection measures, and their cost-effectiveness. EWSs, however, inevitably and intrinsically imply the concept of probability of occurrence and/or probability of error. Science has long since accepted and integrated the probabilistic nature of reality and its phenomena. The same cannot be said of other fields of knowledge, such as law or politics, with which scientists sometimes have to interact; these disciplines are in fact still tied to more deterministic views of life. The same is true of what is perceived by public opinion, which often demands, or even presumes, a deterministic type of answer to its needs. As an example, it might be easy for people to feel completely safe because an EWS has been installed. It is also easy for an administrator or a politician to help spread this false feeling, together with the idea of having dealt with the problem and done something definitive to face it. May geoethics play a role in creating a link between the probabilistic world of nature and science and the tendency of society toward a more deterministic view of things? Answering this question could help scientists feel more confident in planning and performing their research activities.
Non-equilibrium Thermodynamics of Piecewise Deterministic Markov Processes
NASA Astrophysics Data System (ADS)
Faggionato, A.; Gabrielli, D.; Ribezzi Crivellari, M.
2009-10-01
We consider a class of stochastic dynamical systems, called piecewise deterministic Markov processes, with states ( x, σ)∈Ω×Γ, Ω being a region in ℝ d or the d-dimensional torus, Γ being a finite set. The continuous variable x follows a piecewise deterministic dynamics, the discrete variable σ evolves by a stochastic jump dynamics and the two resulting evolutions are fully-coupled. We study stationarity, reversibility and time-reversal symmetries of the process. Increasing the frequency of the σ-jumps, the system behaves asymptotically as deterministic and we investigate the structure of its fluctuations (i.e. deviations from the asymptotic behavior), recovering in a non Markovian frame results obtained by Bertini et al. (Phys. Rev. Lett. 87(4):040601, 2001; J. Stat. Phys. 107(3-4):635-675, 2002; J. Stat. Mech. P07014, 2007; Preprint available online at http://www.arxiv.org/abs/0807.4457, 2008), in the context of Markovian stochastic interacting particle systems. Finally, we discuss a Gallavotti-Cohen-type symmetry relation with involution map different from time-reversal.
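A toy instance of such a process, for intuition only: the continuous variable x relaxes deterministically toward a level that depends on the discrete variable σ, while σ itself jumps at random times. The specific flow, rates, and the deterministic flip of σ are our illustrative assumptions, not the paper's model.

```python
import math
import random

def pdmp_path(T=10.0, r=2.0, seed=0):
    """Piecewise deterministic Markov process sketch: between sigma-jumps,
    x follows dx/dt = F[sigma] - x (solved exactly); jump times are
    exponential with rate r, and sigma flips 0 <-> 1 at each jump."""
    rng = random.Random(seed)
    F = {0: -1.0, 1: 1.0}              # assumed drift targets per discrete state
    x, sigma, t = 0.0, 0, 0.0
    path = [(t, x, sigma)]
    while t < T:
        tau = min(rng.expovariate(r), T - t)          # time to next event
        x = F[sigma] + (x - F[sigma]) * math.exp(-tau)  # exact deterministic flow
        t += tau
        sigma = 1 - sigma                              # discrete jump
        path.append((t, x, sigma))
    return path
```

Increasing the jump rate r is the regime the paper studies: the discrete variable averages out and the x-dynamics become asymptotically deterministic, with the jumps surviving only as fluctuations.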
How Does Quantum Uncertainty Emerge from Deterministic Bohmian Mechanics?
NASA Astrophysics Data System (ADS)
Solé, A.; Oriols, X.; Marian, D.; Zanghì, N.
2016-10-01
Bohmian mechanics is a theory that provides a consistent explanation of quantum phenomena in terms of point particles whose motion is guided by the wave function. In this theory, the state of a system of particles is defined by the actual positions of the particles and the wave function of the system; and the state of the system evolves deterministically. Thus, the Bohmian state can be compared with the state in classical mechanics, which is given by the positions and momenta of all the particles, and which also evolves deterministically. However, while in classical mechanics it is usually taken for granted and considered unproblematic that the state is, at least in principle, measurable, this is not the case in Bohmian mechanics. Due to the linearity of the quantum dynamical laws, one essential component of the Bohmian state, the wave function, is not directly measurable. Moreover, it turns out that the measurement of the other component of the state — the positions of the particles — must be mediated by the wave function; a fact that in turn implies that the positions of the particles, though measurable, are constrained by absolute uncertainty. This is the key to understanding how Bohmian mechanics, despite being deterministic, can account for all quantum predictions, including quantum randomness and uncertainty.
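The deterministic evolution referred to above is the standard Bohmian guiding equation together with the Schrödinger equation (our transcription in conventional notation; it does not appear in the abstract):

```latex
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,
\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q_1,\dots,Q_N),
\qquad
i\hbar\,\frac{\partial \psi}{\partial t} \;=\; H\psi .
```

Here $Q_k$ is the actual position of particle $k$; the velocity field is evaluated at the actual configuration, which is how the wave function mediates any measurement of the positions.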
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
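A rough illustration of the fission-source power iteration that methods like Fission Matrix Acceleration aim to speed up: the effective multiplication factor k is the dominant eigenvalue of a fission matrix F, and the converged source S its eigenvector. The tiny made-up matrix and the plain (unaccelerated) iteration below are a textbook sketch, not the thesis' implementation.

```python
def power_iterate(F, iters=500):
    """Unaccelerated fission-source iteration: S_{n+1} = F S_n / k_n.
    Returns (k, S) with the source normalized to sum to 1."""
    n = len(F)
    S = [1.0 / n] * n                  # flat initial source guess
    k = 1.0
    for _ in range(iters):
        FS = [sum(F[i][j] * S[j] for j in range(n)) for i in range(n)]
        k = sum(FS)                    # eigenvalue estimate (S sums to 1)
        S = [x / k for x in FS]
    return k, S
```

Convergence of this plain iteration slows as the dominance ratio (second-to-first eigenvalue ratio) approaches 1, which is exactly the regime the thesis targets.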
Demographic noise can reverse the direction of deterministic selection
Constable, George W. A.; Rogers, Tim; McKane, Alan J.; Tarnita, Corina E.
2016-01-01
Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to r−K theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085
Deterministic form correction of extreme freeform optical surfaces
NASA Astrophysics Data System (ADS)
Lynch, Timothy P.; Myer, Brian W.; Medicus, Kate; DeGroote Nelson, Jessica
2015-10-01
The blistering pace of recent technological advances has led lens designers to rely increasingly on freeform optical components as crucial pieces of their designs. As these freeform components increase in geometrical complexity and continue to deviate further from traditional optical designs, the optical manufacturing community must rethink their fabrication processes in order to keep pace. To meet these new demands, Optimax has developed a variety of new deterministic freeform manufacturing processes. Combining traditional optical fabrication techniques with cutting edge technological innovations has yielded a multifaceted manufacturing approach that can successfully handle even the most extreme freeform optical surfaces. In particular, Optimax has placed emphasis on refining the deterministic form correction process. By developing many of these procedures in house, changes can be implemented quickly and efficiently in order to rapidly converge on an optimal manufacturing method. Advances in metrology techniques allow for rapid identification and quantification of irregularities in freeform surfaces, while deterministic correction algorithms precisely target features on the part and drastically reduce overall correction time. Together, these improvements have yielded significant advances in the realm of freeform manufacturing. With further refinements to these and other aspects of the freeform manufacturing process, the production of increasingly radical freeform optical components is quickly becoming a reality.
Large scale traffic simulations
Nagel, K.; Barrett, C.L. |; Rickert, M. |
1997-04-01
Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
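A minimal sketch of the single vehicle-update step at the core of such microsimulations, in the rule-184 cellular-automaton style (speed limit 1, circular road). This is our illustration of the "particle update" being counted, not any project's actual driving dynamics, which the review notes are far richer.

```python
def step(road):
    """One parallel update of a circular road: 1 = car, 0 = empty cell.
    Each car advances one cell if and only if the cell ahead is empty."""
    n = len(road)
    nxt = [0] * n
    for i in range(n):
        if road[i] == 1:
            j = (i + 1) % n
            if road[j] == 0:
                nxt[j] = 1        # car advances into the empty cell
            else:
                nxt[i] = 1        # blocked by the car ahead: stays put
    return nxt
```

Because each cell needs only a bit and the update is local, this rule admits the single-bit, highly vectorizable implementations the review mentions.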
Automatic Data Traffic Control on DSM Architecture
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry; Kwak, Dochan (Technical Monitor)
2000-01-01
We study data traffic on distributed shared memory machines and conclude that data placement and grouping improve the performance of scientific codes. We present several methods which a user can employ to improve data traffic in their code. We report on the implementation of a tool which detects the code fragments causing data congestion and advises the user on improvements to data routing in these fragments. The capabilities of the tool include deduction of data alignment and affinity from the source code; detection of code constructs having abnormally high cache or TLB misses; and generation of data placement constructs. We demonstrate the capabilities of the tool in experiments with the NAS parallel benchmarks and with a simple computational fluid dynamics application, ARC3D.
Design of automated system for management of arrival traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1989-01-01
The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. The focus is on the design of two of these tools: the Descent Advisor, which provides automation tools for managing descent traffic, and the Traffic Management Advisor, which generates optimum landing schedules. The algorithms, automation modes, and graphical interfaces incorporated in the design are described.
Traffic-driven epidemic spreading in correlated networks
NASA Astrophysics Data System (ADS)
Yang, Han-Xin; Tang, Ming; Lai, Ying-Cheng
2015-06-01
In spite of the extensive previous efforts on traffic dynamics and epidemic spreading in complex networks, the problem of traffic-driven epidemic spreading on correlated networks has not been addressed. Interestingly, we find that the epidemic threshold, a fundamental quantity underlying the spreading dynamics, exhibits a nonmonotonic behavior in that it can be minimized for some critical value of the assortativity coefficient, a parameter characterizing the network correlation. To understand this phenomenon, we use the degree-based mean-field theory to calculate the traffic-driven epidemic threshold for correlated networks. The theory predicts that the threshold is inversely proportional to the packet-generation rate and the largest eigenvalue of the betweenness matrix. We obtain consistency between theory and numerics. Our results may provide insights into the important problem of controlling and/or harnessing real-world epidemic spreading dynamics driven by traffic flows.
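The reported relation, with the threshold inversely proportional to the packet-generation rate and the largest eigenvalue of the betweenness matrix, can be sketched numerically. The power-iteration routine, the proportionality constant `c`, and the tiny example matrix below are our illustrative assumptions; only the inverse-proportionality structure comes from the abstract.

```python
def largest_eigenvalue(M, iters=200):
    """Dominant eigenvalue of a nonnegative square matrix via power iteration."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)       # eigenvalue estimate
        v = [x / lam for x in w]           # renormalize the iterate
    return lam

def epidemic_threshold(betweenness_matrix, packet_rate, c=1.0):
    """beta_c = c / (packet_rate * lambda_max); c is an assumed constant."""
    return c / (packet_rate * largest_eigenvalue(betweenness_matrix))
```

The structural point is visible directly: doubling the packet-generation rate halves the threshold, so heavier traffic makes the network easier for the epidemic to invade.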
Sub-surface single ion detection in diamond: A path for deterministic color center creation
NASA Astrophysics Data System (ADS)
Abraham, John; Aguirre, Brandon; Pacheco, Jose; Camacho, Ryan; Bielejec, Edward; Sandia National Laboratories Team
Deterministic single color center creation remains a critical milestone for the integrated use of diamond color centers. It depends on three components: focused ion beam implantation to control the location, yield improvement to control the activation, and single ion implantation to control the number of implanted ions. A surface electrode detector has been fabricated on diamond where the electron hole pairs generated during ion implantation are used as the detection signal. Results will be presented demonstrating single ion detection. The detection efficiency of the device will be described as a function of implant energy and device geometry. It is anticipated that the controlled introduction of single dopant atoms in diamond will provide a basis for deterministic single localized color centers. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy Office of Science. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombarding a tritium target with deuterons. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The reactivity, in dollar units, inferred by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
NASA Astrophysics Data System (ADS)
Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Ruiz de la Cruz, A.; Solis, J.
2014-02-01
Femtosecond laser pulse temporal shaping techniques have led to important advances in different research fields like photochemistry, laser physics, non-linear optics, biology, or materials processing. This success is partly related to the use of optimal control algorithms. Due to the high dimensionality of the solution and control spaces, evolutionary algorithms are extensively applied and, among them, genetic ones have reached the status of a standard adaptive strategy. Still, their use is normally accompanied by a reduction of the problem complexity by different modalities of parameterization of the spectral phase. Exploiting Rabitz and co-authors' ideas about the topology of quantum landscapes, in this work we analyze the optimization of two different problems under a deterministic approach, using a multiple one-dimensional search (MODS) algorithm. In the first case we explore the determination of the optimal phase mask required for generating arbitrary temporal pulse shapes and compare the performance of the MODS algorithm to the standard iterative Gerchberg-Saxton algorithm. Based on the good performance achieved, the same method has been applied for optimizing two-photon absorption starting from temporally broadened laser pulses, or from laser pulses temporally and spectrally distorted by non-linear absorption in air, obtaining similarly good results which confirm the validity of the deterministic search approach.
Deterministic time-reversible thermostats: chaos, ergodicity, and the zeroth law of thermodynamics
NASA Astrophysics Data System (ADS)
Patra, Puneet Kumar; Sprott, Julien Clinton; Hoover, William Graham; Griswold Hoover, Carol
2015-09-01
The relative stability and ergodicity of deterministic time-reversible thermostats, both singly and in coupled pairs, are assessed through their Lyapunov spectra. Five types of thermostat are coupled to one another through a single Hooke's-law harmonic spring. The resulting dynamics shows that three specific thermostat types, Hoover-Holian, Ju-Bulgac, and Martyna-Klein-Tuckerman, have very similar Lyapunov spectra in their equilibrium four-dimensional phase spaces and when coupled in equilibrium or nonequilibrium pairs. All three of these oscillator-based thermostats are shown to be ergodic, with smooth analytic Gaussian distributions in their extended phase spaces (coordinate, momentum, and two control variables). Evidently these three ergodic and time-reversible thermostat types are particularly useful as statistical-mechanical thermometers and thermostats. Each of them generates Gibbs' universal canonical distribution internally as well as for systems to which they are coupled. Thus they obey the zeroth law of thermodynamics, as a good heat bath should. They also provide dissipative heat flow with relatively small nonlinearity when two or more such temperature baths interact and provide useful deterministic replacements for the stochastic Langevin equation.
Godt, J.W.; Baum, R.L.; Savage, W.Z.; Salciarini, D.; Schulz, W.H.; Harp, E.L.
2008-01-01
Application of transient deterministic shallow landslide models over broad regions for hazard and susceptibility assessments requires information on rainfall, topography and the distribution and properties of hillside materials. We survey techniques for generating the spatial and temporal input data for such models and present an example using a transient deterministic model that combines an analytic solution to assess the pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Pore-pressures and factors of safety are computed on a cell-by-cell basis and can be displayed or manipulated in a grid-based GIS. Input data are high-resolution (1.8 m) topographic information derived from LiDAR data and simple descriptions of initial pore-pressure distribution and boundary conditions for a study area north of Seattle, Washington. Rainfall information is taken from a previously defined empirical rainfall intensity-duration threshold and material strength and hydraulic properties were measured both in the field and laboratory. Results are tested by comparison with a shallow landslide inventory. Comparison of results with those from static infinite-slope stability analyses assuming fixed water-table heights shows that the spatial prediction of shallow landslide susceptibility is improved using the transient analyses; moreover, results can be depicted in terms of the rainfall intensity and duration known to trigger shallow landslides in the study area.
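The infinite-slope stability calculation mentioned above can be sketched for a single grid cell (the transient pore-pressure solution is omitted here). This follows the standard infinite-slope formula with pore pressure, e.g. as in Iverson's transient framework; all parameter values are illustrative defaults, not those of the Seattle study:

```python
import math

def factor_of_safety(slope_deg, depth, cohesion, friction_deg,
                     unit_weight=20000.0, gamma_w=9810.0, pressure_head=0.0):
    """Infinite-slope factor of safety for one cell.
    slope_deg: slope angle (degrees); depth: failure-surface depth (m);
    cohesion: soil cohesion (Pa); friction_deg: friction angle (degrees);
    unit_weight: soil unit weight (N/m^3); pressure_head: pore-pressure head (m).
    FS < 1 indicates predicted instability."""
    beta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    frictional = math.tan(phi) / math.tan(beta)
    cohesive = (cohesion - pressure_head * gamma_w * math.tan(phi)) / (
        unit_weight * depth * math.sin(beta) * math.cos(beta))
    return frictional + cohesive
```

A dry, cohesionless slope with friction angle equal to the slope angle sits exactly at FS = 1, which is a quick sanity check on the implementation.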
TRAFFIC AND TRANSPORTATION. (BUSINESS TECHNOLOGY).
ERIC Educational Resources Information Center
North Carolina State Dept. of Community Colleges, Raleigh.
THE PREEMPLOYMENT, 6-QUARTER CURRICULUM IS FOR USE IN TECHNICAL INSTITUTES AND COMMUNITY COLLEGES. ITS PURPOSE IS TO PROVIDE TRAINING IN NEW TECHNIQUES AND UNDERSTANDING OF THE LATEST STATE AND FEDERAL REGULATIONS APPLICABLE TO TRAFFIC AND TRANSPORTATION. GRADUATES OF THIS CURRICULUM MAY SEEK CAREER OPPORTUNITIES AS TRAFFIC REPRESENTATIVES, CLAIMS…
Probabilistic description of traffic flow
NASA Astrophysics Data System (ADS)
Mahnke, R.; Kaupužs, J.; Lubashevsky, I.
2005-03-01
A stochastic description of traffic flow, called probabilistic traffic flow theory, is developed. The general master equation is applied to relatively simple models to describe the formation and dissolution of traffic congestions. Our approach is mainly based on spatially homogeneous systems like periodically closed circular rings without on- and off-ramps. We consider a stochastic one-step process of growth or shrinkage of a car cluster (jam). As generalization we discuss the coexistence of several car clusters of different sizes. The basic problem is to find a physically motivated ansatz for the transition rates of the attachment and detachment of individual cars to a car cluster consistent with the empirical observations in real traffic. The emphasis is put on the analogy with first-order phase transitions and nucleation phenomena in physical systems like supersaturated vapour. The results are summarized in the flux-density relation, the so-called fundamental diagram of traffic flow, and compared with empirical data. Different regimes of traffic flow are discussed: free flow, congested mode as stop-and-go regime, and heavy viscous traffic. The traffic breakdown is studied based on the master equation as well as the Fokker-Planck approximation to calculate mean first passage times or escape rates. Generalizations are developed to allow for on-ramp effects. The calculated flux-density relation and characteristic breakdown times coincide with empirical data measured on highways. Finally, a brief summary of the stochastic cellular automata approach is given.
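A minimal illustration of the one-step master-equation picture is a Gillespie-type stochastic simulation of cluster growth and shrinkage. The transition-rate ansatz below is purely illustrative (attachment proportional to the number of free cars, constant detachment), not the physically motivated ansatz sought in the paper:

```python
import random

def simulate_cluster(n_cars=60, w_minus=1.0, t_max=200.0, seed=1):
    """Gillespie simulation of a one-step (birth-death) car-cluster process
    on a closed ring. Returns the cluster size at time t_max."""
    rng = random.Random(seed)
    n = 0      # current cluster (jam) size
    t = 0.0
    while t < t_max:
        w_plus = 0.05 * (n_cars - n)          # attachment of a free car (toy rate)
        w_det = w_minus if n > 0 else 0.0     # detachment of the cluster's lead car
        total = w_plus + w_det
        if total == 0.0:
            break
        t += rng.expovariate(total)           # exponential waiting time
        n += 1 if rng.random() < w_plus / total else -1
    return n
```

With these toy rates the process fluctuates around the size at which attachment and detachment balance, the analogue of the nucleation equilibrium discussed in the abstract.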
Traffic Calming: A Social Issue
ERIC Educational Resources Information Center
Crouse, David W.
2004-01-01
Substantial urban growth fueled by a strong economy often results in heavy traffic thus making streets less hospitable. Traffic calming is one response to the pervasiveness of the automobile. The issues concern built environments and involve multiple actors reflecting different interests. The issues are rarely technical and involve combinations of…
The Effect of Damaged Vehicles Evacuation on Traffic Flow Behavior
NASA Astrophysics Data System (ADS)
Mhirech, Abdelaziz; Ez-Zahraouy, Hamid; Ismaili, Assia Alaoui
The effect of damaged-car evacuation on traffic flow behavior is investigated in the one-dimensional deterministic Nagel-Schreckenberg model, using parallel dynamics. A realistic model for cars involved in collisions is considered: the damaged cars must be removed from the ring with a probability Pexit. This investigation enables us to understand how the combination of the two probabilities, Pcol and Pexit, acts on density and current. It is found that the current and density at the steady state depend strongly on the initial density of cars in the ring. For intermediate initial density ρi, the current J decreases when either Pexit or Pcol (or both) increases, while for high initial density, J increases, passes through a maximum, and then decreases for large values of Pexit. Furthermore, the current can decrease or increase with the collision probability depending on the initial density.
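The underlying deterministic Nagel-Schreckenberg update (randomization probability zero, parallel dynamics on a ring) can be sketched as below; the paper's collision and Pexit removal rules are not included, this is only the base cellular automaton:

```python
def nasch_step(positions, velocities, vmax, road_length):
    """One parallel update of the deterministic (p = 0) Nagel-Schreckenberg
    model on a ring of road_length cells. Returns new positions/velocities,
    sorted by position along the ring."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    pos = [positions[i] for i in order]
    vel = [velocities[i] for i in order]
    n = len(pos)
    new_vel = []
    for i in range(n):
        # empty cells to the car ahead (periodic boundary)
        gap = (pos[(i + 1) % n] - pos[i] - 1) % road_length
        v = min(vel[i] + 1, vmax, gap)   # accelerate, then brake to the gap
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % road_length for i in range(n)]
    return new_pos, new_vel
```

Iterating this map reproduces the familiar free-flow/jammed phases of rule-184-type dynamics on which the collision model is built.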
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2016-12-16
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods for eliminating noise from the resulting depictions. The authors hypothesized that probabilistic tracking could lead to more accurate results, because it extracts information from the underlying data more efficiently. Moreover, the authors adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work provides a comparison between the gradual threshold increase method in probabilistic and in deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false
Photonics approach to traffic signs
NASA Astrophysics Data System (ADS)
Litwin, Dariusz; Galas, Jacek; CzyŻewski, Adam; Rymsza, Barbara; Kornalewski, Leszek; Kryszczyński, Tadeusz; Mikucki, Jerzy; Wikliński, Piotr; Daszkiewicz, Marek; Malasek, Jacek
2016-12-01
The automotive industry has always been a driving force for economies. Despite its benefits to society, it also brings many issues, including the wide area of road safety. The latter has become more pressing with the increasing number of cars and the dynamic development of traffic as a whole. Road signs and traffic lights are crucial in the context of good traffic arrangement and its fluency. Traffic designers are used to treating horizontal road signs independently of vertical signs. However, modern light sources and growing flexibility in shaping optical systems create an opportunity to design more advanced and smart solutions. In this paper we present an innovative, multidisciplinary approach that consists in a tight interdependence of different traffic signals. We describe new optical systems together with their influence on the perception of the road user. The analysis includes maintenance and visibility in different weather conditions. Special attention has been focused on intersections of complex geometry.
Traffic information computing platform for big data
Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun
2014-10-06
The big data environment creates the data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be offered to traffic information users.
Automated Conflict Resolution For Air Traffic Control
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2005-01-01
The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be tested under a sufficiently wide range of traffic conditions to demonstrate that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and must be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal) which provides a detailed description of the shape and the size of area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of the favorability scores for each variable and the fire occurrence probability, as well as the validation of each model, resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps and allowed us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
Low Earth Orbit satellite traffic simulator
NASA Technical Reports Server (NTRS)
Hoelzel, John
1995-01-01
This paper describes a significant tool for Low Earth Orbit (LEO) capacity analysis, needed to support marketing, economic, and design analysis, known as a Satellite Traffic Simulator (STS). LEO satellites typically use multiple beams to help achieve the desired communication capacity, but the traffic demand in these beams is usually not uniform. Simulations of dynamic, average, and peak expected demand per beam are a critical part of the marketing, economic, and design analysis necessary to field a viable LEO system. An STS is described in this paper which can simulate voice, data and FAX traffic carried by LEO satellite beams and Earth Station Gateways. It is applicable world-wide for any LEO satellite constellations operating over any regions. For aeronautical applications of LEO satellites, the anticipated aeronautical traffic (Erlangs for each hour of the day to be simulated) is prepared for geographically defined 'area targets' (each major operational region for the respective aircraft), and used as input to the STS. The STS was designed by Constellations Communications Inc. (CCI) and E-Systems for usage in Brazil in accordance with an ESCA/INPE Statement Of Work, and developed by Analytical Graphics Inc. (AGI) to execute on top of its Satellite Tool Kit (STK) commercial software. The STS simulates constellations of LEO satellite orbits, with input of traffic intensity (Erlangs) for each hour of the day generated from area targets (such as Brazilian states), accumulated in custom LEO satellite beams, and then accumulated in Earth Station Gateways. The STS is a very general simulator which can accommodate: many forms of orbital element and Walker Constellation input; simple beams or any user defined custom beams; and any location of Gateways. The paper describes some of these features, including Manual Mode dynamic graphical display of communication links, to illustrate which Gateway links are accessible and which links are not, at each 'step' of the
Air Traffic Management Research at NASA
NASA Technical Reports Server (NTRS)
Farley, Todd
2012-01-01
The U.S. air transportation system is the most productive in the world, moving far more people and goods than any other. It is also the safest system in the world, thanks in part to its venerable air traffic control system. But as demand for air travel continues to grow, the air traffic control system's aging infrastructure and labor-intensive procedures are impinging on its ability to keep pace with demand, which in turn constrains the growth of our economy. Part of NASA's current mission in aeronautics research is to invent new technologies and procedures for ATC that will enable our national airspace system to accommodate the increasing demand for air transportation well into the next generation while still maintaining its excellent record for safety. It is a challenging mission, as efforts to modernize have, for decades, been hamstrung by the inability to assure safety to the satisfaction of system operators, system regulators, and/or the traveling public. In this talk, we'll provide a brief history of air traffic control, focusing on the tension between efficiency and safety assurance, and we'll highlight some new NASA technologies coming down the pike.
Automated Traffic Management System and Method
NASA Technical Reports Server (NTRS)
Glass, Brian J. (Inventor); Spirkovska, Liljana (Inventor); McDermott, William J. (Inventor); Reisman, Ronald J. (Inventor); Gibson, James (Inventor); Iverson, David L. (Inventor)
2000-01-01
A data management system and method that enables acquisition, integration, and management of real-time data generated at different rates, by multiple heterogeneous incompatible data sources. The system achieves this functionality by using an expert system to fuse data from a variety of airline, airport operations, ramp control, and air traffic control tower sources, to establish and update reference data values for every aircraft surface operation. The system may be configured as a real-time airport surface traffic management system (TMS) that electronically interconnects air traffic control, airline data, and airport operations data to facilitate information sharing and improve taxi queuing. In the TMS operational mode, empirical data shows substantial benefits in ramp operations for airlines, reducing departure taxi times by about one minute per aircraft in operational use, translating as $12 to $15 million per year savings to airlines at the Atlanta, Georgia airport. The data management system and method may also be used for scheduling the movement of multiple vehicles in other applications, such as marine vessels in harbors and ports, trucks or railroad cars in ports or shipping yards, and railroad cars in switching yards. Finally, the data management system and method may be used for managing containers at a shipping dock, stock on a factory floor or in a warehouse, or as a training tool for improving situational awareness of FAA tower controllers, ramp and airport operators, or commercial airline personnel in airfield surface operations.
Fluctuations in Urban Traffic Networks
NASA Astrophysics Data System (ADS)
Chen, Yu-Dong; Li, Li; Zhang, Yi; Hu, Jian-Ming; Jin, Xue-Xiang
An urban traffic network is a typical complex system, in which the movements of tremendous numbers of microscopic traffic participants (pedestrians, bicyclists and vehicles) form complicated spatial and temporal dynamics. We collected flow-volume data on the time-dependent activity of a typical urban traffic network, finding that the coupling between the average flux and the fluctuation on individual links obeys a certain scaling law, with a wide variety of scaling exponents between 1/2 and 1. These scaling phenomena can explain the interaction between the nodes' internal dynamics (i.e. queuing at intersections, car-following in driving) and changes in the external (network-wide) traffic demand (i.e. the everyday increase of traffic during peak hours and shocks caused by traffic accidents), allowing us to further understand the mechanisms governing the transportation system's collective behavior. Multiscaling and hotspot features are observed in the traffic flow data as well. But the reason why the separated internal dynamics are comparable to the external dynamics in magnitude is still unclear and needs further investigation.
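The scaling exponent relating flux fluctuations to average flux, sigma ~ &lt;f&gt;**alpha, is conventionally estimated by a log-log least-squares fit across links. The routine below is a generic sketch over hypothetical per-link statistics, not the authors' analysis pipeline:

```python
import math

def scaling_exponent(mean_flux, std_flux):
    """Least-squares slope of log(sigma) vs log(<f>) across links, i.e. the
    exponent alpha in sigma ~ <f>**alpha. Inputs are per-link mean fluxes
    and fluctuation standard deviations (all positive)."""
    xs = [math.log(m) for m in mean_flux]
    ys = [math.log(s) for s in std_flux]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

An exponent of 1/2 indicates fluctuations dominated by internal (Poisson-like) dynamics, while an exponent of 1 indicates domination by external demand changes, the two limits quoted in the abstract.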
Memory effects in microscopic traffic models and wide scattering in flow-density data.
Treiber, Martin; Helbing, Dirk
2003-10-01
By means of microscopic simulations we show that noninstantaneous adaptation of the driving behavior to the traffic situation together with the conventional method to measure flow-density data provides a possible explanation for the observed inverse-lambda shape and the wide scattering of flow-density data in "synchronized" congested traffic. We model a memory effect in the response of drivers to the traffic situation for a wide class of car-following models by introducing an additional dynamical variable (the "subjective level of service") describing the adaptation of drivers to the surrounding traffic situation during the past few minutes and couple this internal state to parameters of the underlying model that are related to the driving style. For illustration, we use the intelligent-driver model (IDM) as the underlying model, characterize the level of service solely by the velocity, and couple the internal variable to the IDM parameter "time gap" to model an increase of the time gap in congested traffic ("frustration effect"), which is supported by single-vehicle data. We simulate open systems with a bottleneck and obtain flow-density data by implementing "virtual detectors." The shape, relative size, and apparent "stochasticity" of the region of the scattered data points agree nearly quantitatively with empirical data. Wide scattering is even observed for identical vehicles, although the proposed model is a time-continuous, deterministic, single-lane car-following model with a unique fundamental diagram.
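For reference, the intelligent-driver-model (IDM) acceleration function underlying these simulations can be written as below. Here the time gap T is a fixed parameter; the paper's memory effect would instead couple T to a slowly adapting internal "level of service" variable. Parameter defaults are illustrative, not the paper's calibration:

```python
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """IDM acceleration (m/s^2). v: own speed; gap: net distance to leader;
    dv: approach rate (v - v_leader); v0: desired speed; T: desired time gap;
    a: max acceleration; b: comfortable deceleration; s0: minimum gap."""
    # desired dynamical gap: minimum gap + time-gap term + braking interaction
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)
```

The "frustration effect" described in the abstract amounts to slowly increasing T while the vehicle remains in congested conditions, which shifts the model's equilibrium gap and hence the flow-density points recorded by virtual detectors.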
Matching solute breakthrough with deterministic and stochastic aquifer models.
Lemke, Lawrence D; Barrack, William A; Abriola, Linda M; Goovaerts, Pierre
2004-01-01
Two different deterministic and two alternative stochastic (i.e., geostatistical) approaches to modeling the distribution of hydraulic conductivity (K) in a nonuniform (sigma2ln(K)) = 0.29) glacial sand aquifer were used to explore the influence of conceptual model selection on simulations of three-dimensional tracer movement. The deterministic K models employed included a homogeneous effective K and a perfectly stratified 14 layer model. Stochastic K models were constructed using sequential Gaussian simulation and sequential i ndicator simulation conditioned to available K values estimated from measured grain size distributions. Standard simulation software packages MODFLOW, MT3DMS, and MODPATH were used to model three-dimensional ground water flow and transport in a field tracer test, where a pulse of bromide was injected through an array of three fully screened wells and extracted through a single fully screened well approximately 8 m away. Agreement between observed and simulated transport behavior was assessed through direct comparison of breakthrough curves (BTCs) and selected breakthrough metrics at the extraction well and at 26 individual multilevel sample ports distributed irregularly between the injection and extraction wells. Results indicate that conceptual models incorporating formation variability are better able to capture observed breakthrough behavior. Root mean square (RMS) error of the deterministic models bracketed the ensemble mean RMS error of stochastic models for simulated concentration vs. time series, but not for individual BTC characteristic metrics. The spatial variability models evaluated here may be better suited to simulating breakthrough behavior measured in wells screened over large intervals than at arbitrarily distributed observation points within a nonuniform aquifer domain.
Optical image encryption technique based on deterministic phase masks
NASA Astrophysics Data System (ADS)
Zamrani, Wiam; Ahouzi, Esmail; Lizana, Angel; Campos, Juan; Yzuel, María J.
2016-10-01
The double-random phase encoding (DRPE) scheme, which is based on a 4f optical correlator system, is considered as a reference for the optical encryption field. We propose a modification of the classical DRPE scheme based on the use of a class of structured phase masks, the deterministic phase masks. In particular, we propose to conduct the encryption process by using two deterministic phase masks, which are built from linear combinations of several subkeys. For the decryption step, the input image is retrieved by using the complex conjugate of the deterministic phase masks, which were set in the encryption process. This concept of structured masks gives rise to encryption-decryption keys which are smaller and more compact than those required in the classical DRPE. In addition, we show that our method significantly improves the tolerance of the DRPE method to shifts of the decrypting phase mask: when no shift is applied, it provides similar performance to the DRPE scheme in terms of encryption-decryption results. This enhanced tolerance to the shift, which is proven by providing numerical simulation results for grayscale and binary images, may relax the rigidity of an encryption-decryption experimental implementation setup. To evaluate the effectiveness of the described method, the mean-square-error and the peak signal-to-noise ratio between the input images and the recovered images are calculated. Different studies based on simulated data are also provided to highlight the suitability and robustness of the method when applied to the image encryption-decryption processes.
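The classical DRPE cycle that the proposed scheme modifies can be sketched numerically with two-dimensional FFTs standing in for the 4f correlator. The masks below are generic phase arrays; the paper's contribution, building them deterministically from subkeys, is not reproduced here:

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """4f double-phase encoding: multiply by the first phase mask in the
    input plane and by the second in the Fourier plane."""
    field = img * np.exp(1j * phase1)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * phase2))

def drpe_decrypt(cipher, phase1, phase2):
    """Invert with the complex-conjugate masks and keep the magnitude."""
    field = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-1j * phase2))
    return np.abs(field * np.exp(-1j * phase1))
```

Decryption with the exact conjugate masks recovers the input image to numerical precision; the shift-tolerance study in the paper perturbs the decrypting mask and measures the resulting MSE/PSNR degradation.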
Deterministic side-branching during thermal dendritic growth
NASA Astrophysics Data System (ADS)
Mullis, Andrew M.
2015-06-01
The accepted view on dendritic side-branching is that side-branches grow as the result of selective amplification of thermal noise and that in the absence of such noise dendrites would grow without the development of side-arms. However, recently there has been renewed speculation about dendrites displaying deterministic side-branching [see e.g. ME Glicksman, Metall. Mater. Trans A 43 (2012) 391]. Generally, numerical models of dendritic growth, such as phase-field simulation, have tended to display behaviour which is commensurate with the former view, in that simulated dendrites do not develop side-branches unless noise is introduced into the simulation. However, here we present simulations at high undercooling that show that under certain conditions deterministic side-branching may occur. We use a model formulated in the thin interface limit and a range of advanced numerical techniques to minimise the numerical noise introduced into the solution, including a multigrid solver. Not only are multigrid solvers one of the most efficient means of inverting the large, but sparse, system of equations that results from implicit time-stepping, they are also very effective at smoothing noise at all wavelengths. This is in contrast to most Jacobi or Gauss-Seidel iterative schemes which are effective at removing noise with wavelengths comparable to the mesh size but tend to leave noise at longer wavelengths largely undamped. From an analysis of the tangential thermal gradients on the solid-liquid interface the mechanism for side-branching appears to be consistent with the deterministic model proposed by Glicksman.
Spatial continuity measures for probabilistic and deterministic geostatistics
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram γ(h), is appropriate for each framework. Although C(h) and γ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and γ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Levy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
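The linear relationship entailed by the expected-value definitions, γ(h) = C(0) − C(h) under second-order stationarity, can be checked numerically. A minimal sketch on a synthetic seeded AR(1) series (not the paper's Wiener-Levy case) computes the two classical sample estimators and verifies they agree up to sampling error:

```python
import random

def sample_covariance(z, h):
    # Classical covariance estimator about the overall sample mean.
    m = sum(z) / len(z)
    pairs = [(z[i] - m) * (z[i + h] - m) for i in range(len(z) - h)]
    return sum(pairs) / len(pairs)

def sample_variogram(z, h):
    # Classical semivariogram estimator: half the mean squared increment.
    pairs = [(z[i + h] - z[i]) ** 2 for i in range(len(z) - h)]
    return sum(pairs) / (2 * len(pairs))

random.seed(0)
z, prev = [], 0.0
for _ in range(5000):
    prev = 0.7 * prev + random.gauss(0, 1)   # stationary AR(1) series
    z.append(prev)

c0 = sample_covariance(z, 0)
c1 = sample_covariance(z, 1)
g1 = sample_variogram(z, 1)   # should be close to c0 - c1
```

On the spatial-integral definitions the paper discusses, this identity need not hold; the sketch only illustrates the expected-value case.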
CALTRANS: A parallel, deterministic, 3D neutronics code
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
Deterministic Models of Channel Headwall Erosion: Initiation and Propagation
1991-06-14
Demonstration of deterministic and high fidelity squeezing of quantum information
Yoshikawa, Jun-ichi; Takei, Nobuyuki; Furusawa, Akira; Hayashi, Toshiki; Akiyama, Takayuki; Huck, Alexander; Andersen, Ulrik L.
2007-12-15
By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an ancillary squeezed vacuum state, so that direct interaction between a strong pump and the quantum state is circumvented. We demonstrate three different squeezing levels for a coherent state input. This scheme is highly suitable for fault-tolerant squeezing transformations in a continuous variable quantum computer.
Deterministic regularization of three-dimensional optical diffraction tomography
Sung, Yongjin; Dasari, Ramachandra R.
2012-01-01
In this paper we discuss a deterministic regularization algorithm to handle the missing cone problem of three-dimensional optical diffraction tomography (ODT). The missing cone problem arises in most practical applications of ODT and is responsible for elongation of the reconstructed shape and underestimation of the value of the refractive index. By applying positivity and piecewise-smoothness constraints in an iterative reconstruction framework, we effectively suppress the missing cone artifact and recover sharp edges rounded out by the missing cone, and we significantly improve the accuracy of the predictions of the refractive index. We also show the noise handling capability of our algorithm in the reconstruction process. PMID:21811316
Deterministic shape control in plasma-aided nanotip assembly
NASA Astrophysics Data System (ADS)
Tam, E.; Levchenko, I.; Ostrikov, K.
2006-08-01
The possibility of deterministic plasma-assisted reshaping of capped cylindrical seed nanotips by manipulating the plasma parameter-dependent sheath width is shown. Multiscale hybrid gas phase/solid surface numerical experiments reveal that under wide-sheath conditions the nanotips widen at the base, whereas when the sheath is narrow they sharpen up. By combining the wide- and narrow-sheath stages in a single process, it proves possible to synthesize wide-base nanotips with long, narrow apex spikes, ideal for electron microemitter applications. This plasma-based approach is generic and can be applied to a large number of multipurpose nanoassemblies.
Deterministic versus stochastic aspects of superexponential population growth models
NASA Astrophysics Data System (ADS)
Grosjean, Nicolas; Huillet, Thierry
2016-08-01
Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other one is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
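The hyperexponential (finite-time explosion) regime mentioned here can be illustrated with the simplest power-law model dx/dt = x^p with p > 1, whose solution x(t) = (x0^(1-p) - (p-1)t)^(-1/(p-1)) blows up at t* = x0^(1-p)/(p-1). A small sketch, with parameters chosen purely for illustration, compares the closed form against a numerical integration approaching the explosion time:

```python
def analytic(x0, p, t):
    # Closed-form solution of dx/dt = x**p for p > 1; finite-time blow-up
    # occurs at t* = x0**(1 - p) / (p - 1).
    return (x0 ** (1 - p) - (p - 1) * t) ** (-1.0 / (p - 1))

def rk4(x0, p, t, steps=20000):
    # Classical fourth-order Runge-Kutta integration of dx/dt = x**p.
    f = lambda x: x ** p
    h, x = t / steps, x0
    for _ in range(steps):
        k1 = f(x); k2 = f(x + h * k1 / 2); k3 = f(x + h * k2 / 2); k4 = f(x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x

# For x0 = 1, p = 2 the solution is x(t) = 1 / (1 - t): explosion at t* = 1.
x_exact = analytic(1.0, 2, 0.9)   # = 1 / (1 - 0.9) = 10
x_num = rk4(1.0, 2, 0.9)          # numerical value close to the explosion
```

Growth already exceeds any exponential near t*, which is the deterministic skeleton the stochastic Lamperti constructions enrich.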
The deterministic optical alignment of the HERMES spectrograph
NASA Astrophysics Data System (ADS)
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies whose construction and commissioning have now been completed at the Anglo-Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
A Deterministic Transport Code for Space Environment Electrons
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-01-01
A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.
Non-deterministic analysis of ocean environment loads
Fang Huacan; Xu Fayan; Gao Guohua; Xu Xingping
1995-12-31
Ocean environment loads consist of the wind force, sea wave force, etc. Sea wave force is not only random but also fuzzy. Hence a non-deterministic description of the wave environment must be used when designing an offshore structure or evaluating the safety of offshore structure members in service. To account for the randomness of sea waves, a wind-speed single-parameter sea wave spectrum is proposed in this paper, together with a new fuzzy grading statistical method for treating the fuzziness of sea wave height H and period T. Finally, the principle and process of calculating the fuzzy random sea wave spectrum are presented.
Lasing in an optimized deterministic aperiodic nanobeam cavity
NASA Astrophysics Data System (ADS)
Moon, Seul-Ki; Jeong, Kwang-Yong; Noh, Heeso; Yang, Jin-Kyu
2016-12-01
We have demonstrated lasing action from partially extended modes in deterministic aperiodic nanobeam cavities based on the Rudin-Shapiro sequence with two different air holes, at room temperature. By varying the size ratio of the holes, and hence the structural aperiodicity, different optical lasing modes were obtained with maximized quality factors. The lasing characteristics of the partially extended modes were confirmed by numerical simulations based on scanning microscope images of the fabricated samples. We believe that these partially extended nanobeam modes will be useful for label-free optical biosensors.
Deterministic Smoluchowski-Feynman ratchets driven by chaotic noise.
Chew, Lock Yue
2012-01-01
We have elucidated the effect of statistical asymmetry on the directed current in Smoluchowski-Feynman ratchets driven by chaotic noise. Based on the inhomogeneous Smoluchowski equation and its generalized version, we arrive at analytical expressions of the directed current that includes a source term. The source term indicates that statistical asymmetry can drive the system further away from thermodynamic equilibrium, as exemplified by the constant flashing, the state-dependent, and the tilted deterministic Smoluchowski-Feynman ratchets, with the consequence of an enhancement in the directed current.
The Future of Air Traffic Management
NASA Technical Reports Server (NTRS)
Denery, Dallas G.; Erzberger, Heinz; Edwards, Thomas A. (Technical Monitor)
1998-01-01
A system for the control of terminal area traffic to improve productivity, referred to as the Center-TRACON Automation System (CTAS), is being developed at NASA's Ames Research Center under a joint program with the FAA. CTAS consists of a set of integrated tools that provide computer-generated advisories for en-route and terminal area controllers. The premise behind the design of CTAS has been that successful planning of traffic requires accurate trajectory prediction. Databases consisting of representative aircraft performance models, airline-preferred operational procedures, and a three-dimensional wind model support the trajectory prediction. The research effort has been the design of a set of automation tools that make use of this trajectory prediction capability to assist controllers in the overall management of traffic. The first tool, the Traffic Management Advisor (TMA), provides the overall flow management between the en route and terminal areas. A second tool, the Final Approach Spacing Tool (FAST), provides terminal area controllers with sequence and runway advisories to allow optimal use of the runways. The TMA and FAST are now being used in daily operations at Dallas/Ft. Worth airport. Additional activities include the development of several other tools. These include: 1) the En Route Descent Advisor, which assists the en route controller in issuing conflict-free descents and ascents; 2) the extension of FAST to include speed and heading advisories, and the Expedite Departure Path (EDP), which assists the terminal controller in the management of departures; and 3) the Collaborative Arrival Planner (CAP), which will assist the airlines in operational decision making. The purpose of this presentation is to review the CTAS concept and to present the results of recent field tests. The paper will first discuss the overall concept and then discuss the status of the individual tools.
Air Traffic Management Research at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Lee, Katharine
2005-01-01
Since the late 1980's, NASA Ames researchers have been investigating ways to improve the air transportation system through the development of decision support automation. These software advances, such as the Center-TRACON Automation System (CTAS), have been developed with teams of engineers, software developers, human factors experts, and air traffic controllers; some NASA Ames decision support tools are currently operational in Federal Aviation Administration (FAA) facilities and some are in use by the airlines. These tools have provided air traffic controllers and traffic managers the capabilities to help reduce overall delays and holding, and provide significant cost savings to the airlines as well as more manageable workload levels for air traffic service providers. NASA is continuing to collaborate with the FAA, as well as other government agencies, to plan and develop the next generation of decision support tools that will support anticipated changes in the air transportation system, including a projected increase to three times today's air-traffic levels by 2025. The presentation will review some of NASA Ames' recent achievements in air traffic management research, and discuss future tool developments and concepts currently under consideration.
TrafficGen Architecture Document
2016-01-01
1. Overview of TrafficGen Application; 2. Modules; 3. User Interface; 3.1 Top-Level MVC Classes; 3.1.1 TrafficGenView; 3.1.2... A key design concept in use here is the model-view-controller (MVC) pattern. In general, the MVC design pattern... and easier to manage. MVC facilitates reuse by reducing and formalizing coupling between model components and the user interface. The same TrafficGen...
Deterministic secure quantum communication using a single d-level system
Jiang, Dong; Chen, Yuanyuan; Gu, Xuemei; Xie, Ling; Chen, Lijun
2017-01-01
Deterministic secure quantum communication (DSQC) can transmit secret messages between two parties without first generating a shared secret key. Compared with quantum key distribution (QKD), DSQC avoids the waste of qubits arising from basis reconciliation and thus reaches higher efficiency. In this paper, based on data block transmission and order rearrangement technologies, we propose a DSQC protocol. It utilizes a set of single d-level systems as message carriers, which are used to directly encode the secret message in one communication process. Theoretical analysis shows that these employed technologies guarantee the security, and the use of a higher dimensional quantum system makes our protocol achieve higher security and efficiency. Since only quantum memory is required for implementation, our protocol is feasible with current technologies. Furthermore, Trojan horse attack (THA) is taken into account in our protocol. We give a THA model and show that THA significantly increases the multi-photon rate and can thus be detected. PMID:28327557
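As a purely classical illustration of the order-rearrangement bookkeeping this abstract relies on (none of the quantum security properties are modeled), one can sketch the block-transmission idea: each d-level carrier encodes one message symbol, the block travels in a secretly shuffled order, and the order is disclosed only after receipt. The dimension d = 7 and all names below are invented for the sketch.

```python
import random

D = 7  # dimension of each "qudit" carrier (d-level system), chosen arbitrarily

def encode(message, rng):
    # Each symbol (0..D-1) rides on one carrier; transmit the block in a
    # secretly rearranged order, to be announced only after receipt.
    order = list(range(len(message)))
    rng.shuffle(order)
    block = [message[i] for i in order]   # what travels on the channel
    return block, order

def decode(block, order):
    # Receiver applies the announced order to restore the original sequence.
    out = [0] * len(block)
    for pos, i in enumerate(order):
        out[i] = block[pos]
    return out

rng = random.Random(42)
msg = [rng.randrange(D) for _ in range(12)]
block, order = encode(msg, rng)
recovered = decode(block, order)
```

An interceptor holding the block before the order announcement sees the right symbols in a scrambled sequence; in the actual protocol the security rests on quantum measurement disturbance, which this sketch does not capture.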
Hong-Ou-Mandel Interference between Two Deterministic Collective Excitations in an Atomic Ensemble.
Li, Jun; Zhou, Ming-Ti; Jing, Bo; Wang, Xu-Jie; Yang, Sheng-Jun; Jiang, Xiao; Mølmer, Klaus; Bao, Xiao-Hui; Pan, Jian-Wei
2016-10-28
We demonstrate deterministic generation of two distinct collective excitations in one atomic ensemble, and we realize the Hong-Ou-Mandel interference between them. Using Rydberg blockade we create single collective excitations in two different Zeeman levels, and we use stimulated Raman transitions to perform a beam-splitter operation between the excited atomic modes. By converting the atomic excitations into photons, the two-excitation interference is measured by photon coincidence detection with a visibility of 0.89(6). The Hong-Ou-Mandel interference witnesses an entangled NOON state of the collective atomic excitations, and we demonstrate its two times enhanced sensitivity to a magnetic field compared with a single excitation. Our work implements a minimal instance of boson sampling and paves the way for further multimode and multiexcitation studies with collective excitations of atomic ensembles.
Latanision, R.M.
1990-12-01
Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulations of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
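The CADIS idea referenced above can be illustrated with a toy one-group, few-cell calculation: the adjoint flux acts as an importance map, the source is biased proportionally to importance, and birth weights compensate so the estimator stays unbiased. The numbers below are invented and bear no relation to the ITER benchmark or to ADVANTG's actual data structures.

```python
# Hypothetical one-group, three-cell sketch of CADIS source biasing.
q = [0.5, 0.3, 0.2]          # normalized forward source strength per cell
phi_adj = [0.1, 1.0, 4.0]    # adjoint ("importance") flux per cell (invented)

# Estimated detector response: inner product of source and importance.
R = sum(qi * ai for qi, ai in zip(q, phi_adj))

# Biased source: sample preferentially from important cells.
q_biased = [qi * ai / R for qi, ai in zip(q, phi_adj)]

# Birth weights compensate the biasing, keeping the expectation unbiased:
# q_biased[i] * weights[i] == q[i] for every cell.
weights = [R / ai for ai in phi_adj]
```

Particles born in high-importance cells are sampled more often but carry proportionally lower weight, which is the mechanism behind the reported speedups.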
The effect of traffic tickets on road traffic crashes.
Factor, Roni
2014-03-01
Road traffic crashes are globally a leading cause of death. The current study tests the effect of traffic tickets issued to drivers on subsequent crashes, using a unique dataset that overcomes some shortcomings of previous studies. The study takes advantage of a national longitudinal dataset at the individual level that merges Israeli census data with data on traffic tickets issued by the police and official data on involvement in road traffic crashes over seven years. The results show that the estimated probability of involvement in a subsequent fatal or severe crash was more than eleven times higher for drivers with six traffic tickets per year compared to those with one ticket per year, while controlling for various confounders. However, the majority of fatal and severe crashes involved the larger population of drivers who received up to one ticket on average per year. The current findings indicate that reducing traffic violations may contribute significantly to crash and injury reduction. In addition, mass random enforcement programs may be more effective in reducing fatal and severe crashes than targeting high-risk recidivist drivers.
14 CFR 25 - Traffic and Capacity Elements
Code of Federal Regulations, 2012 CFR
2012-01-01
Traffic Reporting Requirements, Section 25: Traffic and Capacity Elements. General Instructions. (a) All prescribed reporting for traffic and capacity elements shall conform with the data compilation standards...
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2012 CFR
2012-07-01
VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2011 CFR
2011-07-01
VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2010 CFR
2010-07-01
VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2013 CFR
2013-07-01
VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
36 CFR 4.13 - Obstructing traffic.
Code of Federal Regulations, 2014 CFR
2014-07-01
VEHICLES AND TRAFFIC SAFETY § 4.13 Obstructing traffic. The following are prohibited: (a) Stopping or... interfere with the normal flow of traffic....
AAA Foundation for Traffic Safety
Road traffic injuries: a stocktaking.
Mohan, Dinesh
2008-08-01
Once we accept that road traffic injury control is a public health problem, and that we have an ethical responsibility to arrange for the safety of individuals, then it follows that health and medical professionals have to assume responsibility for participating in efforts to control this pandemic. Over 1.2 million people die of road traffic crashes annually. Road traffic injuries are among the second to the sixth leading causes of death in the age groups 15-60 years in all countries around the world. Control of road traffic injuries is going to require very special efforts as patterns are different in high- and lower-income countries, and while some countermeasures are applicable internationally, others will need further research and innovation. We will need to focus on the safety of pedestrians, bicyclists and motorcyclists, speed control, and prevention of driving under the influence of alcohol.
Kinetic model of network traffic
NASA Astrophysics Data System (ADS)
Antoniou, I.; Ivanov, V. V.; Kalinovsky, Yu. L.
2002-05-01
We present the first results on the application of the Prigogine-Herman kinetic approach (Kinetic Theory of Vehicular Traffic, American Elsevier Publishing Company, Inc., New York, 1971) to network traffic. We discuss the solution of the kinetic equation for homogeneous time-independent situations and for the desired speed distribution function obtained from analysis of traffic measurements. For the log-normal desired speed distribution function the solution clearly shows two modes, corresponding to individual flow patterns (low-concentration mode) and to collective flow patterns (traffic jam mode). For low-concentration situations we found an almost linear dependence of the information flow on the concentration, and that the higher the average speed, the lower the concentration at which the optimum flow takes place. When approaching the critical concentration there are no essential differences in the flow for different desired average speeds, whereas in the individual flow region there are dramatic differences.
Real-Time Traffic Signal Control for Optimization of Traffic Jam Probability
NASA Astrophysics Data System (ADS)
Cui, Cheng-You; Shin, Ji-Sun; Miyazaki, Michio; Lee, Hee-Hyol
Real-time traffic signal control is an integral part of an urban traffic control system. It can control traffic signals online according to variations in traffic flow. In this paper, we propose a new method for a real-time traffic signal control system. The system uses a Cellular Automaton model and a Bayesian Network model to predict probabilistic distributions of standing vehicles, and uses a Particle Swarm Optimization method to calculate optimal traffic signals. A simulation based on real traffic data was carried out using a micro traffic simulator to show the effectiveness of the proposed real-time traffic signal control system, CAPSOBN.
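The optimization step in the abstract can be sketched with a generic particle swarm minimizing a stand-in delay function. The CA/Bayesian-network prediction stage is not modeled here, and the swarm parameters and the convex delay surrogate are assumptions for illustration, not the paper's tuning.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=3):
    # Minimal particle swarm optimization: each particle tracks its personal
    # best, the swarm shares a global best; inertia/cognitive/social weights
    # (0.7, 1.5, 1.5) are conventional choices, not values from the paper.
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            v = objective(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy "expected delay" as a function of the green-time split g between two
# competing approaches: a made-up convex surrogate, not a CA simulation.
def delay(x):
    g = x[0]
    return 0.6 / max(g, 1e-6) + 0.4 / max(1.0 - g, 1e-6)

split, best = pso(delay, 1)
```

The surrogate's optimum gives the busier approach the larger green share, which is the qualitative behavior a real signal optimizer should reproduce.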
Deterministic composite nanophotonic lattices in large area for broadband applications
Xavier, Jolly; Probst, Jürgen; Becker, Christiane
2016-01-01
Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin film solar cells, white light emitting diodes, novel plasmonic ensembles, etc. Through a generic deterministic nanotechnological route, we demonstrate subwavelength-scale silicon (Si) nanostructures on nanoimprinted glass substrates over a large area (4 cm²) with the advanced functional features of aperiodic composite nanophotonic lattices. These aperiodic nanophotonic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements, and they show rich Fourier spectra. The presented nanophotonic lattices are designed to be functionally akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed against a range of nanophotonic structures with conventional lattice geometries: periodic, disordered random, and in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid phase crystallized (LPC) Si thin film fabrication, the experimental structural analysis is further extended to a double-side-textured deterministic aperiodic lattice-structured 10 μm thick large-area LPC Si film on nanoimprinted substrates. PMID:27941869
Directional locking in deterministic lateral-displacement microfluidic separation systems.
Risbud, Sumedh R; Drazer, German
2014-07-01
We analyze the trajectory of suspended spherical particles moving through a square array of obstacles, in the deterministic limit and at zero Reynolds number. We show that in the dilute approximation of widely separated obstacles, the average motion of the particles is equivalent to the trajectory followed by a point particle moving through an array of obstacles with an effective radius. The effective radius accounts for the hydrodynamic as well as short-range repulsive nonhydrodynamic interactions between the suspended particles and the obstacles, and is equal to the critical offset at which particle trajectories become irreversible. Using this equivalent system we demonstrate the presence of directional locking in the trajectory of the particles and derive an inequality that accurately describes the "devil's staircase" type of structure observed in the migration angle as a function of the forcing direction. We use these results to determine the optimum resolution in the fractionation of binary mixtures using deterministic lateral-displacement microfluidic separation systems as well as to comment on the collision frequencies when the arrays of posts are utilized as immunocapture devices.
Deterministic doping and the exploration of spin qubits
Schenkel, T.; Weis, C. D.; Persaud, A.; Lo, C. C.; Chakarov, I.; Schneider, D. H.; Bokor, J.
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
Deterministic nature of the underlying dynamics of surface wind fluctuations
NASA Astrophysics Data System (ADS)
Sreelekshmi, R. C.; Asokan, K.; Satheesh Kumar, K.
2012-10-01
Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of the atmosphere, besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N, 76.950° E) from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic, suggesting the possibility of accurate short-term prediction. As most chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.
Stochastic and deterministic causes of streamer branching in liquid dielectrics
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-08-14
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make the branching inevitable depending on shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head propagating even in perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether the branching occurs under particular inhomogeneous circumstances. Estimated number, diameter, and velocity of the born branches agree qualitatively with experimental images of the streamer branching.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
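The stochastic use the authors advocate amounts to adding resampled calibration residuals back onto the deterministic simulated responses. A minimal sketch (the data and the simple bootstrap resampling scheme are illustrative assumptions, not the paper's implementation):

```python
import random

random.seed(0)

# Hypothetical observed and deterministically simulated responses
observed  = [10.2, 12.5, 9.8, 15.1, 11.0, 13.3, 10.9, 14.2]
simulated = [11.0, 11.9, 10.5, 14.0, 11.6, 12.8, 11.4, 13.5]

# Calibration residuals retained by the model
residuals = [o - s for o, s in zip(observed, simulated)]

# Deterministic use: trust the simulated values as-is.
deterministic_output = simulated

# Stochastic use: reintroduce resampled residuals so the output's
# distributional properties better mimic the observed responses.
stochastic_output = [s + random.choice(residuals) for s in simulated]
```

In practice the resampling would respect any autocorrelation or heteroscedasticity in the residuals; the uniform bootstrap here is only the simplest possible scheme.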
Deterministic photon-emitter coupling in chiral photonic circuits
NASA Astrophysics Data System (ADS)
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.
Predictability of normal heart rhythms and deterministic chaos
NASA Astrophysics Data System (ADS)
Lefebvre, J. H.; Goodings, D. A.; Kamath, M. V.; Fallen, E. L.
1993-04-01
The evidence for deterministic chaos in normal heart rhythms is examined. Electrocardiograms were recorded of 29 subjects falling into four groups—a young healthy group, an older healthy group, and two groups of patients who had recently suffered an acute myocardial infarction. From the measured R-R intervals, a time series of 1000 first differences was constructed for each subject. The correlation integral of Grassberger and Procaccia was calculated for several subjects using these relatively short time series. No evidence was found for the existence of an attractor having a dimension less than about 4. However, a prediction method recently proposed by Sugihara and May and an autoregressive linear predictor both show that there is a measure of short-term predictability in the differenced R-R intervals. Further analysis revealed that the short-term predictability calculated by the Sugihara-May method is not consistent with the null hypothesis of a Gaussian random process. The evidence for a small amount of nonlinear dynamical behavior together with the short-term predictability suggest that there is an element of deterministic chaos in normal heart rhythms, although it is not strong or persistent. Finally, two useful parameters of the predictability curves are identified, namely, the `first step predictability' and the `predictability decay rate,' neither of which appears to be significantly correlated with the standard deviation of the R-R intervals.
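The Grassberger-Procaccia correlation integral used above counts, for a delay-embedded time series, the fraction of vector pairs closer than a radius r; the correlation dimension is the slope of log C(r) versus log r in the scaling region. A minimal sketch on a toy series (the embedding parameters and the surrogate sine-based "R-R" data are assumptions for illustration only):

```python
import math

def correlation_integral(series, m, tau, r):
    """Grassberger-Procaccia correlation integral C(r) for a scalar
    time series embedded in m dimensions with delay tau."""
    # Delay-embed the series into m-dimensional vectors
    n = len(series) - (m - 1) * tau
    vectors = [tuple(series[i + j * tau] for j in range(m)) for i in range(n)]
    # Count pairs closer than r (Euclidean distance)
    close = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(vectors[i], vectors[j]) < r:
                close += 1
    return 2.0 * close / (n * (n - 1))

# Toy stand-in for differenced R-R intervals: a smooth rhythm
rr = [math.sin(0.3 * k) for k in range(200)]
diffs = [b - a for a, b in zip(rr, rr[1:])]
print(correlation_integral(diffs, m=4, tau=1, r=0.5))
```

An attractor dimension estimate would repeat this over a range of r and embedding dimensions m, looking for a slope that saturates as m grows; the paper found no saturation below dimension about 4.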
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Shock-induced explosive chemistry in a deterministic sample configuration.
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of the molecular dynamics leading to the establishment of pluripotency at unprecedented flexibility and resolution.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
NASA Astrophysics Data System (ADS)
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. So the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as ``fingerprinting''. The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt ∝ m^1.5, is stronger than the thermal broadening, δt ∝ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
Evidence of Long Range Dependence and Self-similarity in Urban Traffic Systems
Thakur, Gautam S; Helmy, Ahmed; Hui, Pan
2015-01-01
Transportation simulation technologies should accurately model traffic demand, distribution, and assignment parameters for urban environment simulation. These three parameters significantly impact the transportation engineering benchmark process and are critical to realizing realistic traffic modeling situations. In this paper, we model and characterize the traffic density distribution of thousands of locations around the world. The traffic densities are generated from millions of images collected over several years and processed using computer vision techniques. The resulting traffic density distribution time series are then analyzed. Goodness-of-fit tests show that the traffic density distributions follow heavy-tail models such as Log-gamma, Log-logistic, and Weibull in over 90% of the analyzed locations. Moreover, heavy tails give rise to long-range dependence and self-similarity, which we studied by estimating the Hurst exponent (H). Our analysis, based on seven different Hurst estimators, strongly indicates that the traffic distribution patterns are stochastically self-similar (0.5 < H < 1.0). We believe this is an important finding that will influence the design and development of the next generation of traffic simulation techniques and also aid in accurately modeling the traffic engineering of urban systems. In addition, it shall provide a much needed input for the development of smart cities.
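Of the several Hurst estimators such a study can draw on, the classical rescaled-range (R/S) method is the simplest: average R/S over windows of size n, then fit log(R/S) ≈ H·log n. A sketch under that textbook definition (the window sizes and the white-noise test signal are illustrative assumptions, not the authors' pipeline):

```python
import math, random

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis:
    fit log(R/S) ~ H * log(n) over doubling window sizes n."""
    xs, ys = [], []
    n = min_chunk
    while n <= len(series) // 2:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            dev = [c - mean for c in chunk]
            # Range R of the cumulative deviate series
            cum, acc = [], 0.0
            for d in dev:
                acc += d
                cum.append(acc)
            r = max(cum) - min(cum)
            s = math.sqrt(sum(d * d for d in dev) / n)  # std. deviation S
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            xs.append(math.log(n))
            ys.append(math.log(sum(rs_vals) / len(rs_vals)))
        n *= 2
    # Least-squares slope of log(R/S) vs log(n) = Hurst estimate
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(1)
white_noise = [random.gauss(0, 1) for _ in range(1024)]
print(hurst_rs(white_noise))  # uncorrelated series: theoretical H = 0.5
```

Values of H between 0.5 and 1.0, as the paper reports, indicate persistent long-range dependence; robust studies cross-check R/S against estimators less biased at short sample sizes.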
Mechanism of the jamming transition in the two-dimensional traffic networks. II
NASA Astrophysics Data System (ADS)
Ishibashi, Yoshihiro; Fukui, Minoru
2014-01-01
The jamming transition in a two-dimensional traffic network is investigated based upon cellular automaton simulations, where the update rule is deterministic, though the initial car configuration is set randomly. The lifetime of the system is defined as the time at which all cars in the system have come to a stop; it increases as the car density decreases from the high-density side. The critical car density is defined as the density at which the corresponding lifetime diverges. An analytical expression for the critical car density is proposed.
Fully automated urban traffic system
NASA Technical Reports Server (NTRS)
Dobrotin, B. M.; Hansen, G. R.; Peng, T. K. C.; Rennels, D. A.
1977-01-01
The replacement of the driver with an automatic system which could perform the functions of guiding and routing a vehicle with a human's capability of responding to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed but was technically feasible.
Distributed Traffic Complexity Management by Preserving Trajectory Flexibility
NASA Technical Reports Server (NTRS)
Idris, Husni; Vivona, Robert A.; Garcia-Chico, Jose-Luis; Wing, David J.
2007-01-01
In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers, such as controllers and traffic managers, and air-based users, such as pilots, share responsibility for aircraft trajectory generation and management. This paper presents preliminary research investigating a distributed trajectory-oriented approach to managing traffic complexity, based on preserving trajectory flexibility. The underlying hypotheses are that preserving trajectory flexibility autonomously by aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by collaboratively minimizing trajectory constraints without jeopardizing the intended air traffic management objectives. This paper presents an analytical framework in which flexibility is defined in terms of robustness and adaptability to disturbances, and preliminary metrics are proposed that can be used to preserve trajectory flexibility. The hypothesized impacts are illustrated by analyzing a trajectory solution space in a simple scenario with only speed as a degree of freedom, and in constraint situations involving meeting multiple times of arrival and resolving conflicts.
Optimal structure of complex networks for minimizing traffic congestion.
Zhao, Liang; Cupertino, Thiago Henrique; Park, Kwangho; Lai, Ying-Cheng; Jin, Xiaogang
2007-12-01
To design complex networks that minimize traffic congestion, it is necessary to understand how traffic flow depends on network structure. We study data packet flow on complex networks, where the packet delivery capacity of each node is not fixed. The optimal configuration of capacities to minimize traffic congestion is derived, and the critical packet generating rate is determined, below which the network is in a free-flow state but above which congestion occurs. Our analysis reveals a direct relation between network topology and traffic flow. An optimal network structure, free of traffic congestion, should have two features: uniform distribution of load over all nodes and small network diameter. This finding is confirmed by numerical simulations. Our analysis also makes it possible to theoretically compare the congestion conditions for different types of complex networks. In particular, we find that a network with a low critical generating rate is more susceptible to congestion. The comparison has been made on the following complex-network topologies: random, scale-free, and regular.
Self-Organized Criticality and Scaling in Lifetime of Traffic Jams
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-01-01
The deterministic cellular automaton rule 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. The typical lifetime
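The base dynamics being extended here, rule 184's parallel update, is simple: each car advances one cell per step iff the cell ahead is empty. A minimal sketch on a closed ring (without the paper's injection/extraction extension):

```python
def rule184_step(cells):
    """One parallel update of CA rule 184 on a ring: a car (1) moves into the
    empty site (0) ahead of it; blocked cars stay put."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left, here, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        # A site is occupied next step iff a car moves in from the left,
        # or the car already here is blocked by the car ahead.
        nxt[i] = 1 if (left == 1 and here == 0) or (here == 1 and right == 1) else 0
    return nxt

road = [1, 1, 0, 1, 0, 0, 0, 1]   # density 1/2: jams dissolve into free flow
for _ in range(3):
    road = rule184_step(road)
print(road)
```

Without inflow or outflow, car number is conserved; the injection/extraction terms studied in the paper break this conservation and drive the system toward the critical state.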
An improved multi-value cellular automata model for heterogeneous bicycle traffic flow
NASA Astrophysics Data System (ADS)
Jin, Sheng; Qu, Xiaobo; Xu, Cheng; Ma, Dongfang; Wang, Dianhai
2015-10-01
This letter develops an improved multi-value cellular automata model for heterogeneous bicycle traffic flow taking the higher maximum speed of electric bicycles into consideration. The update rules of both regular and electric bicycles are improved, with maximum speeds of two and three cells per second respectively. Numerical simulation results for deterministic and stochastic cases are obtained. The fundamental diagrams and multiple states effects under different model parameters are analyzed and discussed. Field observations were made to calibrate the slowdown probabilities. The results imply that the improved extended Burgers cellular automata (IEBCA) model is more consistent with the field observations than previous models and greatly enhances the realism of the bicycle traffic model.
A traffic analyzer for multiple SpaceWire links
NASA Astrophysics Data System (ADS)
Liu, Scige J.; Giusi, Giovanni; Di Giorgio, Anna M.; Vertolli, Nello; Galli, Emanuele; Biondi, David; Farina, Maria; Pezzuto, Stefano; Spinoglio, Luigi
2014-07-01
Modern space missions are becoming increasingly complex: the interconnection of the units in a satellite is now a network of terminals linked together through routers, where devices with different levels of automation and intelligence share the same data network. The traceability of network transactions is performed mostly at terminal level through log analysis, and hence it is difficult to verify in real time the reliability of the interconnections and the interchange protocols. To improve and ease traffic analysis in a SpaceWire network, we implemented a low-level link analyzer with the specific goal of simplifying the integration and test phases in the development of space instrumentation. The traffic analyzer collects signals coming from pod probes connected in series on the links of interest between two SpaceWire terminals. With respect to standard traffic analyzers, the design of this new tool includes the possibility to internally reshape the LVDS signal. This improvement increases the robustness of the analyzer against environmental noise and guarantees a deterministic delay on all analyzed signals. The analyzer core is implemented on a Xilinx FPGA, programmed to decode the bidirectional LVDS signals at link and network level. The core then packetizes protocol characters into homogeneous sets of time-ordered events. The analyzer provides time-tagging functionality for each character set, with a precision down to the FPGA clock, i.e., about 20 ns in the adopted hardware environment. The use of a common time reference for each character stream allows synchronous performance measurements. The collected information is then routed to an external computer for quick analysis via a high-speed USB2 connection. With this analyzer it is possible to verify link performance in terms of induced delays in the transmitted signals. A case study focused on the analysis of Time-Code synchronization in the presence of a SpaceWire Router is
Ground Motion and Variability from 3-D Deterministic Broadband Simulations
NASA Astrophysics Data System (ADS)
Withers, Kyle Brett
The accuracy of earthquake source descriptions is a major limitation in high-frequency (> 1 Hz) deterministic ground motion prediction, which is critical for performance-based design by building engineers. With the recent addition of realistic fault topography in 3D simulations of earthquake source models, ground motion can be calculated deterministically and more realistically up to higher frequencies. We first introduce a technique to model frequency-dependent attenuation and compare its impact on strong ground motions recorded for the 2008 Chino Hills earthquake. Then, we model dynamic rupture propagation along rough faults up to 8 Hz, for both a generic strike-slip event and blind-thrust scenario earthquakes matching the fault geometry of the 1994 Mw 6.7 Northridge earthquake. We incorporate frequency-dependent attenuation via a power law above a reference frequency, of the form Q(f) = Q_0 f^n, with high accuracy down to Q values of 15, and include nonlinear effects via Drucker-Prager plasticity. We model the region surrounding the fault with and without small-scale medium complexity, in both a 1D layered model characteristic of southern California rock and a 3D medium extracted from the SCEC CVMSi.426 including a near-surface geotechnical layer. We find that the spectral accelerations from our models are within 1-2 interevent standard deviations of recent ground motion prediction equations (GMPEs) and compare well with recordings from strong ground motion stations at both short and long periods. At periods shorter than 1 second, Q(f) is needed to match the decay of spectral acceleration seen in the GMPEs as a function of distance from the fault. We find that the similarity between the intraevent variability of our simulations and observations increases when small-scale heterogeneity and plasticity are included, which is extremely important because uncertainty in ground motion estimates dominates the overall uncertainty in seismic risk. In addition to GMPEs, we compare with simple
Effect of degree correlations on networked traffic dynamics
NASA Astrophysics Data System (ADS)
Sun, Jin-Tu; Wang, Sheng-Jun; Huang, Zi-Gang; Wang, Ying-Hai
2009-08-01
In order to enhance the transport capacity of scale-free networks, we study the relation between degree correlation and the transport capacity of the network. We calculate the degree-degree correlation coefficient, the maximal betweenness, and the critical value of the generating rate Rc (traffic congestion occurs for R > Rc). Numerical experiments indicate that both assortative mixing and disassortative mixing can enhance the transport capacity. We also reveal how the network structure affects the transport capacity: assortative (disassortative) mixing changes the distribution of node betweenness, and as a result the traffic through the highest-degree nodes decreases while the traffic through initially idle nodes increases.
Traffic-driven SIR epidemic spreading in networks
NASA Astrophysics Data System (ADS)
Pu, Cunlai; Li, Siyuan; Yang, XianXia; Xu, Zhongqi; Ji, Zexuan; Yang, Jian
2016-03-01
We study SIR epidemic spreading in networks driven by traffic dynamics, which are in turn governed by static routing protocols. We obtain through simulation the maximum instantaneous population of infected nodes and the maximum population of ever-infected nodes. We find that, in general, a more balanced load distribution leads to a more intense and widespread epidemic in networks. Increasing either the average node degree or the homogeneity of the degree distribution facilitates epidemic spreading. When the packet generation rate ρ is small, increasing ρ favors epidemic spreading. However, when ρ is large enough, traffic congestion appears, which inhibits epidemic spreading.
2015-01-01
Frequency seriation played a key role in the formation of archaeology as a discipline due to its ability to generate chronologies. Interest in its utility for exploring issues of contemporary interest beyond chronology, however, has been limited. This limitation is partly due to a lack of quantitative algorithms that can be used to build deterministic seriation solutions. When the number of assemblages grows beyond a handful, the resources required to evaluate the possible permutations easily outstrip available computing capacity. On the other hand, probabilistic approaches to creating seriations offer a computationally manageable alternative but rely upon a compressed description of the data to order assemblages. This compression removes the ability to use all of the features of the data to fit the seriation model, obscures violations of the model, and thus lessens our ability to understand the degree to which the resulting order is chronological, spatial, or a mixture. Recently, frequency seriation has been reconceived as a general method for studying the structure of cultural transmission through time and across space. The use of an evolution-based framework renews the potential of seriation but also calls for a computationally feasible algorithm that is capable of producing solutions under varying configurations, without manual trial-and-error fitting. Here, we introduce the Iterative Deterministic Seriation Solution (IDSS) for constructing frequency seriations, an algorithm that dramatically constrains the search for potentially valid orders of assemblages. Our initial implementation of IDSS does not solve all the problems of seriation, but it begins to move toward a resolution of a long-standing problem in archaeology while opening up new avenues of research into the study of cultural relatedness. We demonstrate the utility of IDSS using late prehistoric decorated ceramics from the Mississippi River Valley. The results compare favorably to
Validation of a Deterministic Vibroacoustic Response Prediction Model
NASA Technical Reports Server (NTRS)
Caimi, Raoul E.; Margasahayam, Ravi
1997-01-01
This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.
Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms
NASA Astrophysics Data System (ADS)
Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui
2013-10-01
Efficient electrocatalysts for the many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for introducing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning the inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than that of DSCs fabricated with Pt. This successful strategy provides a rational design route for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the transparent conductive oxide (TCO) commonly used in DSCs can be turned into a counter electrode material means that, besides decreasing the cost, the device structure and processing techniques of DSCs can be simplified in the future.
Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations
Leininger, L D
2004-10-26
This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high-fidelity finite element and discrete element codes on massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when the intelligence on which the modeling is based is itself uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability-of-damage curves are computed that account for uncertainty within the sample and enable the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
Deterministic processes vary during community assembly for ecologically dissimilar taxa
Powell, Jeff R.; Karunaratne, Senani; Campbell, Colin D.; Yao, Huaiying; Robinson, Lucinda; Singh, Brajesh K.
2015-01-01
The continuum hypothesis states that both deterministic and stochastic processes contribute to the assembly of ecological communities. However, the contextual dependency of these processes remains an open question that imposes strong limitations on predictions of community responses to environmental change. Here we measure community and habitat turnover across multiple vertical soil horizons at 183 sites across Scotland for bacteria and fungi, both dominant and functionally vital components of all soils, which nevertheless differ substantially in their growth habit and dispersal capability. We find that habitat turnover is the primary driver of bacterial community turnover in general, although its importance decreases with increasing isolation and disturbance. Fungal communities, however, exhibit a highly stochastic assembly process, both neutral and non-neutral in nature, largely independent of disturbance. These findings suggest that an increased focus on dispersal limitation and biotic interactions is necessary to manage and conserve the key ecosystem services provided by these assemblages. PMID:26436640
Location deterministic biosensing from quantum-dot-nanowire assemblies
Liu, Chao; Kim, Kwanoh; Fan, D. L.
2014-08-25
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled Gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoretic (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.
Deterministic single-file dynamics in collisional representation.
Marchesoni, F; Taloni, A
2007-12-01
We re-examine numerically the diffusion of a deterministic, or ballistic, single file with preassigned velocity distribution (Jepsen's gas) from a collisional viewpoint. For a two-modal velocity distribution, where half the particles have velocity +/-c, the collisional statistics is analytically proven to reproduce the continuous-time representation. For a three-modal velocity distribution with equal fractions, where fewer than 1/2 of the particles have velocity +/-c and the remaining particles are at rest, the collisional process is shown to be inhomogeneous; its stationary properties are discussed here by combining exact and phenomenological arguments. Collisional memory effects are then related to the negative power-law tails in the velocity autocorrelation functions, predicted earlier in the continuous-time formalism. Numerical and analytical results for Gaussian and four-modal Jepsen's gases are also reported for the sake of comparison.
Fisher-Wright model with deterministic seed bank and selection.
Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel
2017-04-01
Seed banks are a common characteristic of many plant species, allowing the storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection, coupled with a deterministic seed bank, assuming that the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path for approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached.
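The coupled dynamics can be caricatured in a few lines: offspring are sampled binomially, as in a standard Wright-Fisher step, but from the selection-weighted average allele frequency of the last m generations rather than from the parental generation alone. This is a hypothetical minimal sketch (function and parameter names are illustrative; the paper's diffusion and delay-equation analysis is far more general):

```python
import random

def wf_seedbank_step(history, N, s, m):
    """One generation of a toy Wright-Fisher model with a deterministic seed
    bank of length m and selection coefficient s favouring allele A.
    history holds allele frequencies of past generations; illustrative only."""
    bank = history[-m:]                      # last m generations' frequencies
    p = sum(bank) / len(bank)                # deterministic seed-bank average
    w = p * (1 + s) / (p * (1 + s) + (1 - p))  # frequency after selection
    k = sum(random.random() < w for _ in range(N))  # binomial sampling of N offspring
    history.append(k / N)
    return history

random.seed(1)
hist = [0.5]                                 # start at frequency 1/2
for _ in range(200):
    wf_seedbank_step(hist, N=100, s=0.05, m=3)
print(round(hist[-1], 2))
```

Averaging over the bank damps generation-to-generation fluctuations, which is the intuition behind the paper's finding that seed banks slow the approach to mutation-selection equilibrium while sharpening the imprint of selection.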
Reinforcement learning output feedback NN control using deterministic learning technique.
Xu, Bin; Yang, Chenguang; Shi, Zhongke
2014-03-01
In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs: the critic NN is used to approximate the strategic utility function, and the action NN is employed to minimize both the strategic utility function and the tracking error. A deterministic learning technique is employed to guarantee that the partial persistent excitation condition of the internal states is satisfied during tracking control to a periodic reference orbit. The uniform ultimate boundedness of the closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control.
Deterministic secure communications using two-mode squeezed states
Marino, Alberto M.; Stroud, C. R. Jr.
2006-08-15
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state.
Safe microburst penetration techniques: A deterministic, nonlinear, optimal control approach
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1987-01-01
A relatively large amount of computer time was used for the calculation of an optimal trajectory, but it is subject to reduction with moderate effort. The Deterministic, Nonlinear, Optimal Control algorithm yielded excellent aircraft performance in trajectory tracking for the given microburst. It did so by varying the angle of attack to counteract the lift effects of microburst-induced airspeed variations. Throttle saturation and aerodynamic stall limits were not a problem for the case considered, showing that the aircraft's performance capabilities were not exceeded by the given wind field. All closed-loop control laws previously considered performed very poorly in comparison, and therefore come nowhere near taking full advantage of aircraft performance.
Capillary-mediated interface perturbations: Deterministic pattern formation
NASA Astrophysics Data System (ADS)
Glicksman, Martin E.
2016-09-01
Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.
A Deterministic Computational Procedure for Space Environment Electron Transport
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.
2010-01-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing the numerous repetitive transport calculations essential for electron radiation exposure assessments of complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean-free-path and average-trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made, indicating that accuracy is not sacrificed for computational speed.
YALINA analytical benchmark analyses using the deterministic ERANOS code system.
Gohar, Y.; Aliberti, G.; Nuclear Engineering Division
2009-08-31
The growing stockpile of nuclear waste constitutes a severe challenge for mankind for more than one hundred thousand years. To reduce the radiotoxicity of the nuclear waste, the Accelerator Driven System (ADS) has been proposed. One of the most important issues of ADS technology is the choice of the appropriate neutron spectrum for the transmutation of Minor Actinides (MA) and Long Lived Fission Products (LLFP). This report presents the analytical analyses obtained with the deterministic ERANOS code system for the YALINA facility within: (a) the collaboration between Argonne National Laboratory (ANL) of the USA and the Joint Institute for Power and Nuclear Research (JIPNR) Sosny of Belarus; and (b) the IAEA coordinated research projects for accelerator driven systems (ADS). This activity is conducted as part of the Russian Research Reactor Fuel Return (RRRFR) Program and the Global Threat Reduction Initiative (GTRI) of DOE/NNSA.
Deterministic Production of Photon Number States via Quantum Feedback Control
NASA Astrophysics Data System (ADS)
Geremia, J. M.
2006-05-01
It is well-known that measurements reduce the state of a quantum system, at least approximately, to an eigenstate of the operator associated with the physical property being measured. Here, we employ a continuous measurement of cavity photon number to achieve a robust, nondestructively verifiable procedure for preparing number states of an optical cavity mode. Such Fock states are highly sought after for the enabling role they play in quantum computing, networking and precision metrology. Furthermore, we demonstrate that the particular Fock state produced in each application of the continuous photon number measurement can be controlled using techniques from real-time quantum feedback control. The result of the feedback-stabilized measurement is a deterministic source of (nearly ideal) cavity Fock states. An analysis of feedback stability and the experimental viability of a quantum optical implementation currently underway at the University of New Mexico will be presented.
Conservative deterministic spectral Boltzmann solver near the grazing collisions limit
NASA Astrophysics Data System (ADS)
Haack, Jeffrey R.; Gamba, Irene M.
2012-11-01
We present new results building on the conservative deterministic spectral method for the space-homogeneous Boltzmann equation developed by Gamba and Tharkabhushaman. This approach is a two-step process that acts on the weak form of the Boltzmann equation and uses the machinery of the Fourier transform to reformulate the collisional integral into a weighted convolution in Fourier space. A constrained optimization problem is solved to preserve the mass, momentum, and energy of the resulting distribution. Within this framework we have extended the formulation to the more general case of collision operators with anisotropic scattering mechanisms, which requires a new formulation of the convolution weights. We also derive the grazing collisions limit for the method, and show that it is consistent with the Fokker-Planck-Landau equations as the grazing collisions parameter goes to zero.
Deterministic and stochastic modeling of aquifer stratigraphy, South Carolina
Miller, R.B.; Castle, J.W.; Temples, T.J.
2000-04-01
Deterministic and stochastic methods of three-dimensional hydrogeologic modeling are applied to characterization of contaminated Eocene aquifers at the Savannah River Site, South Carolina. The results address several important issues, including the use of multiple types of data in creating high-resolution aquifer models and the application of sequence-stratigraphic constraints. Specific procedures used include defining grid architecture stratigraphically, upscaling, modeling lithologic properties, and creating multiple equiprobable realizations of aquifer stratigraphy. An important question answered by the study is how to incorporate gamma-ray borehole-geophysical data in areas of anomalous log response, which occurs commonly in aquifers and confining units of the Atlantic Coastal Plain and other areas. To overcome this problem, gamma-ray models were conditioned to grain-size and lithofacies realizations. The investigation contributes to identifying potential pathways for downward migration of contaminants, which have been detected in confined aquifers at the modeling site. The approach followed in this investigation produces quantitative, stratigraphically constrained, geocellular models that incorporate multiple types of data from borehole-geophysical logs and continuous cores. The use of core-based stochastic realizations in conditioning deterministic models provides the advantage of incorporating lithologic information based on direct observations of cores rather than using only indirect measurements from geophysical logs. The high resolution of the models is demonstrated by the representation of thin, discontinuous clay beds that act as local barriers to flow. The models are effective in depicting the contrasts in geometry and heterogeneity between sheet-like nearshore-transgressive sands and laterally discontinuous sands of complex shoreline environments.
Application of Stochastic and Deterministic Approaches to Modeling Interstellar Chemistry
NASA Astrophysics Data System (ADS)
Pei, Yezhe
This work concerns simulations of interstellar chemistry using the deterministic rate equation (RE) method and the stochastic moment equation (ME) method. The primordial, metal-poor interstellar medium (ISM) is of interest here, since the so-called "Population II" stars could have formed in this environment during the Epoch of Reionization in the early universe. We build a gas-phase model using the RE scheme to describe ionization-powered interstellar chemistry. We demonstrate that OH replaces CO as the most abundant metal-bearing molecule in such interstellar clouds of the early universe. Grain surface reactions play an important role in studies of astrochemistry, but the lack of an accurate yet effective simulation method still presents a challenge, especially for large, practical gas-grain systems. We develop a hybrid scheme of moment equations and rate equations (HMR) for large gas-grain networks to model astrochemical reactions in interstellar clouds. Specifically, we have used a large chemical gas-grain model, with stochastic moment equations to treat the surface chemistry and deterministic rate equations to treat the gas-phase chemistry, to simulate astrochemical systems such as the ISM in the Milky Way, the Large Magellanic Cloud (LMC), and the Small Magellanic Cloud (SMC). We compare the results to those of pure rate equations and modified rate equations, and discuss how moment equations improve our theoretical modeling and how the abundances of the assorted species change with varied metallicity. We also model the observed composition of H2O, CO, and CO2 ices toward Young Stellar Objects in the LMC and show that the HMR method gives a better match to the observations than the pure RE method.
Synchronized flow in oversaturated city traffic.
Kerner, Boris S; Klenov, Sergey L; Hermanns, Gerhard; Hemmerle, Peter; Rehborn, Hubert; Schreckenberg, Michael
2013-11-01
Based on numerical simulations with a stochastic three-phase traffic flow model, we reveal that moving queues (moving jams) in oversaturated city traffic dissolve at some distance upstream of the traffic signal while transforming into synchronized flow. It is found that, as in highway traffic [Kerner, Phys. Rev. E 85, 036110 (2012)], such a jam-absorption effect in city traffic is explained by a strong driver's speed adaptation: Time headways (space gaps) between vehicles increase upstream of a moving queue (moving jam), resulting in moving queue dissolution. It turns out that at given traffic signal parameters, the stronger the speed adaptation effect, the shorter the mean distance between the signal location and the road location at which moving queues dissolve fully and oversaturated traffic consists of synchronized flow only. A comparison of the synchronized flow in city traffic found in this Brief Report with synchronized flow in highway traffic is made.
Percolation properties in a traffic model
NASA Astrophysics Data System (ADS)
Wang, Feilong; Li, Daqing; Xu, Xiaoyun; Wu, Ruoqian; Havlin, Shlomo
2015-11-01
As a dynamical complex system, traffic is characterized by a transition from free flow to congestion, which has mostly been studied on highways. However, despite its importance for developing congestion mitigation strategies, an understanding of this common traffic phenomenon at the city scale is still missing. An open question is how traffic in a network collapses from globally efficient flow to isolated local flows in small clusters, i.e. the question of traffic percolation. Here we study traffic percolation properties on a lattice by simulation of an agent-based traffic model. A critical traffic volume in this model distinguishes the free state from the congested state of traffic. Our results show that the threshold of traffic percolation decreases with increasing traffic volume and reaches a minimum value at the critical traffic volume. We show that this minimal threshold results from the longest spatial correlation between traffic flows, which occurs at the critical traffic volume. These findings may help to develop congestion mitigation strategies from a network view.
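The collapse from global to fragmented flow can be sketched with ordinary site percolation on a lattice: open sites stand in for functional road segments, and the largest connected cluster measures how "global" the flow still is. This is a generic stand-in (random open/closed grid, illustrative parameters), not the paper's agent-based traffic model:

```python
import random
from collections import deque

def largest_cluster_fraction(n, p, seed=0):
    """Site percolation on an n x n lattice: fraction of open sites that
    belong to the largest 4-connected cluster. High fraction ~ one global
    flow network; low fraction ~ isolated local flows."""
    rng = random.Random(seed)
    open_ = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    total = sum(row.count(True) for row in open_)
    best = 0
    for i in range(n):
        for j in range(n):
            if open_[i][j] and not seen[i][j]:
                # BFS over the 4-connected cluster containing (i, j)
                size, q = 0, deque([(i, j)])
                seen[i][j] = True
                while q:
                    x, y = q.popleft()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and open_[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            q.append((u, v))
                best = max(best, size)
    return best / max(total, 1)
```

Sweeping p across the site-percolation threshold (about 0.593 on the square lattice) reproduces the qualitative transition the abstract describes.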
Preliminary Benefits Assessment of Traffic Aware Strategic Aircrew Requests (TASAR)
NASA Technical Reports Server (NTRS)
Henderson, Jeff; Idris, Husni; Wing, David J.
2012-01-01
While en route, aircrews submit trajectory change requests to air traffic control (ATC) to better meet their objectives, including reduced delays, reduced fuel burn, and passenger comfort. Aircrew requests are currently made with limited to no information on surrounding traffic. Consequently, these requests are uninformed about a key ATC objective, ensuring traffic separation, and are therefore less likely to be accepted than requests that are informed by surrounding traffic and avoid creating conflicts. This paper studies the benefits of providing aircrews with on-board decision support to generate optimized trajectory requests that are probed and cleared of known separation violations prior to issuing the request to ATC. These informed requests are referred to as traffic aware strategic aircrew requests (TASAR) and leverage traffic surveillance information available through Automatic Dependent Surveillance-Broadcast (ADS-B) In capability. Preliminary fast-time simulation results show increased benefits with longer stage lengths, since beneficial trajectory changes can be applied over a longer distance. Larger benefits were also observed between large hub airports as compared to other airport sizes. On average, an aircraft equipped with TASAR reduced its travel time by about one to four minutes per operation and fuel burn by about 50 to 550 lbs per operation, depending on the objective of the aircrew (time, fuel, or a weighted combination of time and fuel), class of airspace user, and aircraft type. These preliminary results are based on analysis of approximately one week of traffic in July 2012; additional analysis on a larger data set is planned to confirm these initial findings.
Fast approximation of self-similar network traffic
Paxson, V.
1995-01-01
Recent network traffic studies argue that network arrival processes are much more faithfully modeled using statistically self-similar processes than traditional Poisson processes [LTWW94a, PF94]. One difficulty in dealing with self-similar models is how to efficiently synthesize traces (sample paths) corresponding to self-similar traffic. We present a fast Fourier transform method for synthesizing approximate self-similar sample paths and assess its performance and validity. We find that the method is as fast as or faster than existing methods and appears to generate a closer approximation to true self-similar sample paths than the other known fast method (Random Midpoint Displacement). We then discuss issues in using such synthesized sample paths for simulating network traffic, and how an approximation used by our method can dramatically speed up evaluation of Whittle's estimator for H, the Hurst parameter giving the strength of long-range dependence present in a self-similar time series.
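A minimal spectral-synthesis sketch in the spirit of the FFT method: shape a spectrum like fractional Gaussian noise, randomize phases, and inverse-transform. The pure power law S(f) ∝ f^(1-2H) is a simplification of the true fGn spectrum used in the paper, and n is assumed even:

```python
import numpy as np

def synthesize_fgn(n, hurst, seed=None):
    """Approximate fractional Gaussian noise via FFT spectral synthesis.

    Simplified sketch: power-law spectral shape f**(1 - 2H), exponential
    multipliers to mimic periodogram variability, random phases, then an
    inverse FFT of a Hermitian-symmetric spectrum (so the result is real).
    Assumes n is even.
    """
    rng = np.random.default_rng(seed)
    m = n // 2
    f = np.arange(1, m + 1) / n                 # positive frequencies
    power = f ** (1.0 - 2.0 * hurst)            # power-law spectral shape
    power *= rng.exponential(1.0, m)            # periodogram-like scatter
    phase = rng.uniform(0.0, 2.0 * np.pi, m)
    half = np.sqrt(power) * np.exp(1j * phase)
    # DC term 0, positive frequencies, then mirrored conjugates
    spectrum = np.concatenate(([0.0], half, np.conj(half[-2::-1])))
    return np.fft.ifft(spectrum).real[:n]

trace = synthesize_fgn(4096, hurst=0.8, seed=1)
```

A Hurst parameter near 1 yields visibly long-range-dependent traces; H = 0.5 degenerates to uncorrelated noise.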
Spontaneous density fluctuations in granular flow and traffic
NASA Astrophysics Data System (ADS)
Herrmann, Hans J.
It is known that spontaneous density waves appear in granular material flowing through pipes or hoppers. A similar phenomenon is known from traffic jams on highways. Using numerical simulations, we show that several types of waves exist and find that the density fluctuations follow a power-law spectrum. We also investigate one-dimensional traffic models. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car. Lattice gas and lattice Boltzmann models reproduce the experimentally observed effects. Density waves are spontaneously generated when the viscosity has a non-linear dependence on density or shear rate, as is the case in traffic or granular flow.
Modeling the Environmental Impact of Air Traffic Operations
NASA Technical Reports Server (NTRS)
Chen, Neil
2011-01-01
There is increased interest to understand and mitigate the impacts of air traffic on the climate, since greenhouse gases, nitrogen oxides, and contrails generated by air traffic can have adverse impacts on the climate. The models described in this presentation are useful for quantifying these impacts and for studying alternative environmentally aware operational concepts. These models have been developed by leveraging and building upon existing simulation and optimization techniques developed for the design of efficient traffic flow management strategies. Specific enhancements to the existing simulation and optimization techniques include new models that simulate aircraft fuel flow, emissions and contrails. To ensure that these new models are beneficial to the larger climate research community, the outputs of these new models are compatible with existing global climate modeling tools like the FAA's Aviation Environmental Design Tool.
Enterprise network control and management: traffic flow models
NASA Astrophysics Data System (ADS)
Maruyama, William; George, Mark S.; Hernandez, Eileen; LoPresto, Keith; Uang, Yea
1999-11-01
The exponential growth and dramatic increase in demand for network bandwidth is expanding the market for broadband satellite networks. It is critical to rapidly deliver ubiquitous satellite communication networks that are differentiated by lower cost and increased Quality of Service (QoS). There is a need to develop new network architectures, control and management systems to meet the future commercial and military traffic requirements, services and applications. The next generation communication networks must support legacy and emerging network traffic while providing user negotiated levels of QoS. Network resources control algorithms must be designed to provide the guaranteed performance levels for voice, video and data having different service requirements. To evaluate network architectures and performance, it is essential to understand the network traffic characteristics.
Stochastic Generator of Chemical Structure. 3. Reaction Network Generation
FAULON,JEAN-LOUP; SAULT,ALLEN G.
2000-07-15
A new method to generate chemical reaction networks is proposed. A particularity of the method is that network generation and mechanism reduction are performed simultaneously using sampling techniques. Our method is tested on hydrocarbon thermal cracking. Results and theoretical arguments demonstrate that our method scales in polynomial time while other deterministic network generators scale in exponential time. This finding offers the possibility of investigating complex reacting systems such as those studied in petroleum refining and combustion.
Evolutionary Concepts for Decentralized Air Traffic Flow Management
NASA Technical Reports Server (NTRS)
Adams, Milton; Kolitz, Stephan; Milner, Joseph; Odoni, Amedeo
1997-01-01
Alternative concepts for modifying the policies and procedures under which the air traffic flow management system operates are described, and an approach to the evaluation of those concepts is discussed. Here, air traffic flow management includes all activities related to the management of the flow of aircraft and related system resources from 'block to block.' The alternative concepts represent stages in the evolution from the current system, in which air traffic management decision making is largely centralized within the FAA, to a more decentralized approach wherein the airlines and other airspace users collaborate in air traffic management decision making with the FAA. The emphasis in the discussion is on a viable medium-term partially decentralized scenario representing a phase of this evolution that is consistent with the decision-making approaches embodied in proposed Free Flight concepts for air traffic management. System-level metrics for analyzing and evaluating the various alternatives are defined, and a simulation testbed developed to generate values for those metrics is described. The fundamental issue of modeling airline behavior in decentralized environments is also raised, and an example of such a model, which deals with the preservation of flight bank integrity in hub airports, is presented.
Impact of traffic-related air pollution on health.
Jakubiak-Lasocka, J; Lasocki, J; Siekmeier, R; Chłopek, Z
2015-01-01
Road transport contributes significantly to air quality problems through vehicle emissions, which have various detrimental impacts on public health and the environment. The aim of this study was to assess the impact of traffic-related air pollution on the health of Warsaw citizens, following the basics of the Health Impact Assessment (HIA) method, and to evaluate its social cost. PM10 was chosen as an indicator of traffic-related air pollution. Exposure-response functions between air pollution and health impacts were employed. The value of statistical life (VSL) approach was used to estimate the cost of mortality attributable to traffic-related air pollution. Costs of hospitalizations and restricted activity days were assessed based on the cost of illness (COI) method. According to the calculations, about 827 Warsaw citizens die each year as a result of traffic-related air pollution. In addition, about 566 and 250 hospital admissions due to cardiovascular and respiratory diseases, respectively, and more than 128,453 restricted activity days can be attributed to traffic emissions. From the social perspective, these losses generate a cost of 1,604 million PLN (1 EUR approx. 4.2 PLN). This cost is very high; therefore, more attention should be paid to an integrated environmental health policy.
Traffic Aware Planner for Cockpit-Based Trajectory Optimization
NASA Technical Reports Server (NTRS)
Woods, Sharon E.; Vivona, Robert A.; Henderson, Jeffrey; Wing, David J.; Burke, Kelly A.
2016-01-01
The Traffic Aware Planner (TAP) software application is a cockpit-based advisory tool designed to be hosted on an Electronic Flight Bag and to enable and test the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR). The TASAR concept provides pilots with optimized route changes (including altitude) that reduce fuel burn and/or flight time, avoid interactions with known traffic, weather and restricted airspace, and may be used by the pilots to request a route and/or altitude change from Air Traffic Control. Developed using an iterative process, TAP's latest improvements include human-machine interface design upgrades and added functionality based on the results of human-in-the-loop simulation experiments and flight trials. Architectural improvements have been implemented to prepare the system for operational-use trials with partner commercial airlines. Future iterations will enhance coordination with airline dispatch and add functionality to improve the acceptability of TAP-generated route-change requests to pilots, dispatchers, and air traffic controllers.
NASA Astrophysics Data System (ADS)
Nicolay, S.; Brodie of Brodie, E. B.; Touchon, M.; d'Aubenton-Carafa, Y.; Thermes, C.; Arneodo, A.
2004-10-01
We use the continuous wavelet transform to perform a space-scale analysis of the AT and GC skews (strand asymmetries) in human genomic sequences, which have been shown to correlate with gene transcription. This study reveals the existence of a characteristic scale ℓc ≃ 25±10 kb that separates a monofractal long-range correlated noisy regime at small scales (ℓ<ℓc) from relaxational oscillatory behavior at large scales (ℓ>ℓc). We show that these large-scale nonlinear oscillations reveal an organization of the human genome into adjacent domains (≈400 kb) with preferential gene orientation. Using classical techniques from dynamical systems theory, we demonstrate that these relaxational oscillations display all the characteristic properties of the chaotic strange attractor behavior observed near homoclinic orbits of Shil'nikov type. We discuss the possibility that replication and gene regulation processes are governed by a low-dimensional dynamical system that displays deterministic chaos.
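A space-scale analysis of this kind can be sketched as a continuous wavelet transform computed by direct convolution at each scale. The Mexican-hat (Ricker) wavelet and the normalization below are illustrative choices, not necessarily the authors' analyzing wavelet:

```python
import numpy as np

def cwt_mexican_hat(signal, scales):
    """Continuous wavelet transform with a Mexican-hat (Ricker) wavelet.

    For each scale s, the wavelet is sampled on a support of +/- 4s and
    convolved with the signal; rows of the output are scales, columns
    are positions (a space-scale map).
    """
    signal = np.asarray(signal, float)
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1, dtype=float)
        psi = (1.0 - (t / s) ** 2) * np.exp(-0.5 * (t / s) ** 2)
        psi /= np.sqrt(s)                        # scale normalization
        rows.append(np.convolve(signal, psi, mode="same"))
    return np.array(rows)

coeffs = cwt_mexican_hat(np.sin(np.linspace(0, 20 * np.pi, 1000)), [2, 8, 32])
```

Ridges in such a map at a characteristic scale correspond to the oscillatory regimes the abstract describes above ℓc.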
NASA Technical Reports Server (NTRS)
Huber, Hans
2006-01-01
Air transport forms complex networks that can be measured in order to understand its structural characteristics and functional properties. Recent models for network growth (i.e., preferential attachment, etc.) remain stochastic and do not seek to understand other network-specific mechanisms that may account for their development in a more microscopic way. Air traffic is made up of many constituent airlines that are either privately or publicly owned and that operate their own networks. They follow more or less similar business policies each. The way these airline networks organize among themselves into distinct traffic distributions reveals complex interaction among them, which in turn can be aggregated into larger (macro-) traffic distributions. Our approach allows for a more deterministic methodology that will assess the impact of airline strategies on the distinct distributions for air traffic, particularly inside Europe. One key question this paper is seeking to answer is whether there are distinct patterns of preferential attachment for given classes of airline networks to distinct types of European airports. Conclusions about the advancing degree of concentration in this industry and the airline operators that accelerate this process can be drawn.
Traffic flow pattern and meteorology at two distinct urban junctions with impacts on air quality
NASA Astrophysics Data System (ADS)
Gokhale, Sharad
2011-04-01
Traffic passing through a junction undergoes different flow conditions and modal events, which result in dynamic fleet characteristics that generate more emissions, and in stronger vehicle-induced heat and wakes that obscure dispersion. Traffic operating at junctions often creates pockets of higher concentrations, the locations of which shift as a result of the combined effects of traffic dynamics and random airflow. This research examined the impacts of traffic dynamics and meteorology on the levels and locations of higher concentrations of the pollutants CO, NO2 and PM within the influence of a signalized traffic intersection and a conventional two-lane roundabout, in response to the varying flow conditions and emissions resulting from traffic operations. Three line-source dispersion models were used to determine the impact on air quality. Emissions were calculated for different scenarios developed from combinations of semi-empirical and field-based time- and space-mean speeds and lane-width-based density, for traffic in free, interrupted and congested-flow conditions. It was found that the locations of the highest concentrations within the domain change as traffic with different modal shares encounters different flow conditions at different times of day.
Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System
ERIC Educational Resources Information Center
Maiti, Alakes; Samanta, G. P.
2005-01-01
This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
Automatic drawing for traffic marking with MMS LIDAR intensity
NASA Astrophysics Data System (ADS)
Takahashi, G.; Takeda, H.; Shimano, Y.
2014-05-01
Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms the framework of this database. Road inventory mapping therefore has to be accurate and free of the variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing to map traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. Meanwhile, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings from MMS LIDAR data. The key idea is to extract lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note that this method processes every traffic marking. In this paper, we discuss a highly accurate, operator-independent method that applies the following steps: (1) binarizing LIDAR points by intensity and extracting the higher-intensity points; (2) generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) deleting arcs by length and generating outline polygons on the TIN; (4) generating buffers from the outline polygons; (5) extracting points within the buffers from the original LIDAR points; (6) extracting local-intensity-change points along scan lines from the extracted points; (7) extracting lines from the intensity-change points through a Hough transform; and (8) connecting lines to generate the automated traffic-marking mapping data.
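Step (7), line extraction via a Hough transform, can be sketched as a (θ, ρ) voting accumulator over candidate points; the grid resolutions, the function name, and returning only the single strongest peak are simplifications for illustration:

```python
import numpy as np

def hough_peak(points, n_theta=180, n_rho=200):
    """Return (theta, rho) of the strongest line x*cos(theta) + y*sin(theta) = rho
    found among 2-D points by Hough voting."""
    pts = np.asarray(points, float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).max() * np.sqrt(2) + 1.0
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    cos, sin = np.cos(thetas), np.sin(thetas)
    for x, y in pts:
        r = x * cos + y * sin                    # rho for every theta at once
        idx = np.round((r + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1        # one vote per (theta, rho) cell
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], rhos[r]
```

In the paper's pipeline the input points would be the local-intensity-change points from step (6), and all sufficiently voted peaks, not just the maximum, would be kept.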
Cellular automata for traffic simulations
NASA Astrophysics Data System (ADS)
Wolf, Dietrich E.
1999-02-01
Traffic phenomena such as the transition from free to congested flow, lane inversion and platoon formation can be accurately reproduced using cellular automata. Being computationally extremely efficient, they simulate large traffic systems many times faster than real time, so that predictions become feasible. A review of recent results is given. The presence of metastable states at the jamming transition is discussed in detail. A simple new cellular automaton is introduced, in which the interaction between cars is Galilei-invariant. It is shown that this type of interaction accounts for metastable states in a very natural way.
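The simplest such traffic automaton is CA rule 184 (a car advances one cell iff the cell ahead is empty), which can be written as a vectorized update on a periodic 0/1 road. This is a generic sketch of the classic rule, not the Galilei-invariant model introduced in the paper:

```python
import numpy as np

def step_rule184(road):
    """One synchronous update of CA rule 184 on a periodic road of 0/1 cells.

    A cell is occupied next step iff it was occupied and blocked (cell ahead
    full), or it was empty and the cell behind held a car that moves in.
    """
    nxt = np.roll(road, -1)                     # cell ahead
    prv = np.roll(road, 1)                      # cell behind
    return (road & nxt) | (prv & (1 - road))

road = np.array([1, 1, 0, 0, 1, 0, 0, 0])
road = step_rule184(road)                       # cars drift right, count conserved
```

Density is conserved by construction, and below density 1/2 the dynamics relaxes to free flow with every car moving each step.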
Traffic Flow Management and Optimization
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio
2014-01-01
This talk will present an overview of Traffic Flow Management (TFM) research at NASA Ames Research Center. Dr. Rios will focus on his work developing a large-scale, parallel approach to solving traffic flow management problems in the national airspace. In support of this talk, Dr. Rios will provide some background on operational aspects of TFM, as well as a discussion of some of the tools needed to perform such work, including a high-fidelity airspace simulator. Current, on-going research related to TFM data services in the national airspace system and general aviation will also be presented.
Dynamic Density: An Air Traffic Management Metric
NASA Technical Reports Server (NTRS)
Laudeman, I. V.; Shelden, S. G.; Branstrom, R.; Brasil, C. L.
1998-01-01
The definition of a metric of air traffic controller workload based on air traffic characteristics is essential to the development of both air traffic management automation and air traffic procedures. Dynamic density is a proposed concept for a metric that includes both traffic density (a count of aircraft in a volume of airspace) and traffic complexity (a measure of the complexity of the air traffic in a volume of airspace). It was hypothesized that a metric that includes terms that capture air traffic complexity will be a better measure of air traffic controller workload than current measures based only on traffic density. A weighted linear dynamic density function was developed and validated operationally. The proposed dynamic density function includes a traffic density term and eight traffic complexity terms. A unit-weighted dynamic density function was able to account for an average of 22% of the variance in observed controller activity not accounted for by traffic density alone. A comparative analysis of unit weights, subjective weights, and regression weights for the terms in the dynamic density equation was conducted. The best predictor of controller activity was the dynamic density equation with regression-weighted complexity terms.
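The weighted linear form described above amounts to the raw aircraft count plus a dot product of complexity measures with their weights. The term names below are hypothetical placeholders, since the abstract does not enumerate the eight complexity terms or their regression weights:

```python
def dynamic_density(aircraft_count, complexity_terms, weights):
    """Weighted linear dynamic density: traffic density (aircraft count in
    a volume of airspace) plus a weighted sum of traffic-complexity terms."""
    assert len(complexity_terms) == len(weights)
    return aircraft_count + sum(w * c for w, c in zip(weights, complexity_terms))

# Hypothetical sector snapshot: 12 aircraft and three illustrative
# complexity measures (e.g. heading changes, speed changes, proximity).
dd = dynamic_density(12, complexity_terms=[0.4, 1.5, 0.2], weights=[2.0, 1.0, 3.0])
```

With unit weights this reduces to the unit-weighted function evaluated in the paper; the regression-weighted variant simply substitutes fitted weights.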
An Evaluation Methodology for Traffic Awareness Displays
NASA Technical Reports Server (NTRS)
DeMaio, Joe; Dearing, Munro
2004-01-01
An evaluation methodology for traffic awareness displays for helicopters and other vertical/short takeoff aircraft was developed. Pilots of vertical/short takeoff aircraft will require more traffic information than pilots of conventional aircraft, to avoid both other vertical/short takeoff traffic and conventional traffic. The BF Goodrich Skywatch traffic advisory display was used as a candidate display to develop a procedure for evaluating the usefulness of such displays. Four high-time helicopter pilots participated in a 16-hour flight evaluation. They flew a closed circuit in the San Francisco Bay Area. In one half of the flights the evaluation pilot had the traffic advisory display as an aid in detecting and locating traffic; in the other half the display was not available. Data examined include measures of traffic advisory display performance and pilot performance in detecting traffic, as well as subjective workload and situation awareness data. The traffic advisory system did not help the pilots detect more traffic. The importance of detection to traffic awareness is discussed.
Time Relevance of Convective Weather Forecast for Air Traffic Automation
NASA Technical Reports Server (NTRS)
Chan, William N.
2006-01-01
The Federal Aviation Administration (FAA) is handling nearly 120,000 flights a day through its Air Traffic Management (ATM) system, and air traffic congestion is expected to increase substantially over the next 20 years. Weather-induced impacts on throughput and efficiency are the leading cause of flight delays, accounting for 70% of all delays, with convective weather accounting for 60% of all weather-related delays. To support the Next Generation Air Traffic System goal of operating at 3X current capacity in the NAS, ATC decision support tools are being developed to create advisories that assist controllers under all weather constraints. Initial development of these decision support tools did not integrate information regarding weather constraints such as thunderstorms and relied on an additional system to provide that information. Future decision support tools should move towards an integrated system in which weather constraints are factored into the advisory of a Decision Support Tool (DST). Several groups, such as NASA Ames, Lincoln Laboratory, and MITRE, are integrating convective weather data with DSTs. A survey of current convective weather forecast and observation data shows they span a wide range of temporal and spatial resolutions. Short-range convective observations can be obtained every 5 minutes, with longer-range forecasts out to several days updated every 6 hours. Today, short-range forecasts of less than 2 hours have a temporal resolution of 5 minutes; beyond 2 hours, forecasts have a much lower temporal resolution of typically 1 hour. Spatial resolutions vary from 1 km for short-range to 40 km for longer-range forecasts. Improving the accuracy of long-range convective forecasts is a major challenge. A report published by the National Research Council states that improvements in convective forecasts for the 2 to 6 hour time frame will only be achieved for a limited set of convective phenomena in the next 5 to 10 years. Improved longer-range forecasts will be probabilistic
The deterministic prediction of failure of low pressure steam turbine disks
Liu, Chun; Macdonald, D.D.
1993-05-01
Localized corrosion phenomena, including pitting corrosion, stress corrosion cracking, and corrosion fatigue, are the principal causes of corrosion-induced damage in electric power generating facilities and typically result in more than 50% of the unscheduled outages. Prediction of damage, so that repairs and inspections can be made during scheduled outages, could have an enormous impact on the economics of electric power generation. To date, prediction of corrosion damage has been made on the basis of empirical/statistical methods that have proven to be insufficiently robust and accurate to form the basis for the desired inspection/repair protocol. In this paper, we describe a deterministic method for predicting localized corrosion damage. We have used the method to illustrate how pitting corrosion initiates stress corrosion cracking (SCC) for low pressure steam turbine disks downstream of the Wilson line, where a thin condensed liquid layer exists on the steel disk surfaces. Our calculations show that the SCC initiation and propagation are sensitive to the oxygen content of the steam, the environment in the thin liquid condensed layer, and the stresses that the disk experiences in service.
Traffic analysis on multimedia data services
NASA Astrophysics Data System (ADS)
Bescos, Jesus; Martinez, Jose M.; Cisneros, Guillermo
1996-11-01
This paper is motivated by the advantages of having transactional value-added services (i.e. interactive multimedia ones) instead of solely providing information on electronic supports (i.e. CD-ROMs). Investment in network resource provision, or even in adding infrastructure to existing networks, seems to be the most cost-effective solution for establishing this kind of service. Support for multimedia data services requires a full characterization of both the forward and return channels (usually highly asymmetric) for one or several users, so that proper resources can be allocated or efficient new infrastructures can be designed. This paper first describes a fully interactive, general-purpose multimedia client/server application (a currently working one) that provides the user with a common interface to remotely access heterogeneous databases. Second, it presents the test architecture and configuration established to obtain a representative number of traffic measures that a single instance of this multimedia application generates over a TCP network. The data are then analyzed to extract the QoS traffic parameters that define network capabilities for both the forward and return communication channels, first for a single user and then to optimize a multi-user environment. Next, a methodology for accurate characterization of the multi-user situation is presented. Finally, arguments for extrapolating the results to most applications currently running over the Internet are discussed.
Traffic Aware Strategic Aircrew Requests (TASAR)
NASA Technical Reports Server (NTRS)
Wing, David J.
2014-01-01
The Traffic Aware Strategic Aircrew Request (TASAR) concept offers onboard automation for the purpose of advising the pilot of traffic-compatible trajectory changes that would be beneficial to the flight. A fast-time simulation study was conducted to assess the benefits of TASAR to Alaska Airlines. The simulation compares historical trajectories without TASAR to trajectories developed with TASAR and evaluated by controllers against their objectives. It was estimated that between 8,000 and 12,000 gallons of fuel and 900 to 1,300 minutes could be saved annually per aircraft. These savings were applied fleet-wide to produce an estimated annual cost savings to Alaska Airlines in excess of $5 million due to fuel, maintenance, and depreciation cost savings. Switching to a more wind-optimal trajectory was found to be the use case that generated the highest benefits of the three TASAR use cases analyzed. Alaska TASAR requests peaked at four to eight requests per hour in high-altitude Seattle Center sectors south of Seattle-Tacoma airport.
D. Scott Lucas; D. S. Lucas
2005-09-01
An LDRD (Laboratory Directed Research and Development) project is underway at the Idaho National Laboratory (INL) to apply the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the development of Attila models for ATR, capabilities of Attila, the generation and use of different cross-section libraries, and comparisons to ATR data, MCNP, MCNPX and future applications.
Overview. Traffic Safety Facts, 2000.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on U.S. motor vehicle and traffic safety. Data include: (1) motor vehicle occupants and non-occupants killed and injured, 1990-2000; (2) persons killed and injured, and fatality and injury rates, 1990-2000; (3) restraint use rates for passenger car occupants in fatal crashes, 1990 and 2000; (4)…
Traffic Safety Facts, 2001. Overview.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on U.S. motor vehicle and traffic safety. Data include: (1) motor vehicle occupants and non-occupants killed and injured, 1991-2001; (2) persons killed and injured, and fatality and injury rates, 1991-2001; (3) restraint use rates for passenger car occupants in fatal crashes, 1991 and 2001; (4)…
Traffic Safety Facts, 2001: Children.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on the incidence of U.S. motor vehicle-related accidents and fatalities involving children. Data include: (1) total traffic fatalities among children 0-14 years old, by age group, 1991-2001; (2) total pedestrian fatalities among children 0-14 years old, by age group, 1991-2001; (3) total pedalcyclist…
Children. Traffic Safety Facts, 2000.
ERIC Educational Resources Information Center
National Highway Traffic Safety Administration (DOT), Washington, DC.
This document provides statistical information on the incidence of U.S. motor vehicle-related accidents and fatalities involving children. Data include: (1) total traffic fatalities among children 0-14 years old, by age group, 1990-2000; (2) total pedestrian fatalities among children 0-14 years old, by age group, 1990-2000; (3) total pedalcyclist…
Air-traffic surveillance systems
NASA Technical Reports Server (NTRS)
Macdoran, P. F.
1979-01-01
Passive ground-based radio-interferometry systems (RILS) monitor local air traffic by determining aircraft position in planes defined by surveillance area. Similar RILS arrangements are used to determine aircraft positions in three dimensions when combined with azimuth and range information obtained by radar. Information helps determine three-dimensional aircraft position without expensive encoding altimeters.
Broadcast control of air traffic
NASA Technical Reports Server (NTRS)
Litchford, G. B.
1972-01-01
The development of a system of broadcast control for improved flight safety and air traffic control is discussed. The system provides a balance of equality between improved cockpit guidance and control capability and ground control in order to provide the pilot with a greater degree of participation. The manner in which the system is operated and the equipment required for safe operation are examined.
49 CFR 1139.2 - Traffic study.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 8 2012-10-01 2012-10-01 false Traffic study. 1139.2 Section 1139.2... of General Commodities § 1139.2 Traffic study. (a) The respondents shall submit a traffic study for... “base-calendar year—actual.” The study shall include a probability sampling of the actual...
49 CFR 1139.2 - Traffic study.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 8 2013-10-01 2013-10-01 false Traffic study. 1139.2 Section 1139.2... of General Commodities § 1139.2 Traffic study. (a) The respondents shall submit a traffic study for... “base-calendar year—actual.” The study shall include a probability sampling of the actual...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of...
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of...
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of...
49 CFR 1139.2 - Traffic study.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 8 2011-10-01 2011-10-01 false Traffic study. 1139.2 Section 1139.2... of General Commodities § 1139.2 Traffic study. (a) The respondents shall submit a traffic study for... “base-calendar year—actual.” The study shall include a probability sampling of the actual...
49 CFR 1139.2 - Traffic study.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 8 2010-10-01 2010-10-01 false Traffic study. 1139.2 Section 1139.2... of General Commodities § 1139.2 Traffic study. (a) The respondents shall submit a traffic study for... “base-calendar year—actual.” The study shall include a probability sampling of the actual...
49 CFR 1139.2 - Traffic study.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 8 2014-10-01 2014-10-01 false Traffic study. 1139.2 Section 1139.2... of General Commodities § 1139.2 Traffic study. (a) The respondents shall submit a traffic study for... “base-calendar year—actual.” The study shall include a probability sampling of the actual...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of self-propelled... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL...
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of self-propelled... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of...
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of...
15 CFR 265.22 - Bicycle traffic.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 1 2013-01-01 2013-01-01 false Bicycle traffic. 265.22 Section 265.22... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE REGULATIONS GOVERNING TRAFFIC AND CONDUCT REGULATIONS GOVERNING TRAFFIC AND CONDUCT ON THE GROUNDS OF THE NATIONAL INSTITUTE OF STANDARDS &...
49 CFR 236.381 - Traffic locking.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Traffic locking. 236.381 Section 236.381 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... and Tests § 236.381 Traffic locking. Traffic locking shall be tested when placed in service...
15 CFR 265.22 - Bicycle traffic.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 1 2014-01-01 2014-01-01 false Bicycle traffic. 265.22 Section 265.22... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE REGULATIONS GOVERNING TRAFFIC AND CONDUCT REGULATIONS GOVERNING TRAFFIC AND CONDUCT ON THE GROUNDS OF THE NATIONAL INSTITUTE OF STANDARDS &...
36 CFR 1004.13 - Obstructing traffic.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13 Obstructing traffic. The following are prohibited: (a) Stopping or parking a vehicle upon a Presidio...
49 CFR 236.381 - Traffic locking.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Traffic locking. 236.381 Section 236.381 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... and Tests § 236.381 Traffic locking. Traffic locking shall be tested when placed in service...
15 CFR 265.22 - Bicycle traffic.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 15 Commerce and Foreign Trade 1 2012-01-01 2012-01-01 false Bicycle traffic. 265.22 Section 265.22... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE REGULATIONS GOVERNING TRAFFIC AND CONDUCT REGULATIONS GOVERNING TRAFFIC AND CONDUCT ON THE GROUNDS OF THE NATIONAL INSTITUTE OF STANDARDS &...
49 CFR 236.381 - Traffic locking.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Traffic locking. 236.381 Section 236.381 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... and Tests § 236.381 Traffic locking. Traffic locking shall be tested when placed in service...
30 CFR 57.9100 - Traffic control.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Traffic control. 57.9100 Section 57.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 57.9100 Traffic control. To provide for the safe movement of...
36 CFR 1004.13 - Obstructing traffic.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13 Obstructing traffic. The following are prohibited: (a) Stopping or parking a vehicle upon a Presidio...
15 CFR 265.22 - Bicycle traffic.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 1 2011-01-01 2011-01-01 false Bicycle traffic. 265.22 Section 265.22... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE REGULATIONS GOVERNING TRAFFIC AND CONDUCT REGULATIONS GOVERNING TRAFFIC AND CONDUCT ON THE GROUNDS OF THE NATIONAL INSTITUTE OF STANDARDS &...
49 CFR 236.381 - Traffic locking.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Traffic locking. 236.381 Section 236.381 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... and Tests § 236.381 Traffic locking. Traffic locking shall be tested when placed in service...
49 CFR 236.381 - Traffic locking.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Traffic locking. 236.381 Section 236.381 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... and Tests § 236.381 Traffic locking. Traffic locking shall be tested when placed in service...
36 CFR 1004.13 - Obstructing traffic.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13 Obstructing traffic. The following are prohibited: (a) Stopping or parking a vehicle upon a Presidio...
36 CFR 1004.13 - Obstructing traffic.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13 Obstructing traffic. The following are prohibited: (a) Stopping or parking a vehicle upon a Presidio...
36 CFR 1004.13 - Obstructing traffic.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13 Obstructing traffic. The following are prohibited: (a) Stopping or parking a vehicle upon a Presidio...
Predicting Information Flows in Network Traffic.
ERIC Educational Resources Information Center
Hinich, Melvin J.; Molyneux, Robert E.
2003-01-01
Discusses information flow in networks and predicting network traffic and describes a study that uses time series analysis on a day's worth of Internet log data. Examines nonlinearity and traffic invariants, and suggests that prediction of network traffic may not be possible with current techniques. (Author/LRW)
30 CFR 56.9100 - Traffic control.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Traffic control. 56.9100 Section 56.9100 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Dumping Traffic Safety § 56.9100 Traffic control. To provide for the safe movement of...
A Photogrammetric Approach for Automatic Traffic Assessment Using Conventional CCTV Camera
NASA Astrophysics Data System (ADS)
Zarrinpanjeh, N.; Dadrassjavan, F.; Fattahi, H.
2015-12-01
One of the most practical tools for urban traffic monitoring is CCTV imaging, which is widely used for traffic map generation and updating through human surveillance. But with the expansion of the urban road network and the huge number of CCTV cameras in use, visual inspection and updating of traffic is often ineffective and time consuming, and therefore does not provide robust real-time updates. In this paper a method for vehicle detection, counting and speed estimation is proposed to give a more automated solution for traffic assessment. Vehicles are counted and traffic speed is estimated by removing violating objects, detecting vehicles via morphological filtering, and classifying moving objects in the scene. The proposed method is developed and tested using two datasets and evaluation values are computed. The results show that the success rate of the algorithm decreases by about 12% when the illumination quality of the imagery decreases.
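The counting step described in the abstract can be illustrated with a minimal connected-component pass over a binary foreground mask (a toy Python sketch, not the authors' pipeline; the mask values are invented):

```python
from collections import deque

def count_blobs(mask):
    """Count 4-connected foreground components in a binary mask (list of lists of 0/1)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1                      # found a new component
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                        # flood-fill its pixels
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return blobs

frame = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
print(count_blobs(frame))  # two separated "vehicles" -> 2
```

In a real pipeline the mask would come from background subtraction plus morphological filtering; here it is hard-coded for illustration.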
Road Traffic Accidents in Kazakhstan
AUBAKIROVA, Alma; KOSSUMOV, Alibek; IGISSINOV, Nurbek
2013-01-01
Background: The article provides the analysis of death rates in road traffic accidents in Kazakhstan from 2004 to 2010 and explores the use of sanitary aviation. Methods: Data on fatalities caused by road traffic accidents were collected and analysed. Descriptive and analytical methods of epidemiology and biomedical statistics were applied. Results: In total, 27,003 people died as a result of road traffic accidents in this period. The death rate for the total population due to road traffic accidents was 25.0±2.1 per 100,000. The death rate for men (38.3±3.2 per 100,000) was higher (P<0.05) than that for women (12.6±1.1 per 100,000). Among men, the highest death rates were identified in the 30–39 age group, whereas the highest rates for women occurred in the 50–59 and 70–79 age groups. Over time, death rates tended to decrease: total population (Tdec=−2.4%), men (Tdec=−2.3%) and women (Tdec=−1.4%). In the territorial analysis, rates were classified as low (up to 18.3 per 100,000), average (between 18.3 and 24.0 per 100,000) and high (24.0 per 100,000 and above). The regions with high rates included Akmola (24.3), Mangistau (25.9), Zhambyl (27.3), Almaty (29.3) and South Kazakhstan (32.4 per 100,000). Conclusion: The identified epidemiological characteristics of population death rates from road traffic accidents should be used in integrated and targeted interventions to enhance prevention of injuries in accidents. PMID:23641400
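As a small illustration of how such rates are derived, a crude rate per 100,000 population is a simple ratio (the figures below are hypothetical, not the study's data):

```python
def rate_per_100k(events, population):
    # crude rate: events per 100,000 population
    return events / population * 100_000

# hypothetical illustration: 250 deaths in a population of 1,000,000
print(rate_per_100k(250, 1_000_000))  # -> 25.0
```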
Deterministic lateral displacement for particle separation: a review.
McGrath, J; Jimenez, M; Bridle, H
2014-11-07
Deterministic lateral displacement (DLD), a hydrodynamic microfluidic technology, was first reported by Huang et al. in 2004 to separate particles on the basis of size in continuous flow with a resolution down to 10 nm. For 10 years, DLD has been extensively studied, employed and modified by researchers in terms of theory, design, microfabrication and application to develop newer, faster and more efficient tools for the separation of millimetre-, micrometre- and even sub-micrometre-sized particles. To extend the range of potential applications, the specific arrangement of geometric features in DLD has also been adapted and/or coupled with external forces (e.g. acoustic, electric, gravitational) to separate particles on the basis of properties other than size, such as shape, deformability and dielectric properties. Furthermore, investigations into DLD performance in the presence of inertial and non-Newtonian effects have been conducted. However, the evolution and application of DLD has not yet been reviewed. In this paper, we collate many interesting publications to provide a comprehensive review of the development and diversity of this technology, but also provide scope for future direction and detail the fundamentals for those wishing to design such devices for the first time.
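The size threshold at the heart of DLD design is often estimated with Davis's empirical fit from the wider DLD literature (an approximation quoted here as background, not a result of this review):

```python
def dld_critical_diameter(gap_um, row_shift_fraction):
    """Davis's empirical fit for the DLD critical diameter:
    D_c = 1.4 * g * eps**0.48,
    where g is the gap between pillars and eps = 1/N is the row-shift fraction.
    Particles larger than D_c are displaced; smaller ones follow the flow."""
    return 1.4 * gap_um * row_shift_fraction ** 0.48

# e.g. a 10 um pillar gap with a 1/10 row shift
dc = dld_critical_diameter(10.0, 0.1)
print(round(dc, 2))  # ≈ 4.64 um
```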
Particle separation using virtual deterministic lateral displacement (vDLD).
Collins, David J; Alan, Tuncay; Neild, Adrian
2014-05-07
We present a method for sensitive and tunable particle sorting that we term virtual deterministic lateral displacement (vDLD). The vDLD system is composed of a set of interdigital transducers (IDTs) within a microfluidic chamber that produce a force field at an angle to the flow direction. Particles above a critical diameter, a function of the force induced by viscous drag and the force field, are displaced laterally along the minimum force potential lines, while smaller particles continue in the direction of the fluid flow without substantial perturbations. We demonstrate the effective separation of particles in a continuous-flow system with size sensitivity comparable or better than other previously reported microfluidic separation techniques. Separation of 5.0 μm from 6.6 μm, 6.6 μm from 7.0 μm and 300 nm from 500 nm particles are all achieved using the same device architecture. With the high sensitivity and flexibility vDLD affords we expect to find application in a wide variety of microfluidic platforms.
Method to deterministically study photonic nanostructures in different experimental instruments.
Husken, B H; Woldering, L A; Blum, C; Vos, W L
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim of studying photonic structures. To this end, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The micrographs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation; a scanning electron microscope (SEM); a wide-field optical microscope and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
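The cross-correlation recovery step can be sketched as a brute-force template match (here minimizing the sum of squared differences rather than maximizing correlation; a toy illustration with invented pixel values, not the authors' implementation):

```python
def match_template(image, template):
    """Return the (row, col) offset where the template best matches the image,
    using the minimal sum of squared differences over all placements."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# a small "SEM patch" hidden inside a larger "optical" image
large = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 7, 0],
    [0, 0, 8, 6, 0],
    [0, 0, 0, 0, 0],
]
small = [[9, 7],
         [8, 6]]
print(match_template(large, small))  # -> (1, 2)
```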
Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.
Kurhekar, Manish; Deshpande, Umesh
2016-01-01
Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (that hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that there is a production of sufficient number of differentiated cells (RBCs, WBCs, etc.). We prove that our model of bone marrow is biologically consistent and it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed agent-based simulation of the model of bone marrow system proposed in this paper. We have also included parameter details and the results obtained from the simulation. The program of the agent-based simulation of the proposed model is made available on a publicly accessible website.
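The apoptosis-after-a-fixed-number-of-divisions rule can be illustrated with a deliberately simplified synchronous toy model (our own sketch, not the paper's bone marrow model):

```python
def step(cells, max_divisions):
    """One synchronous generation: every cell below the division limit divides
    into two daughters; cells that have reached the limit undergo apoptosis."""
    nxt = []
    for d in cells:                      # d = divisions this cell has undergone
        if d < max_divisions:
            nxt.extend([d + 1, d + 1])   # both daughters inherit the count
        # else: apoptosis, the cell is dropped
    return nxt

cells = [0]                  # a single founding stem cell
for _ in range(3):
    cells = step(cells, max_divisions=3)
print(len(cells))            # -> 8 after three generations
```

One further generation would empty the population, since every cell has then hit the division limit; a realistic model would of course also include self-renewal and differentiation.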
Equivalence of deterministic walks on regular lattices on the plane
NASA Astrophysics Data System (ADS)
Rechtman, Ana; Rechtman, Raúl
2017-01-01
We consider deterministic walks on square, triangular and hexagonal two-dimensional lattices. In each case, there is a scatterer at every lattice site that can be in one of two states and forces the walker to turn either to his/her immediate right or left. After the walker is scattered, the scatterer changes state. A lattice with an arrangement of scatterers is an environment. We show that there are only two environments for which the scattering rules are injective, mirrors or rotators, on the three lattices. On hexagonal lattices, Webb and Cohen (2014) proved that if a walker with a given initial position and velocity moves through an environment of mirrors (rotators), then there is an environment of rotators (mirrors) through which the walker would move with the same trajectory. We refer to these trajectories on mirror and rotator environments as equivalent walks. We prove the equivalence of walks on square and triangular lattices and include a proof of the equivalence of walks on hexagonal lattices. The proofs are based both on the geometry of the lattice and the structure of the scattering rule.
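A walk of this kind is easy to simulate; the sketch below puts flipping rotators on a square lattice (the initial all-right-rotator environment and the turning conventions are our own assumptions, not taken from the paper):

```python
def rotator_walk(steps):
    """Deterministic walk on the square lattice: each site holds a rotator that
    turns the walker right ('R') or left ('L') and then flips its own state."""
    env = {}                        # unvisited sites default to 'R'
    x = y = 0
    dx, dy = 1, 0                   # initial velocity: east
    path = [(x, y)]
    for _ in range(steps):
        s = env.get((x, y), 'R')
        if s == 'R':                # turn right, then flip the scatterer
            dx, dy = dy, -dx
            env[(x, y)] = 'L'
        else:                       # turn left, then flip
            dx, dy = -dy, dx
            env[(x, y)] = 'R'
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

p = rotator_walk(100)
print(p[:4])  # -> [(0, 0), (0, -1), (-1, -1), (-1, 0)]
```

Because the dynamics are deterministic, rerunning the walk from the same initial environment reproduces the trajectory exactly.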
Deterministic methods for multi-control fuel loading optimization
NASA Astrophysics Data System (ADS)
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional, multi-group optimal control problem via the calculus of variations. Because the Hamiltonian is linear in the control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
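The constrained formulation described above can be written generically as follows (the notation here is a generic sketch, not the author's):

```latex
\min_{u}\; J(\phi, u)
\quad \text{s.t.} \quad g(\phi, u) = 0, \qquad h(\phi, u) \le 0,
\qquad
\mathcal{L} = J + \lambda^{\mathsf T} g + \mu^{\mathsf T} h
```

with first-order optimality conditions \(\partial \mathcal{L} / \partial u = 0\), \(\mu \ge 0\), and complementary slackness \(\mu^{\mathsf T} h = 0\); the inequality \(h \le 0\) plays the role of the power peaking constraint, and \(\lambda\), \(\mu\) are the Lagrange multipliers of the direct adjoining approach.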
Insights into the deterministic skill of air quality ensembles ...
Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate better O3 than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati
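The baseline method, the unconditional ensemble mean, is easy to illustrate with toy numbers (invented for illustration; cancellation of opposite member biases is why the mean can beat every member):

```python
def rmse(pred, obs):
    """Root-mean-square error of a forecast against observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# toy forecasts from three ensemble "members" against observations
obs = [10.0, 12.0, 11.0, 13.0]
members = [
    [11.0, 13.0, 12.0, 14.0],   # biased high
    [9.0, 11.0, 10.0, 12.0],    # biased low
    [10.5, 12.5, 11.5, 13.5],   # biased slightly high
]
# unconditional ensemble mean: average the members point by point
ens_mean = [sum(m[i] for m in members) / len(members) for i in range(len(obs))]
print(round(rmse(ens_mean, obs), 3), [round(rmse(m, obs), 3) for m in members])
# -> 0.167 [1.0, 1.0, 0.5]
```

The weighted and conditional variants evaluated in the paper replace the flat average with optimized or screened member weights.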
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis by verifying universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach in most instances of medical decision making. While "probabilistic or evidence-based" reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics compared with the "deterministic or mathematical" attitude. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and use of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests to refine probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
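The likelihood-ratio update mentioned above is Bayes' theorem in odds form: post-test odds = pre-test odds × LR. A minimal sketch (the probabilities below are illustrative, not clinical recommendations):

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form:
    post-test odds = pre-test odds * likelihood ratio."""
    odds = pretest_prob / (1 - pretest_prob)   # probability -> odds
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)         # odds -> probability

# illustrative: a pre-test probability of 30% and a positive test with LR+ = 8
print(posttest_probability(0.30, 8.0))  # ≈ 0.774
```

A likelihood ratio of 1 leaves the probability unchanged, which is why tests with LR near 1 are diagnostically uninformative.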
Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms
Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui
2013-01-01
Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for importing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning an inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than DSCs fabricated with Pt. This successful strategy provides a rational design route for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the commonly used transparent conductive oxide (TCO) in DSCs can be turned into a counter electrode material means that, besides decreasing the cost, the device structure and processing techniques of DSCs can be simplified in future. PMID:24173503
Mesoscopic quantum emitters from deterministic aggregates of conjugated polymers
Stangl, Thomas; Wilhelm, Philipp; Remmerssen, Klaas; Höger, Sigurd; Vogelsang, Jan; Lupton, John M.
2015-01-01
An appealing definition of the term “molecule” arises from consideration of the nature of fluorescence, with discrete molecular entities emitting a stream of single photons. We address the question of how large a molecular object may become by growing deterministic aggregates from single conjugated polymer chains. Even particles containing dozens of individual chains still behave as single quantum emitters due to efficient excitation energy transfer, whereas the brightness is raised due to the increased absorption cross-section of the suprastructure. Excitation energy can delocalize between individual polymer chromophores in these aggregates by both coherent and incoherent coupling, which are differentiated by their distinct spectroscopic fingerprints. Coherent coupling is identified by a 10-fold increase in excited-state lifetime and a corresponding spectral red shift. Exciton quenching due to incoherent FRET becomes more significant as aggregate size increases, resulting in single-aggregate emission characterized by strong blinking. This mesoscale approach allows us to identify intermolecular interactions which do not exist in isolated chains and are inaccessible in bulk films where they are present but masked by disorder. PMID:26417079
Automated optimum design of wing structures. Deterministic and probabilistic approaches
NASA Technical Reports Server (NTRS)
Rao, S. S.
1982-01-01
The automated optimum design of airplane wing structures subjected to multiple behavior constraints is described. The structural mass of the wing is considered the objective function. The maximum stress, wing tip deflection, root angle of attack, and flutter velocity during the pull up maneuver (static load), the natural frequencies of the wing structure, and the stresses induced in the wing structure due to landing and gust loads are suitably constrained. Both deterministic and probabilistic approaches are used for finding the stresses induced in the airplane wing structure due to landing and gust loads. A wing design is represented by a uniform beam with a cross section in the form of a hollow symmetric double wedge. The airfoil thickness and chord length are the design variables, and a graphical procedure is used to find the optimum solutions. A supersonic wing design is represented by finite elements. The thicknesses of the skin and the web and the cross sectional areas of the flanges are the design variables, and nonlinear programming techniques are used to find the optimum solution.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison with Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
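The paper's backward-depth variant is not reproduced here, but the classic partition-refinement idea it builds on (Moore-style) can be sketched as follows; the small DFA is an invented example:

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement: start from {accepting, non-accepting}
    and split blocks until all states in a block agree on which block each
    input symbol leads to. Returns the blocks (equivalence classes)."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]
    changed = True
    while changed:
        changed = False
        def block_of(s):
            for i, b in enumerate(partition):
                if s in b:
                    return i
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                # signature: which block each symbol's transition lands in
                sig = tuple(block_of(delta[s][a]) for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            new_partition.extend(groups.values())
        if len(new_partition) != len(partition):
            changed = True
        partition = new_partition
    return partition

# states 1 and 2 behave identically, so 4 states collapse to 3 classes
delta = {
    0: {'a': 1, 'b': 2},
    1: {'a': 3, 'b': 3},
    2: {'a': 3, 'b': 3},
    3: {'a': 3, 'b': 3},
}
blocks = minimize_dfa([0, 1, 2, 3], ['a', 'b'], delta, {3})
print(len(blocks))  # -> 3
```

This naive refinement is O(n²) in the worst case; the point of Hopcroft's algorithm and of the backward-depth approach above is to reach the same partition faster.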
Traffic flow theory and traffic flow simulation models. Transportation research record
1996-12-31
Contents: Comparison of Simulation Modules of TRANSYT and INTEGRATION Models; Evaluation of SCATSIM-RTA Adaptive Traffic Network Simulation Model; Comparison of NETSIM, NETFLO I, and NETFLO II Traffic Simulation Models for Fixed-Time Signal Control; Traffic Flow Simulation Through Parallel Processing; Cluster Analysis as a Tool in Traffic Engineering; Traffic Platoon Dispersion Modeling on Arterial Streets; Hybrid Model for Estimating Permitted Left-Turn Saturation Flow Rate; and Passing Sight Distance and Overtaking Dilemma on Two-Lane Roads.
[Reduction of automobile traffic: urgent health promotion policy].
Tapia Granados, J A
1998-03-01
During the last few decades, traffic injuries have become one of the leading causes of death and disability in the world. In urban areas, traffic congestion, noise, and emissions from motor vehicles produce subjective disturbances and detectable pathological effects. More than one billion people are exposed to harmful levels of environmental pollution. Because its combustion engine generates carbon dioxide (CO2), the automobile is one of the chief sources of the gases that are causing the greenhouse effect. The latter has already caused a rise in the average ambient temperature, and over the next decades it will predictably cause significant climatic changes whose consequences, though uncertain, are likely to be harmful and possibly catastrophic. Aside from the greenhouse effect, the relentless growth of parking zones, traffic, and the roadway infrastructure in urban and rural areas is currently one of the leading causes of environmental degradation. Urban development, which is nearly always "planned" around traffic instead of people, leads to a significant deterioration in the quality of life, while it also destroys the social fabric. Unlike the private automobile, public transportation, bicycles, and walking help reduce pollution, congestion, and traffic volume, as well as the morbidity and mortality resulting from injuries and ailments related to pollution. Non-automobile transportation also encourages physical activity--with its positive effect on general health--and helps reduce the greenhouse effect. The drop in traffic volume and the increased use of alternate means of transportation thus constitute an integrated health promotion policy that should become an inherent part of the movement for the promotion of healthy cities and of transportation policies and economic policy in general.
Simulation Study of Traffic Accidents in Bidirectional Traffic Models
NASA Astrophysics Data System (ADS)
Moussa, Najem
Conditions for the occurrence of bidirectional collisions are developed based on the Simon-Gutowitz bidirectional traffic model. Three types of dangerous situations can occur in this model. We analyze those corresponding to head-on, rear-end, and lane-changing collisions. Using Monte Carlo simulations, we compute the probability of occurrence of these collisions for different values of the oncoming cars' density. It is found that the risk of collisions is significant when the density of cars in one lane is small and that of the other lane is high enough. The influence of different proportions of heavy vehicles is also studied. We found that heavy vehicles cause a considerable reduction of traffic flow on the home lane and provoke an increase in the risk of car accidents.
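The Monte Carlo estimation step can be sketched generically. The danger criterion below (a home-lane car facing an oncoming car within `danger_gap` cells on a ring road) and all parameter values are assumptions chosen for illustration; the Simon-Gutowitz rules themselves are not reproduced:

```python
import random

# Toy Monte Carlo estimate of how often a "dangerous situation" arises on a
# two-lane ring road. Illustrative stand-in for the paper's collision analysis.

def danger_probability(length, n_home, n_oncoming, danger_gap, trials, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        home = set(rng.sample(range(length), n_home))        # home-lane cars
        oncoming = set(rng.sample(range(length), n_oncoming))  # opposite lane
        # Dangerous if any home car has an oncoming car within danger_gap cells.
        if any((c + d) % length in oncoming
               for c in home for d in range(1, danger_gap + 1)):
            hits += 1
    return hits / trials
```

As in the abstract, the estimated risk grows quickly with the oncoming-lane density even when the home lane is sparse.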
Cellular automata model for urban road traffic flow considering pedestrian crossing street
NASA Astrophysics Data System (ADS)
Zhao, Han-Tao; Yang, Shuo; Chen, Xiao-Xu
2016-11-01
In order to analyze the effect of pedestrians crossing the street on vehicle flows, we investigated the traffic characteristics of vehicles and pedestrians. Based on that, the rules of lane changing, acceleration, deceleration, randomization and update are modified. We then established two urban two-lane cellular automata models of traffic flow, one for sections with a non-signalized crosswalk and the other for uncontrolled sections where pedestrians cross the street at random. MATLAB is used for numerical simulation of the different traffic conditions; meanwhile, space-time diagrams and relational graphs of traffic flow parameters are generated and then comparatively analyzed. Simulation results indicate that when vehicle density is lower than around 25 vehs/(km lane), pedestrians have a modest impact on traffic flow, whereas when vehicle density is higher than about 60 vehs/(km lane), traffic speed and volume decrease significantly, especially on sections with a non-signal-controlled crosswalk. The results illustrate that the proposed models reproduce the characteristics of traffic flow in situations where pedestrians cross the street and can provide practical reference for urban traffic management.
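A minimal single-lane sketch of this kind of model can indicate the mechanism: Nagel-Schreckenberg-style update rules plus one crossing cell that pedestrians may block each step. The blocking probability `p_ped` and all other parameters are assumptions for illustration; the paper's two-lane rules are not reproduced:

```python
import random

# One update step of a simplified single-lane CA traffic model with a
# pedestrian crossing cell. Illustrative sketch, not the paper's model.

def step(positions, speeds, length, vmax, p_slow, cross_cell, p_ped, rng):
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    pos_sorted = sorted(positions)
    blocked = rng.random() < p_ped   # pedestrian occupies the crossing?
    new_pos, new_spd = [], []
    for idx, i in enumerate(order):
        x, v = positions[i], speeds[i]
        ahead = pos_sorted[(idx + 1) % len(pos_sorted)]
        gap = (ahead - x - 1) % length
        if blocked:  # treat the occupied crossing like a stopped obstacle
            gap = min(gap, (cross_cell - x - 1) % length)
        v = min(v + 1, vmax, gap)            # accelerate, then brake to gap
        if v > 0 and rng.random() < p_slow:  # random slowdown
            v -= 1
        new_pos.append((x + v) % length)
        new_spd.append(v)
    return new_pos, new_spd
```

Because each speed is capped by the gap ahead, cars never overlap; raising `p_ped` throttles flow near the crossing, qualitatively matching the reported density thresholds.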
320-Gb/s switch system to guarantee QoS of real-time traffic
NASA Astrophysics Data System (ADS)
Li, Wenjie; Liu, Bin; Xu, Yang
2003-08-01
To provide QoS control for real-time traffic in core routers, this paper designs and evaluates a 320 Gb/s switch system, which supports 16 line cards, each operating at the OC192c line rate (10 Gb/s). The switch system contains a high-performance switch fabric and supports a variable-length IP packet interface. These two characteristics provide advantages over traditional switch fabrics with a cell interface. The system supports eight priorities for both unicast and multicast traffic. The highest priority, with a strict QoS guarantee, is for real-time traffic; the seven lower priorities, served with a weighted round-robin (WRR) discipline, are for common data traffic. Through simulation under a multi-priority bursty traffic model, we demonstrate that this switch system not only provides excellent performance for real-time traffic, but also efficiently allocates bandwidth among all kinds of traffic. As a result, this switch system can serve as a key node in high-speed networks, and it can meet the challenge that multimedia traffic poses to the next-generation Internet.
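The scheduling discipline described above (strict priority for real-time traffic, WRR among the remaining classes) can be sketched as follows. This is a generic textbook-style sketch, not the switch's actual scheduler; class numbering and the credit scheme are assumptions:

```python
from collections import deque

# Strict priority for class 0 (real-time), weighted round-robin for classes 1..n.
# Illustrative sketch of the discipline, not the 320-Gb/s switch implementation.

class PriorityWrrScheduler:
    def __init__(self, weights):
        self.rt = deque()                      # strict-priority real-time queue
        self.queues = [deque() for _ in weights]
        self.weights = weights
        self.credits = list(weights)           # packets each class may send per round
        self.cursor = 0

    def enqueue(self, cls, pkt):
        if cls == 0:
            self.rt.append(pkt)
        else:
            self.queues[cls - 1].append(pkt)

    def dequeue(self):
        if self.rt:                            # real-time traffic always wins
            return self.rt.popleft()
        for _ in range(2 * len(self.queues)):  # scan; refresh credits once per wrap
            q = self.queues[self.cursor]
            if q and self.credits[self.cursor] > 0:
                self.credits[self.cursor] -= 1
                return q.popleft()
            self.cursor += 1
            if self.cursor == len(self.queues):
                self.cursor = 0
                self.credits = list(self.weights)
        return None
```

With weights [2, 1], class 1 gets twice the bandwidth of class 2 whenever the real-time queue is empty.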
Automated mixed traffic transit vehicle microprocessor controller
NASA Technical Reports Server (NTRS)
Marks, R. A.; Cassell, P.; Johnston, A. R.
1981-01-01
An improved Automated Mixed Traffic Vehicle (AMTV) speed control system employing a microprocessor and transistor chopper motor current controller is described, and its performance is presented in terms of velocity versus time curves. The on-board computer hardware and software systems are described, as is the software development system. All of the programming used in this controller was implemented in FORTRAN. This microprocessor controller made possible a number of safety features and improved the comfort associated with starting and stopping. In addition, most of the vehicle's performance characteristics can be altered by simple program parameter changes. A failure analysis of the microprocessor controller was generated and the results are included. Flow diagrams for the speed control algorithms and complete FORTRAN code listings are also included.
Stochastic model of tumor-induced angiogenesis: Ensemble averages and deterministic equations
NASA Astrophysics Data System (ADS)
Terragni, F.; Carretero, M.; Capasso, V.; Bonilla, L. L.
2016-02-01
A recent conceptual model of tumor-driven angiogenesis including branching, elongation, and anastomosis of blood vessels captures some of the intrinsic multiscale structures of this complex system, yet allowing one to extract a deterministic integro-partial-differential description of the vessel tip density [Phys. Rev. E 90, 062716 (2014), 10.1103/PhysRevE.90.062716]. Here we solve the stochastic model, show that ensemble averages over many realizations correspond to the deterministic equations, and fit the anastomosis rate coefficient so that the total number of vessel tips evolves similarly in the deterministic and ensemble-averaged stochastic descriptions.
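The consistency check between ensemble averages and deterministic equations can be illustrated on a toy stochastic birth process (a stand-in chosen here, not the angiogenesis model): averaging many Gillespie realizations of N -> N+1 at rate r*N should reproduce the deterministic solution N(t) = N0 * exp(r*t).

```python
import random

# Toy illustration of ensemble-averaging a stochastic model: a pure birth
# (Yule) process whose mean obeys the deterministic equation dN/dt = r*N.

def gillespie_birth(n0, rate, t_end, rng):
    t, n = 0.0, n0
    while True:
        t += rng.expovariate(rate * n)   # exponential waiting time to next birth
        if t > t_end:
            return n
        n += 1

def ensemble_average(n0, rate, t_end, runs, seed=0):
    rng = random.Random(seed)
    return sum(gillespie_birth(n0, rate, t_end, rng) for _ in range(runs)) / runs
```

With a few thousand realizations, the ensemble average typically lands within a fraction of a percent of the deterministic value, which is the kind of agreement the abstract reports for the vessel-tip density.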
Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations
Edward W. Larson
2004-02-17
The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.
NASA Astrophysics Data System (ADS)
Maggs, J. E.; Morales, G. J.
2011-10-01
The dynamics of transport at the edge of magnetized plasmas is deterministic chaos. The connection is made by a previous survey of fluctuation measurements [M. A. Pedrosa et al., Phys. Rev. Lett. 82, 3621 (1999)], which is shown to exhibit power spectra with exponential frequency dependence over a broad range, the signature of deterministic chaos. The exponential character arises from Lorentzian pulses. The results suggest that the generalization to complex times used in studies of deterministic chaos is a representation of Lorentzian pulses emerging from the chaotic dynamics.
Traffic Aware Planner (TAP) Flight Evaluation
NASA Technical Reports Server (NTRS)
Maris, John M.; Haynes, Mark A.; Wing, David J.; Burke, Kelly A.; Henderson, Jeff; Woods, Sharon E.
2014-01-01
NASA's Traffic Aware Planner (TAP) is a cockpit decision support tool that has the potential to achieve significant fuel and time savings when it is embedded in the data-rich Next Generation Air Transportation System (NextGen) airspace. To address a key step towards the operational deployment of TAP and the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR), a system evaluation was conducted in a representative flight environment in November 2013. Numerous challenges were overcome to achieve this goal, including the porting of the foundational Autonomous Operations Planner (AOP) software from its original simulation-based, avionics-embedded environment to an Electronic Flight Bag (EFB) platform. A flight-test aircraft was modified to host the EFB, the TAP application, an Automatic Dependent Surveillance Broadcast (ADS-B) processor, and a satellite broadband datalink. Nine evaluation pilots conducted 26 hours of TAP assessments using four route profiles in the complex eastern and north-eastern United States airspace. Extensive avionics and video data were collected, supplemented by comprehensive in-flight and post-flight questionnaires. TAP was verified to function properly in the live avionics and ADS-B environment, characterized by recorded data dropouts, latency, and ADS-B message fluctuations. Twelve TAP-generated optimization requests were submitted to ATC, of which nine were approved, and all of which resulted in fuel and/or time savings. Analysis of subjective workload data indicated that pilot interaction with TAP during flight operations did not induce additional cognitive loading. Additionally, analyses of post-flight questionnaire data showed that the pilots perceived TAP to be useful, understandable, intuitive, and easy to use. All program objectives were met, and the next phase of TAP development and evaluations with partner airlines is in planning for 2015.
The car following model considering traffic jerk
NASA Astrophysics Data System (ADS)
Ge, Hong-Xia; Zheng, Peng-jun; Wang, Wei; Cheng, Rong-Jun
2015-09-01
Based on the optimal velocity car-following model, a new model considering traffic jerk is proposed to describe the jamming transition in traffic flow on a highway. Traffic jerk refers to the sudden braking and acceleration of vehicles, which has a significant impact on traffic movement. The nature of the model is investigated using linear and nonlinear analysis methods. A thermodynamic theory is formulated to describe the phase transition and critical phenomenon in traffic flow. The time-dependent Ginzburg-Landau (TDGL) equation and the modified Korteweg-de Vries (mKdV) equation are derived to describe the traffic flow near the critical point and the traffic jam. In addition, the connection between the TDGL and the mKdV equations is also given.
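The baseline optimal-velocity (OV) model that the jerk term extends can be sketched with a simple Euler integration on a ring road. The hyperbolic-tangent OV function and the parameter values below are common textbook choices, not the paper's:

```python
import math

# Euler-integrated optimal-velocity car-following model on a ring road.
# dv_i/dt = a * (V(headway_i) - v_i); illustrative parameters, not the paper's.

def ov_function(headway, hc=2.0):
    """Bando-style optimal velocity function with safety distance hc."""
    return (math.tanh(headway - hc) + math.tanh(hc)) / 2.0

def simulate(n_cars, length, a, dt, steps):
    x = [i * length / n_cars for i in range(n_cars)]   # uniform spacing
    v = [ov_function(length / n_cars)] * n_cars        # start at optimal speed
    for _ in range(steps):
        headway = [(x[(i + 1) % n_cars] - x[i]) % length for i in range(n_cars)]
        acc = [a * (ov_function(h) - vi) for h, vi in zip(headway, v)]
        v = [vi + ai * dt for vi, ai in zip(v, acc)]
        x = [(xi + vi * dt) % length for xi, vi in zip(x, v)]
    return x, v
```

The uniform-flow state is a fixed point of these equations; the linear and TDGL/mKdV analyses in the abstract study when perturbations of this state grow into jams.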
Development of a Deterministic Ethernet Building blocks for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Jakovljevic, Mirko
2015-09-01
The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reductions in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability as developments continue in the commercial world. The deterministic Ethernet technology TTEthernet [1], deployed on the NASA Orion spacecraft, has demonstrated the use of TTEthernet for a safety-critical human space flight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission but did not lead to broader use in space applications or to an international space standard. TTTech has therefore developed a new version that allows the technology to be scaled to different applications, not only high-end missions, by decreasing the size of the building blocks; the resulting reduction in size, weight, and power enables use in smaller applications. TTTech is currently developing a full space-product offering for its TTEthernet technology to allow its use in space applications beyond launchers and human spaceflight. A broad space market assessment and the current ESA TRP7594 led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development towards a space-qualified network component, allowing future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.
Automated leukocyte processing by microfluidic deterministic lateral displacement.
Civin, Curt I; Ward, Tony; Skelley, Alison M; Gandhi, Khushroo; Peilun Lee, Zendra; Dosier, Christopher R; D'Silva, Joseph L; Chen, Yu; Kim, MinJung; Moynihan, James; Chen, Xiaochun; Aurich, Lee; Gulnik, Sergei; Brittain, George C; Recktenwald, Diether J; Austin, Robert H; Sturm, James C
2016-12-01
We previously developed a Deterministic Lateral Displacement (DLD) microfluidic method in silicon to separate cells of various sizes from blood (Davis et al., Proc Natl Acad Sci 2006;103:14779-14784; Huang et al., Science 2004;304:987-990). Here, we present the reduction-to-practice of this technology with a commercially produced, high precision plastic microfluidic chip-based device designed for automated preparation of human leukocytes (white blood cells; WBCs) for flow cytometry, without centrifugation or manual handling of samples. After a human blood sample was incubated with fluorochrome-conjugated monoclonal antibodies (mAbs), the mixture was input to a DLD microfluidic chip (microchip) where it was driven through a micropost array designed to deflect WBCs via DLD on the basis of cell size from the Input flow stream into a buffer stream, thus separating WBCs and any larger cells from smaller cells and particles and washing them simultaneously. We developed a microfluidic cell processing protocol that recovered 88% (average) of input WBCs and removed 99.985% (average) of Input erythrocytes (red blood cells) and >99% of unbound mAb in 18 min (average). Flow cytometric evaluation of the microchip Product, with no further processing, lysis or centrifugation, revealed excellent forward and side light scattering and fluorescence characteristics of immunolabeled WBCs. These results indicate that cost-effective plastic DLD microchips can speed and automate leukocyte processing for high quality flow cytometry analysis, and suggest their utility for multiple other research and clinical applications involving enrichment or depletion of common or rare cell types from blood or tissue samples. © 2016 International Society for Advancement of Cytometry.
Contagion spreading on complex networks with local deterministic dynamics
NASA Astrophysics Data System (ADS)
Manshour, Pouya; Montakhab, Afshin
2014-07-01
Typically, contagion strength is modeled by a transmission rate λ, whereby all nodes in a network are treated uniformly in a mean-field approximation. However, local agents react differently to the same contagion based on their local characteristics. Following our recent work (Montakhab and Manshour, 2012 [42]), we investigate contagion spreading models with local dynamics on complex networks. We therefore quantify contagions by their quality, 0 ≤ α ≤ 1, and follow their spreading as their transmission condition (fitness) is evaluated by local agents. Instead of considering stochastic dynamics, here we consider various deterministic local rules. We find that initial spreading with exponential quality-dependent time scales is followed by a stationary state with a prevalence depending on the quality of the contagion. We also observe various interesting phenomena, for example, high prevalence without the participation of the hubs. This special feature of our "threshold rule" provides a mechanism for high-prevalence spreading without the participation of "super-spreaders", in sharp contrast with many standard mechanisms of spreading in which hubs are believed to play the central role. On the other hand, if local nodes act as agents who stop the transmission once a threshold is reached, we find that spreading is severely hindered in a heterogeneous population, while in a homogeneous one significant spreading may occur. We further decouple local characteristics from the underlying topology in order to study the role of network topology in various models, and find that as long as the small-world effect exists, the underlying topology does not contribute to the final stationary state but only affects the initial spreading velocity.
"Eztrack": A single-vehicle deterministic tracking algorithm
Carrano, C J
2007-12-20
A variety of surveillance operations require the ability to track vehicles over a long period of time using sequences of images taken from a camera mounted on an airborne or similar platform. In order to be able to see and track a vehicle for any length of time, either a persistent surveillance imager is needed that can image wide fields of view over a long time span, or a highly maneuverable, smaller field-of-view imager is needed that can follow the vehicle of interest. The algorithm described here was designed for the persistent surveillance case. It turns out that most vehicle tracking algorithms described in the literature [1,2,3,4] are designed for higher frame rates (> 5 FPS) and relatively short ground sampling distances (GSD) and resolutions (a few cm to a couple of tens of cm). But for our datasets, we are restricted to lower resolutions and GSDs (≥ 0.5 m) and limited frame rates (≤ 2.0 Hz). As a consequence, we designed our own simple approach in IDL, which is a deterministic, motion-guided object tracker. The object tracking relies both on object features and on path dynamics. The algorithm certainly has room for future improvements, but we have found it to be a useful tool in evaluating the effects of frame rate, resolution/GSD, and spectral content (e.g., grayscale vs. color imaging). A block diagram of the tracking approach is given in Figure 1. We describe each of the blocks of the diagram in the upcoming sections.
Ballistic deposition on deterministic fractals: Observation of discrete scale invariance
NASA Astrophysics Data System (ADS)
Horowitz, Claudio M.; Romá, Federico; Albano, Ezequiel V.
2008-12-01
The growth of ballistic aggregates on deterministic fractal substrates is studied by means of numerical simulations. First, we attempt the description of the evolving interface of the aggregates by applying the well-established Family-Vicsek dynamic scaling approach. Systematic deviations from that standard scaling law are observed, suggesting that significant scaling corrections have to be introduced in order to achieve a more accurate understanding of the behavior of the interface. Subsequently, we study the internal structure of the growing aggregates that can be rationalized in terms of the scaling behavior of frozen trees, i.e., structures inhibited for further growth, lying below the growing interface. It is shown that the rms height h_s and width w_s of the trees of size s obey power laws of the form h_s ∝ s^{ν∥} and w_s ∝ s^{ν⊥}, respectively. Also, the tree-size distribution n_s behaves according to n_s ~ s^{-τ}. Here, ν∥ and ν⊥ are the correlation length exponents in the directions parallel and perpendicular to the interface, respectively. Also, τ is a critical exponent. However, due to the interplay between the discrete scale invariance of the underlying fractal substrates and the dynamics of the growing process, all these power laws are modulated by logarithmic periodic oscillations. The fundamental scaling ratios, characteristic of these oscillations, can be linked to the (spatial) fundamental scaling ratio of the underlying fractal by means of relationships involving critical exponents. We argue that the interplay between the spatial discrete scale invariance of the fractal substrate and the dynamics of the physical process occurring in those media is a quite general phenomenon that leads to the observation of logarithmic-periodic modulations of physical observables.
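The underlying ballistic deposition rule is simple to state in code. The sketch below deposits on a flat 1D lattice with periodic boundaries rather than on the fractal substrates studied in the abstract:

```python
import random

# Ballistic deposition on a flat 1D substrate with periodic boundaries.
# A particle dropped on column i sticks at first contact: either on top of
# column i or to the side of a taller neighbor. Illustrative sketch only;
# the paper deposits on deterministic fractal substrates instead.

def ballistic_deposition(width, n_particles, seed=0):
    rng = random.Random(seed)
    h = [0] * width
    for _ in range(n_particles):
        i = rng.randrange(width)
        left, right = h[(i - 1) % width], h[(i + 1) % width]
        h[i] = max(h[i] + 1, left, right)
    return h
```

Replacing the flat substrate with a fractal one (and tracking the frozen trees below the interface) is what produces the log-periodic modulations described above.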
Deterministic and Stochastic Descriptions of Gene Expression Dynamics
NASA Astrophysics Data System (ADS)
Marathe, Rahul; Bierbaum, Veronika; Gomez, David; Klumpp, Stefan
2012-09-01
A key goal of systems biology is the predictive mathematical description of gene regulatory circuits. Different approaches are used such as deterministic and stochastic models, models that describe cell growth and division explicitly or implicitly etc. Here we consider simple systems of unregulated (constitutive) gene expression and compare different mathematical descriptions systematically to obtain insight into the errors that are introduced by various common approximations such as describing cell growth and division by an effective protein degradation term. In particular, we show that the population average of protein content of a cell exhibits a subtle dependence on the dynamics of growth and division, the specific model for volume growth and the age structure of the population. Nevertheless, the error made by models with implicit cell growth and division is quite small. Furthermore, we compare various models that are partially stochastic to investigate the impact of different sources of (intrinsic) noise. This comparison indicates that different sources of noise (protein synthesis, partitioning in cell division) contribute comparable amounts of noise if protein synthesis is not or only weakly bursty. If protein synthesis is very bursty, the burstiness is the dominant noise source, independent of other details of the model. Finally, we discuss two sources of extrinsic noise: cell-to-cell variations in protein content due to cells being at different stages in the division cycles, which we show to be small (for the protein concentration and, surprisingly, also for the protein copy number per cell) and fluctuations in the growth rate, which can have a significant impact.
Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System
Cetiner, Sacit M; Kisner, Roger A; Muhlheim, Michael David; Fugate, David L
2015-07-01
Decision-making is the process of identifying and choosing alternatives where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and margin within the controllable domain using utility functions to evaluate current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc.
Jaramillo-Villegas, Jose A; Xue, Xiaoxiao; Wang, Pei-Hsun; Leaird, Daniel E; Weiner, Andrew M
2015-04-20
A path within the parameter space of detuning and pump power is demonstrated in order to obtain a single cavity soliton (CS) with certainty in SiN microring resonators in the anomalous dispersion regime. Once the single CS state is reached, it is possible to continue a path to compress it, broadening the corresponding single free spectral range (FSR) Kerr frequency comb. The first step to achieve this goal is to identify the stable regions in the parameter space via numerical simulations of the Lugiato-Lefever equation (LLE). Later, using this identification, we define a path from the stable modulation instability (SMI) region to the stable cavity solitons (SCS) region avoiding the chaotic and unstable regions.
Crowding Effects in Vehicular Traffic
Combinido, Jay Samuel L.; Lim, May T.
2012-01-01
While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems. PMID:23139762
Air traffic management evaluation tool
NASA Technical Reports Server (NTRS)
Sridhar, Banavar (Inventor); Sheth, Kapil S. (Inventor); Chatterji, Gano Broto (Inventor); Bilimoria, Karl D. (Inventor); Grabbe, Shon (Inventor); Schipper, John F. (Inventor)
2010-01-01
Method and system for evaluating and implementing air traffic management tools and approaches for managing and avoiding an air traffic incident before the incident occurs. The invention provides flight plan routing and direct routing or wind-optimal routing, using great circle navigation and spherical Earth geometry. The invention provides for aircraft dynamics effects, such as wind effects at each altitude, altitude changes, airspeed changes and aircraft turns, to provide predictions of aircraft trajectory (and, optionally, aircraft fuel use). A second system provides several aviation applications using the first system. These applications include conflict detection and resolution, miles-in-trail or minutes-in-trail aircraft separation, flight arrival management, flight re-routing, weather prediction and analysis, and interpolation of weather variables based upon sparse measurements.
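The great-circle geometry the routing relies on can be sketched with the standard haversine formula on a spherical Earth. This is textbook spherical geometry, not the patent's implementation; the mean Earth radius is an assumption of the sketch:

```python
import math

# Great-circle distance between two latitude/longitude points on a
# spherical Earth, via the haversine formula. radius_km is the mean
# Earth radius; a sketch of the geometry, not the patented system.

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))
```

Direct routing compares distances like this one against the filed flight-plan route; wind-optimal routing then perturbs the path using the per-altitude wind field.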
Personal Property Traffic Management Regulation
1991-10-01
transportation in domestic, domestic (offshore), and international traffic and determine the DOD interest and action required within the scope of the DOD...
CG Vessel Traffic Service Program.
1980-06-01
the plan called for the construction of a traffic control center, seven remote communications sites linked to the control center by microwave equipment...three or four remote LLLTV sites. In New Orleans, as in New York, the potential for catastrophe cannot be discounted, as vessel density is high...
Kinetic energy management in road traffic injury prevention: a call for action
Khorasani-Zavareh, Davoud; Bigdeli, Maryam; Saadat, Soheil; Mohammadi, Reza
2015-01-01
By virtue of their variability, mass and speed have important roles in the transfer of energy during a crash (kinetic energy). The amount of kinetic energy is important in determining injury severity; it is equal to one half of the vehicle mass multiplied by the square of the vehicle speed. To meet the Vision Zero policy (a traffic safety policy), prevention activities should be focused on vehicle speed management. Understanding the role of kinetic energy will help in developing measures to reduce the generation, distribution, and effects of this energy during a road traffic crash. Road traffic injury prevention activities necessitate kinetic energy management to improve road user safety. PMID:24284810
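The relationship stated above, E = (1/2) m v^2, is worth making concrete, since the quadratic dependence on speed is the whole argument for speed management:

```python
# Kinetic energy E = (1/2) * m * v^2, as described in the abstract.
# Doubling the speed quadruples the energy that must be dissipated in a crash.

def kinetic_energy_joules(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

# A 1000 kg car at 20 m/s (72 km/h) carries 200,000 J;
# at 40 m/s (144 km/h) it carries 800,000 J -- four times as much.
```

This is why a modest reduction in speed yields a disproportionate reduction in crash severity.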
Traffic accident analysis using GIS: a case study of Kyrenia City
NASA Astrophysics Data System (ADS)
Kara, Can; Akçit, Nuhcan
2015-06-01
Traffic accidents are a major cause of death in urban environments, so analyzing the locations of traffic accidents and their causes is crucial. In this manner, patterns of accidents and hotspot distributions are analyzed by using geographic information technology. Locations of the traffic accidents in the years 2011, 2012 and 2013 are combined to generate the kernel distribution map of Kyrenia City. This analysis aims to find highly dense intersections and segments within the city. Additionally, the spatial autocorrelation methods Local Moran's I and Getis-Ord Gi are employed. The results are discussed in detail for further analysis. Finally, required changes for numerous intersections are suggested to decrease the potential risks of high-density accident locations.
A system for large-scale automatic traffic sign recognition and mapping
NASA Astrophysics Data System (ADS)
Chigorin, A.; Konushin, A.
2013-10-01
We present a system for large-scale automatic traffic sign recognition and mapping and experimentally justify the design choices made for its components. Our system handles more than 140 classes of traffic signs and, because it is trained on synthetically generated images, does not require labor-intensive labelling of large amounts of training data. We evaluated our system on a large dataset of Russian traffic signs and have made this dataset publicly available to encourage future comparison.
Memory effects in microscopic traffic models and wide scattering in flow-density data
NASA Astrophysics Data System (ADS)
Treiber, Martin; Helbing, Dirk
2003-10-01
By means of microscopic simulations we show that noninstantaneous adaptation of the driving behavior to the traffic situation together with the conventional method to measure flow-density data provides a possible explanation for the observed inverse-λ shape and the wide scattering of flow-density data in “synchronized” congested traffic. We model a memory effect in the response of drivers to the traffic situation for a wide class of car-following models by introducing an additional dynamical variable (the “subjective level of service”) describing the adaptation of drivers to the surrounding traffic situation during the past few minutes and couple this internal state to parameters of the underlying model that are related to the driving style. For illustration, we use the intelligent-driver model (IDM) as the underlying model, characterize the level of service solely by the velocity, and couple the internal variable to the IDM parameter “time gap” to model an increase of the time gap in congested traffic (“frustration effect”), which is supported by single-vehicle data. We simulate open systems with a bottleneck and obtain flow-density data by implementing “virtual detectors.” The shape, relative size, and apparent “stochasticity” of the region of the scattered data points agree nearly quantitatively with empirical data. Wide scattering is even observed for identical vehicles, although the proposed model is a time-continuous, deterministic, single-lane car-following model with a unique fundamental diagram.
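The coupling of an internal "level of service" variable to the IDM time-gap parameter can be sketched as follows (an illustrative simplification, not the authors' calibrated model; all parameter values, and the helper names themselves, are placeholders):

```python
import math

def idm_accel(v, dv, gap, v0=33.3, T=1.2, a=1.0, b=1.5, s0=2.0):
    """Intelligent-driver-model acceleration; dv is the approaching rate
    to the leader, gap the bumper-to-bumper distance (m)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

def adapt_time_gap(T, v, v0=33.3, T_free=1.2, T_jam=1.8, tau=300.0, dt=0.1):
    """Relax the time gap toward a congestion-dependent target.

    The internal 'subjective level of service' is taken here simply as
    v / v0 (the abstract characterizes the level of service by velocity
    alone); tau sets the minutes-scale adaptation memory, so the time
    gap slowly grows in congestion (the 'frustration effect')."""
    level = min(v / v0, 1.0)
    T_target = T_jam + (T_free - T_jam) * level
    return T + (T_target - T) * dt / tau
```

In a simulation loop, `adapt_time_gap` would be called each step and its output fed back into `idm_accel` as `T`.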
Transition characteristic analysis of traffic evolution process for urban traffic network.
Wang, Longfei; Chen, Hong; Li, Yang
2014-01-01
Characterizing the dynamics of traffic states remains fundamental to solving diverse traffic problems. To gain more insight into traffic dynamics in the temporal domain, this paper explores the temporal characteristics and distinct regularities of the traffic evolution process in an urban traffic network. We define traffic state patterns by clustering multidimensional traffic time series using self-organizing maps, and construct a pattern transition network model suitable for representing and analyzing the evolution process. The methodology is illustrated by an application to flow-rate data from multiple road sections in the network of Shenzhen's Nanshan District, China. Analysis and numerical results demonstrate that the methodology permits extracting many useful traffic transition characteristics, including stability, preference, activity, and attractiveness. In addition, information about the relationships between these characteristics is extracted, which should be helpful in understanding the complex temporal evolution of traffic patterns.
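The clustering step that defines traffic state patterns can be sketched with a minimal self-organizing map (an illustrative implementation with a 1-D unit grid and linear initialization; the study's actual SOM configuration and data are not specified here):

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr=0.5, sigma=1.0, seed=0):
    """Minimal 1-D self-organizing map: linear weight initialization,
    decaying learning rate and neighborhood width."""
    rng = np.random.default_rng(seed)
    weights = np.linspace(data.min(axis=0), data.max(axis=0), n_units)
    for epoch in range(epochs):
        frac = 1.0 - epoch / epochs
        eta = lr * frac
        sig = max(0.1, sigma * frac)
        for x in rng.permutation(data):
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sig**2))
            weights += eta * h[:, None] * (x - weights)
    return weights

def assign_pattern(x, weights):
    """Traffic state pattern = index of the best-matching unit."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Two synthetic "traffic states" (e.g. low-flow vs high-flow vectors):
data = np.vstack([np.zeros((20, 2)), np.full((20, 2), 10.0)])
units = train_som(data)
```

Once every time step is assigned a pattern index, the sequence of indices yields the transitions from which the pattern transition network is built.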
Wei, Kun; Gao, Shilong; Zhong, Suchuan; Ma, Hong
2012-01-01
In dynamical systems theory, a system which can be described by differential equations is called a continuous dynamical system. In studies of genetic oscillation, most early-stage deterministic models are built on ordinary differential equations (ODE). Therefore, gene transcription, a vital part of genetic oscillation, is presupposed to be a continuous dynamical system by default. However, recent studies argued that discontinuous transcription might be more common than continuous transcription. In this paper, by appending the inserted silent interval lying between two neighboring transcriptional events to the end of the preceding event, we established that the running time for an intact transcriptional event increases and gene transcription thus shows slow dynamics. By globally replacing the original time increment for each state increment by a larger one, we introduced fractional differential equations (FDE) to describe such globally slow transcription. The impact of fractionization on genetic oscillation was then studied in two early-stage models--the Goodwin oscillator and the Rössler oscillator. By constructing a "dual memory" oscillator--the fractional delay Goodwin oscillator--we suggested that the four general requirements for generating genetic oscillation should be revised to be negative feedback, sufficient nonlinearity, sufficient memory and proper balancing of timescale. The numerical study of the fractional Rössler oscillator implied that the globally slow transcription tends to lower the chance of a coupled or more complex nonlinear genetic oscillatory system behaving chaotically. PMID:22679500
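The fractionalization described above amounts to replacing the first-order time derivative with a fractional one; a common way to write this uses the Caputo derivative (the Goodwin form below is a generic parameterization for illustration, not necessarily the paper's exact equations):

```latex
% Caputo fractional derivative of order 0 < \alpha < 1
{}^{C}\!D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
    \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, \mathrm{d}\tau .

% Fractional Goodwin oscillator: d/dt \to D^{\alpha} in each equation
{}^{C}\!D_t^{\alpha} x = \frac{a}{1 + z^{n}} - b\,x, \qquad
{}^{C}\!D_t^{\alpha} y = c\,x - d\,y, \qquad
{}^{C}\!D_t^{\alpha} z = e\,y - f\,z .
```

The memory interpretation follows from the integral kernel: the rate of change at time t weights the entire past trajectory, which is why adding a delay on top of it yields a "dual memory" oscillator.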
A three-variable model of deterministic chaos in the Belousov-Zhabotinsky reaction
NASA Astrophysics Data System (ADS)
Györgyi, László; Field, Richard J.
1992-02-01
CHAOS is exhibited by a wide variety of systems governed by nonlinear dynamic laws1-3. Its most striking feature is an apparent randomness which seems to contradict its deterministic origin. The best-studied chaotic chemical system is the Belousov-Zhabotinsky (BZ) reaction4-6 in a continuous-flow stirred-tank reactor (CSTR). Here we present a simple mechanism for the BZ reaction which allows us to develop a description in terms of a set of differential equations containing only three variables, the minimum number required to generate chaos in a continuous (non-iterative) dynamical system2. In common with experiments, our model shows aperiodicity and transitions between periodicity and chaos near bifurcations between oscillatory and steady-state behaviour, which occur at both low and high CSTR flow rates. While remaining closely related to a real chaotic chemical system, our model is sufficiently simple to allow detailed mathematical analysis. It also reproduces many other features of the BZ reaction better than does the simple Oregonator7 (which cannot produce chaos).
NASA Astrophysics Data System (ADS)
Guo, H.; Karpov, M.; Lucas, E.; Kordts, A.; Pfeiffer, M. H. P.; Brasch, V.; Lihachev, G.; Lobanov, V. E.; Gorodetsky, M. L.; Kippenberg, T. J.
2017-01-01
Temporal dissipative Kerr solitons in optical microresonators enable the generation of ultrashort pulses and low-noise frequency combs at microwave repetition rates. They have been demonstrated in a growing number of microresonator platforms, enabling chip-scale frequency combs, optical synthesis of low-noise microwaves and multichannel coherent communications. In all these applications, accessing and maintaining a single-soliton state is a key requirement--one that remains an outstanding challenge. Here, we study the dynamics of multiple-soliton states and report the discovery of a simple mechanism that deterministically switches the soliton state by reducing the number of solitons one by one. We demonstrate this control in Si3N4 and MgF2 resonators and, moreover, we observe a secondary peak to emerge in the response of the system to a pump modulation, an effect uniquely associated with the soliton regime. Exploiting this feature, we map the multi-stability diagram of a microresonator experimentally. Our measurements show the physical mechanism of the soliton switching and provide insight into soliton dynamics in microresonators. The technique provides a method to sequentially reduce, monitor and stabilize an arbitrary state with solitons, in particular allowing for feedback stabilization of single-soliton states, which is necessary for practical applications.
Dynamic avalanche breakdown of a p-n junction: Deterministic triggering of a plane streamer front
NASA Astrophysics Data System (ADS)
Rodin, Pavel; Grekhov, Igor
2005-06-01
We discuss the dynamic impact ionization breakdown of a high voltage p-n junction which occurs when the electric field is increased above the threshold of avalanche impact ionization on a time scale smaller than the inverse thermogeneration rate. The avalanche-to-streamer transition characterized by generation of dense electron-hole plasma capable of screening the applied external electric field occurs in such regimes. We argue that the experimentally observed deterministic triggering of the plane streamer front at the electric-field strength above the threshold of avalanche impact ionization, yet below the threshold of band-to-band tunneling, is generally caused by field-enhanced ionization of deep-level centers. We suggest that the process-induced sulfur centers and native defects such as EL2, HB2, and HB5 centers initiate the front in Si and GaAs structures, respectively. In deep-level-free structures the plane streamer front is triggered by Zener band-to-band tunneling.
Bright, Joanne N; Evans, Denis J; Searles, Debra J
2005-05-15
Deterministic thermostats are frequently employed in nonequilibrium molecular dynamics simulations in order to remove the heat produced irreversibly over the course of such simulations. The simplest thermostat is the Gaussian thermostat, which satisfies Gauss's principle of least constraint and fixes the peculiar kinetic energy. There are of course infinitely many ways to thermostat systems, e.g., by fixing ∑i |pi|^(μ+1). In the present paper we provide, for the first time, convincing arguments as to why the conventional Gaussian isokinetic thermostat (μ = 1) is unique in this class. We show that this thermostat minimizes the phase space compression and is the only thermostat for which the conjugate pairing rule holds. Moreover, it is shown that for finite sized systems in the absence of an applied dissipative field, all other thermostats (μ ≠ 1) perform work on the system in the same manner as a dissipative field while simultaneously removing the dissipative heat so generated. All other thermostats (μ ≠ 1) are thus autodissipative. Among all μ thermostats, only the μ = 1 Gaussian thermostat permits an equilibrium state.
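For reference, the conventional Gaussian isokinetic thermostat referred to above can be written explicitly (standard field-free form; the notation is generic rather than the paper's):

```latex
% Gaussian isokinetic (\mu = 1) equations of motion, field-free case
\dot{\mathbf{r}}_i = \frac{\mathbf{p}_i}{m}, \qquad
\dot{\mathbf{p}}_i = \mathbf{F}_i - \alpha\,\mathbf{p}_i, \qquad
\alpha = \frac{\sum_i \mathbf{F}_i \cdot \mathbf{p}_i}
              {\sum_i \mathbf{p}_i \cdot \mathbf{p}_i}.
```

The multiplier α is chosen by Gauss's principle of least constraint so that the kinetic energy ∑i pi²/2m is an exact constant of the motion.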
Efficiency of transport in periodic potentials: dichotomous noise contra deterministic force
NASA Astrophysics Data System (ADS)
Spiechowicz, J.; Łuczka, J.; Machura, L.
2016-05-01
We study the transport of an inertial Brownian particle moving in a symmetric and periodic one-dimensional potential, and subjected to both a symmetric, unbiased external harmonic force as well as biased dichotomic noise η (t) also known as a random telegraph signal or a two state continuous-time Markov process. In doing so, we concentrate on the previously reported regime (Spiechowicz et al 2014 Phys. Rev. E 90 032104) for which non-negative biased noise η (t) in the form of generalized white Poissonian noise can induce anomalous transport processes similar to those generated by a deterministic constant force F=< η (t)> but significantly more effective than F, i.e. the particle moves much faster, the velocity fluctuations are noticeably reduced and the transport efficiency is enhanced several times. Here, we confirm this result for the case of dichotomous fluctuations which, in contrast to white Poissonian noise, can assume positive as well as negative values and examine the role of thermal noise in the observed phenomenon. We focus our attention on the impact of bidirectionality of dichotomous fluctuations and reveal that the effect of nonequilibrium noise enhanced efficiency is still detectable. This result may explain transport phenomena occurring in strongly fluctuating environments of both physical and biological origin. Our predictions can be corroborated experimentally by use of a setup that consists of a resistively and capacitively shunted Josephson junction.
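The dichotomous (random telegraph) driving and the inertial Langevin dynamics described above can be sketched as follows (an Euler-Maruyama discretization with illustrative parameter values; the potential, rates and amplitudes are assumptions, not the paper's):

```python
import math
import numpy as np

def telegraph_noise(n_steps, dt, a=1.0, b=-0.5, rate=1.0, seed=0):
    """Two-state Markov (random telegraph) signal switching between the
    values a and b with equal transition rates."""
    rng = np.random.default_rng(seed)
    state = a
    out = np.empty(n_steps)
    for i in range(n_steps):
        out[i] = state
        if rng.random() < rate * dt:  # per-step switching probability
            state = b if state == a else a
    return out

def simulate_particle(eta, dt, gamma=0.9, temp=0.01, seed=1):
    """Euler-Maruyama integration of an inertial Brownian particle in
    the periodic potential V(x) = -cos(x), driven by the signal eta
    plus thermal noise."""
    rng = np.random.default_rng(seed)
    x = v = 0.0
    xs = np.empty(len(eta))
    for i, e in enumerate(eta):
        force = -math.sin(x)  # -dV/dx
        v += (-gamma * v + force + e) * dt \
             + math.sqrt(2 * gamma * temp * dt) * rng.standard_normal()
        x += v * dt
        xs[i] = x
    return xs

eta = telegraph_noise(20000, 0.01)
path = simulate_particle(eta, 0.01)
```

Comparing ensembles of such trajectories against runs driven by the constant force F = mean(eta) is the kind of comparison the abstract describes.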
Outer Space Traffic Safety Standards
NASA Astrophysics Data System (ADS)
Larsen, Paul B.
2013-09-01
Management of traffic in outer space is a major safety problem. Traffic is increasing. Most satellites are navigable, but they must co-exist with space debris, which is not. We need minimum safety rules for outer space traffic. We have the possible beginnings of international safety standards in the form of national space object tracking; Global Navigation Satellite System (GNSS) standardization through ICAO and the International Committee on GNSS (ICG); the IADC space debris guidelines; and the proposed Code of Conduct. However, safety could be improved by standards for such activities as licensing launches of satellites into outer space; accident investigation and search and rescue; and operational safety zones around space objects such as the International Space Station. This paper describes the legal authority for minimum safety standards. It considers safety standards established by private agreements among commercial operators. Finally, it examines a number of options for an international forum to establish safety standards, including self-regulation, COPUOS, ICAO, ITU, a space code of conduct, and a new space organization.
Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view
NASA Astrophysics Data System (ADS)
Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.
2015-12-01
Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODE) as the elementary part of the system. To perform the analyses, scenes of study are generated from ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen to be chaotic in order to ensure sensitivity to initial conditions, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are considered: the chaotic oscillators composing the scene of study are taken either as independent, or in phase synchronization. Scale behaviors are analyzed by considering the scene of study as aggregations (obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, with a modified parameterization [6]. This is shown by numerical analyses, then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E, 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons
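The scene-building and aggregation procedure can be sketched as follows (a minimal fixed-step RK4 integration of the Rössler system with the standard parameters a = b = 0.2, c = 5.7, which are an assumption here; aggregation is taken as plain ensemble averaging):

```python
import numpy as np

def rossler_rhs(s, a=0.2, b=0.2, c=5.7):
    """Rössler vector field dx/dt = -y - z, dy/dt = x + a y,
    dz/dt = b + z (x - c)."""
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def integrate(s0, dt=0.01, n_steps=5000):
    """Fixed-step RK4 integration, returning the sampled trajectory."""
    s = np.array(s0, dtype=float)
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = rossler_rhs(s)
        k2 = rossler_rhs(s + 0.5 * dt * k1)
        k3 = rossler_rhs(s + 0.5 * dt * k2)
        k4 = rossler_rhs(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out

# "Scene" of identical oscillators started close together, then
# aggregated by averaging (the spatial mean of the signal):
ensemble = [integrate((1.0 + 0.01 * i, 1.0, 0.0)) for i in range(3)]
aggregate = np.mean(ensemble, axis=0)
```

The global modeling step of the paper would then attempt to fit an elementary-type ODE to `aggregate` and compare its parameters with the original ones.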
Cortes, Jesus M.; Desroches, Mathieu; Rodrigues, Serafim; Veltz, Romain; Muñoz, Miguel A.; Sejnowski, Terrence J.
2013-01-01
Short-term synaptic plasticity strongly affects the neural dynamics of cortical networks. The Tsodyks and Markram (TM) model for short-term synaptic plasticity accurately accounts for a wide range of physiological responses at different types of cortical synapses. Here, we report a route to chaotic behavior via a Shilnikov homoclinic bifurcation that dynamically organizes some of the responses in the TM model. In particular, the presence of such a homoclinic bifurcation strongly affects the shape of the trajectories in the phase space and induces highly irregular transient dynamics; indeed, in the vicinity of the Shilnikov homoclinic bifurcation, the number of population spikes and their precise timing are unpredictable and highly sensitive to the initial conditions. Such an irregular deterministic dynamics has its counterpart in stochastic/network versions of the TM model: The existence of the Shilnikov homoclinic bifurcation generates complex and irregular spiking patterns and—acting as a sort of springboard—facilitates transitions between the down-state and unstable periodic orbits. The interplay between the (deterministic) homoclinic bifurcation and stochastic effects may give rise to some of the complex dynamics observed in neural systems. PMID:24062464
Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
A deterministic particle method for one-dimensional reaction-diffusion equations
NASA Technical Reports Server (NTRS)
Mascagni, Michael
1995-01-01
We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider time-explicit and time-implicit methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method automatically adapts to large gradients in the solution.
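The implicit time stepping with Newton iteration mentioned above can be sketched generically (the paper's particle-position ODEs are not reproduced here, so a Fisher-type reaction-diffusion system discretized by the method of lines stands in as the ODE system; grid size and parameters are illustrative):

```python
import numpy as np

def fisher_rhs(u, D=1.0, dx=0.1):
    """Method-of-lines RHS for u_t = D u_xx + u(1 - u), zero-flux ends."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    return D * lap + u * (1 - u)

def backward_euler_step(u, dt, rhs=fisher_rhs, max_newton=10, tol=1e-10):
    """One implicit (backward Euler) step, solved by Newton iteration
    with a finite-difference Jacobian (adequate for a small sketch)."""
    y = u.copy()
    n = len(u)
    for _ in range(max_newton):
        g = y - u - dt * rhs(y)        # residual of y = u + dt f(y)
        if np.linalg.norm(g) < tol:
            break
        J = np.empty((n, n))
        eps = 1e-7
        for j in range(n):             # finite-difference Jacobian of g
            yp = y.copy()
            yp[j] += eps
            J[:, j] = ((yp - u - dt * rhs(yp)) - g) / eps
        y = y - np.linalg.solve(J, g)
    return y
```

Replacing the Newton update with a fixed-point sweep y ← u + dt·f(y) would give the Picard iteration the abstract contrasts it with.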
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; Falcao Salles, Joana
2015-03-17
Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.
Recent Achievements of the Neo-Deterministic Seismic Hazard Assessment in the CEI Region
Panza, G. F.; Kouteva, M.; Vaccari, F.; Peresan, A.; Romanelli, F.; Cioflan, C. O.; Radulian, M.; Marmureanu, G.; Paskaleva, I.; Gribovszki, K.; Varga, P.; Herak, M.; Zaichenco, A.; Zivcic, M.
2008-07-08
A review of the recent achievements of the innovative neo-deterministic approach to seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales--regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and of the seismic wave propagation medium to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of selected metropolitan areas, are shown.
A deterministic and statistical energy analysis of tyre cavity resonance noise
NASA Astrophysics Data System (ADS)
Mohamed, Zamri; Wang, Xu
2016-03-01
Tyre cavity resonance was studied using a combination of deterministic analysis and statistical energy analysis, where the deterministic part was implemented with the impedance compact mobility matrix method and the statistical part with the statistical energy analysis method. While the impedance compact mobility matrix method offers a deterministic solution for the cavity pressure response and the compliant-wall vibration velocity response in the low-frequency range, the statistical energy analysis method offers a statistical solution for the responses in the high-frequency range. In the mid-frequency range, a combination of the two methods can identify system coupling characteristics. Both methods were compared against commercial software in order to validate the results. The combined analysis result was verified by measurements on a tyre-cavity physical model. The analysis method developed in this study can be applied to other similar toroidal structural-acoustic systems.
Conflict-free trajectory planning for air traffic control automation
NASA Technical Reports Server (NTRS)
Slattery, Rhonda; Green, Steve
1994-01-01
As the traffic demand continues to grow within the National Airspace System (NAS), the need for long-range planning (30 minutes plus) of arrival traffic increases greatly. Research into air traffic control (ATC) automation at ARC has led to the development of the Center-TRACON Automation System (CTAS). CTAS determines optimum landing schedules for arrival traffic and assists controllers in meeting those schedules safely and efficiently. One crucial element in the development of CTAS is the capability to perform long-range (20 minutes) and short-range (5 minutes) conflict prediction and resolution once landing schedules are determined. The determination of conflict-free trajectories within the Center airspace is particularly difficult because of large variations in speed and altitude. The paper describes the current design and implementation of the conflict prediction and resolution tools used to generate CTAS advisories in Center airspace. Conflict criteria (separation requirements) are defined and the process of separation prediction is described. The major portion of the paper will describe the current implementation of CTAS conflict resolution algorithms in terms of the degrees of freedom for resolutions as well as resolution search techniques. The tools described in this paper have been implemented in a research system designed to rapidly develop and evaluate prototype concepts and will form the basis for an operational ATC automation system.
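The core of pairwise conflict prediction can be sketched as a separation check along predicted trajectories (a deliberately minimal illustration, not the CTAS algorithms: the 5 nm threshold is the familiar en-route radar separation minimum, and the trajectory format is an assumption; altitude and the resolution search are omitted):

```python
import math

def min_separation(traj_a, traj_b):
    """Minimum horizontal separation between two predicted trajectories.

    Each trajectory is a list of (t, x, y) samples on a common time
    grid, with x and y in nautical miles.
    """
    return min(math.hypot(xa - xb, ya - yb)
               for (ta, xa, ya), (tb, xb, yb) in zip(traj_a, traj_b))

def conflict(traj_a, traj_b, sep_nm=5.0):
    """True if the aircraft pair is predicted to violate the
    horizontal separation minimum at any common sample time."""
    return min_separation(traj_a, traj_b) < sep_nm

# Parallel tracks 10 nm apart vs crossing tracks:
a = [(t, float(t), 0.0) for t in range(10)]
b = [(t, float(t), 10.0) for t in range(10)]
c = [(t, float(9 - t), 0.0) for t in range(10)]
```

A resolution search would then perturb one trajectory's degrees of freedom (speed, route, altitude) until `conflict` is False for every pair.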
Neo-Deterministic and Probabilistic Seismic Hazard Assessments: a Comparative Analysis
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Nekrasova, Anastasia; Kossobokov, Vladimir; Panza, Giuliano F.
2016-04-01
Objective testing is the key issue towards any reliable seismic hazard assessment (SHA). Different earthquake hazard maps must demonstrate their capability in anticipating ground shaking from future strong earthquakes before an appropriate use for different purposes - such as engineering design, insurance, and emergency management. Quantitative assessment of maps performances is an essential step also in scientific process of their revision and possible improvement. Cross-checking of probabilistic models with available observations and independent physics based models is recognized as major validation procedure. The existing maps from the classical probabilistic seismic hazard analysis (PSHA), as well as those from the neo-deterministic analysis (NDSHA), which have been already developed for several regions worldwide (including Italy, India and North Africa), are considered to exemplify the possibilities of the cross-comparative analysis in spotting out limits and advantages of different methods. Where the data permit, a comparative analysis versus the documented seismic activity observed in reality is carried out, showing how available observations about past earthquakes can contribute to assess performances of the different methods. Neo-deterministic refers to a scenario-based approach, which allows for consideration of a wide range of possible earthquake sources as the starting point for scenarios constructed via full waveforms modeling. The method does not make use of empirical attenuation models (i.e. Ground Motion Prediction Equations, GMPE) and naturally supplies realistic time series of ground shaking (i.e. complete synthetic seismograms), readily applicable to complete engineering analysis and other mitigation actions. The standard NDSHA maps provide reliable envelope estimates of maximum seismic ground motion from a wide set of possible scenario earthquakes, including the largest deterministically or historically defined credible earthquake. In addition
Implementation of Gy-Eq for deterministic effects limitation in shield design
NASA Technical Reports Server (NTRS)
Wilson, John W.; Kim, Myung-Hee Y.; De Angelis, Giovanni; Cucinotta, Francis A.; Yoshizawa, Nobuaki; Badavi, Francis F.
2002-01-01
The NCRP has recently defined RBE values and a new quantity (Gy-Eq) for use in estimation of deterministic effects in space shielding and operations. The NCRP's RBE for neutrons is left ambiguous and not fully defined. In the present report we will suggest a complete definition of neutron RBE consistent with the NCRP recommendations and evaluate attenuation properties of deterministic effects (Gy-Eq) in comparison with other dosimetric quantities.
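The Gy-Eq quantity is an RBE-weighted absorbed dose summed over radiation components, which can be sketched directly (the RBE values below are illustrative placeholders, not the NCRP's recommended values):

```python
def gray_equivalent(doses_gy, rbe):
    """Gy-Eq = sum over radiation components of RBE_i * D_i.

    doses_gy: absorbed dose (Gy) per component; rbe: the RBE table
    for deterministic effects (placeholder values here).
    """
    return sum(rbe[kind] * dose for kind, dose in doses_gy.items())

# Hypothetical RBE table and mixed-field dose behind shielding:
RBE_EXAMPLE = {"photons": 1.0, "protons": 1.5, "neutrons": 3.0}
total = gray_equivalent({"photons": 0.1, "neutrons": 0.02}, RBE_EXAMPLE)
```

Attenuation studies like the one described then compare how this weighted sum, rather than the raw dose, falls off with shield depth.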
On the application of deterministic optimization methods to stochastic control problems
NASA Technical Reports Server (NTRS)
Kramer, L. C.; Athans, M.
1974-01-01
A technique is presented by which deterministic optimization techniques, for example, the maximum principle of Pontriagin, can be applied to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed but for two expectation operations, the optimization being deterministic. The use of the technique in treating problems with quadratic and nonquadratic costs is illustrated.
Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
Rice, A. F.; Roussin, R. W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Nextgen Technologies for Mid-Term and Far-Term Air Traffic Control Operations
NASA Technical Reports Server (NTRS)
Prevot, Thomas
2009-01-01
This paper describes technologies for mid-term and far-term air traffic control operations in the Next Generation Air Transportation System (NextGen). The technologies were developed and evaluated with human-in-the-loop simulations in the Airspace Operations Laboratory (AOL) at the NASA Ames Research Center. The simulations were funded by several research focus areas within NASA's Airspace Systems program and some were co-funded by the FAA's Air Traffic Organization for Planning, Research and Technology.
Coexistence of up- and downstream traffic waves on a ring road
NASA Astrophysics Data System (ADS)
van der Weele, K.; Kanellopoulos, G.
2016-09-01
It is an observational fact that density waves in vehicle traffic can move in either direction: small-amplitude waves travel in the same direction as the cars (downstream) whereas high-amplitude waves or "jams" travel in the opposite direction (upstream). We construct a model of ring road traffic to demonstrate how this comes about. Our model shows the spontaneous generation of these density waves, explains their stability properties, and pinpoints the precise density level at which the wave speed changes direction.
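The sign change of the wave velocity at a critical density can be illustrated with the classical Greenshields fundamental diagram (an illustration of the kinematic-wave argument, not the authors' ring-road model; parameter values are arbitrary):

```python
def greenshields_flow(k, v_free=100.0, k_jam=120.0):
    """Greenshields fundamental diagram Q(k) = v_f * k * (1 - k/k_jam),
    with k in veh/km and v_free in km/h."""
    return v_free * k * (1 - k / k_jam)

def wave_speed(k, v_free=100.0, k_jam=120.0):
    """Kinematic wave speed c = dQ/dk: positive waves travel downstream
    (with the cars), negative waves upstream, and the sign flips at the
    capacity density k_jam / 2."""
    return v_free * (1 - 2 * k / k_jam)
```

Below capacity density the small-amplitude waves move downstream; above it, jam waves propagate upstream, matching the observation the abstract starts from.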
Self-Organization in 2D Traffic Flow Model with Jam-Avoiding Drive
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-04-01
A stochastic cellular automaton (CA) model is presented to investigate traffic jams formed by self-organization in two-dimensional (2D) traffic flow. The CA model is an extended version of the 2D asymmetric exclusion model that takes jam-avoiding drive into account. Each site contains a car moving up, a car moving right, or is empty. An up car blocked ahead by other cars can shift right with probability p_ja. It is shown that three phases (the low-density phase, the intermediate-density phase and the high-density phase) appear in the traffic flow. The intermediate-density phase is characterized by the rightward shifting of up cars. The jamming transition to the high-density jamming phase occurs at a higher car density than without jam-avoiding drive. The jamming transition point p_2c increases with the shifting probability p_ja. In the deterministic limit p_ja = 1, a new jamming transition is found from the low-density synchronized-shifting phase to the high-density moving phase with increasing car density. In the synchronized-shifting phase, up cars do not move up but shift right, synchronizing with the motion of right cars. We show that jam-avoiding drive has an important effect on the dynamical jamming transition.
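One sub-step of such a CA can be sketched as follows (a loose illustration of the jam-avoiding rule, not the paper's exact update scheme; the alternating sub-step for right-moving cars is omitted and the collision handling is a simplifying assumption):

```python
import numpy as np

EMPTY, UP, RIGHT = 0, 1, 2

def step_up_cars(grid, p_ja=1.0, rng=None):
    """One parallel sub-step updating the up-moving cars on a periodic
    lattice. A blocked up car shifts sideways (rightward) with
    probability p_ja; right-moving cars would be updated in a separate,
    alternating sub-step."""
    rng = rng or np.random.default_rng(0)
    h, w = grid.shape
    new = grid.copy()
    for i in range(h):
        for j in range(w):
            if grid[i, j] != UP:
                continue
            ahead = ((i - 1) % h, j)      # "up" = decreasing row index
            side = (i, (j + 1) % w)
            if grid[ahead] == EMPTY and new[ahead] == EMPTY:
                new[ahead] = UP           # move forward
                new[i, j] = EMPTY
            elif grid[side] == EMPTY and new[side] == EMPTY \
                    and rng.random() < p_ja:
                new[side] = UP            # jam-avoiding sideways shift
                new[i, j] = EMPTY
    return new
```

Checking both `grid` and `new` at the target cell prevents two cars from claiming the same site, and prevents a car from entering a site vacated in the same sweep.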
Confined Crystal Growth in Space. Deterministic vs Stochastic Vibroconvective Effects
NASA Astrophysics Data System (ADS)
Ruiz, Xavier; Bitlloch, Pau; Ramirez-Piscina, Laureano; Casademunt, Jaume
The analysis of the correlations between characteristics of the acceleration environment and the quality of crystalline materials grown in microgravity remains an open and interesting question. Acceleration disturbances in space environments usually give rise to effective gravity pulses, gravity pulse trains of finite duration, quasi-steady accelerations, or g-jitters. To quantify these disturbances, deterministic translational plane-polarized signals have largely been used in the literature [1]. In the present work, we take an alternative approach that models g-jitters as a stochastic process in the form of so-called narrow-band noise, designed to capture the main statistical properties of realistic g-jitters. In particular, we compare their effects to those of single-frequency disturbances. The crystalline quality has been characterized, following previous analyses, in terms of two parameters: the longitudinal and the radial segregation coefficients. The first averages the dopant distribution transversally, providing continuous longitudinal information on the degree of segregation along the growth process. The radial segregation characterizes the degree of lateral non-uniformity of the dopant at the solid-liquid interface at each instant of growth. To complete the description, and because heat flux fluctuations at the interface have a direct impact on crystal growth quality (growth striations), the time dependence of a Nusselt number associated with the growing interface has also been monitored. For realistic g-jitters acting orthogonally to the thermal gradient, the longitudinal segregation remains practically unperturbed in all simulated cases. The Nusselt number is likewise not significantly affected by the noise. On the other hand, radial segregation, despite its low magnitude, exhibits a peculiar low-frequency response in all realizations. [1] X. Ruiz, "Modelling of the influence of residual gravity on the segregation in
Traffic Flow Management Wrap-Up
NASA Technical Reports Server (NTRS)
Grabbe, Shon
2011-01-01
Traffic Flow Management involves the scheduling and routing of air traffic subject to airport and airspace capacity constraints and the efficient use of available airspace. Significant challenges in this area include: (1) weather integration and forecasting, (2) accounting for user preferences in the Traffic Flow Management decision-making process, and (3) understanding and mitigating the environmental impacts of air traffic. To address these challenges, researchers in the Traffic Flow Management area are developing modeling, simulation, and optimization techniques to route and schedule air traffic flights and flows while accommodating user preferences, accounting for system uncertainties, and considering the environmental impacts of aviation. This presentation will highlight some of the major challenges facing researchers in this domain, while also showcasing recent innovations designed to address these challenges.
Bayesian neural networks for internet traffic classification.
Auld, Tom; Moore, Andrew W; Gull, Stephen F
2007-01-01
Internet traffic identification is an important tool for network management. It allows operators to better predict future traffic matrices and demands, security personnel to detect anomalous behavior, and researchers to develop more realistic traffic models. We present here a traffic classifier that can achieve high accuracy across a range of application types without any source or destination host-address or port information. We use supervised machine learning based on a Bayesian trained neural network. Though our technique uses training data with categories derived from packet content, training and testing were done using features derived from packet streams consisting of one or more packet headers. By providing classification without access to the contents of packets, our technique offers wider application than methods that require full packet payloads for classification. This is a powerful advantage: samples of classified traffic permit the categorization of traffic based only upon commonly available information.
STOP: Can We Minimize OR Traffic?
Elliott, Sara; Parker, Stacy; Mills, Judi; Meeusen, Lindsay; Frana, Theresa; Anderson, Marie; Storsveen, Amy; White, Amy
2015-10-01
Perioperative nurses at our institution voiced concerns about the amount of traffic in the ORs. We formed a workgroup consisting of perioperative nurses, educators, and leaders and initiated a quality improvement (QI) project to identify the amount of OR traffic that occurs during a procedure. The workgroup developed a check sheet to record door swings, staff classifications, reasons for opening the door, and the number of people in the OR at 15-minute intervals. Baseline results showed that average door swings ranged from 33 per hour in general surgery to 54 per hour in cardiac surgery. Nurses accounted for the most traffic, citing retrieving supplies as the main reason. Interventions focused on decreasing nurse traffic for retrieval of supplies in general surgery. Follow-up observations showed that average door swings increased to 41 per hour in general surgery, but nurse traffic decreased. Monitoring and limiting traffic could positively affect patient safety and outcomes.
A System for Traffic Violation Detection
Aliane, Nourdine; Fernandez, Javier; Mata, Mario; Bemposta, Sergio
2014-01-01
This paper describes the framework and components of an experimental platform for an advanced driver assistance system (ADAS) aimed at providing drivers with a feedback about traffic violations they have committed during their driving. The system is able to detect some specific traffic violations, record data associated to these faults in a local data-base, and also allow visualization of the spatial and temporal information of these traffic violations in a geographical map using the standard Google Earth tool. The test-bed is mainly composed of two parts: a computer vision subsystem for traffic sign detection and recognition which operates during both day and nighttime, and an event data recorder (EDR) for recording data related to some specific traffic violations. The paper covers firstly the description of the hardware architecture and then presents the policies used for handling traffic violations. PMID:25421737
Empirical synchronized flow in oversaturated city traffic
NASA Astrophysics Data System (ADS)
Kerner, Boris S.; Hemmerle, Peter; Koller, Micha; Hermanns, Gerhard; Klenov, Sergey L.; Rehborn, Hubert; Schreckenberg, Michael
2014-09-01
Based on a study of anonymized GPS probe vehicle traces measured by personal navigation devices in vehicles randomly distributed in city traffic, empirical synchronized flow in oversaturated city traffic has been revealed. It turns out that real oversaturated city traffic resulting from speed breakdown in a city in most cases can be considered random spatiotemporal alternations between sequences of moving queues and synchronized flow patterns in which the moving queues do not occur.
Traffic Management for Satellite-ATM Networks
NASA Technical Reports Server (NTRS)
Goyal, Rohit; Jain, Raj; Fahmy, Sonia; Vandalore, Bobby; Goyal, Mukul
1998-01-01
Various issues associated with "Traffic Management for Satellite-ATM Networks" are presented in viewgraph form. Specific topics include: 1) Traffic management issues for TCP/IP based data services over satellite-ATM networks; 2) Design issues for TCP/IP over ATM; 3) Optimization of the performance of TCP/IP over ATM for long delay networks; and 4) Evaluation of ATM service categories for TCP/IP traffic.
Network traffic analysis using dispersion patterns
Khan, F. N.
2010-03-15
The Verilog code is used to map a measurement solution onto an FPGA to analyze network traffic. It realizes a set of Bloom filters and counters, along with associated control logic, that can quickly measure statistics such as InDegree, OutDegree, and Depth in the context of Traffic Dispersion Graphs. Such patterns are helpful in classifying network activity, such as peer-to-peer traffic and port scanning.
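Although the implementation described above is in Verilog, the same Bloom-filter-plus-counter idea can be sketched in software. In the sketch below (all class and method names are hypothetical), the Bloom filter deduplicates observed edges of the Traffic Dispersion Graph so each (source, destination) pair increments the degree counters at most once:

```python
import hashlib
from collections import defaultdict

class BloomFilter:
    """Fixed-size Bloom filter: membership tests may give false positives,
    never false negatives. Uses k salted SHA-256 hashes per item."""
    def __init__(self, size=1 << 16, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8 + 1)

    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

class TDGCounters:
    """Approximate per-node degree counters for a Traffic Dispersion Graph.
    The Bloom filter suppresses repeated edges, so counts track distinct
    communication partners rather than packet volume."""
    def __init__(self):
        self.seen = BloomFilter()
        self.out_deg = defaultdict(int)
        self.in_deg = defaultdict(int)

    def observe(self, src, dst):
        edge = f"{src}->{dst}"
        if edge not in self.seen:
            self.seen.add(edge)
            self.out_deg[src] += 1
            self.in_deg[dst] += 1
```

A host with a large OutDegree across many distinct destinations would, in this scheme, stand out as a port-scanning or peer-to-peer candidate, which is the classification use the abstract mentions.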
STOL Traffic environment and operational procedures
NASA Technical Reports Server (NTRS)
Schlundt, R. W.; Dewolf, R. W.; Ausrotas, R. A.; Curry, R. E.; Demaio, D.; Keene, D. W.; Speyer, J. L.; Weinreich, M.; Zeldin, S.
1972-01-01
The expected traffic environment for an intercity STOL transportation system is examined, and operational procedures are discussed in order to identify problem areas that impact STOL avionics requirements. Factors considered include: traffic densities, STOL/CTOL/VTOL traffic mix, the expected ATC environment, aircraft noise models and community noise impact, flight paths for noise abatement, wind considerations affecting landing, approach and landing considerations, STOLport site selection, runway capacity, and STOL operations at jetports, suburban airports, and separate STOLports.
Modeling self-consistent multi-class dynamic traffic flow
NASA Astrophysics Data System (ADS)
Cho, Hsun-Jung; Lo, Shih-Ching
2002-09-01
In this study, we present a systematic self-consistent multiclass multilane traffic model derived from the vehicular Boltzmann equation and the traffic dispersion model. The multilane domain is considered as a two-dimensional space, and the interaction among vehicles in the domain is described by a dispersion model. The reason we treat a multilane domain as a two-dimensional space is that the driving behavior of road users, especially motorcyclists, may not be restricted by lanes. The dispersion model, a nonlinear Poisson equation, is derived from car-following theory and the equilibrium assumption. Under the concept that all classes of users share the finite section, the density is distributed on the road by the dispersion model. In addition, the dynamic evolution of the traffic flow is determined by the systematic gas-kinetic model derived from the Boltzmann equation. Multiplying the Boltzmann equation by the zeroth-, first-, and second-order moment functions, integrating both sides, and applying the chain rule yields the continuity, motion, and variance equations, respectively. However, the second-order moment function employed by previous researchers, the square of the individual velocity, does not have a physical meaning in traffic flow. Although the second-order expansion results in the velocity variance equation, additional terms may be generated. The velocity variance equation we propose is derived by multiplying the Boltzmann equation by the individual velocity variance. It modifies the previous model and yields a new gas-kinetic traffic flow model. By coupling the gas-kinetic model and the dispersion model, a self-consistent system is obtained.
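The moment hierarchy sketched in this abstract follows the standard gas-kinetic pattern. As a hedged illustration in generic notation (not necessarily the authors' exact closure), let f(x, v, t) be the phase-space vehicle density, with density ρ = ∫ f dv, mean speed V defined by ρV = ∫ v f dv, and velocity variance Θ. Multiplying the Boltzmann-type equation by 1, v, and (v − V)² and integrating over v gives, schematically:

```latex
% Zeroth moment: continuity equation
\partial_t \rho + \partial_x(\rho V) = 0
% First moment: motion (momentum) equation
\partial_t(\rho V) + \partial_x\!\left(\rho V^2 + \rho\Theta\right) = \rho\,\mathcal{A}
% Second central moment: velocity variance equation
\partial_t(\rho\Theta) + \partial_x\!\left(\rho V \Theta + J\right) = \rho\,\mathcal{B}
```

Here the right-hand sides 𝓐 and 𝓑 and the variance flux J stand in for the model's relaxation and interaction terms; the abstract's point is that taking the second central moment (v − V)², rather than v², yields a variance equation whose terms carry physical meaning for traffic.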
Study and Simulation of Traffic Behavior in Cellular Network
NASA Astrophysics Data System (ADS)
Madhup, D. K.; Shrestha, C. L.; Sharma, R. K.
2007-07-01
Cellular radio systems accommodate a large number of users with a limited radio spectrum. The concept of trunking allows a large number of users to share a relatively small number of channels in a cell by providing access to each user, on demand, from a pool of available channels. Traffic engineering deals with provisioning communication circuits in a given area for a number of subscribers with a required grade of service (GOS). Traffic in any cell depends upon the number of users, the average request rate, and the average call duration. A certain number of channels is required to meet the required GOS. To design an optimum-capacity cellular system, the traffic behavior of that system is important. The number of channels required can be estimated using the Erlang formula and the Erlang table. The Erlang table is not always useful for calculating the blocking probability in more complex scenarios such as channel-borrowing strategies. When the total number of channels available in a given cell is divided to serve partly newly generated calls and partly handover calls, and dynamic channel assignment strategies such as channel borrowing are used, the blocking probability cannot be calculated from the Erlang table. A simulation model of the behavior helps us determine the blocking and the channel utilization under various channel assignment strategies. This study examines the blocking probability of traffic in a cellular network for static channel assignment and dynamic channel-borrowing strategies using the MATLAB programming language and a graphical user interface (GUI). The results show that the dynamic scheme can outperform the static one, maximizing the overall utilization of the circuits and minimizing the overall blocking.
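The Erlang B computation mentioned above has a standard numerically stable recursion that avoids the factorials of the textbook formula. The sketch below (plain Python; the helper names are illustrative, and the paper itself uses MATLAB) computes the blocking probability and dimensions channels for a target GOS:

```python
def erlang_b(traffic, channels):
    """Erlang B blocking probability for offered traffic A (in Erlangs)
    on N trunked channels, via the stable recursion
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b

def channels_for_gos(traffic, gos):
    """Smallest channel count whose blocking probability meets the GOS,
    replacing a lookup in the Erlang table."""
    n = 1
    while erlang_b(traffic, n) > gos:
        n += 1
    return n
```

For example, 2 Erlangs of offered traffic on 4 channels blocks about 9.5% of calls, and meeting a 2% GOS at that load requires 6 channels, in line with standard Erlang tables. As the abstract notes, this closed-form approach breaks down once channel borrowing or handover-reserved channels are introduced, which is why simulation is used there instead.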
ATM traffic-policing and traffic-shaping measurements
NASA Astrophysics Data System (ADS)
Pugaczewski, Jack T.; Sanderson, Dave; Hoffman, David W.
1997-10-01
ATM (Asynchronous Transfer Mode) defines several Quality of Service (QoS) levels such as CBR (Constant Bit Rate) and VBR (Variable Bit Rate). Each service has parameters associated with its performance characteristics. The parameters used for specifying CBR service are Peak Cell Rate (PCR) and Cell Delay Variation Tolerance (CDVTOL). The parameters used for specifying VBR service are PCR, SCR (Sustainable Cell Rate), and BT (Burst Tolerance). The network takes responsibility for monitoring and policing the traffic of each VCC/VPC at the ingress of the network. The CPE needs to traffic-shape at the ingress of the network in order to meet the contracted QoS requirements. Measurement procedures and tools are required to provide a means of tuning the CPE and network service to meet the customer's end-to-end QoS expectations. This paper describes how the Generic Cell Rate Algorithm variables can be adjusted to provide acceptable levels of QoS in a mixed VBR and CBR environment. In addition, the paper examines a methodology for measuring the performance of VBR and CBR services. The authors use this information to build a QoS measurement tool.
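The Generic Cell Rate Algorithm whose variables the paper tunes is standardized in ITU-T I.371; a common statement is the virtual-scheduling form sketched below. The class and parameter names are illustrative (increment T = 1/PCR, limit L = CDVT, in the same time units), not the paper's measurement tool:

```python
class GCRA:
    """Generic Cell Rate Algorithm, virtual-scheduling form.

    A cell arriving at time t conforms if it is not earlier than
    TAT - L, where TAT is the theoretical arrival time. Conforming
    cells advance TAT by the increment T; non-conforming cells
    leave TAT unchanged and would be tagged or dropped by a policer.
    """
    def __init__(self, increment, limit):
        self.T = increment   # emission interval, 1/PCR
        self.L = limit       # tolerance (CDVT or burst tolerance)
        self.tat = None      # theoretical arrival time

    def conforming(self, t):
        if self.tat is None:         # first cell always conforms
            self.tat = t + self.T
            return True
        if t < self.tat - self.L:    # arrived too early
            return False
        self.tat = max(t, self.tat) + self.T
        return True
```

Shaping at the CPE amounts to delaying cells just enough that this check always passes at the network's policer, which is the tuning exercise the abstract describes.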
Air traffic management evaluation tool
NASA Technical Reports Server (NTRS)
Sridhar, Banavar (Inventor); Sheth, Kapil S. (Inventor); Chatterji, Gano Broto (Inventor); Bilimoria, Karl D. (Inventor); Grabbe, Shon (Inventor); Schipper, John F. (Inventor)
2012-01-01
Methods for evaluating and implementing air traffic management tools and approaches for managing and avoiding an air traffic incident before the incident occurs. A first system receives parameters for flight plan configurations (e.g., initial fuel carried, flight route, flight route segments followed, flight altitude for a given flight route segment, aircraft velocity for each flight route segment, flight route ascent rate, flight route descent rate, flight departure site, flight departure time, flight arrival time, flight destination site and/or alternate flight destination site), flight plan schedule, expected weather along each flight route segment, aircraft specifics, airspace (altitude) bounds for each flight route segment, and available navigational aids. The invention provides flight plan routing and direct routing or wind-optimal routing, using great circle navigation and spherical Earth geometry. The invention provides for aircraft dynamics effects, such as wind effects at each altitude, altitude changes, airspeed changes and aircraft turns, to provide predictions of aircraft trajectory (and, optionally, aircraft fuel use). A second system provides several aviation applications using the first system. Several classes of potential incidents are analyzed and averted by appropriate change en route of one or more parameters in the flight plan configuration, as provided by a conflict detection and resolution module and/or traffic flow management modules. These applications include conflict detection and resolution, miles-in-trail or minutes-in-trail aircraft separation, flight arrival management, flight re-routing, weather prediction and analysis, and interpolation of weather variables based upon sparse measurements. The invention combines these features to provide an aircraft monitoring system and an aircraft user system that interact and negotiate changes with each other.
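The great circle navigation on a spherical Earth mentioned in the patent abstract is commonly computed with the haversine formula. A minimal sketch follows; the function name and the nautical-mile Earth radius are assumptions for illustration, not the patent's implementation:

```python
import math

def great_circle_nm(lat1, lon1, lat2, lon2, radius_nm=3440.065):
    """Great-circle distance between two points on a spherical Earth,
    using the haversine formula. Inputs in degrees; result in nautical
    miles (mean Earth radius taken as ~3440.065 nm)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_nm * math.asin(math.sqrt(a))
```

The haversine form is preferred over the spherical law of cosines because it stays numerically stable for the short route segments typical of flight-plan evaluation.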
Temporal Statistic of Traffic Accidents in Turkey
NASA Astrophysics Data System (ADS)
Erdogan, S.; Yalcin, M.; Yilmaz, M.; Korkmaz Takim, A.
2015-10-01
Traffic accidents form clusters in geographic space and over time, and these clusters themselves exhibit distinct spatial and temporal patterns. There is an imperative need to understand how, where, and when traffic accidents occur in order to develop appropriate accident reduction strategies. An improved understanding of the location, time, and causes of traffic accidents makes a significant contribution to preventing them. Traffic accident occurrences have been extensively studied from different spatial and temporal points of view using a variety of methodological approaches. In the literature, less research has been dedicated to the temporal patterns of traffic accidents. In this paper, the numbers of traffic accidents are normalized according to traffic volume, and the distribution and fluctuation of these accidents are examined in terms of Islamic time intervals. The daily activities and worship of Muslims are arranged according to these time intervals, which are spaced fairly throughout the day according to the position of the sun. Islamic time intervals have never before been used to identify the critical hour for traffic accidents. The results show that sunrise is the critical time that acts as a threshold in the rate of traffic accidents throughout Turkey in Islamic time intervals.
Synchronized flow in oversaturated city traffic
NASA Astrophysics Data System (ADS)
Kerner, Boris S.; Klenov, Sergey L.; Hermanns, Gerhard; Hemmerle, Peter; Rehborn, Hubert; Schreckenberg, Michael
2013-11-01
Based on numerical simulations with a stochastic three-phase traffic flow model, we reveal that moving queues (moving jams) in oversaturated city traffic dissolve at some distance upstream of the traffic signal while transforming into synchronized flow. It is found that, as in highway traffic [Kerner, Phys. Rev. E 85, 036110 (2012)], such a jam-absorption effect in city traffic is explained by a strong driver's speed adaptation: Time headways (space gaps) between vehicles increase upstream of a moving queue (moving jam), resulting in moving queue dissolution. It turns out that at given traffic signal parameters, the stronger the speed adaptation effect, the shorter the mean distance between the signal location and the road location at which moving queues dissolve fully and oversaturated traffic consists of synchronized flow only. A comparison of the synchronized flow in city traffic found in this Brief Report with synchronized flow in highway traffic is made.
Air Traffic Management Research at NASA Ames
NASA Technical Reports Server (NTRS)
Davis, Thomas J.
2012-01-01
The Aviation Systems Division at the NASA Ames Research Center conducts leading-edge research in air traffic management concepts and technologies. This overview will present concepts and simulation results for research in traffic flow management, safe and efficient airport surface operations, super-density terminal area operations, separation assurance, and system-wide modeling and simulation. A brief review of the ongoing air traffic management technology demonstration (ATD-1) will also be presented. A panel discussion on air traffic research, with Mr. Davis serving as a panelist, will follow the briefing.
NASA Astrophysics Data System (ADS)
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and in many studies their results have proved better than those obtained by various one-dimensional techniques. In most previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of the spatial interpolation. The main goal of the paper was to examine both of the above assumptions. The analysis was performed using data from 250 meteorological stations and 69 air temperature cases aggregated at different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their regression-kriging extensions, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
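The cross-validated MAE comparison described in this abstract can be sketched as leave-one-out cross-validation over any fit/predict pair. The sketch below uses single-predictor ordinary least squares as a stand-in for the MLR/GWR models; all function names are hypothetical, not the study's code:

```python
def loo_cv_mae(xs, ys, fit, predict):
    """Leave-one-out cross-validated mean absolute error: refit the model
    with each station withheld and score the prediction at that station."""
    errs = []
    for i in range(len(xs)):
        tr_x = xs[:i] + xs[i + 1:]
        tr_y = ys[:i] + ys[i + 1:]
        model = fit(tr_x, tr_y)
        errs.append(abs(predict(model, xs[i]) - ys[i]))
    return sum(errs) / len(errs)

def fit_ols(xs, ys):
    """Ordinary least squares for a single predictor; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return (b, my - b * mx)

def predict_ols(model, x):
    b, a = model
    return a + b * x
```

Comparing `loo_cv_mae` across candidate variable sets is the mechanism by which one can test whether adding another significantly correlated predictor actually yields a statistically significant improvement, which is the question the study answers in the negative.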