Same Time Same Place: Do MALL Classrooms Exist?
ERIC Educational Resources Information Center
Byrne, Jason
2016-01-01
This paper seeks to help clarify whether Mobile-Assisted Language Learning (MALL) is primarily an independent self-study activity or whether MALL classrooms exist. The research hypothesised that a large number of users frequently using specific MALL apps, at the same time and in the same city location, may indicate the existence of MALL…
NASA Astrophysics Data System (ADS)
Yin, Huicheng; Zhao, Wenbin
2018-01-01
This paper is a continuation of the works in [35] and [37], where the authors established the global existence of smooth compressible flows in infinitely expanding balls for inviscid and viscid gases, respectively. In this paper, we are concerned with the global existence and large time behavior of compressible Boltzmann gases in an infinitely expanding ball. Such a problem is one of the interesting models in the theory of global smooth solutions to multidimensional compressible gases with time-dependent boundaries and vacuum states at infinite time. Due to the conservation of mass, the fluid in the expanding ball becomes rarefied and eventually tends to a vacuum state; meanwhile, no vacuum domains appear in any part of the expanding ball in finite time. In the present paper, we confirm this physical phenomenon for the Boltzmann equation by obtaining exact lower and upper bounds on the macroscopic density function.
Global existence of the three-dimensional viscous quantum magnetohydrodynamic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jianwei, E-mail: yangjianwei@ncwu.edu.cn; Ju, Qiangchang, E-mail: qiangchang-ju@yahoo.com
2014-08-15
The global-in-time existence of weak solutions to the viscous quantum magnetohydrodynamic equations in a three-dimensional torus with large data is proved. The proof uses the Faedo-Galerkin method and weak compactness techniques.
Yu, Qiang; Wei, Dingbang; Huo, Hongwei
2018-06-18
Given a set of t n-length DNA sequences, q satisfying 0 < q ≤ 1, and l and d satisfying 0 ≤ d < l < n, the quorum planted motif search (qPMS) finds l-length strings that occur in at least qt input sequences with up to d mismatches and is mainly used to locate transcription factor binding sites in DNA sequences. Existing qPMS algorithms have been able to efficiently process small standard datasets (e.g., t = 20 and n = 600), but they are too time-consuming to process large DNA datasets, such as ChIP-seq datasets that contain thousands of sequences or more. We analyze the effects of t and q on the time performance of qPMS algorithms and find that a large t or a small q causes a longer computation time. Based on this information, we improve the time performance of existing qPMS algorithms by selecting a sample sequence set D' with a small t and a large q from the large input dataset D and then executing qPMS algorithms on D'. A sample sequence selection algorithm named SamSelect is proposed. The experimental results on both simulated and real data show (1) that SamSelect can select D' efficiently and (2) that the qPMS algorithms executed on D' can find implanted or real motifs in a significantly shorter time than when executed on D. We improve the ability of existing qPMS algorithms to process large DNA datasets from the perspective of selecting high-quality sample sequence sets so that the qPMS algorithms can find motifs in a short time in the selected sample sequence set D', rather than take an infeasibly long time to search the original sequence set D. Our motif discovery method is an approximate algorithm.
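As a concrete illustration of the quorum condition above (not the SamSelect algorithm itself), a brute-force check of whether a candidate l-mer occurs with at most d mismatches in at least qt of the sequences can be sketched as follows; all function names are hypothetical, and each sequence is assumed to be at least l long:

```python
def min_hamming(lmer, seq):
    """Smallest Hamming distance between lmer and any window of seq."""
    l = len(lmer)
    return min(
        sum(a != b for a, b in zip(lmer, seq[i:i + l]))
        for i in range(len(seq) - l + 1)
    )

def satisfies_quorum(lmer, sequences, q, d):
    """True if lmer occurs with <= d mismatches in >= q*t sequences."""
    hits = sum(min_hamming(lmer, s) <= d for s in sequences)
    return hits >= q * len(sequences)
```

SamSelect's contribution is choosing the sample set D' so that checks like this run over far fewer sequences while the quorum signal is preserved.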
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Mann, Tobias; Wold, Barbara
2000-01-01
We provide preliminary evidence that existing algorithms for inferring small-scale gene regulation networks from gene expression data can be adapted to large-scale gene expression data coming from hybridization microarrays. The essential steps are (1) clustering many genes by their expression time-course data into a minimal set of clusters of co-expressed genes, (2) theoretically modeling the various conditions under which the time-courses are measured using a continuous-time analog recurrent neural network for the cluster mean time-courses, (3) fitting such a regulatory model to the cluster mean time-courses by simulated annealing with weight decay, and (4) analysing several such fits for commonalities in the circuit parameter sets including the connection matrices. This procedure can be used to assess the adequacy of existing and future gene expression time-course data sets for determining transcriptional regulatory relationships such as coregulation.
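Step (1), clustering genes by their expression time-courses, can be sketched with a plain k-means pass (a minimal stdlib illustration under the assumption of equal-length time-courses, not the authors' method; all names are hypothetical):

```python
import random

def kmeans_timecourses(courses, k, iters=50, seed=0):
    """Cluster equal-length expression time-courses with plain k-means."""
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(courses, k)]
    for _ in range(iters):
        # assign each time-course to the nearest center (squared Euclidean)
        groups = [[] for _ in range(k)]
        for c in courses:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(c, centers[j])))
            groups[j].append(c)
        # recompute each center as the mean time-course of its cluster
        for j, g in enumerate(groups):
            if g:
                centers[j] = [sum(v) / len(g) for v in zip(*g)]
    return centers, groups
```

The resulting cluster mean time-courses (the `centers`) are what a downstream regulatory model, such as the recurrent network of step (2), would be fit to.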
ERIC Educational Resources Information Center
Korkofingas, Con; Macri, Joseph
2013-01-01
This paper examines, using regression modelling, whether a statistically significant relationship exists between the time spent by a student using the course website and the student's assessment performance for a large third year university business forecasting course. We utilise the online tracking system in Blackboard, a web-based software…
Forced cubic Schrödinger equation with Robin boundary data: large-time asymptotics
Kaikina, Elena I.
2013-01-01
We consider the initial-boundary-value problem for the cubic nonlinear Schrödinger equation, formulated on a half-line with inhomogeneous Robin boundary data. We study traditionally important problems of the theory of nonlinear partial differential equations, such as the global-in-time existence of solutions to the initial-boundary-value problem and the asymptotic behaviour of solutions for large time. PMID:24204185
NASA Astrophysics Data System (ADS)
Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan
2018-06-01
We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors in the form of Euler-Poisson equations in a bounded interval. In this paper, we first prove the global existence of entropy solutions by vanishing viscosity and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to the space and time variables by introducing modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of the density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data or the doping profile. Moreover, the novelty of this paper is the uniform-in-time bound for the weak solutions of the isentropic Euler-Poisson system.
Selected Papers on Low-Energy Antiprotons and Possible Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, Robert
1998-09-19
The only realistic means by which to create a facility at Fermilab to produce large amounts of low energy antiprotons is to use resources which already exist. There is simply too little money and manpower at this point in time to generate new accelerators on a time scale before the turn of the century. Therefore, innovation is required to modify existing equipment to provide the services required by experimenters.
Wigner time-delay distribution in chaotic cavities and freezing transition.
Texier, Christophe; Majumdar, Satya N
2013-06-21
Using the joint distribution for proper time delays of a chaotic cavity derived by Brouwer, Frahm, and Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of a large number of channels N, the large deviation function for the distribution of the Wigner time delay (the sum of proper times) by a Coulomb gas method. We show that the existence of a power-law tail originates from narrow resonance contributions, related to a (second order) freezing transition in the Coulomb gas.
Decay estimates of solutions to the bipolar non-isentropic compressible Euler-Maxwell system
NASA Astrophysics Data System (ADS)
Tan, Zhong; Wang, Yong; Tong, Leilei
2017-10-01
We consider the global existence and large time behavior of solutions near a constant equilibrium state to the bipolar non-isentropic compressible Euler-Maxwell system in R^3, where the background magnetic field could be non-zero. The global existence is established under the assumption that the H^3 norm of the initial data is small, but its higher order derivatives could be large. Combining the negative Sobolev (or Besov) estimates with the interpolation estimates, we prove the optimal time decay rates of the solution and its higher order spatial derivatives. In this sense, our results improve the similar ones in Wang et al (2012 SIAM J. Math. Anal. 44 3429-57).
Graph-based real-time fault diagnostics
NASA Technical Reports Server (NTRS)
Padalkar, S.; Karsai, G.; Sztipanovits, J.
1988-01-01
A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques, such as expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
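The core idea of a fault propagation digraph with time-interval edge weights can be sketched as follows. This is a minimal illustration (assuming an acyclic digraph and a single hierarchy level, with hypothetical names): cumulative propagation windows are computed from each candidate source, and a source is retained only if every observed alarm falls inside its window.

```python
def windows_from(source, graph):
    """Cumulative [earliest, latest] propagation windows from `source`.

    `graph` maps node -> list of (successor, t_min, t_max) edges, the
    weights being fault propagation time intervals. Assumes a DAG.
    """
    win = {source: (0.0, 0.0)}
    stack = [source]
    while stack:
        u = stack.pop()
        lo_u, hi_u = win[u]
        for v, tmin, tmax in graph.get(u, []):
            lo, hi = lo_u + tmin, hi_u + tmax
            if v in win:  # widen the window if another path reaches v
                lo, hi = min(lo, win[v][0]), max(hi, win[v][1])
            if (lo, hi) != win.get(v):
                win[v] = (lo, hi)
                stack.append(v)
    return win

def candidate_sources(graph, alarms):
    """Nodes from which every alarm (node, elapsed_time) is explainable."""
    out = []
    for s in graph:
        win = windows_from(s, graph)
        if all(n in win and win[n][0] <= t <= win[n][1] for n, t in alarms):
            out.append(s)
    return out
```

Restricting candidates by time windows in this way is what lets a graph model avoid the flood of failure-source alternatives that purely qualitative techniques can produce.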
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer from high computation costs when simulating large numbers of occupants, and the graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimates of building occupancy from sensor data.
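The data assimilation step can be illustrated with a bootstrap particle filter, the simplest Sequential Monte Carlo method. This is a toy sketch with an assumed random-walk occupancy model and Gaussian sensor noise, not the paper's framework; all parameters are hypothetical:

```python
import math
import random

def particle_filter(observations, n_particles=500, max_occ=20, seed=1):
    """Bootstrap particle filter for a single room's occupancy count.

    Assumed toy model: the true count drifts by -1, 0, or +1 per step,
    and a sensor reports the count plus Gaussian noise (sigma = 1).
    Returns the posterior mean occupancy after each observation.
    """
    rng = random.Random(seed)
    particles = [rng.randint(0, max_occ) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: propagate each particle through the transition model
        particles = [min(max_occ, max(0, p + rng.choice((-1, 0, 1))))
                     for p in particles]
        # update: weight particles by the likelihood of the sensor reading
        weights = [math.exp(-0.5 * (z - p) ** 2) for p in particles]
        # resample: draw a new particle set in proportion to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

In the paper's setting each particle would be a full state of the graph-based agent-oriented model rather than a single count, but the predict/weight/resample loop is the same.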
Global low-energy weak solution and large-time behavior for the compressible flow of liquid crystals
NASA Astrophysics Data System (ADS)
Wu, Guochun; Tan, Zhong
2018-06-01
In this paper, we consider the weak solution of the simplified Ericksen-Leslie system modeling compressible nematic liquid crystal flows in R3. When the initial data are of small energy and initial density is positive and essentially bounded, we prove the existence of a global weak solution in R3. The large-time behavior of a global weak solution is also established.
Inclusion of Part-Time Faculty for the Benefit of Faculty and Students
ERIC Educational Resources Information Center
Meixner, Cara; Kruck, S. E.; Madden, Laura T.
2010-01-01
The new majority of faculty in today's colleges and universities are part-time, yet sizable gaps exist in the research on their needs, interests, and experiences. Further, the peer-reviewed scholarship is largely quantitative. Principally, it focuses on the utility of the adjunct work force, comparisons between part-time and full-time faculty, and…
A new Brewster angle microscope
NASA Astrophysics Data System (ADS)
Lheveder, C.; Hénon, S.; Mercier, R.; Tissot, G.; Fournet, P.; Meunier, J.
1998-03-01
We present a new Brewster angle microscope for the study of very thin layers, down to monolayers, using a custom-made objective. This objective avoids the drawbacks of currently existing models. Its optical axis is perpendicular to the studied layer and consequently gives an image in focus over the whole plane, in contrast to existing models, which give images in focus only along a narrow strip. The objective allows one to obtain images with good resolution (less than 1 μm) without scanning the surface, at video frequency, allowing for dynamic studies. A large frontal distance associated with a very large aperture is obtained by using a large lens at the entrance of the objective.
DOT National Transportation Integrated Search
2010-03-01
Incidents account for a large portion of all congestion and a need clearly exists for tools to predict and estimate incident effects. This study examined (1) congestion back propagation to estimate the length of the queue and travel time from upstrea...
Mathematical theory of exchange-driven growth
NASA Astrophysics Data System (ADS)
Esenturk, Emre
2018-07-01
Exchange-driven growth is a process in which pairs of clusters interact by exchanging a single unit of mass at a time. The rate of exchange is given by an interaction kernel which depends on the masses of the two interacting clusters. In this paper we establish the fundamental mathematical properties of the mean field rate equations of this process for the first time. We find two different classes of behavior depending on whether the kernel is symmetric or not. For the non-symmetric case, we prove global existence and uniqueness of solutions for kernels satisfying a suitable growth condition. This result is optimal in the sense that we show, for a large class of initial conditions and kernels growing faster than this condition allows, that solutions cannot exist. On the other hand, for symmetric kernels, we prove global existence of solutions under a corresponding growth condition, while existence is lost for faster-growing kernels. In the intermediate regime we can only show local existence. We conjecture that the intermediate regime exhibits finite-time gelation, in accordance with the heuristic results obtained for particular kernels.
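The mean field rate equations can be illustrated numerically: in each exchange event (j, k) → (j − 1, k + 1), one unit of mass moves from a donor cluster to a receiver at a rate set by the kernel. A forward-Euler step on a truncated system (an illustration only; the kernel, truncation, and time step are hypothetical) conserves total mass exactly:

```python
def edg_step(c, K, dt):
    """One forward-Euler step of the exchange-driven-growth rate equations.

    c[m] is the mean-field concentration of clusters of mass m, and an
    exchange event (j, k) -> (j - 1, k + 1) proceeds at rate
    K(j, k) * c[j] * c[k]. The system is truncated at mass len(c) - 1;
    receivers at the cutoff are excluded so each step conserves total
    mass and total cluster number exactly.
    """
    n = len(c)
    dc = [0.0] * n
    for j in range(1, n):       # the donor must hold at least one unit
        for k in range(n - 1):  # the receiver must stay below the cutoff
            rate = K(j, k) * c[j] * c[k]
            dc[j] -= rate * dt      # donor leaves mass class j ...
            dc[j - 1] += rate * dt  # ... and enters class j - 1
            dc[k] -= rate * dt      # receiver leaves mass class k ...
            dc[k + 1] += rate * dt  # ... and enters class k + 1
    return [ci + di for ci, di in zip(c, dc)]
```

Each event moves exactly one unit of mass, so the first moment of the truncated system is invariant step by step, mirroring the conservation law behind the global-existence analysis.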
Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from the disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings when compared with the existing techniques, while the isosurface extraction time is nearly optimal.
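The coalescing idea can be sketched as follows: a tree node stores a cell's extreme values over its whole time span when they vary little, and otherwise defers the cell to child nodes covering shorter spans. This is a simplified sketch, not the paper's data structure; the coalescing criterion (`tol`) and all names are assumptions:

```python
def build_thit(series, t0, t1, tol):
    """Temporal index node over time steps [t0, t1).

    series[cell] is that cell's (min, max) value range per time step.
    A cell's range is coalesced at this node when its extreme values
    vary by at most `tol` across the span; otherwise it is pushed down
    to children covering half the span each.
    """
    node = {"span": (t0, t1), "cells": {}, "children": []}
    for cell, ranges in series.items():
        lo = min(r[0] for r in ranges[t0:t1])
        hi = max(r[1] for r in ranges[t0:t1])
        if hi - lo <= tol or t1 - t0 == 1:
            node["cells"][cell] = (lo, hi)
    if t1 - t0 > 1:
        pushed = {c: r for c, r in series.items() if c not in node["cells"]}
        if pushed:
            mid = (t0 + t1) // 2
            node["children"] = [build_thit(pushed, t0, mid, tol),
                                build_thit(pushed, mid, t1, tol)]
    return node

def candidate_cells(node, t, isovalue):
    """Cells whose coalesced range at time t may contain the isovalue."""
    out = {c for c, (lo, hi) in node["cells"].items() if lo <= isovalue <= hi}
    for child in node["children"]:
        if child["span"][0] <= t < child["span"][1]:
            out |= candidate_cells(child, t, isovalue)
    return out
```

Cells with temporally coherent values are stored once for a long span instead of once per time step, which is where the storage and I/O savings come from.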
Large storage operations under climate change: expanding uncertainties and evolving tradeoffs
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Anghileri, Daniela; Castelletti, Andrea; Vu, Phuong Nam; Soncini-Sessa, Rodolfo
2016-03-01
In a changing climate and society, large storage systems can play a key role in securing water, energy, and food, and in rebalancing their cross-dependencies. In this letter, we study the role of large storage operations as flexible means of adaptation to climate change. In particular, we explore the impacts of different climate projections for different future time horizons on the multi-purpose operations of the existing system of large dams in the Red River basin (China-Laos-Vietnam). We identify the main vulnerabilities of current system operations, understand the risk of failure across sectors by exploring the evolution of the system tradeoffs, quantify how the uncertainty associated with climate scenarios is expanded by the storage operations, and assess the expected costs if no adaptation is implemented. Results show that, depending on the climate scenario and the time horizon considered, the existing operations are predicted to change on average from -7 to +5% in hydropower production, +35 to +520% in flood damages, and +15 to +160% in water supply deficit. These negative impacts can be partially mitigated by adapting the existing operations to the future climate, reducing the loss of hydropower to 5% and potentially saving around US$34.4 million per year at the national scale. Since the Red River is paradigmatic of many river basins across south east Asia, where new large dams are under construction or planned to support fast-growing economies, our results can support policy makers in prioritizing responses and adaptation strategies to the changing climate.
Long time existence from interior gluing
NASA Astrophysics Data System (ADS)
Chruściel, Piotr T.
2017-07-01
We prove completeness-to-the-future of null hypersurfaces emanating outwards from large spheres, in vacuum space-times evolving from general asymptotically flat data with well-defined energy-momentum. The proof uses scaling and a gluing construction to reduce the problem to Bieri’s stability theorem.
Mutation load and the extinction of large populations
NASA Astrophysics Data System (ADS)
Bernardes, A. T.
1996-02-01
In the time evolution of finite populations, the accumulation of harmful mutations in successive generations might lead to a temporal decay in the mean fitness of the whole population that, after sufficient time, would reduce population size and so lead to extinction. This joint action of mutation load and population reduction is called the Mutational Meltdown and is usually considered only to occur in small asexual or very small sexual populations. However, the problem of extinction cannot be discussed in a proper way if one assumes beforehand the existence of an equilibrium state, as initially discussed in this paper. By performing simulations in a genetically inspired model for time-changing populations, we show that mutational meltdown also occurs in large asexual populations and that the mean time to extinction is a nonmonotonic function of the selection coefficient. The stochasticity of the extinction process is also discussed. The extinction of small sexual populations (N ∼ 700) is shown, and our results confirm the assumption that the existence of recombination might be a powerful mechanism to avoid extinction.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
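The O(MN)-per-iteration cost rests on never forming the N×N kernel: a conjugate-gradient step against (XᵀX/M + δI) needs only two passes over the M×N genotype matrix. A minimal sketch of this idea (not BOLT-LMM's actual solver; all names and the tiny example are hypothetical):

```python
def kernel_matvec(X, v, delta):
    """(XᵀX / M + delta·I) v in O(MN), without forming the N x N kernel."""
    M = len(X)
    xv = [sum(row[n] * v[n] for n in range(len(v))) for row in X]
    ktv = [sum(X[m][n] * xv[m] for m in range(M)) / M
           for n in range(len(v))]
    return [k + delta * vn for k, vn in zip(ktv, v)]

def cg_solve(X, y, delta, iters=50, tol=1e-10):
    """Conjugate gradients for (XᵀX/M + delta·I) x = y."""
    n = len(y)
    x = [0.0] * n
    r = list(y)          # residual y - A x (x starts at zero)
    p = list(r)          # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        ap = kernel_matvec(X, p, delta)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Each iteration costs two matrix-vector passes over X, i.e. O(MN), versus the O(MN^2) needed to build the kernel explicitly.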
Unitary limit of two-nucleon interactions in strong magnetic fields
Detmold, William; Orginos, Kostas; Parreño, Assumpta; ...
2016-03-14
In this study, two-nucleon systems are shown to exhibit large scattering lengths in strong magnetic fields at unphysical quark masses, and the trends toward the physical values indicate that such features may exist in nature. Lattice QCD calculations of the energies of one- and two-nucleon systems are performed at pion masses of m_π ~ 450 and 806 MeV in uniform, time-independent magnetic fields of strength |B| ~ 10^19 – 10^20 Gauss to determine the response of these hadronic systems to large magnetic fields. Fields of this strength may exist inside magnetars and in peripheral relativistic heavy ion collisions, and the unitary behavior at large scattering lengths may have important consequences for these systems.
Quantum digital-to-analog conversion algorithm using decoherence
NASA Astrophysics Data System (ADS)
SaiToh, Akira
2015-08-01
We consider the problem of mapping digital data encoded on a quantum register to analog amplitudes in parallel. It is shown to be unlikely that a fully unitary polynomial-time quantum algorithm exists for this problem; NP becomes a subset of BQP if it exists. From a practical point of view, we propose a nonunitary linear-time algorithm using quantum decoherence. It tacitly uses an exponentially large physical resource, which is typically a huge number of identical molecules. Quantumness of correlation appearing in the process of the algorithm is also discussed.
NASA Astrophysics Data System (ADS)
Vysotskii, V. I.; Kornilova, A. A.; Vasilenko, A. O.; Krit, T. B.; Vysotskyy, M. V.
2017-07-01
The problems of the existence, generation, propagation and registration of long-distance undamped thermal waves formed in pulse radiative processes have been theoretically analyzed and confirmed experimentally. These waves may be used for the analysis of short-time processes of interaction of particles or electromagnetic fields with different targets. Such undamped waves can only exist in environments with a finite (nonzero) time of local thermal relaxation, and their frequencies are determined by this time. The results of successful experiments on the generation and registration of undamped thermal waves at a large distance (up to 2 m) are also presented.
Developing NDE Techniques for Large Cryogenic Tanks
NASA Technical Reports Server (NTRS)
Parker, Don; Starr, Stan
2009-01-01
The Shuttle and Constellation Programs require very large cryogenic ground storage tanks in which to store liquid oxygen and hydrogen. The existing LC-39 pad tanks, which will be passed onto Constellation, are 40 years old and have received minimal refurbishment or even inspection, because they can only be temperature cycled a few times before being overhauled (a costly operation in both time and dollars). Numerous questions exist on the performance and reliability of these old tanks which could cause a major Program schedule disruption. Consequently, with the passing of the first two tanks to Constellation to occur this year, there is growing awareness that NDE is needed to detect problems early in these tanks so that corrective actions can be scheduled when least disruptive. Time series thermal images of two sides of the Pad B LH2 tank have been taken over multiple days to demonstrate the effects of environmental conditions to the solar heating of the tank and therefore the effectiveness of thermal imaging.
The large-amplitude combustion oscillation in a single-side expansion scramjet combustor
NASA Astrophysics Data System (ADS)
Ouyang, Hao; Liu, Weidong; Sun, Mingbo
2015-12-01
Combustion oscillation in scramjet combustors has long been believed not to exist and has been ignored. Compared with flame pulsation, large-amplitude combustion oscillation in a scramjet combustor is indeed unfamiliar and difficult to observe. In this study, specifically designed experiments are carried out to investigate this unusual phenomenon in a single-side expansion scramjet combustor. The entrance parameter of the combustor corresponds to a scramjet flight Mach number of 4.0 with a total temperature of 947 K. The obtained results show that large-amplitude combustion oscillation can exist in a scramjet combustor; it is not occasional and can be reproduced. Under the given conditions of this study, moreover, the large-amplitude combustion oscillation is regular and periodic, with a principal frequency of about 126 Hz. The combustion oscillation is accompanied by a transformation of the flame-holding pattern and a combustion mode transition between scramjet-mode and ramjet-mode combustion.
NASA Astrophysics Data System (ADS)
Hudnut, K. W.; Given, D.; King, N. E.; Lisowski, M.; Langbein, J. O.; Murray-Moraleda, J. R.; Gomberg, J. S.
2011-12-01
Over the past several years, the USGS has developed the infrastructure for integrating real-time GPS with seismic data in order to improve our ability to respond to earthquakes and volcanic activity. As part of this effort, we have tested real-time GPS processing software components, and identified the most robust and scalable options. Simultaneously, additional near-field monitoring stations have been built using a new station design that combines dual-frequency GPS with high-quality strong-motion sensors and dataloggers. Several existing stations have been upgraded in this way, using USGS Multi-Hazards Demonstration Project and American Recovery and Reinvestment Act funds in southern California. In particular, existing seismic stations have been augmented by the addition of GPS and vice versa. The focus of new instrumentation as well as datalogger and telemetry upgrades to date has been along the southern San Andreas fault in hopes of 1) capturing a large and potentially damaging rupture in progress and augmenting inputs to earthquake early warning systems, and 2) recovering high-quality, on-scale recordings of large dynamic displacement waveforms, static displacements, and immediate and long-term post-seismic transient deformation. Obtaining definitive records of large ground motions close to a large San Andreas or Cascadia rupture (or volcanic activity) would be a fundamentally important contribution to understanding near-source large ground motions and the physics of earthquakes, including the rupture process and friction associated with crack propagation and healing. Soon, telemetry upgrades will be completed in Cascadia and throughout the Plate Boundary Observatory as well. By collaborating with other groups on open-source automation system development, we will be ready to process the newly available real-time GPS data streams and to fold these data in with existing strong-motion and other seismic data.
Data from these same stations will also serve the very practical purpose of enabling earthquake early warning and greatly improving rapid finite-fault source modeling. Multiple uses of the effectively very broad-band data obtained by these stations, for operational and research purposes, are bound to occur especially because all data will be freely, openly and instantly available.
Langbein, J.; Bock, Y.
2004-01-01
A network of 13 continuous GPS stations near Parkfield, California has been converted from 30 second to 1 second sampling with positions of the stations estimated in real-time relative to a master station. Most stations are near the trace of the San Andreas fault, which exhibits creep. The noise spectra of the instantaneous 1 Hz positions show flicker noise at high frequencies and change to frequency independence at low frequencies; the change in character occurs between 6 to 8 hours. Our analysis indicates that 1-second sampled GPS can estimate horizontal displacements of order 6 mm at the 99% confidence level from a few seconds to a few hours. High frequency GPS can augment existing measurements in capturing large creep events and postseismic slip that would exceed the range of existing creepmeters, and can detect large seismic displacements. Copyright 2004 by the American Geophysical Union.
Mechanical energy fluctuations in granular chains: the possibility of rogue fluctuations or waves.
Han, Ding; Westley, Matthew; Sen, Surajit
2014-09-01
The existence of rogue or freak waves in the ocean has been known for some time. They have been reported in the context of optical lattices and the financial market. We ask whether such waves are generic to late time behavior in nonlinear systems. In that vein, we examine the dynamics of an alignment of spherical elastic beads held within fixed, rigid walls at zero precompression when they are subjected to sufficiently rich initial conditions. Here we define such waves generically as unusually large energy fluctuations that sustain for short periods of time. Our simulations suggest that such unusually large fluctuations ("hot spots") and occasional series of such fluctuations through space and time ("rogue fluctuations") are likely to exist in the late time dynamics of the granular chain system at zero dissipation. We show that while hot spots are common in late time evolution, rogue fluctuations are seen in purely nonlinear systems (i.e., no precompression) at late enough times. We next show that the number of such fluctuations grows exponentially with increasing nonlinearity whereas rogue fluctuations decrease superexponentially with increasing precompression. Dissipation-free granular alignment systems may be possible to realize as integrated circuits and hence our observations may potentially be testable in the laboratory.
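The setting can be sketched with a dissipation-free velocity-Verlet integration of a Hertzian bead chain (a toy version without the rigid walls or the rich initial conditions of the study; units, parameters, and names are hypothetical). At zero dissipation the total energy is conserved, which is what allows late-time energy fluctuations to persist:

```python
def hertz_force(delta, k=1.0):
    """Hertzian contact force, nonzero only under compression (delta > 0)."""
    return k * delta ** 1.5 if delta > 0 else 0.0

def simulate_chain(x, v, dt=1e-3, steps=2000, k=1.0):
    """Velocity-Verlet integration of a dissipation-free granular chain.

    x: bead positions, v: velocities; beads have unit mass and unit
    diameter and interact only through Hertzian contacts between
    neighbors (zero precompression, no walls).
    """
    n = len(x)
    def accel(x):
        a = [0.0] * n
        for i in range(n - 1):
            f = hertz_force(1.0 - (x[i + 1] - x[i]), k)  # overlap of unit beads
            a[i] -= f
            a[i + 1] += f
        return a
    a = accel(x)
    for _ in range(steps):
        x = [xi + vi * dt + 0.5 * ai * dt * dt
             for xi, vi, ai in zip(x, v, a)]
        a_new = accel(x)
        v = [vi + 0.5 * (ai + bi) * dt for vi, ai, bi in zip(v, a, a_new)]
        a = a_new
    return x, v

def total_energy(x, v, k=1.0):
    """Kinetic plus Hertzian potential energy, V = (2/5) k delta^(5/2)."""
    ke = sum(0.5 * vi * vi for vi in v)
    pe = sum((2.0 / 5.0) * k * max(0.0, 1.0 - (x[i + 1] - x[i])) ** 2.5
             for i in range(len(x) - 1))
    return ke + pe
```

Hot spots would then be identified by scanning the per-bead kinetic energies of such a run for values far above the chain average.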
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take up large storage space, and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in 40-50% savings in overall rendering time.
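The partitioning tradeoff can be illustrated with a back-of-the-envelope cost model (an assumption for illustration, not the paper's measured model): more groups let more volumes render concurrently and hide I/O, but leave each volume with fewer processors and add startup loads, so an interior optimum can appear.

```python
import math

def pipeline_time(g, n_procs, n_vols, t_io, t_serial, t_comm=0.0):
    """Estimated wall time with processors split into g pipelined groups.

    Assumed model: each group of n_procs // g processors renders one
    volume per round in t_serial / group_size plus a communication
    overhead growing with group size; loading the next volume (t_io)
    overlaps with rendering, and the pipeline pays a startup latency
    of one load per group.
    """
    group = n_procs // g
    per_vol = t_serial / group + t_comm * group
    rounds = math.ceil(n_vols / g)
    return g * t_io + rounds * max(t_io, per_vol)

def best_partitioning(n_procs, n_vols, t_io, t_serial, t_comm=0.0):
    """Divisor of n_procs minimizing the estimated rendering time."""
    divisors = [d for d in range(1, n_procs + 1) if n_procs % d == 0]
    return min(divisors, key=lambda g: pipeline_time(
        g, n_procs, n_vols, t_io, t_serial, t_comm))
```

For example, with 16 processors, 32 volumes, a 2-unit load time and a 16-unit serial render time, the model favors two groups of eight: one group leaves rendering I/O-starved in later rounds' overlap budget, while many small groups pay too much startup latency and per-volume render time.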
Evidence for the timing of sea-level events during MIS 3
NASA Astrophysics Data System (ADS)
Siddall, M.
2005-12-01
Four large sea-level peaks of millennial-scale duration occur during MIS 3. In addition, smaller peaks may exist close to the sensitivity limit of existing methods for deriving sea level during these periods. Millennial-scale changes in temperature during MIS 3 are well documented across much of the planet and are linked in some unknown, yet fundamental, way to changes in ice volume and sea level. It is therefore highly likely that the timing of the sea-level events during MIS 3 will prove to be a 'Rosetta Stone' for understanding millennial-scale climate variability. I will review observational and mechanistic arguments for the variation of sea level on Antarctic, Greenland and absolute time scales.
DOT National Transportation Integrated Search
2016-12-01
This research produced an arrival notification system for paratransit passengers with disabilities. Almost all existing curb-to-curb paratransit services have significantly large pick-up time windows, ranging from 20 to 40 minutes from the scheduled ti...
Characterization of neural development in zebrafish embryos using real-time quantitative PCR.
Chemicals adversely affecting the developing nervous system may cause long-term consequences on human health. Little information exists on a large number of environmental compounds to guide developmental neurotoxicity risk assessments. Because developmental neurotoxicity studies ...
Prethermal Phases of Matter Protected by Time-Translation Symmetry
NASA Astrophysics Data System (ADS)
Else, Dominic V.; Bauer, Bela; Nayak, Chetan
2017-01-01
In a periodically driven (Floquet) system, there is the possibility for new phases of matter, not present in stationary systems, protected by discrete time-translation symmetry. This includes topological phases protected in part by time-translation symmetry, as well as phases distinguished by the spontaneous breaking of this symmetry, dubbed "Floquet time crystals." We show that such phases of matter can exist in the prethermal regime of periodically driven systems, which exists generically for sufficiently large drive frequency, thereby eliminating the need for integrability or strong quenched disorder, which limited previous constructions. We prove a theorem that states that such a prethermal regime persists until times that are nearly exponentially long in the ratio of certain couplings to the drive frequency. By similar techniques, we can also construct stationary systems that spontaneously break continuous time-translation symmetry. Furthermore, we argue that for driven systems coupled to a cold bath, the prethermal regime could potentially persist to infinite time.
Shu, Xu; Schaubel, Douglas E
2016-06-01
Times between successive events (i.e., gap times) are of great importance in survival analysis. Although many methods exist for estimating covariate effects on gap times, very few existing methods allow for comparisons between gap times themselves. Motivated by the comparison of primary and repeat transplantation, our interest is specifically in contrasting the gap time survival functions and their integration (restricted mean gap time). Two major challenges in gap time analysis are non-identifiability of the marginal distributions and the existence of dependent censoring (for all but the first gap time). We use Cox regression to estimate the (conditional) survival distributions of each gap time (given the previous gap times). Combining fitted survival functions based on those models, along with multiple imputation applied to censored gap times, we then contrast the first and second gap times with respect to average survival and restricted mean lifetime. Large-sample properties are derived, with simulation studies carried out to evaluate finite-sample performance. We apply the proposed methods to kidney transplant data obtained from a national organ transplant registry. Mean 10-year graft survival of the primary transplant is significantly greater than that of the repeat transplant, by 3.9 months (p=0.023), a result that may lack clinical importance. © 2015, The International Biometric Society.
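The restricted mean gap time the authors contrast is the area under the (step-function) survival curve up to a horizon tau. A minimal sketch of just that integration step is below; the Cox modeling and multiple-imputation machinery of the paper are omitted, and the function name and interface are illustrative.

```python
def restricted_mean(jump_times, surv_after, tau):
    """Restricted mean survival time: area under a right-continuous step
    survival curve up to horizon tau. S(t) = 1 before jump_times[0] and
    equals surv_after[i] from jump_times[i] onward (times sorted ascending)."""
    rmst, prev_t, s = 0.0, 0.0, 1.0
    for t, s_next in zip(jump_times, surv_after):
        if t >= tau:
            break
        rmst += s * (t - prev_t)   # rectangle up to this jump
        prev_t, s = t, s_next
    rmst += s * (tau - prev_t)     # final partial rectangle
    return rmst
```

The paper's contrast of primary versus repeat transplants then amounts to a difference of two such areas computed from the fitted survival curves.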
Multiscale structure of time series revealed by the monotony spectrum.
Vamoş, Călin
2017-03-01
Observation of complex systems produces time series with specific dynamics at different time scales. The majority of the existing numerical methods for multiscale analysis first decompose the time series into several simpler components and the multiscale structure is given by the properties of their components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate the existence of deterministic variations at large time scales from the random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
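A simplified reading of the method can be sketched as follows: split the series into maximal monotonic segments, record each segment's amplitude and duration, and repeat after successive moving-average passes. This is an illustrative approximation, not the paper's exact definition of the monotony spectrum.

```python
import numpy as np

def monotonic_segments(x):
    """Durations and amplitudes of the maximal monotonic runs of a series
    (durations in samples; zero differences are treated as continuations)."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    turns = [i for i in range(1, len(d)) if d[i] * d[i - 1] < 0]
    bounds = [0] + turns + [len(x) - 1]
    durs = [b - a for a, b in zip(bounds, bounds[1:])]
    amps = [abs(x[b] - x[a]) for a, b in zip(bounds, bounds[1:])]
    return durs, amps

def monotony_spectrum(x, passes=4):
    """(mean duration, mean amplitude) pairs under successive 3-point averaging."""
    y = np.asarray(x, dtype=float)
    points = []
    for _ in range(passes):
        durs, amps = monotonic_segments(y)
        points.append((float(np.mean(durs)), float(np.mean(amps))))
        y = np.convolve(y, np.ones(3) / 3.0, mode="valid")  # one averaging pass
    return points
```

Maxima of mean amplitude versus mean duration across the passes would then flag the dominant time scales.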
Assessment and Rehabilitation Issues Concerning Existing 70’s Structural Stock
NASA Astrophysics Data System (ADS)
Sabareanu, E.
2017-06-01
The last 30 years have brought demanding changes to the norms and standards governing structural calculations for buildings, leaving the large stock of structures erected during the 1970s-1990s in a weak position with respect to seismic loads and to the load levels specified for live, wind and snow loads. At the same time, because a large number of these buildings remain in service all over the country, they cannot be demolished; instead, suitable rehabilitation methods should be proposed so that structural durability is achieved. The paper proposes some rehabilitation methods, suitable in terms of structural safety and cost optimization, for diaphragm reinforced concrete structures, with an example on an existing multi-storey building.
Large Efficient Intelligent Heating Relay Station System
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Wei, X. G.; Wu, M. Q.
2017-12-01
The design of the large efficient intelligent heating relay station system aims to remedy shortcomings of the existing heating systems in our country, such as low heating efficiency, energy waste, serious pollution, and control that still depends on manual operation. In this design, we first improve the existing plate heat exchanger. Secondly, an ATM89C51 microcontroller is used to control the whole system and realize intelligent control. The detection part uses a PT100 temperature sensor, a pressure sensor, and a turbine flowmeter to measure the heating temperature, user-side liquid flow, and hydraulic pressure, with real-time feedback of these signals to the microcontroller so that the heating can be adjusted for users, making the whole system more efficient, intelligent and energy-saving.
Online Community Detection for Large Complex Networks
Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian
2014-01-01
Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge in the order that the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time, and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity instead of modularity at each step to avoid poor performance. The experiments are carried out using 11 public data sets, and are measured by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is less than that of the commonly used Louvain algorithm, while it gives competitive performance. PMID:25061683
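A toy version of the edge-by-edge idea can be sketched as below. This is a simplified majority-label update, not the paper's expected-modularity optimization: each arriving edge triggers only a local relabeling of its two endpoints, so the cost per edge is bounded by their degrees rather than the network size.

```python
from collections import defaultdict

class OnlineCommunities:
    """Toy streaming community assignment: a node arriving on a new edge
    joins the community where most of its neighbors already are (ties keep
    the current label). Illustrative only; names and rules are hypothetical."""

    def __init__(self):
        self.adj = defaultdict(set)   # adjacency built edge by edge
        self.label = {}               # node -> community label

    def add_edge(self, u, v):
        for node in (u, v):
            if node not in self.label:
                self.label[node] = node        # new node: its own community
        self.adj[u].add(v)
        self.adj[v].add(u)
        for node in (u, v):                    # local update of the endpoints
            votes = defaultdict(int)
            for nb in self.adj[node]:
                votes[self.label[nb]] += 1
            # most neighbor votes wins; prefer the current label on ties
            self.label[node] = max(
                votes, key=lambda c: (votes[c], c == self.label[node]))
```

Feeding the stream of edges of two loosely separated triangles, for example, leaves each triangle sharing a single label.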
BIODIVERSITY CONSERVATION INCENTIVE PROGRAMS FOR PRIVATELY OWNED FORESTS
In many countries, a large proportion of forest biodiversity exists on private land. Legal restrictions are often inadequate to prevent loss of habitat and encourage forest owners to manage areas for biodiversity, especially when these management actions require time, money, and ...
Conversion of Successionally Stable Even-Aged Oak Stands to an Uneven-Aged Structure
Edward F. Loewenstein; James M. Guldin
2004-01-01
Developing a silvicultural prescription to convert an even-aged or unmanaged oak stand to an uneven-aged structure depends in large part on the length of time the existing overstory will live. Four conversion prescriptions, representing three initial stand conditions, are presented. Each prescription partitions the cut of the original overstory differently in time and...
ERIC Educational Resources Information Center
Van Iddekinge, Chad H.; Ferris, Gerald R.; Perrewe, Pamela L.; Perryman, Alexa A.; Blass, Fred R.; Heetderks, Thomas D.
2009-01-01
Surprisingly few data exist concerning whether and how utilization of job-related selection and training procedures affects different aspects of unit or organizational performance over time. The authors used longitudinal data from a large fast-food organization (N = 861 units) to examine how change in use of selection and training relates to…
Non-Tenure-Track Faculty's Social Construction of a Supportive Work Environment
ERIC Educational Resources Information Center
Kezar, Adrianna
2013-01-01
Background: The number of non-tenure-track faculty (NTTF), including both full-time (FT) and part-time (PT) positions, has risen to two-thirds of faculty positions across the academy. To date, most of the studies of NTTF have relied on secondary data or large-scale surveys. Few qualitative studies exist that examine the experience, working…
Closed-Loop Control of Complex Networks: A Trade-Off between Time and Energy
NASA Astrophysics Data System (ADS)
Sun, Yong-Zheng; Leng, Si-Yang; Lai, Ying-Cheng; Grebogi, Celso; Lin, Wei
2017-11-01
Controlling complex nonlinear networks is largely an unsolved problem at present. Existing works focus either on open-loop control strategies and their energy consumption or on closed-loop control schemes with an infinite-time duration. We articulate a finite-time, closed-loop controller with an eye toward the physical and mathematical underpinnings of the trade-off between the control time and energy, as well as their dependence on the network parameters and structure. The closed-loop controller is tested on a large number of real systems including stem cell differentiation, food webs, random ecosystems, and spiking neuronal networks. Our results represent a step forward in developing a rigorous and general framework to control nonlinear dynamical networks with a complex topology.
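The time-energy trade-off can be illustrated on a one-dimensional toy system (hypothetical, and far simpler than the networked systems in the paper): a bistable node dx/dt = x - x^3 steered from one stable state toward the other by proportional feedback u = -k (x - x*). Larger gains reach the target faster at a different energy cost.

```python
def drive_to_target(k, x0=-1.0, target=1.0, dt=1e-3, tol=1e-2, t_max=200.0):
    """Closed-loop control of a toy bistable node dx/dt = x - x**3 + u with
    proportional feedback u = -k * (x - target). Returns (control time,
    energy = integral of u**2 dt). A minimal sketch of the time-energy
    trade-off; the system and gain are hypothetical, not the paper's controller."""
    x, t, energy = x0, 0.0, 0.0
    while abs(x - target) > tol and t < t_max:
        u = -k * (x - target)          # feedback law
        x += dt * (x - x**3 + u)       # forward-Euler step of the closed loop
        energy += dt * u * u           # accumulate control energy
        t += dt
    return t, energy
```

Comparing a high gain against a low gain shows the shorter control time purchased by stronger feedback.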
Emergency response to mass casualty incidents in Lebanon.
El Sayed, Mazen J
2013-08-01
The emergency response to mass casualty incidents in Lebanon lacks uniformity. Three recent large-scale incidents have challenged the existing emergency response process and have raised the need to improve and develop incident management for better resilience in times of crisis. We describe some simple emergency management principles that are currently applied in the United States. These principles can be easily adopted by Lebanon and other developing countries to standardize and improve their emergency response systems using existing infrastructure.
2014-11-01
...over time, when a biphasic soft tissue is subjected to dynamic loading. Also, after the initial transient, the variation of solid skeleton stresses...will be naturally calculated as the fluid phase pressure dissipates over time. This is important for developing physiologically-relevant degradation
NASA Astrophysics Data System (ADS)
Kawajiri, Shota; Matunaga, Saburo
2017-10-01
This study examines a low-complexity control method that satisfies mechanical constraints by using control moment gyros (CMGs) for an agile maneuver. The method is designed based on the fact that a simple rotation around Euler's principal axis corresponds to a well-approximated solution of a time-optimal rest-to-rest maneuver. With respect to an agile large-angle maneuver using CMGs, it is suggested that there exists a coasting period in which all gimbal angles are constant and the constant body angular velocity is almost along Euler's principal axis. In the proposed method, the gimbals are driven such that this coasting period is generated. This allows the problem to be converted into obtaining only a coasting time and gimbal angles such that their combination maximizes the body angular velocity along the rotational axis of the maneuver. The effectiveness of the proposed method is demonstrated by using numerical simulations. The results indicate that the proposed method shortens the settling time by 20-70% when compared to that of a traditional feedback method. Additionally, a comparison with an existing path planning method shows that the proposed method achieves a low computational complexity (approximately 150 times faster) and a certain level of shortness in the settling time.
2013-01-01
Background Mass distribution of long-lasting insecticide treated bed nets (LLINs) has led to large increases in LLIN coverage in many African countries. As LLIN ownership levels increase, planners of future mass distributions face the challenge of deciding whether to ignore the nets already owned by households or to take these into account and attempt to target individuals or households without nets. Taking existing nets into account would reduce commodity costs but require more sophisticated, and potentially more costly, distribution procedures. The decision may also have implications for the average age of nets in use and therefore for the maintenance of universal LLIN coverage over time. Methods A stochastic simulation model based on the NetCALC algorithm was used to determine the scenarios under which it would be cost saving to take existing nets into account, and the potential effects of doing so on the age profile of LLINs owned. The model accounted for variability in timing of distributions, concomitant use of continuous distribution systems, population growth, sampling error in pre-campaign coverage surveys, variable net 'decay' parameters and other factors, including the feasibility and accuracy of identifying existing nets in the field. Results Results indicate that (i) where pre-campaign coverage is around 40% (of households owning at least 1 LLIN), accounting for existing nets in the campaign will have little effect on the mean age of the net population and (ii) even at pre-campaign coverage levels above 40%, an approach that reduces LLIN distribution requirements by taking existing nets into account may have only a small chance of being cost-saving overall, depending largely on the feasibility of identifying nets in the field. Based on the existing literature, the epidemiological implications of such a strategy are likely to vary by transmission setting, and the risks of leaving older nets in the field when accounting for existing nets must be considered.
Conclusions Where pre-campaign coverage levels established by a household survey are below 40% we recommend that planners do not take such LLINs into account and instead plan a blanket mass distribution. At pre-campaign coverage levels above 40%, campaign planners should make explicit consideration of the cost and feasibility of accounting for existing LLINs before planning blanket mass distributions. Planners should also consider restricting the coverage estimates used for this decision to only include nets under two years of age in order to ensure that old and damaged nets do not compose too large a fraction of existing net coverage. PMID:23763773
Yukich, Joshua; Bennett, Adam; Keating, Joseph; Yukich, Rudy K; Lynch, Matt; Eisele, Thomas P; Kolaczinski, Kate
2013-06-14
NASA Astrophysics Data System (ADS)
Lü, Boqiang; Shi, Xiaoding; Zhong, Xin
2018-06-01
We are concerned with the Cauchy problem of the two-dimensional (2D) nonhomogeneous incompressible Navier–Stokes equations with vacuum as far-field density. It is proved that if the initial density decays not too slow at infinity, the 2D Cauchy problem of the density-dependent Navier–Stokes equations on the whole space admits a unique global strong solution. Note that the initial data can be arbitrarily large and the initial density can contain vacuum states and even have compact support. Furthermore, we also obtain the large time decay rates of the spatial gradients of the velocity and the pressure, which are the same as those of the homogeneous case.
NASA Astrophysics Data System (ADS)
Christiansen, E. H.
2016-12-01
Simple models describing silicic magma reservoirs and their connections with volcanic rocks have been denigrated as "big red blobs" and "balloons-and-soda straws." Although these models are certainly generalized to convey complex relations, there are multiple reasons to accept the existence of large magma chambers and direct connections between volcanoes and plutonic rocks. These include:
- Geophysical evidence (seismic, magnetotelluric, and geodetic) for the existence of large bodies of magma in the crust today. Magma is a mixture of liquids, solids, and fluids. It does not have to be melt rich, nor does it need to be mobile and eruptible; it just has to have melt present.
- Eruptions of large volumes (>1,000 km3) of dacitic to rhyolitic magma and large collapse calderas (30-50 km across).
- The thermal lifetimes of large bodies are extended by high recharge rates. Individual bodies of magma may exist for tens to hundreds of thousands of years.
- Geochronological evidence that pluton lifetimes are similar to those of volcanic fields.
- Evidence for incremental emplacement of a pluton is not evidence against the former existence of a large magma reservoir, but the natural consequence of ongoing replenishment and crystallization after eruptions cease. Thus, what might have been a large liquid-dominated system at the time of eruption of a large ignimbrite is subsequently intruded by new batches of magma as it crystallizes and closes down. This destroys the evidence for a large red blob and creates a composite pluton.
- Direct and indirect evidence connects plutons to large eruptions. This is shown by field relations and geochronology, as well as chemical, mineralogical, and isotopic similarities of volcanic and plutonic rocks.
- Volcanic and plutonic differentiation patterns are very similar, but differ in some ways because cumulates are preserved in the plutonic record and because intrusions continue to differentiate (liquids separate from solids) until the last bit of liquid is consumed. Highly evolved liquids are present in the volcanic record, but are less common than in intrusions. Most plutonic rocks appear to be mixtures of cumulate minerals and interstitial melt unable to separate from the coarsening mush.
ERIC Educational Resources Information Center
Siyanova-Chanturia, Anna; Martinez, Ron
2015-01-01
John Sinclair's Idiom Principle famously posited that most texts are largely composed of multi-word expressions that "constitute single choices" in the mental lexicon. At the time that assertion was made, little actual psycholinguistic evidence existed in support of that holistic, "single choice," view of formulaic language. In…
NASA Technical Reports Server (NTRS)
Xue, Min; Rios, Joseph
2017-01-01
Small Unmanned Aerial Vehicles (sUAVs), typically 55 lbs and below, are envisioned to play a major role in surveilling critical assets, collecting important information, and delivering goods. Large-scale small UAV operations are expected to happen in low-altitude airspace in the near future. Many static and dynamic constraints exist in low-altitude airspace because of manned aircraft or helicopter activities, various wind conditions, restricted airspace, terrain and man-made buildings, and conflict-avoidance among sUAVs. High sensitivity and high maneuverability are unique characteristics of sUAVs that bring challenges to effective system evaluations and mandate a simulation platform different from existing simulations, which were built for manned air traffic systems and large unmanned fixed-wing aircraft. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative focuses on enabling safe and efficient sUAV operations in the future. In order to help define requirements and policies for a safe and efficient UTM system that can accommodate a large number of sUAV operations, it is necessary to develop a fast-time simulation platform that can effectively evaluate requirements, policies, and concepts in a close-to-reality environment. This work analyzed the impacts of some key factors, including the aforementioned sUAV characteristics, and demonstrated the importance of these factors in a successful UTM fast-time simulation platform.
NASA Astrophysics Data System (ADS)
Wang, Yan; Mohanty, Soumya D.
2017-04-01
The advent of next generation radio telescope facilities, such as the Square Kilometer Array (SKA), will usher in an era where a pulsar timing array (PTA) based search for gravitational waves (GWs) will be able to use hundreds of well timed millisecond pulsars rather than the few dozens in existing PTAs. A realistic assessment of the performance of such an extremely large PTA must take into account the data analysis challenge posed by an exponential increase in the parameter space volume due to the large number of so-called pulsar phase parameters. We address this problem and present such an assessment for isolated supermassive black hole binary (SMBHB) searches using a SKA era PTA containing 10^3 pulsars. We find that an all-sky search will be able to confidently detect nonevolving sources with a redshifted chirp mass of 10^10 M⊙ out to a redshift of about 28 (corresponding to a rest-frame chirp mass of 3.4×10^8 M⊙). We discuss the important implications that the large distance reach of a SKA era PTA has on GW observations from optically identified SMBHB candidates. If no SMBHB detections occur, a highly unlikely scenario in the light of our results, the sky-averaged upper limit on strain amplitude will be improved by about 3 orders of magnitude over existing limits.
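The quoted numbers are consistent with the standard relation between the redshifted chirp mass measured by the detector and the rest-frame chirp mass, M = M_z / (1 + z):

```python
def rest_frame_chirp_mass(mz_solar, z):
    """Rest-frame chirp mass M (solar masses) from the redshifted chirp mass
    M_z = (1 + z) * M measured at redshift z."""
    return mz_solar / (1.0 + z)
```

For M_z = 10^10 M⊙ at z = 28 this gives about 3.45×10^8 M⊙, matching the abstract's 3.4×10^8 M⊙ to the precision quoted.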
Analysing home-ownership of couples: the effect of selecting couples at the time of the survey.
Mulder, C H
1996-09-01
"The analysis of events encountered by couple and family households may suffer from sample selection bias when data are restricted to couples existing at the moment of interview. The paper discusses the effect of sample selection bias on event history analyses of buying a home [in the Netherlands] by comparing analyses performed on a sample of existing couples with analyses of a more complete sample including past as well as current partner relationships. The results show that, although home-buying in relationships that have ended differs clearly from behaviour in existing relationships, sample selection bias is not alarmingly large." (SUMMARY IN FRE) excerpt
NASA Astrophysics Data System (ADS)
Kim, Bong-Sik
Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature, which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, and the estimates are uniform in alpha.
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale and that the fish spatial organisation can change from small to large clusters of schools at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales, using aggregation of variables methods, to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that, under some conditions, there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum for a fish population distributed among both small and large clusters of schools.
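The reduced model's equations are not given in the abstract. As a stand-in, a classical aggregated bioeconomic (Gordon-Schaefer-type) model shows the kind of sustainable-fishery equilibrium such an analysis looks for; all symbols and parameter values below are hypothetical.

```python
def fishery_equilibrium(r, K, q, price, cost):
    """Nontrivial equilibrium of a classical aggregated bioeconomic model
    (a stand-in for the paper's reduced slow-time model):
        dN/dt = r * N * (1 - N / K) - q * E * N     (stock)
        dE/dt = E * (price * q * N - cost)          (fishing effort)
    Returns (N*, E*), or None when no sustainable equilibrium exists
    (the break-even stock would exceed the carrying capacity K)."""
    n_star = cost / (price * q)          # stock at which fishing breaks even
    if n_star >= K:
        return None
    e_star = (r / q) * (1.0 - n_star / K)  # effort balancing stock growth
    return n_star, e_star
```

With r = 1, K = 100, q = 0.1, price = 1 and cost = 2, the fishery settles at N* = 20 and E* = 8; raising the cost past the profitable range removes the equilibrium.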
DOT National Transportation Integrated Search
2011-02-01
An understanding of traffic flow in time and space is fundamental to the development of strategies for the efficient use of the existing transportation infrastructure in large metropolitan areas. Thus, this project involved developing the methods...
Black holes from large N singlet models
NASA Astrophysics Data System (ADS)
Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico
2018-03-01
The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.
Diffusion with stochastic resetting at power-law times.
Nagar, Apoorva; Gupta, Shamik
2016-06-01
What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals τ distributed as a power law ∼τ^{-(1+α)};α>0? Modeling the stochastic process by diffusion and the large changes as abrupt resets to the initial condition, we obtain exact closed-form expressions for both static and dynamic quantities, while accounting for strong correlations implied by a power law. Our results show that the resulting dynamics exhibits a spectrum of rich long-time behavior, from an ever-spreading spatial distribution for α<1, to one that is time independent for α>1. The dynamics has strong consequences on the time to reach a distant target for the first time; we specifically show that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
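A minimal Monte Carlo sketch of this setup, diffusion reset to the origin at Pareto-distributed epochs, shows the two regimes (parameter values are illustrative, not taken from the paper):

```python
# Monte Carlo sketch: free diffusion interrupted by resets to the origin at
# waiting times drawn from a Pareto law p(tau) ~ tau^-(1+alpha), tau > tau0.
# Parameter values are illustrative, not taken from the paper.
import math
import random

def msd_at(T, alpha, D=1.0, tau0=0.01, trials=2000):
    """Mean squared displacement at time T under power-law resetting."""
    total = 0.0
    for _ in range(trials):
        t = 0.0          # epoch of the most recent reset before T
        while True:
            u = 1.0 - random.random()                  # uniform in (0, 1]
            tau = tau0 * u ** (-1.0 / alpha)           # Pareto draw
            if t + tau > T:
                break
            t += tau
        # since the last reset the particle diffused freely for T - t
        x = random.gauss(0.0, math.sqrt(2.0 * D * (T - t)))
        total += x * x
    return total / trials

random.seed(1)
spread_small_alpha = msd_at(10.0, alpha=0.5)   # keeps spreading with T
spread_large_alpha = msd_at(10.0, alpha=3.0)   # stays pinned near the origin
```

For alpha < 1 the time since the last reset grows with T, so the spatial spread keeps growing; for larger alpha frequent resets pin the distribution, in line with the regimes described above.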
Memory effect in M ≥ 6 earthquakes of South-North Seismic Belt, Mainland China
NASA Astrophysics Data System (ADS)
Wang, Jeen-Hwa
2013-07-01
The M ≥ 6 earthquakes that occurred in the South-North Seismic Belt, Mainland China, during 1901-2008 are taken to study the possible existence of a memory effect in large earthquakes. The fluctuation analysis technique is applied to analyze the sequences of earthquake magnitude and inter-event time represented in the natural time domain. Calculated results show that the exponents of the scaling law of fluctuation versus window length are less than 0.5 for the sequences of earthquake magnitude and inter-event time. The migration of the earthquakes under study is examined to assess possible correlations between events. The phase portraits of two sequent magnitudes and of two sequent inter-event times are also applied to explore whether large (or small) earthquakes are followed by large (or small) events. Together with all the available information, we conclude that the earthquakes under study are short-term correlated and thus a short-term memory effect would be operative.
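The fluctuation analysis mentioned above can be sketched as follows (a simplified variant without detrending; the paper's exact procedure may differ): cumulate the mean-subtracted sequence, measure the rms fluctuation of the profile in windows of length n, and fit F(n) ~ n^H, where H ≈ 0.5 indicates an uncorrelated sequence and H < 0.5 anti-correlation.

```python
# Fluctuation analysis sketch (simplified, without the detrending step):
# cumulate the mean-subtracted sequence, measure the rms fluctuation of the
# profile inside windows of length n, and fit the scaling F(n) ~ n^H.
import math
import random

def fluctuation_exponent(seq, windows=(4, 8, 16, 32, 64)):
    mean = sum(seq) / len(seq)
    profile, s = [], 0.0
    for x in seq:
        s += x - mean
        profile.append(s)
    pts = []
    for n in windows:
        variances = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            m = sum(seg) / n
            variances.append(sum((y - m) ** 2 for y in seg) / n)
        f = math.sqrt(sum(variances) / len(variances))
        pts.append((math.log(n), math.log(f)))
    # least-squares slope of log F(n) against log n gives the exponent H
    k = len(pts)
    sx = sum(a for a, _ in pts)
    sy = sum(b for _, b in pts)
    sxx = sum(a * a for a, _ in pts)
    sxy = sum(a * b for a, b in pts)
    return (k * sxy - sx * sy) / (k * sxx - sx * sx)

random.seed(0)
H = fluctuation_exponent([random.gauss(0.0, 1.0) for _ in range(20000)])
# H is close to 0.5 for an uncorrelated (white-noise) sequence
```

Exponents below 0.5 on the magnitude and inter-event-time sequences, as reported above, are what signal anti-correlation and thus short-term memory.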
Quiet-time magnetospheric field depression at 2.3 to 3.6 R sub E
NASA Technical Reports Server (NTRS)
Sugiura, M.
1972-01-01
Fluxgate magnetometer data obtained by OGO-5 near perigee were used to establish the existence of large field depressions in the magnetosphere under conditions of varying degree of disturbance at distances ranging from 2.3 to 3.6 R sub E at all local times. The results also provide the average delta B at these distances when Dst, as being derived at present, is zero.
Late-time cosmological phase transitions
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
It is shown that the potential galaxy formation and large-scale structure problems of objects existing at high redshifts (z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10⁻⁵) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT-scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of approximately 100 Mpc for large-scale structure as well as approximately 1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.
Improving transmission efficiency of large sequence alignment/map (SAM) files.
Sakib, Muhammad Nazmus; Tang, Jijun; Zheng, W Jim; Huang, Chin-Tser
2011-01-01
Research in bioinformatics primarily involves collection and analysis of a large volume of genomic data. Naturally, it demands efficient storage and transfer of this huge amount of data. In recent years, some research has been done to find efficient compression algorithms to reduce the size of various sequencing data. One way to improve the transmission time of large files is to apply a maximum lossless compression on them. In this paper, we present SAMZIP, a specialized encoding scheme, for sequence alignment data in SAM (Sequence Alignment/Map) format, which improves the compression ratio of existing compression tools available. In order to achieve this, we exploit the prior knowledge of the file format and specifications. Our experimental results show that our encoding scheme improves compression ratio, thereby reducing overall transmission time significantly.
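The idea of exploiting prior knowledge of the file format can be sketched as a column-splitting preprocessor (illustrative only; SAMZIP's actual encoding scheme differs): split the tab-separated SAM columns into separate streams, delta-encode the POS column, and compress each stream independently.

```python
# Column-splitting sketch in the spirit of format-aware SAM compression
# (illustrative; not SAMZIP's actual codec): split the tab-separated columns
# into per-column streams, delta-encode the POS column (column 4, index 3),
# and compress each stream independently with a generic compressor.
import zlib

def compress_sam(lines):
    cols = {}
    prev_pos = 0
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        pos = int(fields[3])
        fields[3] = str(pos - prev_pos)   # positions are near-sorted, so
        prev_pos = pos                    # deltas are small and repetitive
        for i, field in enumerate(fields):
            cols.setdefault(i, []).append(field)
    return [zlib.compress("\n".join(v).encode()) for v in cols.values()]

records = ["r%d\t0\tchr1\t%d\t60\t5M\t*\t0\t0\tACGTA\tIIIII" % (i, 1000 + 3 * i)
           for i in range(500)]
streams = compress_sam(records)
column_wise = sum(len(s) for s in streams)
raw = len(zlib.compress("\n".join(records).encode()))
```

On repetitive alignment data, per-column streams tend to compress better than the interleaved text, which is the kind of gain a format-aware scheme like SAMZIP targets.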
Prostitute Homicides: A Descriptive Study
ERIC Educational Resources Information Center
Salfati, C. Gabrielle; James, Alison R.; Ferguson, Lynn
2008-01-01
It has been estimated that women involved in street prostitution are 60 to 100 times more likely to be murdered than are nonprostitute females. In addition, homicides of prostitutes are notoriously difficult to investigate and, as such, many cases remain unsolved. Despite this large risk factor, little literature exists on homicides of…
Revisiting the homogenization of dammed rivers in the southeastern US
Ryan A. McManamay; Donald J. Orth; Charles A. Dolloff
2012-01-01
For some time, ecologists have attempted to make generalizations concerning how disturbances influence natural ecosystems, especially river systems. The existing literature suggests that dams homogenize the hydrologic variability of rivers. However, this might insinuate that dams affect river systems similarly despite a large gradient in natural hydrologic character....
Detecting Item Drift in Large-Scale Testing
ERIC Educational Resources Information Center
Guo, Hongwen; Robin, Frederic; Dorans, Neil
2017-01-01
The early detection of item drift is an important issue for frequently administered testing programs because items are reused over time. Unfortunately, operational data tend to be very sparse and do not lend themselves to frequent monitoring analyses, particularly for on-demand testing. Building on existing residual analyses, the authors propose…
Reengineering Real-Time Software Systems
1993-09-09
reengineering existing large-scale (or real-time) systems; systems designed prior to or during the advent of applied SE (Parnas 1979, Freeman 1980). Is...
Content-level deduplication on mobile internet datasets
NASA Astrophysics Data System (ADS)
Hou, Ziyu; Chen, Xunxun; Wang, Yang
2017-06-01
Various systems and applications involve a large volume of duplicate items. Based on high data redundancy in real-world datasets, data deduplication can reduce storage capacity and improve the utilization of network bandwidth. However, chunks in existing deduplication systems range in size from 4 KB to over 16 KB, so these systems are not applicable to datasets consisting of short records. In this paper, we propose a new framework called SF-Dedup which is able to implement the deduplication process on a large set of Mobile Internet records whose size can be smaller than 100 B, or even smaller than 10 B. SF-Dedup is a short-fingerprint, in-line, hash-collision-resolved deduplication scheme. Results of experimental applications illustrate that SF-Dedup is able to reduce storage capacity and shorten query time on a relational database.
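A minimal sketch of the short-fingerprint idea (illustrative; SF-Dedup's actual fingerprint and storage design differ): a deliberately short fingerprint buckets records cheaply, and hash collisions within a bucket are resolved by comparing the full record contents.

```python
# Short-fingerprint, in-line deduplication sketch (illustrative; SF-Dedup's
# actual fingerprint design differs): a deliberately short fingerprint
# buckets records cheaply, and hash collisions within a bucket are resolved
# by comparing the full record contents.
import zlib

class ShortFpDedup:
    def __init__(self):
        self.buckets = {}   # 16-bit fingerprint -> list of unique records
        self.stored = 0

    def add(self, record: bytes) -> bool:
        """Store the record if unseen; return True when it was new."""
        fp = zlib.crc32(record) & 0xFFFF          # short 16-bit fingerprint
        bucket = self.buckets.setdefault(fp, [])
        if record in bucket:                      # collision-safe lookup
            return False
        bucket.append(record)
        self.stored += 1
        return True

dedup = ShortFpDedup()
flags = [dedup.add(("user%d" % (i % 100)).encode()) for i in range(1000)]
# only the 100 distinct short records are kept
```

The short fingerprint keeps the index small enough for in-line operation on tiny records, while the full comparison inside a bucket guarantees no false merges despite the high collision rate of a 16-bit hash.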
NASA Astrophysics Data System (ADS)
Fan, Longlong; Chen, Jun; Ren, Yang; Pan, Zhao; Zhang, Linxing; Xing, Xianran
2016-01-01
The origin of the excellent piezoelectric properties at the morphotropic phase boundary is generally attributed to the existence of a monoclinic phase in various piezoelectric systems. However, there exist no experimental studies that reveal the role of the monoclinic phase in the piezoelectric behavior in phase-pure ceramics. In this work, a single monoclinic phase has been identified in Pb(Zr,Ti)O3 ceramics at room temperature by in situ high-energy synchrotron x-ray diffraction, and its response to electric field has been characterized for the first time. Unique piezoelectric properties of the monoclinic phase in terms of large intrinsic lattice strain and negligible domain switching have been observed. The extensional strain constant d33 and the transverse strain constant d31 are calculated to be 520 and -200 pm/V, respectively. These large piezoelectric coefficients are mainly due to the large intrinsic lattice strain, with very little extrinsic contribution from domain switching. The unique properties of the monoclinic phase provide new insights into the mechanisms responsible for the piezoelectric properties at the morphotropic phase boundary.
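As a back-of-envelope illustration of what a d33 of 520 pm/V means for the lattice (the 2 kV/mm drive field below is an assumed value, not from the paper):

```python
# Back-of-envelope check: the longitudinal strain produced by a d33 of
# 520 pm/V at an assumed drive field of 2 kV/mm (the field value is
# hypothetical, not taken from the paper).
d33 = 520e-12         # extensional strain constant, m/V
field = 2e6           # electric field, V/m (= 2 kV/mm)
strain = d33 * field  # dimensionless lattice strain
# about 1.04e-3, i.e. roughly 0.1% elongation along the field
```

A strain of this order is readily resolved as a shift of diffraction peaks, which is why the intrinsic lattice contribution can be separated from domain switching in the synchrotron data.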
MATE: Machine Learning for Adaptive Calibration Template Detection
Donné, Simon; De Vylder, Jonas; Goossens, Bart; Philips, Wilfried
2016-01-01
The problem of camera calibration is two-fold. On the one hand, the parameters are estimated from known correspondences between the captured image and the real world. On the other, these correspondences themselves—typically in the form of chessboard corners—need to be found. Many distinct approaches for this feature template extraction are available, often of large computational and/or implementational complexity. We exploit the generalized nature of deep learning networks to detect checkerboard corners: our proposed method is a convolutional neural network (CNN) trained on a large set of example chessboard images, which generalizes several existing solutions. The network is trained explicitly against noisy inputs, as well as inputs with large degrees of lens distortion. The trained network that we evaluate is as accurate as existing techniques while offering improved execution time and increased adaptability to specific situations with little effort. The proposed method is not only robust against the types of degradation present in the training set (lens distortions, and large amounts of sensor noise), but also to perspective deformations, e.g., resulting from multi-camera set-ups. PMID:27827920
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
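The core idea, structured and indexed storage instead of flat files so that time-range queries return instantly, can be sketched with an in-memory SQLite table (illustrative only; this is not SensorDB's actual schema or technology stack):

```python
# Minimal sketch of indexed time-series storage (illustrative; not
# SensorDB's actual schema): with a (sensor, timestamp) index, a time-range
# query touches only the matching rows rather than scanning a flat file.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, ts REAL, value REAL)")
conn.execute("CREATE INDEX idx_sensor_ts ON readings (sensor, ts)")
rows = [("leaf_temp", float(t), 20.0 + t % 24) for t in range(10000)]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

# an indexed time-range aggregation, the core interactive operation
count, avg = conn.execute(
    "SELECT COUNT(*), AVG(value) FROM readings "
    "WHERE sensor = ? AND ts BETWEEN ? AND ?",
    ("leaf_temp", 100.0, 199.0)).fetchone()
# count == 100 readings fall inside the queried window
```

The index makes the query cost proportional to the size of the requested window rather than the whole archive, which is the property that enables interactive visualization over large sensor collections.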
Separated matter and antimatter domains with vanishing domain walls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolgov, A.D.; Godunov, S.I.; Rudenko, A.S.
2015-10-01
We present a model of spontaneous (or dynamical) C and CP violation in which it is possible to generate domains of matter and antimatter separated by cosmologically large distances. Such C(CP) violation existed only in the early universe and disappeared later, leaving as its only trace the generated baryonic and/or antibaryonic domains. The domain-wall problem therefore does not arise in this model. These features are achieved through a postulated form of interaction between the inflaton and a new scalar field, realizing short-time C(CP) violation.
Clusters of Galaxies at High Redshift
NASA Astrophysics Data System (ADS)
Fort, Bernard
For a long time, the small number of clusters at z > 0.3 in the Abell survey catalogue and simulations of the standard CDM formation of large scale structures provided a paradigm where clusters were considered as young merging structures. At earlier times, loose concentrations of galaxy clumps were mostly anticipated. Recent observations broke the taboo. Progressively we became convinced that compact and massive clusters at z = 1 or possibly beyond exist and should be searched for.
Reducing adaptive optics latency using Xeon Phi many-core processors
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah
2015-11-01
The next generation of Extremely Large Telescopes (ELTs) for astronomy will rely heavily on the performance of their adaptive optics (AO) systems. Real-time control is at the heart of the critical technologies that will enable telescopes to deliver the best possible science and will require a very significant extrapolation from current AO hardware existing for 4-10 m telescopes. Investigating novel real-time computing architectures and testing their eligibility against anticipated challenges is one of the main priorities of technology development for the ELTs. This paper investigates the suitability of the Intel Xeon Phi, which is a commercial off-the-shelf hardware accelerator. We focus on wavefront reconstruction performance, implementing a straightforward matrix-vector multiplication (MVM) algorithm. We present benchmarking results of the Xeon Phi on a real-time Linux platform, both as a standalone processor and integrated into an existing real-time controller (RTC). Performance of single and multiple Xeon Phis are investigated. We show that this technology has the potential of greatly reducing the mean latency and variations in execution time (jitter) of large AO systems. We present both a detailed performance analysis of the Xeon Phi for a typical E-ELT first-light instrument along with a more general approach that enables us to extend to any AO system size. We show that systematic and detailed performance analysis is an essential part of testing novel real-time control hardware to guarantee optimal science results.
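The benchmarking approach, timing the MVM reconstruction kernel repeatedly and reporting mean latency and jitter, can be sketched as follows (toy sizes, with plain Python standing in for the optimized Xeon Phi kernel):

```python
# Latency/jitter measurement sketch for the wavefront-reconstruction kernel:
# repeated matrix-vector multiplies, timed individually. Sizes are toy
# values; a real E-ELT-scale MVM is orders of magnitude larger and would
# run on optimized hardware such as the Xeon Phi.
import time

def mvm(matrix, vec):
    return [sum(a * b for a, b in zip(row, vec)) for row in matrix]

n_slopes, n_actuators = 128, 64
cmat = [[0.001 * (i + j) for j in range(n_slopes)] for i in range(n_actuators)]
slopes = [1.0] * n_slopes

latencies = []
for _ in range(50):
    t0 = time.perf_counter()
    mvm(cmat, slopes)
    latencies.append(time.perf_counter() - t0)

mean_latency = sum(latencies) / len(latencies)
jitter = max(latencies) - min(latencies)   # execution-time variation
```

For a real-time controller the jitter figure matters as much as the mean, since a single late reconstruction degrades the delivered wavefront correction.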
Chronos in synchronicity: manifestations of the psychoid reality.
Yiassemides, Angeliki
2011-09-01
Jung's most obvious time-related concept is synchronicity. Yet, even though 'time' is embedded in it (chronos) there has been no systematic treatment of the time factor. Jung himself avoided dealing explicitly with the concept of time in synchronicity, in spite of its temporal assumptions and implications. In this paper the role of time in synchronicity is examined afresh, locating it in the context of meaning and relating it to the psychoid archetype. Synchronicity is viewed as an expression of the psychoid; the vital parameter for the elucidation of this link appears to be time. The author argues that the psychoid rests on relative time, which Jung deemed transcendent. The existence of two different uses of the word 'time' in Jung's opus is emphasized: fixed time that dominates consciousness and relative time that exists in the psyche at large. Since consciousness cannot grasp the psychoid's temporality it de-relativizes time; examples of this 'behaviour' of time can be observed in instances of synchronicity. It is thus argued that synchronicity demonstrates by analogy the nature of the psychoid archetype. Jung's quaternio, as it developed via his communication with Pauli, is also examined in light of the above presented 'time theory'. © 2011, The Society of Analytical Psychology.
ERIC Educational Resources Information Center
Hilpert, Jonathan C.; Husman, Jenefer
2017-01-01
The current study leveraged a professional development programme for engineering faculty at a large research university to examine the impact of instructional improvement on student engagement. Professors who participated in the professional development were observed three times and rated using an existing observation protocol. Students in courses…
Taylor, F K
1979-01-01
The symptom of penis captivus during sexual intercourse has had a largely hearsay existence in medical history, and rumour has embellished the drama of its occurrence. It is not entirely mythical, however. It seems to have been a symptom of great rarity in former times and to have vanished perhaps completely in this century. PMID:509182
ERIC Educational Resources Information Center
Addis, Elizabeth A.; Quardokus, Kathleen M.; Bassham, Diane C.; Becraft, Philip W.; Boury, Nancy; Coffman, Clark R.; Colbert, James T.; Powell-Coffman, Jo Anne
2013-01-01
Recent national reports have indicated a need for significant changes in science higher education, with the inclusion of more student centered learning. However, substantial barriers to change exist. These include a lack of faculty awareness and understanding of appropriate pedagogical approaches, large class sizes, the time commitment needed to…
Bouyssié, David; Dubois, Marc; Nasso, Sara; Gonzalez de Peredo, Anne; Burlet-Schiltz, Odile; Aebersold, Ruedi; Monsarrat, Bernard
2015-01-01
The analysis and management of MS data, especially those generated by data independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable an efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML based formats are not well suited to large MS data files, for example, those generated by SWATH-MS, and compromise high-throughput data processing and storing. We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, where the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction. In comparison with XML formats, mzDB saves ∼25% of storage space and improves access times by a factor of twofold up to even 2000-fold, depending on the particular data access. Similarly, mzDB shows also slightly to significantly lower access times in comparison with other formats like mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under permissive license. mzDB can be easily accessed by the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility. PMID:25505153
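The LC-MS coordinate indexing idea can be sketched with a coarse (retention time, m/z) bucketing scheme (illustrative only; mzDB's actual SQLite schema, bin sizes, and SWATH-aware third dimension differ): peaks are grouped into cells so an extraction query only scans the cells overlapping the requested window.

```python
# Coarse 2-D bucketing sketch of (retention time, m/z) indexing in the
# spirit of mzDB (illustrative; not its actual SQLite schema): peaks are
# grouped into cells so an extraction query only scans the few cells that
# overlap the requested window instead of the whole run.
from collections import defaultdict

RT_BIN, MZ_BIN = 10.0, 5.0      # toy cell sizes

def cell(rt, mz):
    return (int(rt // RT_BIN), int(mz // MZ_BIN))

index = defaultdict(list)
peaks = [(t * 0.5, 400.0 + t % 40, 100.0) for t in range(2000)]
for rt, mz, intensity in peaks:
    index[cell(rt, mz)].append((rt, mz, intensity))

def extract(rt_lo, rt_hi, mz_lo, mz_hi):
    hits = []
    for i in range(int(rt_lo // RT_BIN), int(rt_hi // RT_BIN) + 1):
        for j in range(int(mz_lo // MZ_BIN), int(mz_hi // MZ_BIN) + 1):
            for rt, mz, inten in index.get((i, j), []):
                if rt_lo <= rt <= rt_hi and mz_lo <= mz <= mz_hi:
                    hits.append((rt, mz, inten))
    return hits

xic = extract(100.0, 110.0, 410.0, 420.0)   # an "extracted ion" window
```

Restricting the scan to overlapping cells is what turns signal extraction into a local operation, the same effect mzDB obtains from its retention-time/m/z indexing inside a single-file database.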
Large-scale and Long-duration Simulation of a Multi-stage Eruptive Solar Event
NASA Astrophysics Data System (ADS)
Jiang, Chaowei; Hu, Qiang; Wu, S. T.
2015-04-01
We employ a data-driven 3D MHD active region evolution model by using the Conservation Element and Solution Element (CESE) numerical method. This newly developed model retains the full MHD effects, allowing time-dependent boundary conditions and time evolution studies. The time-dependent simulation is driven by measured vector magnetograms and the method of MHD characteristics on the bottom boundary. We have applied the model to investigate the coronal magnetic field evolution of AR11283 which was characterized by a pre-existing sigmoid structure in the core region and multiple eruptions, both in relatively small and large scales. We have succeeded in producing the core magnetic field structure and the subsequent eruptions of flux-rope structures (see https://dl.dropboxusercontent.com/u/96898685/large.mp4 for an animation) as the measured vector magnetograms on the bottom boundary evolve in time with constant flux emergence. The whole process, lasting for about an hour in real time, compares well with the corresponding SDO/AIA and coronagraph imaging observations. From these results, we show the capability of the model, largely data-driven, that is able to simulate complex, topological, and highly dynamic active region evolutions. (We acknowledge partial support of NSF grants AGS 1153323 and AGS 1062050, and data support from SDO/HMI and AIA teams).
NASA Astrophysics Data System (ADS)
Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.
2016-06-01
Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA) where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model Version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds that are nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3-D aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales despite CESM being a global General Circulation Model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources that are not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to existing erodibility maps, although the performance of different erodibility maps varies by region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alzate, Nathalia; Morgan, Huw, E-mail: naa19@aber.ac.uk
Coronal mass ejections (CMEs) are generally associated with low coronal signatures (LCSs), such as flares, filament eruptions, extreme ultraviolet (EUV) waves, or jets. A number of recent studies have reported the existence of stealth CMEs as events without LCSs, possibly due to observational limitations. Our study focuses on a set of 40 stealth CMEs identified from a study by D’Huys et al. New image processing techniques are applied to high-cadence, multi-instrument sets of images spanning the onset and propagation time of each of these CMEs to search for possible LCSs. Twenty-three of these events are identified as small, low-mass, unstructured blobs or puffs, often occurring in the aftermath of a large CME, but associated with LCSs such as small flares, jets, or filament eruptions. Of the larger CMEs, seven are associated with jets and eight with filament eruptions. Several of these filament eruptions are different from the standard model of an erupting filament/flux tube in that they are eruptions of large, faint flux tubes that seem to exist at large heights for a long time prior to their slow eruption. For two of these events, we see an eruption in Large Angle Spectrometric Coronagraph C2 images and the consequent changes at the bottom edge of the eruption in EUV images. All 40 events in our study are associated with some form of LCS. We conclude that stealth CMEs arise from observational and processing limitations.
The morphodynamics and sedimentology of large river confluences
NASA Astrophysics Data System (ADS)
Nicholas, Andrew; Sambrook Smith, Greg; Best, James; Bull, Jon; Dixon, Simon; Goodbred, Steven; Sarker, Mamin; Vardy, Mark
2017-04-01
Confluences are key locations within large river networks, yet surprisingly little is known about how they migrate and evolve through time. Moreover, because confluence sites are associated with scour pools that are typically several times the mean channel depth, the deposits associated with such scours should have a high potential for preservation within the rock record. However, paradoxically, such scours are rarely observed, and the sedimentological characteristics of such deposits are poorly understood. This study reports results from a physically-based morphodynamic model, which is applied to simulate the evolution and resulting alluvial architecture associated with large river junctions. Boundary conditions within the model simulation are defined to approximate the junction of the Ganges and Jamuna rivers, in Bangladesh. Model results are supplemented by geophysical datasets collected during boat-based surveys at this junction. Simulated deposit characteristics and geophysical datasets are compared with three existing and contrasting conceptual models that have been proposed to represent the sedimentary architecture of confluence scours. Results illustrate that existing conceptual models may be overly simplistic, although elements of each of the three conceptual models are evident in the deposits generated by the numerical simulation. The latter are characterised by several distinct styles of sedimentary fill, which can be linked to particular morphodynamic behaviours. However, the preserved characteristics of simulated confluence deposits vary substantially according to the degree of reworking by channel migration. This may go some way towards explaining the confluence scour paradox; while abundant large scours might be expected in the rock record, they are rarely reported.
Eastman, Alexander L; Rinnert, Kathy J; Nemeth, Ira R; Fowler, Raymond L; Minei, Joseph P
2007-08-01
Hospital surge capacity has been advocated to accommodate large increases in demand for healthcare; however, existing urban trauma centers and emergency departments (TC/EDs) face barriers to providing timely care even at baseline patient volumes. The purpose of this study is to describe how alternate-site medical surge capacity absorbed large patient volumes while minimizing impact on routine TC/ED operations immediately after Hurricane Katrina. From September 1 to 16, 2005, an alternate site for medical care was established. Using an off-site space, the Dallas Convention Center Medical Unit (DCCMU) was established to meet the increased demand for care. Data were collected and compared with TC/ED patient volumes to assess impact on existing facilities. During the study period, 23,231 persons displaced by Hurricane Katrina were registered to receive evacuee services in the City of Dallas, Texas. Of those displaced, 10,367 visits for emergent or urgent healthcare were made to the DCCMU. The mean number of daily visits (mean +/- SD) to the DCCMU was 619 +/- 301 visits with a peak on day 3 (n = 1,125). No patients died, 3.2% (n = 257) were observed in the DCCMU, and only 2.9% (n = 236) required transport to a TC/ED. During the same period, the mean number of TC/ED visits at the region's primary provider of indigent care (Hospital 1) was 346 +/- 36 visits. Using historical data from Hospital 1 during the same period of time (341 +/- 41), there was no significant difference in the mean number of TC/ED visits from the previous year (p = 0.26). Alternate-site medical surge capacity provides for safe and effective delivery of care to a large influx of patients seeking urgent and emergent care. This protects the integrity of existing public hospital TC/ED infrastructure and ongoing operations.
Initial state with shear in peripheral heavy ion collisions
NASA Astrophysics Data System (ADS)
Magas, V. K.; Gordillo, J.; Strottman, D.; Xie, Y. L.; Csernai, L. P.
2018-06-01
In the present work we propose a new way of constructing the initial state for further hydrodynamic simulation of relativistic heavy ion collisions, based on a Bjorken-like solution applied streak by streak in the transverse plane. Previous fluid dynamical calculations in Cartesian coordinates with an initial state based on a streak-by-streak Yang-Mills field led, for peripheral higher-energy collisions, to large angular momentum, initial shear flow and significant local vorticity. Recent experiments verified the existence of this vorticity via the resulting polarization of emitted Λ and Λ̄ particles. At the same time, parton cascade models indicated the existence of more compact initial state configurations, which we are going to simulate in our approach. The proposed model satisfies all the conservation laws, including conservation of the strong initial angular momentum, which is present in noncentral collisions. As a consequence of this large initial angular momentum we observe the rotation of the whole system as well as fluid shear in the initial state, which leads to large flow vorticity. Another advantage of the proposed model is that the initial state can be given in both [t, x, y, z] and [τ, x, y, η] coordinates and thus can be tested by all 3+1D hydrodynamical codes which exist in the field.
Retarded correlators in kinetic theory: branch cuts, poles and hydrodynamic onset transitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romatschke, Paul
2016-06-24
In this paper, the collective modes of an effective kinetic theory description based on the Boltzmann equation in a relaxation-time approximation, applicable to gauge theories at weak but finite coupling and low frequencies, are studied. Real-time retarded two-point correlators of the energy-momentum tensor and the R-charge current are calculated at finite temperature in flat space-times for large N gauge theories. It is found that the real-time correlators possess logarithmic branch cuts which, in the limit of large coupling, disappear and give rise to non-hydrodynamic poles that are reminiscent of quasi-normal modes in black holes. In addition to branch cuts, correlators can have simple hydrodynamic poles, generalizing the concept of hydrodynamic modes to intermediate wavelength. Surprisingly, the hydrodynamic poles cease to exist for some critical value of the wavelength and coupling, reminiscent of the properties of onset transitions.
Negative refraction and planar focusing based on parity-time symmetric metasurfaces.
Fleury, Romain; Sounas, Dimitrios L; Alù, Andrea
2014-07-11
We introduce a new mechanism to realize negative refraction and planar focusing using a pair of parity-time symmetric metasurfaces. In contrast to existing solutions that achieve these effects with negative-index metamaterials or phase conjugating surfaces, the proposed parity-time symmetric lens enables loss-free, all-angle negative refraction and planar focusing in free space, without relying on bulk metamaterials or nonlinear effects. This concept may represent a pivotal step towards loss-free negative refraction and highly efficient planar focusing by exploiting the largely uncharted scattering properties of parity-time symmetric systems.
Earth's Bow Shock: Elapsed-Time Observations by Two Closely Spaced Satellites.
Greenstadt, E W; Green, I M; Colburn, D S
1968-11-22
Coordinated observations of the earth's bow shock were made as Vela 3A and Explorer 33 passed within 6 earth radii of each other. Elapsed time measurements of shock motion give directly determined velocities in the range 1 to 10 kilometers per second and establish the existence of two regions, one of large amplitude magnetic "shock" oscillations and another of smaller, sunward, upstream oscillations. Each region is as thick as 1 earth radius, or more.
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Friction Stir Welding of Large Scale Cryogenic Tanks for Aerospace Applications
NASA Technical Reports Server (NTRS)
Russell, Carolyn; Ding, R. Jeffrey
1998-01-01
The Marshall Space Flight Center (MSFC) has established a facility for the joining of large-scale aluminum cryogenic propellant tanks using the friction stir welding process. Longitudinal welds, approximately five meters in length, have been made by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and travel system will be described in this presentation along with process controls and real-time data acquisition developed for this application. The approach to retrofitting other large welding tools at MSFC with the friction stir welding process will also be discussed.
Gene coexpression measures in large heterogeneous samples using count statistics.
Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan
2014-11-18
With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
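The idea of counting local rank patterns can be illustrated with a toy statistic: the fraction of length-w sliding windows in which two expression profiles show identical (or exactly reversed) within-window rank orderings. This is only a sketch in the spirit of the proposed count statistics, not the paper's actual definitions; the function name and windowing scheme are illustrative assumptions.

```python
import numpy as np

def local_rank_agreement(x, y, w=3):
    """Fraction of length-w sliding windows in which profiles x and y
    have identical (or exactly reversed) within-window rank orderings.
    A toy rank-pattern count, robust to outliers because only ranks
    inside each window matter; not the paper's statistics."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x) - w + 1
    hits = 0
    for s in range(n):
        # argsort of argsort yields the rank of each value in the window
        rx = tuple(np.argsort(np.argsort(x[s:s + w])))
        ry = tuple(np.argsort(np.argsort(y[s:s + w])))
        if rx == ry or rx == tuple(w - 1 - r for r in ry):
            hits += 1
    return hits / n
```

Because only rank orderings within each window are compared, a monotone transformation of either profile leaves the score unchanged, which is the appeal of count-based measures over correlation.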
The SNARC effect in two dimensions: Evidence for a frontoparallel mental number plane.
Hesse, Philipp Nikolaus; Bremmer, Frank
2017-01-01
The existence of an association between numbers and space has been known for a long time. The most prominent demonstration of this relationship is the spatial numerical association of response codes (SNARC) effect, describing the fact that participants' reaction times are shorter with the left hand for small numbers and with the right hand for large numbers when asked to judge the parity of a number (Dehaene et al., J. Exp. Psychol., 122, 371-396, 1993). The SNARC effect is commonly seen as support for the concept of a mental number line, i.e. a mentally conceived line where small numbers are represented more on the left and large numbers more on the right. The SNARC effect has been demonstrated for all three cardinal axes, and recently a transverse SNARC plane has been reported (Chen et al., Exp. Brain Res., 233(5), 1519-1528, 2015). Here, by employing saccadic responses induced by auditory or visual stimuli, we measured the SNARC effect within the same subjects along the horizontal (HM) and vertical meridian (VM) and along the two interspersed diagonals. We found a SNARC effect along HM and VM, which allowed us to predict the occurrence of a SNARC effect along the two diagonals by means of linear regression. Importantly, significant differences in SNARC strength were found between modalities. Our results suggest the existence of a frontoparallel mental number plane, where small numbers are represented left and down, while large numbers are represented right and up. Together with the recently described transverse mental number plane, our findings provide further evidence for the existence of a three-dimensional mental number space.
Spatiotemporal canards in neural field equations
NASA Astrophysics Data System (ADS)
Avitabile, D.; Desroches, M.; Knobloch, E.
2017-04-01
Canards are special solutions to ordinary differential equations that follow invariant repelling slow manifolds for long time intervals. In realistic biophysical single-cell models, canards are responsible for several complex neural rhythms observed experimentally, but their existence and role in spatially extended systems is largely unexplored. We identify and describe a type of coherent structure in which a spatial pattern displays temporal canard behavior. Using interfacial dynamics and geometric singular perturbation theory, we classify spatiotemporal canards and give conditions for the existence of folded-saddle and folded-node canards. We find that spatiotemporal canards are robust to changes in the synaptic connectivity and firing rate. The theory correctly predicts the existence of spatiotemporal canards with octahedral symmetry in a neural field model posed on the unit sphere.
Tai, David; Fang, Jianwen
2012-08-27
The large sizes of today's chemical databases require efficient algorithms to perform similarity searches, and comparing two large chemical databases can be very time consuming. This paper builds upon existing research efforts by describing a novel strategy for accelerating existing search algorithms for comparing large chemical collections. The quest for efficiency has focused on developing better indexing algorithms: heuristics for searching an individual chemical against a chemical library that detect and eliminate needless similarity calculations. For comparing two chemical collections, these algorithms simply execute searches for each chemical in the query set sequentially. The strategy presented in this paper achieves a speedup over these algorithms by indexing the set of all query chemicals as well, so that redundant calculations arising in sequential searches are eliminated. We implement this novel algorithm in a similarity search program called Symmetric inDexing, or SymDex. SymDex shows a maximum speedup of over 232% compared to the state-of-the-art single-query search algorithm on real data for various fingerprint lengths. Considerable speedup is seen even for batch searches where query set sizes are relatively small compared to typical database sizes. To the best of our knowledge, SymDex is the first search algorithm designed specifically for comparing chemical libraries. It can be adapted to most, if not all, existing indexing algorithms and shows potential for accelerating future similarity search algorithms for comparing chemical databases.
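The kind of pruning such indexing exploits can be sketched with the standard bit-count bound on Tanimoto similarity, Tanimoto(a, b) ≤ min(|a|, |b|) / max(|a|, |b|): bucketing the library by fingerprint popcount lets whole buckets be skipped per query. This is a simplified stand-in for the symmetric-indexing idea, not the SymDex algorithm itself; all function names here are illustrative.

```python
from collections import defaultdict

def tanimoto(a, b):
    """Tanimoto similarity of two fingerprints given as sets of 'on' bits."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

def batch_search(queries, library, threshold):
    """All (query_idx, library_idx, similarity) triples above `threshold`.
    The library is bucketed by popcount; the bound
    Tanimoto(a, b) <= min(|a|,|b|) / max(|a|,|b|)
    lets entire buckets be skipped without any pairwise computation."""
    by_count = defaultdict(list)
    for j, fp in enumerate(library):
        by_count[len(fp)].append(j)
    hits = []
    for i, q in enumerate(queries):
        nq = len(q)
        for n, idxs in by_count.items():
            if min(nq, n) / max(nq, n) < threshold:
                continue  # no fingerprint in this bucket can reach the threshold
            for j in idxs:
                s = tanimoto(q, library[j])
                if s >= threshold:
                    hits.append((i, j, s))
    return hits
```

Indexing the query set as well, as the abstract describes, would apply the same bucket-level reasoning to both collections at once instead of once per query.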
Large deviations and mixing for dissipative PDEs with unbounded random kicks
NASA Astrophysics Data System (ADS)
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer's criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroups, and a coupling argument. These tools combined constitute a new approach to the LDP for infinite-dimensional processes without the strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
In Search of Truth, on the Internet
ERIC Educational Resources Information Center
Goldsborough, Reid
2004-01-01
Is it true? There's no more important question to ask when online. Truth telling has never been a requirement to provide information online. Standards for accuracy, to a large extent, don't exist. As a general rule, the "real time" communication that takes place in instant messaging sessions and chat rooms is the most unreliable. One level up in…
Attaining a steady air stream in wind tunnels
NASA Technical Reports Server (NTRS)
Prandtl, L
1933-01-01
Many experimental arrangements of various kinds involve the problem of assuring a large air stream that is steady in both volume and time. For this reason a separate discussion of the methods by which this is achieved should prove of particular interest. Motors and blades receive special attention, and a review of existing wind tunnels is also provided.
Are Selective Private and Public Colleges Affordable?
ERIC Educational Resources Information Center
Karikari, John A.; Dezhbakhsh, Hashem
2013-01-01
We examine college affordability under the existing pricing and financial aid system that awards both non need-based and need-based aid. Using data of freshmen attending a large number of selective private and public colleges in the USA, we find that the prices students actually pay for college have increased over time. Need-based grant aid has…
Curtis L. VanderSchaaf; Ryan W. McKnight; Thomas R. Fox; H. Lee Allen
2010-01-01
A model form is presented, where the model contains regressors selected for inclusion based on biological rationale, to predict how fertilization, precipitation amounts, and overstory stand density affect understory vegetation biomass. Due to time, economic, and logistic constraints, datasets of large sample sizes generally do not exist for understory vegetation. Thus...
The Adult Asperger Assessment (AAA): A Diagnostic Method
ERIC Educational Resources Information Center
Baron-Cohen, Simon; Wheelwright, Sally; Robinson, Janine; Woodbury-Smith, Marc
2005-01-01
At the present time there are a large number of adults who have "suspected" Asperger syndrome (AS). In this paper we describe a new instrument, the Adult Asperger Assessment (AAA), developed in our clinic for adults with AS. The need for a new instrument relevant to the diagnosis of AS in adulthood arises because existing instruments are designed…
Brunei English: A Developing Variety
ERIC Educational Resources Information Center
O'Hara-Davies, Breda
2010-01-01
A considerable amount of time has elapsed since the existence of a distinct variety of English, Brunei English (BNE), was mooted in the early 1990s. A subsequent study conducted by Svalberg in 1998 suggested that BNE was then in its infancy and that its speakers were largely unaware of the differences between it and Standard British English (STE).…
Happiness in the Classroom: Strategies for Teacher Retention and Development
ERIC Educational Resources Information Center
De Stercke, Joachim; Goyette, Nancy; Robertson, Jean E.
2015-01-01
This Viewpoint proposes a new perspective on why so many teachers leave the profession after only a very short time. While existing studies have largely focused on employment and working conditions, this essay argues that happiness is key to keeping new teachers in the workplace. Juxtaposing two fields that have heretofore been oblivious of one…
Student Projects in Cosmic Ray Detection
ERIC Educational Resources Information Center
Brouwer, W.; Pinfold, J.; Soluk, R.; McDonough, B.; Pasek, V.; Bao-shan, Zheng
2009-01-01
The Alberta Large-area Time-coincidence Array (ALTA) study has been in existence for about 10 years under the direction of Jim Pinfold of the Centre for Particle Physics at the University of Alberta. The purpose of the ALTA project is to involve Alberta high schools, and primarily their physics classes, to assist in the detection of the presence…
Fan, Longlong; Chen, Jun; Ren, Yang; Pan, Zhao; Zhang, Linxing; Xing, Xianran
2016-01-15
The origin of the excellent piezoelectric properties at the morphotropic phase boundary is generally attributed to the existence of a monoclinic phase in various piezoelectric systems. However, there exist no experimental studies that reveal the role of the monoclinic phase in the piezoelectric behavior in phase-pure ceramics. In this work, a single monoclinic phase has been identified in Pb(Zr,Ti)O_{3} ceramics at room temperature by in situ high-energy synchrotron x-ray diffraction, and its response to electric field has been characterized for the first time. Unique piezoelectric properties of the monoclinic phase in terms of large intrinsic lattice strain and negligible domain switching have been observed. The extensional strain constant d_{33} and the transverse strain constant d_{31} are calculated to be 520 and -200 pm/V, respectively. These large piezoelectric coefficients are mainly due to the large intrinsic lattice strain, with very little extrinsic contribution from domain switching. The unique properties of the monoclinic phase provide new insights into the mechanisms responsible for the piezoelectric properties at the morphotropic phase boundary.
Large Scale Winter Time Disturbances in Meteor Winds over Central and Eastern Europe
NASA Technical Reports Server (NTRS)
Greisiger, K. M.; Portnyagin, Y. I.; Lysenko, I. A.
1984-01-01
Daily zonal wind data of the four pre-MAP winters 1978/79 to 1981/82, obtained over Central Europe and Eastern Europe by the radar meteor method, were studied. Available temperature and satellite radiance data of the middle and upper stratosphere were used for comparison, as well as wind data from Canada. The existence or nonexistence of coupling between the observed large-scale zonal wind disturbances in the upper mesopause region (90 to 100 km) and corresponding events in the stratosphere is discussed.
Greenfield, P E; Roberts, D H; Burke, B F
1980-05-02
A full 12-hour synthesis at 6-centimeter wavelength with the Very Large Array confirms the major features previously reported for the double quasar 0957+561. In addition, the existence of radio jets apparently associated with both quasars is demonstrated. Gravitational lens models are now favored on the basis of recent optical observations, and the radio jets place severe constraints on such models. Further radio observations of the double quasar are needed to establish the expected relative time delay in variations between the images.
Enhanced polarization of the cosmic microwave background radiation from thermal gravitational waves.
Bhattacharya, Kaushik; Mohanty, Subhendra; Nautiyal, Akhilesh
2006-12-22
If inflation was preceded by a radiation era, then at the time of inflation there will exist a decoupled thermal distribution of gravitons. Gravitational waves generated during inflation will be amplified by the process of stimulated emission into the existing thermal distribution of gravitons. Consequently, the usual zero temperature scale invariant tensor spectrum is modified by a temperature dependent factor. This thermal correction factor amplifies the B-mode polarization of the cosmic microwave background radiation by an order of magnitude at large angles, which may now be in the range of observability of the Wilkinson Microwave Anisotropy Probe.
Memory effect in M ≥ 7 earthquakes of Taiwan
NASA Astrophysics Data System (ADS)
Wang, Jeen-Hwa
2014-07-01
The M ≥ 7 earthquakes that occurred in the Taiwan region during 1906-2006 are taken to study the possibility of a memory effect existing in the sequence of those large earthquakes. Those events are all mainshocks. The fluctuation analysis technique is applied to analyze two sequences, in terms of earthquake magnitude and inter-event time, represented in the natural time domain. For both magnitude and inter-event time, the calculations are made for three data sets, i.e., the original-order data, the reverse-order data, and that of the mean values. Calculated results show that the exponents of the scaling law of fluctuation versus window length are less than 0.5 for the sequences of both magnitude and inter-event time data. In addition, phase portraits of two sequent magnitudes and two sequent inter-event times are applied to explore whether large (or small) earthquakes are followed by large (or small) events. Results lead to a negative answer. Together with all types of information in this study, we conclude that the earthquake sequence under study is short-term correlated and thus a short-term memory effect would be operative.
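As a sketch of the fluctuation-analysis step, one can estimate the scaling exponent of F(n), here taken as the standard deviation of sums over non-overlapping windows of length n, against n on a log-log scale: an uncorrelated sequence gives an exponent near 0.5, while values below 0.5 indicate anti-persistence. This is a generic textbook version under assumed conventions, not the exact procedure of the paper.

```python
import numpy as np

def fluctuation_exponent(x, window_lengths):
    """Slope of log F(n) versus log n, where F(n) is the standard
    deviation of sums over non-overlapping windows of length n.
    ~0.5 for an uncorrelated sequence; <0.5 suggests anti-persistence."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    logs_n, logs_f = [], []
    for n in window_lengths:
        k = len(x) // n                      # number of complete windows
        sums = x[:k * n].reshape(k, n).sum(axis=1)
        f = sums.std()
        if f > 0:
            logs_n.append(np.log(n))
            logs_f.append(np.log(f))
    slope, _ = np.polyfit(logs_n, logs_f, 1)  # least-squares log-log fit
    return slope
```

Applied to a magnitude or inter-event-time sequence, an exponent below 0.5, as reported in the abstract, would be read as short-term anti-correlation rather than persistent memory.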
An experimental and theoretical investigation on torrefaction of a large wet wood particle.
Basu, Prabir; Sadhukhan, Anup Kumar; Gupta, Parthapratim; Rao, Shailendra; Dhungana, Alok; Acharya, Bishnu
2014-05-01
A competitive kinetic scheme representing primary and secondary reactions is proposed for torrefaction of large wet wood particles. Drying and the diffusive, convective and radiative modes of heat transfer are considered, including particle shrinkage during torrefaction. The model prediction compares well with the experimental results for both mass fraction residue and temperature profiles for biomass particles. The effect of temperature, residence time and particle size on torrefaction of cylindrical wood particles is investigated through model simulations. For large biomass particles, heat transfer is identified as one of the controlling factors for torrefaction. The optimum torrefaction temperature, residence time and particle size are identified. The model may thus be integrated with CFD analysis to estimate the performance of an existing torrefier for a given feedstock. The performance analysis may also provide useful insight for the design and development of an efficient torrefier.
Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix
2017-01-01
Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989
The Large Water-Clock of Amphiaraeion
NASA Astrophysics Data System (ADS)
Theodossiou, E.; Manimanis, V. N.; Katsiotis, M.; Mantarakis, P.
2010-07-01
A very well preserved ancient water-clock exists at the Amphiaraeion, in Oropos, Greece. The Amphiaraeion, sanctuary of the mythical oracle and deified healer Amphiaraus, was active from the pre-classic period until the 5th Century A.D. In such a place the measurement of time, both day and night, was a necessity. Therefore, time was kept with both a conical sundial and a water-clock in the shape of a fountain, which, according to the archaeologists, dates to the 4th Century B.C.
Minimal time spiking in various ChR2-controlled neuron models.
Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel
2018-02-01
We use conductance based neuron models, and the mathematical modeling of optogenetics to define controlled neuron models and we address the minimal time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we observe numerically the optimal bang-bang controls.
Le craton ouest-africain et le bouclier guyanais: un seul craton au Protérozoïque inférieur?
NASA Astrophysics Data System (ADS)
Caen-Vachette, Michelle
Geochronological and palaeomagnetic data for the southern West African craton and the Guyana shield in South America are concordant and suggest the existence of a large unit grouping them during Archean and Lower Proterozoic times. The palaeomagnetic data allow the Zednes (Mauritania), Sassandra (Ivory Coast) and Guri (Venezuela) fault zones, whose mylonites were dated at 1670 Ma, to be placed on a single line. This age reflects the end of the Eburnean-Transamazonian shearing tectonics, which affected the large West Africa-Guyana unit. This line separates the western Archean domain from the eastern Lower Proterozoic one; hence it is possible to correlate the Sasca (Ivory Coast) and Pastora (Venezuela) areas. Archean relics have been found in the mobile Pan-African-Brasiliano zones which surround the Precambrian cratons; this fact suggests the existence of an Archean craton still more extended than defined above.
ProMotE: an efficient algorithm for counting independent motifs in uncertain network topologies.
Ren, Yuanfang; Sarkar, Aisharjya; Kahveci, Tamer
2018-06-26
Identifying motifs in biological networks is essential in uncovering key functions served by these networks. Finding non-overlapping motif instances is however a computationally challenging task. The fact that biological interactions are uncertain events further complicates the problem, as it makes the existence of an embedding of a given motif an uncertain event as well. In this paper, we develop a novel method, ProMotE (Probabilistic Motif Embedding), to count non-overlapping embeddings of a given motif in probabilistic networks. We utilize a polynomial model to capture the uncertainty. We develop three strategies to scale our algorithm to large networks. Our experiments demonstrate that our method scales to large networks in practical time with high accuracy where existing methods fail. Moreover, our experiments on cancer and degenerative disease networks show that our method helps in uncovering key functional characteristics of biological networks.
African humid periods triggered the reactivation of a large river system in Western Sahara.
Skonieczny, C; Paillou, P; Bory, A; Bayon, G; Biscara, L; Crosta, X; Eynaud, F; Malaizé, B; Revel, M; Aleman, N; Barusseau, J-P; Vernet, R; Lopez, S; Grousset, F
2015-11-10
The Sahara experienced several humid episodes during the late Quaternary, associated with the development of vast fluvial networks and enhanced freshwater delivery to the surrounding ocean margins. In particular, marine sediment records off Western Sahara indicate deposition of river-borne material at those times, implying sustained fluvial discharges along the West African margin. Today, however, no major river exists in this area; therefore, the origin of these sediments remains unclear. Here, using orbital radar satellite imagery, we present geomorphological data that reveal the existence of a large buried paleodrainage network on the Mauritanian coast. On the basis of evidence from the literature, we propose that reactivation of this major paleoriver during past humid periods contributed to the delivery of sediments to the Tropical Atlantic margin. This finding provides new insights for the interpretation of terrigenous sediment records off Western Africa, with important implications for our understanding of the paleohydrological history of the Sahara.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoecker, Nora Kathleen
2014-03-01
A Systems Analysis Group has existed at Sandia National Laboratories since at least the mid-1950s. Much of the group's work output (reports, briefing documents, and other materials) has been retained, along with large numbers of related documents. Over time the collection has grown to hundreds of thousands of unstructured documents in many formats, contained in one or more of several different shared drives or SharePoint sites, with perhaps five percent of the collection still existing in print format. This presents a challenge: how can the group effectively find, manage, and build on information contained somewhere within such a large set of unstructured documents? In response, a project was initiated to identify tools that would be able to meet this challenge. This report documents the results found and recommendations made as of August 2013.
Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.
Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C
2018-06-01
Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
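The MNN detection step itself is simple to sketch: a pair of cells, one per batch, is a mutual nearest-neighbour pair if each is among the other's k nearest neighbours across batches. The toy below uses plain Euclidean distance and omits the cosine normalisation and correction-vector smoothing of the published method; the function name and brute-force distance computation are illustrative assumptions.

```python
import numpy as np

def mutual_nearest_neighbors(a, b, k=3):
    """Return index pairs (i, j) such that a[i] is among the k nearest
    neighbours of b[j] and b[j] is among the k nearest neighbours of a[i]
    (Euclidean distance, brute force). Such pairs are presumed to be the
    same cell type observed in two batches."""
    # pairwise squared distances, shape (len(a), len(b))
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    nn_ab = np.argsort(d, axis=1)[:, :k]    # for each a-cell: k nearest b-cells
    nn_ba = np.argsort(d, axis=0)[:k, :].T  # for each b-cell: k nearest a-cells
    return [(i, j) for i in range(len(a)) for j in nn_ab[i] if i in nn_ba[j]]
```

In the published method, the average difference vector between MNN pairs then estimates the batch effect, which is why only a shared subpopulation, rather than identical compositions, is required.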
A web-based repository of surgical simulator projects.
Leskovský, Peter; Harders, Matthias; Székely, Gábor
2006-01-01
The use of computer-based surgical simulators for training prospective surgeons has been a topic of research for more than a decade. As a result, a large number of academic projects have been carried out, and a growing number of commercial products are available on the market. Keeping track of all these endeavors, for established groups as well as for newly started projects, can be quite arduous, and gathering information on existing methods, already-traveled research paths, and problems encountered is a time-consuming task. To alleviate this situation, we have established a modifiable online repository of existing projects. It contains detailed information about a large number of simulator projects gathered from web pages, papers, and personal communication. The database is modifiable (with password-protected sections) and also allows for a simple statistical analysis of the collected data. For further information, the surgical repository web page can be found at www.virtualsurgery.vision.ee.ethz.ch.
NASA/Ames Research Center's science and applications aircraft program
NASA Technical Reports Server (NTRS)
Hall, G. Warren
1991-01-01
NASA-Ames Research Center operates a fleet of seven Science and Applications Aircraft, namely the C-141/Kuiper Airborne Observatory (KAO), DC-8, C-130, Lear Jet, and three ER-2s. These aircraft are used to satisfy two major objectives of equal importance. The first is to acquire remote and in-situ scientific data in astronomy, astrophysics, earth sciences, ocean processes, atmospheric physics, meteorology, materials processing, and life sciences. The second is to expedite the development of sensors and their attendant algorithms for ultimate use in space, and to simulate, from an aircraft, the data to be acquired from spaceborne sensors. The NASA-Ames Science and Applications Aircraft are recognized as national and international facilities. They have performed, and will continue to perform, operational missions from bases in the United States and worldwide. Historically, twice as many investigators have requested flight time as could be accommodated; this remains true today and is expected to continue in the years ahead. A major advantage of the existing fleet is its ability to cover a large expanse of the earth's ecosystem, from the surface to the lower stratosphere, over large distances and long times aloft. The aircraft's large payload capability allows a number of scientists to fly multi-investigator sensor suites for simultaneous and complementary data gathering. In-flight changes to the sensors or data systems have greatly reduced the time required to optimize the development of new instruments. It is doubtful that spaceborne systems will ever totally replace the need for airborne science aircraft. The operations philosophy and capabilities exist at NASA-Ames Research Center.
Analyzing large scale genomic data on the cloud with Sparkhit
Huang, Liren; Krüger, Jan
2018-01-01
Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge for large-scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads, but they do not scale efficiently and carry heavy runtime overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods and is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, including the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID: 29253074
NASA Astrophysics Data System (ADS)
Uemura, Y.; Tadokoro, K.; Matsuhiro, K.; Ikuta, R.
2015-12-01
The most critical issue limiting the accuracy of the GPS/Acoustic seafloor positioning technique is the large-scale thermal gradient in the sound-speed structure [Muto et al., 2008] caused by ocean currents; the Kuroshio Current, near our observation station, forms such a structure. To improve the accuracy of the seafloor benchmark position (SBP), we must either measure the structure directly and frequently, or estimate it from the travel-time residual. For the former, we repeatedly measure the sound speed at the Kuroshio axis using an Underway CTD and apply it in our seafloor positioning analysis [Yasuda et al., 2015 AGU meeting]. For the latter, however, we have so far been unable to estimate the structure from the travel-time residual. Accordingly, in this study we focus on the azimuthal dependence of the Estimated Mean Sound-Speed (EMSS), defined as the distance between the vessel position and the estimated SBP divided by the travel time. If a thermal gradient exists and the SBP is true, the EMSS should show azimuthal dependence under the assumption of a horizontally layered sound-speed structure used in our previous analysis method. We use data from station KMC, on the central part of the Nankai Trough, Japan, from Jan. 28, 2015, because on that day KMC was on the northern edge of the Kuroshio, where we expect a thermal gradient to exist. In our analysis method, a hyperparameter (μ value) weights the travel-time residual against the rate of change of the sound-speed structure. However, the EMSS derived with the μ value determined by Ikuta et al. [2008] shows no azimuthal dependence; that is, we cannot estimate the thermal gradient, and we therefore expect the SBP to carry a large bias. In this study we instead use another μ value and examine whether the EMSS shows azimuthal dependence. 
With the μ value of this study, which is one order of magnitude smaller than the previous value, the EMSS shows azimuthal dependence consistent with the thermal gradient on the observation day. This result shows that we can estimate the thermal gradient adequately. The resulting SBP is displaced 25.6 cm to the north and 11.8 cm to the east compared to the previous SBP. This displacement reduces the bias of the SBP, and the RMS of the horizontal component in the time series, to one third. Therefore, when a thermal gradient exists on the observation day, redetermining the μ value so that the EMSS shows azimuthal dependence is appropriate for SBP determination.
Li, Zhijin; Vogelmann, Andrew M.; Feng, Sha; ...
2015-01-20
We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint Statistical Interpolation system is implemented in a multiscale data assimilation (MS-DA) framework that is used within the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. The MS-DA algorithm uses existing reanalysis products and constrains fine-scale atmospheric properties by assimilating high-resolution observations. A set of experiments shows that the data assimilation analysis realistically reproduces the intensity, structure, and time evolution of clouds and precipitation associated with a mesoscale convective system. Evaluations also show that the large-scale forcing derived from the fine-resolution analysis has an overall accuracy comparable to the existing ARM operational product. For enhanced applications, the fine-resolution fields are used to characterize the contribution of subgrid variability to the large-scale forcing and to derive hydrometeor forcing, which are presented in companion papers.
Weak-signal Phase Calibration Strategies for Large DSN Arrays
NASA Technical Reports Server (NTRS)
Jones, Dayton L.
2005-01-01
The NASA Deep Space Network (DSN) is studying arrays of large numbers of small, mass-produced radio antennas as a cost-effective way to increase downlink sensitivity and data rates for future missions. An important issue for the operation of large arrays is the accuracy with which signals from hundreds of small antennas can be combined. This is particularly true at Ka band (32 GHz) where atmospheric phase variations can be large and rapidly changing. A number of algorithms exist to correct the phases of signals from individual antennas in the case where a spacecraft signal provides a useful signal-to-noise ratio (SNR) on time scales shorter than the atmospheric coherence time. However, for very weak spacecraft signals it will be necessary to rely on background natural radio sources to maintain array phasing. Very weak signals could result from a spacecraft emergency or by design, such as direct-to-Earth data transmissions from distant planetary atmospheric or surface probes using only low gain antennas. This paper considers the parameter space where external real-time phase calibration will be necessary, and what this requires in terms of array configuration and signal processing. The inherent limitations of this technique are also discussed.
NASA Astrophysics Data System (ADS)
Verma, Arjun; Privman, Vladimir
2018-02-01
We study the approach to the large-time jammed state of deposited particles in the model of random sequential adsorption. The convergence laws are usually derived from Pomeau's argument, which assumes that, at large enough times, deposition is dominated by small landing regions into each of which only a single particle can be deposited without overlapping earlier deposited particles, and which after a certain time are no longer created by depositions in larger gaps. A second assumption has been that the size distribution of gaps open for particle-center landing in this large-time, small-gap regime is finite in the limit of zero gap size. We report numerical Monte Carlo studies of a recently introduced model of random sequential adsorption on patterned one-dimensional substrates which suggest that this second assumption must be generalized. We argue that a region exists in the parameter space of the studied model in which the gap-size distribution in the Pomeau large-time regime actually vanishes linearly at zero gap size. In another region, the distribution develops a threshold property: there are no gaps below a certain size. We discuss the implications of these findings for new asymptotic power-law and exponential-modified-by-a-power-law convergences to jamming in irreversible one-dimensional deposition.
Forecasting eruption size: what we know, what we don't know
NASA Astrophysics Data System (ADS)
Papale, Paolo
2017-04-01
Any eruption forecast includes an evaluation of the expected size of the forthcoming eruption, usually expressed as the probability associated with given size classes. Such evaluation is mostly based on the previous volcanic history of the specific volcano, or it refers to a broader class of volcanoes constituting "analogues" of the one under consideration. In either case, using knowledge from past eruptions requires considering the completeness of the reference catalogue and, most importantly, the existence of systematic biases in the catalogue, which may affect probability estimates and translate into biased volcanic hazard forecasts. An analysis of existing catalogues, with major reference to the catalogue of the Smithsonian Global Volcanism Program, suggests that systematic biases largely dominate at global, regional, and local scales: volcanic histories reconstructed at individual volcanoes, often used as a reference for volcanic hazard forecasts, are the result of systematic loss of information with time and poor sample representativeness. That situation strictly requires the use of techniques to complete existing catalogues, as well as careful consideration of the uncertainties deriving from inadequate knowledge and model-dependent data elaboration. A reconstructed global eruption size distribution, obtained by merging information from different existing catalogues, shows a mode in the VEI 1-2 range, a <0.1% incidence of eruptions of VEI 7 or larger, and substantial uncertainties associated with individual VEI frequencies. Even larger uncertainties are expected from application to individual volcanoes or classes of analogue volcanoes, suggesting large to very large uncertainties in volcanic hazard forecasts at virtually any individual volcano worldwide.
Weather-based forecasts of California crop yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D B; Cahill, K N; Field, C B
2005-09-26
Crop yield forecasts provide useful information to a range of users. Yields for several crops in California are currently forecast based on field surveys and farmer interviews, while for many crops official forecasts do not exist. As broad-scale crop yields are largely dependent on weather, measurements from existing meteorological stations have the potential to provide a reliable, timely, and cost-effective means to anticipate crop yields. We developed weather-based models of state-wide yields for 12 major California crops (wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios), and tested their accuracy using cross-validation over the 1980-2003 period. Many crops were forecast with high accuracy, as judged by the percent of yield variation explained by the forecast, the number of yields with correctly predicted direction of yield change, or the number of correctly predicted extreme yields. The most successfully modeled crop was almonds, with 81% of yield variance captured by the forecast. Predictions for most crops relied on weather measurements well before harvest time, allowing for lead times longer than those of existing procedures in many cases.
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th-order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central-body, other-body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central-body gravitation. The algorithm includes a step-size selection method that directly calculates the step size and never requires a repeated step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that order-of-magnitude reductions in computer time could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. 
The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
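The ingredients described above — coefficients by recurrence, direct step-size selection with no repeated steps, and Horner summation of the series — can be sketched for the scalar test equation y' = -y. This is a toy illustration under assumed names, not SNAP's implementation, which handles full N-body force models via differentiation arithmetic.

```python
def taylor_coeffs(y, order):
    """Taylor coefficients of the solution of y' = -y at the current point,
    generated by the recurrence c[k+1] = -c[k] / (k + 1)."""
    c = [y]
    for k in range(order):
        c.append(-c[k] / (k + 1))
    return c

def integrate(y0, t_end, order=10, tol=1e-12):
    """Propagate y' = -y from t = 0 to t_end with variable Taylor steps.

    The step size is computed directly from the highest retained
    coefficient so the truncated term is about `tol`; no step is
    ever rejected and repeated.
    """
    t, y = 0.0, y0
    while t < t_end:
        c = taylor_coeffs(y, order)
        h = (tol / abs(c[order])) ** (1.0 / order) if c[order] else t_end - t
        h = min(h, t_end - t)          # do not step past t_end
        s = 0.0
        for ck in reversed(c):         # Horner summation of the series
            s = s * h + ck
        y, t = s, t + h
    return y
```

For y0 = 1 the exact answer at t = 1 is e^{-1}, which the sketch reproduces to roughly the chosen tolerance in a handful of large steps.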
Finding hidden periodic signals in time series - an application to stock prices
NASA Astrophysics Data System (ADS)
O'Shea, Michael
2014-03-01
Data in the form of time series appear in many areas of science. In cases where the periodicity is apparent and the only other contribution to the time series is stochastic in origin, the data can be 'folded' to improve the signal-to-noise ratio; this has been done for light curves of variable stars, where folding results in a cleaner light-curve signal. Stock index prices versus time are classic examples of time series. Repeating patterns have been claimed by many workers and include unusually large returns on small-cap stocks during the month of January, and small returns on the Dow Jones Industrial Average (DJIA) in the months June through September compared to the rest of the year. Such observations imply that these prices have a periodic component. We investigate this for the DJIA. If such a component exists, it is hidden in a large non-periodic variation and a large stochastic variation. We show how to extract this periodic component and, for the first time, reveal its yearly (averaged) shape. This periodic component leads directly to the 'Sell in May and buy at Halloween' adage. We also drill down and show that this yearly variation emerges from approximately half of the underlying stocks making up the DJIA index.
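The folding operation itself can be sketched in a few lines; `fold` is a hypothetical helper, a minimal stand-in for the cycle averaging the paper applies after the non-periodic trend has been removed.

```python
import numpy as np

def fold(values, period):
    """Fold a 1-D time series on a known period and average the cycles.

    Consecutive windows of length `period` are stacked and averaged,
    so the stochastic part tends to cancel while the periodic part
    adds coherently -- the same idea as folding variable-star light
    curves.
    """
    n = (len(values) // period) * period          # drop the ragged tail
    cycles = np.asarray(values[:n]).reshape(-1, period)
    return cycles.mean(axis=0)
```

Folding ten noisy years of a 12-month cycle, for instance, returns a single 12-point averaged shape of the yearly variation.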
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoen, Ben; Cappers, Peter; Wiser, Ryan
2011-04-19
An increasing number of homes in the U.S. have sold with photovoltaic (PV) energy systems installed at the time of sale, yet relatively little research exists that estimates the marginal impacts of those PV systems on home sale prices. A clearer understanding of these possible impacts might influence the decisions of homeowners considering the installation of a PV system, homebuyers considering the purchase of a home with PV already installed, and new home builders considering including PV as an optional or standard product on their homes. This research analyzes a large dataset of California homes with PV installed that sold from 2000 through mid-2009. It finds strong evidence that homes with PV systems sold for a premium over comparable homes without PV systems during this time frame. Estimates for this premium, expressed in dollars per watt of installed PV, range on average from roughly $4 to $5.5/watt across a large number of hedonic and repeat-sales model specifications and robustness tests. When expressed as a ratio of the PV sales-price premium to the estimated annual energy cost savings associated with PV, an average ratio of 14:1 to 19:1 can be calculated; these results are consistent with those of the more extensive existing literature on the impact of energy efficiency on sales prices. When the data are split between new and existing homes, however, PV system premiums are markedly affected: new homes with PV show premiums of $2.3-2.6/watt, while existing homes with PV show premiums of more than $6/watt. Reasons for this discrepancy are suggested, yet further research is warranted. A number of other areas where future research would be useful are also highlighted.
Sigehuzi, Tomoo; Tanaka, Hajime
2004-11-01
We study the phase-separation behavior of an off-symmetric fluid mixture induced by a "double temperature quench." We first quench a system into the unstable region. After a large phase-separated structure is formed, we quench the system again, more deeply, and follow the pattern-evolution process. The second quench makes the domains formed by the first quench unstable and leads to double phase separation; that is, small droplets are formed inside the large domains created by the first quench. The complex coarsening behavior of this hierarchical structure, which has two characteristic length scales, is studied in detail using digital image analysis. We find three distinct time regimes in the time evolution of the structure factor of the system. In the first regime, small droplets coarsen with time inside large domains; there, a large domain containing small droplets can be regarded as an isolated system. Later, however, the coarsening of small droplets stops when they start to interact via diffusion with the large domain containing them. Finally, small droplets disappear due to the Lifshitz-Slyozov mechanism. The observed behavior can thus be explained by the crossover of the nature of a large domain from an isolated to an open system; this is a direct consequence of the existence of the two characteristic length scales.
Large natural geophysical events: planetary planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knox, J.B.; Smith, J.V.
1984-09-01
Geological and geophysical data suggest that during the evolution of the earth and its species there have been many mass extinctions due to large impacts from comets and large asteroids, and to major volcanic events. Today, technology has developed to the stage where we can begin to consider protective measures for the planet. Evidence of the ecological disruption and frequency of these major events is presented. It is most critical to develop surveillance and warning systems with sufficient lead times so that appropriate interventions can be designed. The long-term research undergirding these warning systems, their implementation, and proof testing is rich in opportunities for collaboration for peace.
ERIC Educational Resources Information Center
Liddicoat, Anthony J.; Curnow, Timothy Jowan; Scarino, Angela
2016-01-01
This paper examines the development of the First Language Maintenance and Development (FLMD) program in South Australia. This program is the main language policy activity that specifically focuses on language maintenance in government primary schools and has existed since 1986. During this time, the program has evolved largely as the result of ad…
Establishing an American Montessori Movement: Another Look at the Early Years
ERIC Educational Resources Information Center
Whitescarver, Keith; Cossentino, Jacqueline
2006-01-01
Though Montessorians have existed in the United States for nearly a century, a distinctly American version of the system did not begin to take hold until the late 1950s. What was referred to at the time as the "second spring" was actually a remarkable moment not just for Montessori education, but also for American culture at large. For…
Attitudes of College Students toward Contraceptives: A Consideration of Gender Differences
ERIC Educational Resources Information Center
Lance, Larry M.
2004-01-01
There exists a "contraceptive gap" among young people. That is, while a large majority of young males and females become sexually active, there is a time lapse between the onset of sexual activity and the use of contraceptives. As a result of this lack of sexual responsibility, there are over 1,000,000 teenage pregnancies each year in the American…
Rose-Hulman Institute of Technology's Technology & Entrepreneurial Development Program.
ERIC Educational Resources Information Center
Farbrother, Barry J.
The era of economic dominance supported by the existence of a large and inexpensive labor pool, an expanding domestic market, and/or exploitation of natural resources is over in the United States. It's time to work smarter, not just work harder. Thus, the economic vitality of any region in a modern economy is dependent on the ability of its…
USDA-ARS?s Scientific Manuscript database
A first step in exploring population structure in crop plants and other organisms is to define the number of subpopulations that exist for a given data set. The genetic marker data sets being generated have become increasingly large over time and commonly are the high-dimension, low sample size (HDL...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Fred W.
A significant seismic hazard exists in south Hawaii from large tectonic earthquakes that can reach magnitude 8 and intensity XII. This paper quantifies the hazard by estimating the horizontal peak ground acceleration (PGA) in south Hawaii which occurs with a 90% probability of not being exceeded during exposure times from 10 to 250 years. The largest earthquakes occur beneath active, unbuttressed and mobile flanks of volcanoes in their shield building stage.
The influence of climate on species distribution over time and space during the late Quaternary
NASA Astrophysics Data System (ADS)
Carotenuto, F.; Di Febbraro, M.; Melchionna, M.; Castiglione, S.; Saggese, F.; Serio, C.; Mondanaro, A.; Passaro, F.; Loy, A.; Raia, P.
2016-10-01
Understanding the effect of climate on the composition of communities and its change over time and space is one of the major aims in ecology and paleoecology. Herein, we tackled this issue by studying late Quaternary large mammal paleocommunities of Eurasia. The late Quaternary was a period of strong environmental instability, especially characterized by the occurrence of the last glacial maximum (LGM). We used community phylogenetics and joint species distribution models to understand the factors determining paleocommunity composition in the late Quaternary. Our results support the existence of strong climatic selection operating on the LGM fauna, both through the disappearance of warm-adapted species such as Elephas antiquus, Hippopotamus amphibius, and Stephanorhinus hemitoechus, and by setting the stage for a community characterized by cold-adapted large mammals. Patterns of abundance in the fossil record, co-occurrence between species pairs, and the extent of climatic forcing on faunal composition differ between paleocommunities, but not between extinct and extant species, which is consistent with the idea that climate change, rather than the presence of humans, exerted the major effect on the survival of the late Quaternary megafauna.
Operational flood control of a low-lying delta system using large time step Model Predictive Control
NASA Astrophysics Data System (ADS)
Tian, Xin; van Overloop, Peter-Jules; Negenborn, Rudy R.; van de Giesen, Nick
2015-01-01
The safety of low-lying deltas is threatened not only by riverine flooding but by storm-induced coastal flooding as well. For the purpose of flood control, these deltas are mostly protected by a man-made environment in which dikes, dams, and other adjustable infrastructures, such as gates, barriers, and pumps, are widely constructed. Instead of always reinforcing and heightening these structures, it is worth making the most of the existing infrastructure to reduce damage and manage the delta in an operational, integrated way. In this study, an advanced real-time control approach, Model Predictive Control, is proposed to operate these structures in the Dutch delta system (the Rhine-Meuse delta). The application covers non-linearity in the dynamic behavior of the water system and the structures. To deal with the non-linearity, a linearization scheme is applied that directly uses the gate height, instead of the structure flow, as the control variable. Given that MPC needs to compute control actions in real time, we also address issues of computational time: a new large time step scheme is proposed to save computation time, in which different control variables can have different control time steps. Simulation experiments demonstrate that Model Predictive Control with the large time step setting is able to control a delta system better and much more efficiently than the conventional operational schemes.
The predictability of consumer visitation patterns
NASA Astrophysics Data System (ADS)
Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban
2013-04-01
We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.
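A first-order Markov predictor of the kind evaluated above can be sketched as follows. This is a minimal illustration with hypothetical function names; the paper additionally blends in population-level transition probabilities, which is not shown here.

```python
from collections import Counter, defaultdict

def fit_transitions(visits):
    """Count first-order transitions merchant -> next merchant
    from one shopper's ordered visit sequence."""
    counts = defaultdict(Counter)
    for a, b in zip(visits, visits[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, current):
    """Most likely next merchant given the current one (None if the
    current merchant was never a departure point in training)."""
    if counts[current]:
        return counts[current].most_common(1)[0][0]
    return None
```

Regularities at long time scales make such a predictor useful on average, while the interleaving of shopping events at short time scales bounds its per-visit accuracy.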
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid size and time step, which incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid size and adaptive time step. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
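The cost the paper addresses can be seen in a minimal uniform-grid sketch: with a single global time step, the stability (CFL) limit of the fastest cell dictates the step for the entire model. The scheme below is a plain 1-D second-order acoustic scheme for illustration only, not the paper's staggered-grid poroelastic method; all names are assumptions.

```python
import numpy as np

def fd_1d_wave(v, dx, nt, cfl=0.5):
    """1-D acoustic finite differences with one global time step.

    The stable step dt is set by the *fastest* cell (CFL condition),
    so a single high-velocity region forces small steps everywhere --
    the overhead that variable grid-size / adaptive time-step schemes
    are designed to avoid.
    """
    dt = cfl * dx / v.max()            # global step from the CFL limit
    u_prev = np.zeros_like(v)
    u = np.zeros_like(v)
    u[len(v) // 2] = 1.0               # impulsive disturbance at the center
    r2 = (v * dt / dx) ** 2            # squared Courant number per cell
    for _ in range(nt):
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        u_prev, u = u, 2.0 * u - u_prev + r2 * lap
    return u, dt
```

Doubling the maximum velocity anywhere in the model halves dt for every cell, which is why locally refined grids and per-region time steps pay off.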
High-performance metadata indexing and search in petascale data storage systems
NASA Astrophysics Data System (ADS)
Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.
2008-07-01
Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
NASA Astrophysics Data System (ADS)
Habibi, H.; Norouzi, A.; Habib, A.; Seo, D. J.
2016-12-01
To produce accurate predictions of flooding in urban areas, it is necessary to model both natural channels and storm drain networks. While many urban hydraulic models of varying sophistication exist, most are not practical for real-time application over large urban areas. On the other hand, most distributed hydrologic models developed for real-time applications lack the ability to explicitly simulate storm drains. In this work, we develop a storm drain model that can be coupled with distributed hydrologic models, such as the National Weather Service Hydrology Laboratory's Distributed Hydrologic Model, for real-time flash flood prediction in large urban areas. The aim is to improve prediction and to advance the understanding of the integrated response of natural channels and storm drains to rainfall events of varying magnitude and spatiotemporal extent in urban catchments of varying sizes. The initial study area is the Johnson Creek Catchment (40.1 km²) in the City of Arlington, TX. For observed rainfall, high-resolution (500 m, 1 min) precipitation data from the Dallas-Fort Worth Demonstration Network of the Collaborative Adaptive Sensing of the Atmosphere radars are used.
Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series of protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached with different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity, which makes large-scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time-dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large-scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.
NASA Astrophysics Data System (ADS)
Gur, David; Rockette, Howard E.; Sumkin, Jules H.; Hoy, Ronald J.; Feist, John H.; Thaete, F. Leland; King, Jill L.; Slasky, B. S.; Miketic, Linda M.; Straub, William H.
1991-07-01
In a series of large ROC studies, the authors analyzed the time radiologists took to diagnose PA chest images as a function of observer performance indices (Az), display environments, and case difficulty. Board-certified radiologists interpreted at least 600 images each for the presence or absence of one or more of the following abnormalities: interstitial disease, nodule, and pneumothorax. Results indicated that there exists large inter-reader variability in the time required to diagnose PA chest images. There is no correlation between a reader's median reading time and his/her performance. Time generally increases as the number of abnormalities on a single image increases and for cases with subtle abnormalities. Results also indicated that, in general, the longer the interpretation time for a specific case (within reader), the further the observer's confidence ratings were from the truth. These findings held true regardless of the display mode. These results may have implications with regard to the appropriate methodology for imaging system evaluations and for measurements of radiologist productivity.
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Kantz, Holger
2016-04-01
As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which an analytical expression for the rate function exists, correlated processes such as auto-regressive (short-memory) and auto-regressive fractionally integrated moving average (long-memory) processes do not admit an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. While short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
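The contrast between iid and correlated data can be illustrated numerically. Below is a hedged Monte Carlo sketch (our illustration, not the authors' code): it estimates the probability that a time average over a fixed window deviates from zero, for iid Gaussian data versus a short-memory AR(1) process; all parameter values are illustrative.

```python
import random

def ar1_series(n, phi, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + noise (short memory)."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def exceedance_prob(gen, n, threshold, trials, rng):
    """Estimate P(|time average over a window of n samples| > threshold)
    by straightforward Monte Carlo sampling."""
    hits = 0
    for _ in range(trials):
        s = gen(n, rng)
        if abs(sum(s) / n) > threshold:
            hits += 1
    return hits / trials

rng = random.Random(0)
iid = lambda n, r: [r.gauss(0.0, 1.0) for _ in range(n)]
ar = lambda n, r: ar1_series(n, 0.8, r)
p_iid = exceedance_prob(iid, 100, 0.3, 2000, rng)
p_ar = exceedance_prob(ar, 100, 0.3, 2000, rng)
print(p_iid, p_ar)  # correlations make large deviations of the mean far more likely
```

For short-memory processes this is the "simple correction of the sample size" the abstract mentions: the AR(1) mean behaves like an iid mean with fewer effective samples. Long-memory (ARFIMA) processes would show an even slower decay with the window length.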
Advanced Kalman Filter for Real-Time Responsiveness in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welch, Gregory Francis; Zhang, Jinghe
2014-06-10
Complex engineering systems pose fundamental challenges in real-time operations and control because they are highly dynamic systems consisting of a large number of elements with severe nonlinearities and discontinuities. Today's tools for real-time complex system operations are mostly based on steady-state models, unable to capture the dynamic nature and too slow to prevent system failures. We developed advanced Kalman filtering techniques and the formulation of dynamic state estimation using Kalman filtering to capture complex system dynamics in aiding real-time operations and control. In this work, we looked at complex system issues including severe nonlinearity of system equations, discontinuities caused by system controls and network switches, sparse measurements in space and time, and the real-time requirements of power grid operations. We sought to bridge the disciplinary boundaries between Computer Science and Power Systems Engineering by introducing methods that leverage both existing and new techniques. While our methods were developed in the context of electrical power systems, they should generalize to other large-scale scientific and engineering applications.
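As a concrete (if greatly simplified) illustration of dynamic state estimation with a Kalman filter, here is a scalar predict/update sketch; it is not the authors' formulation, and the noise variances `q` and `r` are illustrative assumptions.

```python
def kalman_1d(zs, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for state x_t = a*x_{t-1} + w,
    observation z_t = x_t + v, with process/measurement noise
    variances q and r. Returns the filtered estimates."""
    x, p, est = x0, p0, []
    for z in zs:
        # predict step: propagate state and error covariance
        x, p = a * x, a * a * p + q
        # update step: weight the measurement by the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        est.append(x)
    return est

# track a constant state observed through noise
import random
rng = random.Random(1)
zs = [1.0 + rng.gauss(0.0, 0.5) for _ in range(200)]
print(kalman_1d(zs)[-1])  # settles near the true value 1.0
```

Real power-grid state estimators are vector-valued and must handle the nonlinearities and switching discontinuities the abstract describes (e.g., via extended or unscented variants), but the predict/update structure is the same.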
An Approach for Removing Redundant Data from RFID Data Streams
Mahdin, Hairulnizam; Abawajy, Jemal
2011-01-01
Radio frequency identification (RFID) systems are emerging as the primary object identification mechanism, especially in supply chain management. However, RFID naturally generates a large number of duplicate readings. Removing these duplicates from the RFID data stream is paramount, as they contribute no new information to the system and waste system resources. Existing approaches to this problem cannot fulfill the real-time demands of processing the massive RFID data stream. We propose a data filtering approach that efficiently detects and removes duplicate readings from RFID data streams. Experimental results show that the proposed approach offers a significant improvement over existing approaches. PMID:22163730
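A common baseline for this kind of filtering (a generic sketch, not necessarily the approach proposed in the paper) is a sliding time window: a reading is kept only if the same tag has not been emitted within the last `window` time units.

```python
def dedup_stream(readings, window):
    """Filter an RFID stream of (tag_id, timestamp) pairs: drop a reading
    if the same tag was already emitted within `window` time units."""
    last_emitted = {}
    for tag, ts in readings:
        if tag not in last_emitted or ts - last_emitted[tag] > window:
            last_emitted[tag] = ts
            yield (tag, ts)

readings = [("tag1", 0), ("tag1", 1), ("tag1", 5), ("tag2", 2)]
print(list(dedup_stream(readings, window=3)))
# → [('tag1', 0), ('tag1', 5), ('tag2', 2)]
```

The dictionary lookup makes each reading O(1) to process; memory-bounded variants (e.g., Bloom-filter-based summaries) trade a small false-drop rate for constant space, which is the usual concern at massive stream rates.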
NASA Astrophysics Data System (ADS)
Chia, Elbert; Cheng, Liang; Lourembam, James; Wu, S. G.; Motapothula, Mallikarjuna R.; Sarkar, Tarapada; Venkatesan, Venky
Using terahertz time-domain spectroscopy (THz-TDS), we obtained the complex optical conductivity [σ(ω)] of Ta-doped TiO2 thin films, a transparent conducting oxide (TCO), over the frequency range 0.3-2.7 THz, the temperature range 10-300 K, and various Ta dopings. Our results reveal the existence of an interacting polaronic gas in these TCOs and suggest that their large conductivity is caused by the combined effects of a large carrier density and a small electron-phonon coupling constant due to Ta doping. NUSNNI-NanoCore, NRF-CRP (NRF2008NRF-CRP002-024), NUS cross-faculty Grant and FRC (ARF Grant No. R-144-000-278-112), MOE Tier 1 (RG123/14), SinBeRISE CREATE.
Trampoline effect in extreme ground motion.
Aoi, Shin; Kunugi, Takashi; Fujiwara, Hiroyuki
2008-10-31
In earthquake hazard assessment studies, the focus is usually on horizontal ground motion. However, records from the 14 June 2008 Iwate-Miyagi earthquake in Japan, a crustal event with a moment magnitude of 6.9, revealed an unprecedented vertical surface acceleration of nearly four times gravity, more than twice its horizontal counterpart. The vertical acceleration was distinctly asymmetric; the waveform envelope was about 1.6 times as large in the upward direction as in the downward direction, which is not explained by existing models of the soil response. We present a simple model of a mass bouncing on a trampoline to account for this asymmetry and the large vertical amplitude. The finding of a hitherto-unknown mode of strong ground motion may prompt major progress in near-source shaking assessments.
A ZigBee wireless networking for remote sensing applications in hydrological monitoring system
NASA Astrophysics Data System (ADS)
Weng, Songgan; Zhai, Duo; Yang, Xing; Hu, Xiaodong
2017-01-01
Hydrological monitoring is recognized as one of the most important tasks in hydrology. In particular, investigation of the tempo-spatial variation patterns of water level and their effect on hydrological research has attracted increasing attention in recent years. Because of limitations in both human cost and existing water-level monitoring devices, however, it is very hard for researchers to collect real-time water-level data over large geographical areas. This paper designs and implements a real-time water-level monitoring system (MCH) based on ZigBee networking, which serves as an effective and efficient scientific instrument for domain experts to facilitate large-scale, real-time water-level monitoring. We implement a proof-of-concept prototype of the MCH, which can monitor water level automatically, accurately, and in real time with low cost and low power consumption. The preliminary laboratory results and analyses demonstrate the feasibility and efficacy of the MCH.
Successful integration of ergonomics into continuous improvement initiatives.
Monroe, Kimberly; Fick, Faye; Joshi, Madina
2012-01-01
Process improvement initiatives are receiving renewed attention by large corporations as they attempt to reduce manufacturing costs and stay competitive in the global marketplace. These initiatives include 5S, Six Sigma, and Lean. These programs often take up a large amount of available time and budget resources. More often than not, existing ergonomics processes are considered separate initiatives by upper management and struggle to gain a seat at the table. To effectively maintain their programs, ergonomics program managers need to overcome those obstacles and demonstrate how ergonomics initiatives are a natural fit with continuous improvement philosophies.
NASA Technical Reports Server (NTRS)
Schwan, Karsten
1994-01-01
Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.
Friction-Stir Welding of Large Scale Cryogenic Fuel Tanks for Aerospace Applications
NASA Technical Reports Server (NTRS)
Jones, Clyde S., III; Venable, Richard A.
1998-01-01
The Marshall Space Flight Center has established a facility for the joining of large-scale aluminum-lithium alloy 2195 cryogenic fuel tanks using the friction-stir welding process. Longitudinal welds, approximately five meters in length, were made possible by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and the spindle travel system will be described in this paper. Process controls and real-time data acquisition will also be described, and were critical elements contributing to successful weld operation.
An overview of expert systems. [artificial intelligence
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1982-01-01
An expert system is defined and its basic structure is discussed. The knowledge base, the inference engine, and uses of expert systems are discussed. Architecture is considered, including choice of solution direction, reasoning in the presence of uncertainty, searching small and large search spaces, handling large search spaces by transforming them and by developing alternative or additional spaces, and dealing with time. Existing expert systems are reviewed. Tools for building such systems, construction, and knowledge acquisition and learning are discussed. Centers of research and funding sources are listed. The state-of-the-art, current problems, required research, and future trends are summarized.
NASA Astrophysics Data System (ADS)
Su, S.-Y.; Chao, C. K.; Liu, C. H.
2009-04-01
Global averaged postsunset equatorial ionospheric density irregularity occurrences observed by ROCSAT during the moderate to high solar activity years of 1999 to 2004 indicate different local time distributions between the June and December solstices. The irregularity occurrences during the December solstice increase faster, peaking at 2100-2200 local time, while those during the June solstice increase more slowly and peak one hour later in local time. The cause of these different local time distributions is attributed to a large contrast, between the two solstices, in the time of zonal drift reversal and in the magnitude of the postsunset vertical drift observed by ROCSAT at longitudes of large magnetic declination. That is, a delayed zonal drift reversal in association with a smaller postsunset vertical drift at longitudes of positive magnetic declination greatly inhibits irregularity occurrences during the June solstice, whereas an earlier zonal drift reversal together with a larger vertical drift at longitudes of negative magnetic declination accelerates irregularity occurrences during the December solstice. We suggest that the difference in geomagnetic field strength between the longitudes of positive and negative magnetic declination plays a crucial role in determining the different local time distributions of irregularity occurrences for the two solstices.
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next-generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited by facility capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments: at large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings, but with an unpredictable computing environment driven by market forces.
Fossils matter: improved estimates of divergence times in Pinus reveal older diversification.
Saladin, Bianca; Leslie, Andrew B; Wüest, Rafael O; Litsios, Glenn; Conti, Elena; Salamin, Nicolas; Zimmermann, Niklaus E
2017-04-04
The taxonomy of pines (genus Pinus) is widely accepted and a robust gene tree based on entire plastome sequences exists. However, there is a large discrepancy in estimated divergence times of major pine clades among existing studies, mainly due to differences in fossil placement and dating methods used. We currently lack a dated molecular phylogeny that makes use of the rich pine fossil record, and this study is the first to estimate the divergence dates of pines based on a large number of fossils (21) evenly distributed across all major clades, in combination with applying both node and tip dating methods. We present a range of molecular phylogenetic trees of Pinus generated within a Bayesian framework. We find the origin of crown Pinus is likely up to 30 Myr older (Early Cretaceous) than inferred in most previous studies (Late Cretaceous) and propose generally older divergence times for major clades within Pinus than previously thought. Our age estimates vary significantly between the different dating approaches, but the results generally agree on older divergence times. We present a revised list of 21 fossils that are suitable to use in dating or comparative analyses of pines. Reliable estimates of divergence times in pines are essential if we are to link diversification processes and functional adaptation of this genus to geological events or to changing climates. In addition to older divergence times in Pinus, our results also indicate that node age estimates in pines depend on dating approaches and the specific fossil sets used, reflecting inherent differences in various dating approaches. The sets of dated phylogenetic trees of pines presented here provide a way to account for uncertainties in age estimations when applying comparative phylogenetic methods.
Superstatistical fluctuations in time series: Applications to share-price dynamics and turbulence
NASA Astrophysics Data System (ADS)
van der Straeten, Erik; Beck, Christian
2009-09-01
We report a general technique to study a given experimental time series with superstatistics. Crucial for the applicability of the superstatistics concept is the existence of a parameter β that fluctuates on a large time scale as compared to the other time scales of the complex system under consideration. The proposed method extracts the main superstatistical parameters out of a given data set and examines the validity of the superstatistical model assumptions. We test the method thoroughly with surrogate data sets. Then the applicability of the superstatistical approach is illustrated using real experimental data. We study two examples, velocity time series measured in turbulent Taylor-Couette flows and time series of log returns of the closing prices of some stock market indices.
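The core of the extraction procedure, estimating a slowly fluctuating β as the local inverse variance in windows much longer than the fast dynamics, can be sketched as follows. This is an illustrative simplification, not the authors' full method, and the two-regime test signal is a made-up surrogate.

```python
import random

def local_betas(series, window):
    """Estimate the fluctuating superstatistical parameter beta as the
    inverse variance of the signal in non-overlapping windows."""
    betas = []
    for i in range(0, len(series) - window + 1, window):
        chunk = series[i:i + window]
        m = sum(chunk) / window
        var = sum((x - m) ** 2 for x in chunk) / window
        if var > 0:
            betas.append(1.0 / var)
    return betas

# surrogate data with a slow change of local variance (two regimes)
rng = random.Random(1)
series = ([rng.gauss(0, 1.0) for _ in range(500)]
          + [rng.gauss(0, 3.0) for _ in range(500)])
betas = local_betas(series, window=50)
print(betas[0], betas[-1])  # the low-variance regime yields the larger beta
```

Checking that the window is long enough for β to be locally constant, yet short compared to the scale on which β fluctuates, is exactly the model-validation step the abstract refers to.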
Image steganalysis using Artificial Bee Colony algorithm
NASA Astrophysics Data System (ADS)
Sajedi, Hedieh
2017-09-01
Steganography is the science of secure communication in which the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources. As a result, a preprocessing phase is needed to moderate the execution time and resource usage. In this paper, we propose a new feature-based blind steganalysis method for detecting stego images among cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by honeybees' social behaviour in their search for perfect food sources. In the proposed method, classifier performance and the dimension of the selected feature vector are evaluated using wrapper-based methods. The experiments are performed using two large data-sets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.
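To make the wrapper idea concrete, here is a toy ABC-style feature-subset search. It is a loose sketch of the metaphor only (food sources as binary feature masks, employed bees making greedy neighbour moves), not the paper's improved ABC, and the separable fitness function stands in for actual classifier accuracy.

```python
import random

def abc_feature_select(n_feats, fitness, n_bees=10, iters=30, seed=0):
    """Toy Artificial-Bee-Colony-style wrapper feature selection:
    each 'food source' is a binary mask over features; employed bees
    flip one feature at a time and keep improvements (greedy selection;
    the scout phase is omitted for brevity)."""
    rng = random.Random(seed)
    sources = [[rng.random() < 0.5 for _ in range(n_feats)]
               for _ in range(n_bees)]
    scores = [fitness(s) for s in sources]
    for _ in range(iters):
        for i in range(n_bees):
            cand = sources[i][:]
            cand[rng.randrange(n_feats)] ^= True  # neighbour: flip one feature
            c = fitness(cand)
            if c > scores[i]:
                sources[i], scores[i] = cand, c
    best = max(range(n_bees), key=lambda i: scores[i])
    return sources[best], scores[best]

# toy fitness: features 0 and 2 are informative, every selected feature costs 0.1
fitness = lambda mask: (mask[0] + mask[2]) - 0.1 * sum(mask)
best_mask, best_score = abc_feature_select(6, fitness)
print(best_mask, best_score)
```

In a real steganalysis pipeline the fitness call would train and score a classifier on the masked feature set, which is what makes wrapper methods expensive and preprocessing worthwhile.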
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
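The intuition behind the dithering methods can be demonstrated with a toy model (our illustration, not the article's experimental setup): a DAC whose codes carry an assumed alternating mismatch error, where adding a zero-mean integer ramp dither before conversion, then subtracting it and averaging, cancels the mismatch.

```python
def dac_out(code, mismatch):
    """Ideal DAC output for an integer code, plus a per-code mismatch error."""
    return code + mismatch(code)

def reconstruct(code, mismatch, dither=None):
    """Reproduce a code directly, or via additive dithering: convert
    code + d for each dither value d, subtract d again, and average,
    so the mismatch errors of many adjacent codes average out."""
    if dither is None:
        return dac_out(code, mismatch)
    outs = [dac_out(code + d, mismatch) - d for d in dither]
    return sum(outs) / len(outs)

# assumed element-mismatch pattern: odd codes read high, even codes read low
mismatch = lambda c: 0.25 if c % 2 else -0.25
dither = list(range(-16, 16))  # periodic ramp spanning 32 adjacent codes

e_plain = abs(reconstruct(100, mismatch) - 100)           # raw mismatch error
e_dith = abs(reconstruct(100, mismatch, dither) - 100)    # averages away
print(e_plain, e_dith)
```

The periodic dither spreads each sample across many physical levels, which is why these methods reduce integral non-linearity without modifying the converter hardware; the price, as the article discusses, is extra signal bandwidth consumed by the dither.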
A multi-scale network method for two-phase flow in porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick
Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks are large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each sub-network consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the sub-networks are computed. Lastly, using the fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.
NASA Astrophysics Data System (ADS)
Tiselj, Iztok
2014-12-01
Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ~2300 wall units long and ~750 wall units wide, the size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree within 1%-2%. Similar agreement is observed for the Pr = 1 temperature fields and also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations, of the standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales.
These large thermal structures represent a kind of echo of the large-scale velocity structures: the highest temperature-velocity correlations are observed not between instantaneous temperatures and instantaneous streamwise velocities, but between instantaneous temperatures and velocities averaged over a certain time interval.
Ongoing Progress in Spacecraft Controls
NASA Technical Reports Server (NTRS)
Ghosh, Dave (Editor)
1992-01-01
This publication is a collection of papers presented at the Mars Mission Research Center workshop on Ongoing Progress in Spacecraft Controls. The technical program addressed additional Mars mission control problems that currently exist in robotic missions in addition to human missions. Topics include control systems design in the presence of large time delays, fuel-optimal propulsive control, and adaptive control to handle a variety of unknown conditions.
ERIC Educational Resources Information Center
Luna, Yvonne M.; Winters, Stephanie A.
2017-01-01
Introduction to Sociology at a large public university was taught in two separate formats, blended learning and lecture, during the same semester by the first author. While some similarities existed, the distinction was in delivery of course content. Additionally, the blended class had one-third less in-class time that was primarily devoted to…
Social Circles Detection from Ego Network and Profile Information
2014-12-19
…the algorithm used to infer k-clique communities is exponential, which makes this technique unfeasible when treating egonets with a large number of users… This was problematic when considering RBMs; the inconvenience was solved by implementing a sparsity treatment with the RBM algorithm. (ii) The ground truth was…
ERIC Educational Resources Information Center
Goossens, Amélie; Méon, Pierre-Guillaume
2015-01-01
Using a survey of a large group of first- and final-year students of different disciplines to study their beliefs in the existence of mutual benefits of market transactions, the authors observe significant differences between economics and business students versus students of other disciplines. These differences increase over time, due partly to…
Adjusting to Social Change - A Multi-Level Analysis in Three Cultures
2013-08-01
…whose presence is often associated with the large-scale movement of civilian populations, and who need to better understand… valuing openness to change (self-direction, stimulation, and sometimes hedonism values) versus valuing conservation (conformity, tradition, and security)…
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
…target detection, tracking, and identification over large terrain. The goal of the project is to investigate and evaluate existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms to moving target detection and tracking… moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
Non-timber forest products: local livelihoods and integrated forest management
Iain Davidson-Hunt; Luc C. Duchesne; John C. Zasada
2001-01-01
In October of 1999 a conference was held in Kenora, Ontario, Canada, to explore the non-timber forest products (NTFPs) of boreal and cold temperate forests. Up to this time, the concept of NTFPs was one that had been developed largely for tropical and subtropical forests. An extensive body of literature exists on a wide range of topics for the NTFPs of tropical and...
ERIC Educational Resources Information Center
Olsen, Danny R.
This study was designed to investigate the extent to which grade inflation has existed at Brigham Young University (BYU) after accounting for increased preparation levels of entering students over time. Analyses were conducted for the university at large and individual colleges. The study first developed a model to forecast student grade point…
On the development of efficient algorithms for three dimensional fluid flow
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1988-01-01
The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.
ERIC Educational Resources Information Center
Wright, Jill
2015-01-01
The purpose of this study was to examine the existing research on early literacy and the types of approaches used in schools at the time of this writing. Although researchers could not agree on which types of reading programs are the most effective, there was a large amount of research supporting the work done in 2000 from the National Reading…
ERIC Educational Resources Information Center
Weber, Jonathan
2006-01-01
Creating a digital library might seem like a task best left to a large research collection with a vast staff and generous budget. However, tools for successfully creating digital libraries are getting easier to use all the time. The explosion of people creating content for the web has led to the availability of many high-quality applications and…
Code of Federal Regulations, 2014 CFR
2014-07-01
According to the following requirements: 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Code of Federal Regulations, 2013 CFR
2013-07-01
According to the following requirements: 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Code of Federal Regulations, 2012 CFR
2012-07-01
According to the following requirements: 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Code of Federal Regulations, 2010 CFR
2010-07-01
According to the following requirements: 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Code of Federal Regulations, 2011 CFR
2011-07-01
According to the following requirements: 1. Each metal melting furnace subject to a PM or total metal HAP... metal HAP performance test. iv. For cupola metal melting furnaces, sample PM or total metal HAP only during times when the cupola is on blast. v. For electric arc and electric induction metal melting...
Environmental microbiology as related to planetary quarantine
NASA Technical Reports Server (NTRS)
Pflug, I. J.
1971-01-01
The results of studies to determine the effect of soil particle size on the survival time at 125 C of the microflora associated with these particles are discussed. The data suggest that longer survival times exist for the microflora associated with larger particles. The studies indicate that microorganisms associated with soil are difficult to kill and that organisms associated with large particles are harder to kill than those associated with small particles. Sterilization requirements increase as the level of contamination increases. Soil particles and their accompanying microflora are the most critical contaminants.
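A hedged aside on the arithmetic behind survival-time studies of this kind: thermal-sterilization survivor curves are commonly modeled as log-linear, N(t) = N0 · 10^(−t/D), where D is the decimal reduction time. The sketch below uses invented numbers, not the study's data:

```python
import numpy as np

# Illustrative only: planted survivor counts following N(t) = N0 * 10**(-t/D)
# with D = 4 minutes; fitting log10(counts) against time recovers D.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # minutes at temperature
counts = 1e6 * 10 ** (-t / 4.0)              # noiseless planted data
slope, intercept = np.polyfit(t, np.log10(counts), 1)
D = -1.0 / slope                             # decimal reduction time
print(round(D, 2))  # → 4.0
```

Longer survival times for larger particles would show up as a larger fitted D for the associated microflora.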
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
NASA Astrophysics Data System (ADS)
Lee, Woochan
Fast electromagnetic analysis in the time and frequency domains is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. Meanwhile, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time.
The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures an iterative solution to converge in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
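The "deduct the unstable modes" idea can be sketched in a toy setting. This is a hypothetical illustration, not the author's TDFEM code: for an explicit leapfrog update of u'' = −A u, eigenvalues of A above (2/Δt)² are unstable at the chosen time step, so we remove them from the system matrix (with a safety factor) and the explicit simulation stays bounded for an arbitrarily large time step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T                        # symmetric positive semidefinite "stiffness"

dt = 0.5                           # a step far beyond the usual explicit limit
limit = (2.0 / dt) ** 2            # leapfrog stability bound on eigenvalues of A
w, V = np.linalg.eigh(A)
keep = w < 0.8 * limit             # deduct unstable (and marginal) modes
A_stable = (V[:, keep] * w[keep]) @ V[:, keep].T

# explicit leapfrog on u'' = -A_stable u, started from rest
u_prev = np.ones(n)
u = np.ones(n)
for _ in range(2000):
    u_prev, u = u, 2 * u - u_prev - dt**2 * (A_stable @ u)
print(np.abs(u).max() < 1e3)       # bounded despite the large time step → True
```

The deducted modes simply do not evolve, which is acceptable when they lie above the frequency band of interest.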
Brooks, C M
1987-01-01
One of the 1990 Health Objectives established by the U.S. Department of Health and Human Services is for 60 per cent of adults 18-65 years of age to be participating regularly in vigorous physical exercise. Unfortunately, no valid and practical measurement system is available that will allow assessment of leisure time physical activity participation of large populations. Consequently, not only is it difficult to assess progress toward the 1990 goal, but an accurate baseline from which to measure potential progress also does not exist. This paper presents a time diary technique for measuring aggregate population physical activity participation and utilizes actual time diaries collected from adults by the Institute for Social Research in 1981 to arrive at a possible baseline. The results indicated that time diaries are a viable method for assessing aggregate physical activity behavior of large populations. American adults were quite sedentary in 1981. Over a period of one week, 31 per cent undertook no leisure time physical activity. Only 14 per cent expended more than 1600 kcals/week in leisure time physical activity, and 10 per cent met the DHHS physical activity requirements. PMID:3826464
Monteiro Gil, Octávia; Vaz, Pedro; Romm, Horst; De Angelis, Cinzia; Antunes, Ana Catarina; Barquinero, Joan-Francesc; Beinke, Christina; Bortolin, Emanuela; Burbidge, Christopher Ian; Cucu, Alexandra; Della Monaca, Sara; Domene, Mercedes Moreno; Fattibene, Paola; Gregoire, Eric; Hadjidekova, Valeria; Kulka, Ulrike; Lindholm, Carita; Meschini, Roberta; M'Kacher, Radhia; Moquet, Jayne; Oestreicher, Ursula; Palitti, Fabrizio; Pantelias, Gabriel; Montoro Pastor, Alegria; Popescu, Irina-Anca; Quattrini, Maria Cristina; Ricoul, Michelle; Rothkamm, Kai; Sabatier, Laure; Sebastià, Natividad; Sommer, Sylwester; Terzoudi, Georgia; Testa, Antonella; Trompier, François; Vral, Anne
2017-01-01
To identify and assess, among the participants in the RENEB (Realizing the European Network of Biodosimetry) project, the emergency preparedness, response capabilities and resources that can be deployed in the event of a radiological or nuclear accident/incident affecting a large number of individuals. These capabilities include available biodosimetry techniques, infrastructure, human resources (existing trained staff), financial and organizational resources (including the role of national contact points and their articulation with other stakeholders in emergency response) as well as robust quality control/assurance systems. A survey was prepared and sent to the RENEB partners in order to acquire information about the existing, operational techniques and infrastructure in the laboratories of the different RENEB countries and to assess the capacity of response in the event of radiological or nuclear accident involving mass casualties. The survey focused on several main areas: laboratory's general information, country and staff involved in biological and physical dosimetry; retrospective assays used, the number of assays available per laboratory and other information related to biodosimetry and emergency preparedness. Following technical intercomparisons amongst RENEB members, an update of the survey was performed one year later concerning the staff and the available assays. The analysis of RENEB questionnaires allowed a detailed assessment of existing capacity of the RENEB network to respond to nuclear and radiological emergencies. This highlighted the key importance of international cooperation in order to guarantee an effective and timely response in the event of radiological or nuclear accidents involving a considerable number of casualties. 
Deployment of the scientific and technical capabilities existing within the RENEB network seems essential in order to help other countries with little or no capacity for biological or physical dosimetry, or countries overwhelmed in the event of a radiological or nuclear accident involving a large number of individuals.
Winds in the meteor zone over Trivandrum
NASA Astrophysics Data System (ADS)
Reddi, C. R.; Rajeev, K.; Ramakumar, Geetha
1991-04-01
The height profiles of the zonal and meridional wind obtained from the meteor wind radar data recorded at Trivandrum (8 deg 36 min N, 77 deg E) are presented. Large wind shears were found to exist in the meteor zone over Trivandrum. The profiles showed quasi-sinusoidal variations with altitude and vertical wavelength of the oscillation in the range 15-25 km. Further, there was a large day-to-day variability in the profiles obtained for the same local time on consecutive days. The results are discussed in the light of the winds due to tides and equatorial waves in the low latitudes. The implications of the large wind shears with reference to the local wind effects on the equatorial electrojet are outlined.
Open quantum random walks: Bistability on pure states and ballistically induced diffusion
NASA Astrophysics Data System (ADS)
Bauer, Michel; Bernard, Denis; Tilloy, Antoine
2013-12-01
Open quantum random walks (OQRWs) deal with quantum random motions on a line for systems with internal and orbital degrees of freedom. The internal system behaves as a quantum random gyroscope coding for the direction of the orbital moves. We reveal the existence of a transition, depending on OQRW moduli, in the internal system behaviors from simple oscillations to random flips between two unstable pure states. This induces a transition in the orbital motions from the usual diffusion to ballistically induced diffusion with a large mean free path and large effective diffusion constant at large times. We also show that mixed states of the internal system are converted into random pure states during the process. We touch upon possible experimental realizations.
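For readers unfamiliar with OQRWs, a minimal simulation is short to write. The Kraus operators below are an arbitrary choice satisfying the completeness relation B₊†B₊ + B₋†B₋ = I, not the moduli studied in the paper:

```python
import numpy as np

p = 0.5
Bp = np.sqrt(p) * np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Bm = np.sqrt(1 - p) * np.array([[0, 1], [1, 0]], dtype=complex)
# completeness: Bp^† Bp + Bm^† Bm = p*I + (1-p)*I = I

T = 50                        # number of steps
mid = T                       # origin index on a line of 2T+1 sites
rho = np.zeros((2 * T + 1, 2, 2), dtype=complex)
rho[mid, 0, 0] = 1.0          # start at the origin in internal state |0><0|

for _ in range(T):
    new = np.zeros_like(rho)
    new[1:]  += Bp @ rho[:-1] @ Bp.conj().T   # Bp moves the walker right
    new[:-1] += Bm @ rho[1:]  @ Bm.conj().T   # Bm moves the walker left
    rho = new

probs = np.trace(rho, axis1=1, axis2=2).real  # position distribution
print(round(probs.sum(), 8))  # probability is conserved → 1.0
```

Tracking the variance of `probs` over time would distinguish ordinary diffusion from the ballistically induced diffusion the paper describes.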
NASA Astrophysics Data System (ADS)
Radiguet, M.; Cotton, F.; Cavalié, O.; Pathier, E.; Kostoglodov, V.; Vergnolle, M.; Campillo, M.; Walpersdorf, A.; Cotte, N.; Santiago, J.; Franco, S.
2012-12-01
Continuous Global Positioning System (cGPS) time series in Guerrero, Mexico, reveal the widespread existence of large Slow Slip Events (SSEs) at the boundary between the Cocos and North American plates. The existence of these SSEs raises the question of how seismic and aseismic slip complement each other in subduction zones. We examined the last three SSEs that occurred in 2001/2002, 2006 and 2009/2010, and their impact on the strain accumulation along the Guerrero subduction margin. We use continuous cGPS time series and InSAR images to evaluate the surface displacement during SSEs and inter-SSE periods. The slip distributions on the plate interface associated with each SSE, as well as the inter-SSE (short-term) coupling rates, are evaluated by inverting these surface displacements. Our results reveal that the three analyzed SSEs have equivalent moment magnitudes of around 7.5 and that their lateral extension is variable. The slip distributions for the three SSEs show that in the Guerrero gap area, the slow slip occurs at shallower depth (updip limit around 15-20 km) than in surrounding regions. The InSAR data provide additional information for the 2006 SSE. The joint inversion of InSAR and cGPS data confirms the lateral variation of the slip distribution along the trench, with shallower slip in the Guerrero seismic gap, west of Acapulco, and deeper slip further east. Inversion of inter-SSE displacement rates reveals that during the inter-SSE time intervals, the interplate coupling is high in the area where the slow slip subsequently occurs. Over a 12 year period, corresponding to three cycles of SSEs, our results reveal that the accumulated slip deficit in the Guerrero gap area is only ¼ of the slip deficit accumulated on both sides of the gap. Moreover, the regions of large slip deficit coincide with the rupture areas of recent large earthquakes. We conclude that the SSEs account for a major portion of the overall moment release budget in the Guerrero gap.
If large subduction thrust earthquakes occur in the Guerrero gap, their recurrence time is probably increased compared to adjacent regions.
A method on error analysis for large-aperture optical telescope control system
NASA Astrophysics Data System (ADS)
Su, Yanrui; Wang, Qiang; Yan, Fabao; Liu, Xiang; Huang, Yongmei
2016-10-01
For large-aperture optical telescopes, in contrast to the azimuth axis, arcsecond-level jitters exist in the elevation axis under different-speed working modes, especially the low-speed mode used during acquisition, tracking and pointing. The jitters are closely related to the working speed of the elevation axis and reduce the accuracy and low-speed stability of the telescope. By collecting a large amount of measured elevation data, we analyzed the jitters in the time, frequency and space domains. We concluded that the jitters behave as a periodic disturbance in the space domain, with a spatial period of approximately 79.1″ in the corresponding space angle. We then simulated and compared the influence of candidate disturbance sources on the elevation axis, including PWM power-stage output disturbance, torque (acceleration) disturbance, speed feedback disturbance and position feedback disturbance, and found that the spatially periodic disturbance persisted in the elevation performance. This led us to infer that the problem may lie in the angle measurement unit. The telescope employs a 24-bit photoelectric encoder, and the encoder grating angular period, the angle corresponding to one period of the subdivision signal in the whole encoder system, can be calculated as 79.1016″. This value is approximately equal to the spatial period of the jitters. Therefore, the elevation axis of the telescope is affected by subdivision errors, and the period of the subdivision error is identical to the encoder grating angular period. Through comprehensive mathematical analysis, we determined that, among the subdivision error sources, the DC subdivision error causes the jitters, and this was verified in practical engineering.
This method of analyzing error sources in the time, frequency and space domains provides useful guidance for locating disturbance sources in large-aperture optical telescopes.
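The 79.1016″ figure follows from simple arithmetic, assuming (as is common in such encoder designs) a 2¹⁴-line grating whose signal is electronically subdivided to reach 24 bits:

```python
# one grating period of a 2**14-line encoder, in arcseconds
period_arcsec = 360 * 3600 / 2**14
print(round(period_arcsec, 4))  # → 79.1016
```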
Tracing the trajectory of skill learning with a very large sample of online game players.
Stafford, Tom; Dewar, Michael
2014-02-01
In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.
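The "lawful relations" between practice amount and performance referenced here are classically power laws. A hedged sketch on synthetic data (not the game dataset) shows how such a law is fit on log-log axes:

```python
import numpy as np

rng = np.random.default_rng(1)
practice = np.arange(1, 201)                 # games played
a_true, b_true = 500.0, 0.4                  # planted power law of practice
# response time ~ a * practice**(-b), with multiplicative noise
rt = a_true * practice ** (-b_true) * np.exp(rng.normal(0, 0.05, practice.size))

# on log-log axes the power law is a straight line; fit it by least squares
slope, intercept = np.polyfit(np.log(practice), np.log(rt), 1)
print(abs(-slope - b_true) < 0.05)  # recovered exponent is close to 0.4 → True
```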
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
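The construction can be sketched concretely. With full states ordered as (rate class, nucleotide), a Markov-modulated generator is a sum of two Kronecker products: class switching acting on the class index, and the substitution generator scaled by each class rate. The numbers below are illustrative, not from the paper:

```python
import numpy as np

# Jukes-Cantor-like substitution generator on 4 nucleotides
Q = np.full((4, 4), 0.25)
np.fill_diagonal(Q, -0.75)

rates = np.array([0.1, 1.0, 3.0])        # three hypothetical rate classes
S = np.array([[-0.2,  0.1,  0.1],        # class-switching generator
              [ 0.1, -0.2,  0.1],
              [ 0.1,  0.1, -0.2]])

# Markov-modulated generator on 3*4 = 12 states, class-major ordering
G = np.kron(S, np.eye(4)) + np.kron(np.diag(rates), Q)
print(G.shape, np.allclose(G.sum(axis=1), 0))  # (12, 12) True
```

Rows summing to zero confirms G is a valid generator; the paper's fast diagonalization exploits exactly this Kronecker structure.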
CLAST: CUDA implemented large-scale alignment search tool.
Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken
2014-12-11
Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. 
Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.
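As a reminder of what "global alignment" means here, a toy Needleman-Wunsch score (linear gap penalty; scoring values invented, not CLAST's) can be computed with a two-row dynamic program:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    # prev holds the previous row of the DP table, initialized with gap costs
    prev = [gap * j for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [gap * i]
        for j, cb in enumerate(b, 1):
            s = match if ca == cb else mismatch
            cur.append(max(prev[j - 1] + s,   # diagonal: align ca with cb
                           prev[j] + gap,     # gap in b
                           cur[-1] + gap))    # gap in a
        prev = cur
    return prev[-1]

print(nw_score("GATTACA", "GATTTCA"))  # → 5 (6 matches, 1 mismatch)
```

Unlike local alignment, the score is read from the final cell, so both sequences must be aligned end to end.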
Global Well-Posedness of the Boltzmann Equation with Large Amplitude Initial Data
NASA Astrophysics Data System (ADS)
Duan, Renjun; Huang, Feimin; Wang, Yong; Yang, Tong
2017-07-01
The global well-posedness of the Boltzmann equation with initial data of large amplitude has remained a long-standing open problem. In this paper, by developing a new L^∞_x L^1_v ∩ L^∞_{x,v} approach, we prove the global existence and uniqueness of mild solutions to the Boltzmann equation in the whole space or torus for a class of initial data with bounded velocity-weighted L^∞ norm under some smallness condition on the L^1_x L^∞_v norm as well as defect mass, energy and entropy, so that the initial data allow large amplitude oscillations. Both the hard and soft potentials with angular cut-off are considered, and the large time behavior of solutions in the L^∞_{x,v} norm with explicit rates of convergence is also studied.
Floquet prethermalization in the resonantly driven Hubbard model
NASA Astrophysics Data System (ADS)
Herrmann, Andreas; Murakami, Yuta; Eckstein, Martin; Werner, Philipp
2017-12-01
We demonstrate the existence of long-lived prethermalized states in the Mott insulating Hubbard model driven by periodic electric fields. These states, which also exist in the resonantly driven case with a large density of photo-induced doublons and holons, are characterized by a nonzero current and an effective temperature of the doublons and holons which depends sensitively on the driving condition. Focusing on the specific case of resonantly driven models whose effective time-independent Hamiltonian in the high-frequency driving limit corresponds to noninteracting fermions, we show that the time evolution of the double occupation can be reproduced by the effective Hamiltonian, and that the prethermalization plateaus at finite driving frequency are controlled by the next-to-leading-order correction in the high-frequency expansion of the effective Hamiltonian. We propose a numerical procedure to determine an effective Hubbard interaction that mimics the correlation effects induced by these higher-order terms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoen, Ben; Cappers, Pete; Wiser, Ryan
2011-04-12
An increasing number of homes in the U.S. have sold with photovoltaic (PV) energy systems installed at the time of sale, yet relatively little research exists that provides estimates of the marginal impacts of those PV systems on home sale prices. This research analyzes a large dataset of California homes that sold from 2000 through mid-2009 with PV installed. We find strong evidence that homes with PV systems sold for a premium over comparable homes without PV systems during this time frame. Estimates for this premium, expressed in dollars per watt of installed PV, range from roughly $4 to $6.4/watt across the full dataset, to approximately $2.3/watt for new homes, to more than $6/watt for existing homes. A number of ideas for further research are suggested.
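Marginal-impact estimates of this kind are typically obtained with a hedonic regression. The sketch below plants a $5/watt premium in synthetic sales (invented numbers, not the California dataset) and recovers it by least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
sqft = rng.uniform(1000, 3000, n)
pv_watts = rng.choice([0, 2000, 3000, 4000], n, p=[0.7, 0.1, 0.1, 0.1])
# synthetic sale prices: base + $150/sqft + planted $5/watt PV premium + noise
price = 50_000 + 150 * sqft + 5.0 * pv_watts + rng.normal(0, 20_000, n)

# regress price on an intercept, home size, and installed PV watts
X = np.column_stack([np.ones(n), sqft, pv_watts])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(abs(coef[2] - 5.0) < 1.0)  # recovered $/watt premium is near 5.0 → True
```

Real hedonic models add many more controls (age, location, sale year), but the $/watt coefficient is read off the same way.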
Spacecraft flight control with the new phase space control law and optimal linear jet select
NASA Technical Reports Server (NTRS)
Bergmann, E. V.; Croopnick, S. R.; Turkovich, J. J.; Work, C. C.
1977-01-01
An autopilot designed for rotation and translation control of a rigid spacecraft is described. The autopilot uses reaction control jets as control effectors and incorporates a six-dimensional phase space control law as well as a linear programming algorithm for jet selection. The interaction of the control law and jet selection was investigated and a recommended configuration proposed. By means of a simulation procedure the new autopilot was compared with an existing system and was found to be superior in terms of core memory, central processing unit time, firings, and propellant consumption. But it is thought that the cycle time required to perform the jet selection computations might render the new autopilot unsuitable for existing flight computer applications, without modifications. The new autopilot is capable of maintaining attitude control in the presence of a large number of jet failures.
Study of V/STOL aircraft implementation. Volume 1: Summary
NASA Technical Reports Server (NTRS)
Portenier, W. J.; Webb, H. M.
1973-01-01
A high density short haul air market which by 1980 is large enough to support the introduction of an independent short haul air transportation system is discussed. This system will complement the existing air transportation system and will provide relief of noise and congestion problems at conventional airports. The study has found that new aircraft, exploiting V/STOL and quiet engine technology, can be available for implementing these new services, and they can operate from existing reliever and general aviation airports. The study has also found that the major funding requirements for implementing new short haul services could be borne by private capital, and that the government funding requirement would be minimal and/or recovered through the airline ticket tax. In addition, a suitable new short haul aircraft would have a market potential for $3.5 billion in foreign sales. The long lead times needed for aircraft and engine technology development will require timely actions by federal agencies.
Tool for simplifying the complex interactions within resilient communities
NASA Astrophysics Data System (ADS)
Stwertka, C.; Albert, M. R.; White, K. D.
2016-12-01
In recent decades, scientists have observed and documented impacts from climate change that will impact multiple sectors, will be impacted by decisions from multiple sectors, and will change over time. This complex human-engineered system has a large number of moving, interacting parts, which are interdependent and evolve over time towards their purpose. Many of the existing resilience frameworks and vulnerability frameworks focus on interactions between the domains, but do not include the structure of the interactions. We present an engineering systems approach to investigate the structural elements that influence a community's ability to be resilient. In this presentation we will present and analyze four common methods for building community resilience, utilizing our common framework. For several existing case studies we examine the stress points in the system and identify the impacts on the outcomes from the case studies. In ongoing research we will apply our system tool to a new case in the field.
On the correlation of angular position with time of occurrence of gamma-ray bursts
NASA Technical Reports Server (NTRS)
Petrosian, Vahe; Efron, Bradley
1995-01-01
Evidence indicating that a large fraction of gamma-ray bursts are repeaters would provide strong support for a noncosmological origin of these sources. Wang & Lingenfelter have claimed the existence of a correlation between angular position and time of occurrence of bursts. We perform statistical tests and find marginal evidence for nearby bursts occurring within 4 to 5 days of each other in the BATSE 1B catalog. This evidence is also present in the 2B catalog, which, in addition, shows some marginal evidence for burst repetition at longer time delays up to the total length of the observations.
Evaluating Existing Strategies to Limit Video Game Playing Time.
Davies, Bryan; Blake, Edwin
2016-01-01
Public concern surrounding the effects video games have on players has inspired a large body of research, and policy makers in China and South Korea have even mandated systems that limit the amount of time players spend in game. The authors present an experiment that evaluates the effectiveness of such policies. They show that forcibly removing players from the game environment causes distress, potentially removing some of the benefits that games provide and producing a desire for more game time. They also show that, with an understanding of player psychology, playtime can be manipulated without significantly changing the user experience or negating the positive effects of video games.
Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M
2012-10-01
To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between patient workload of demands and patient capacity to address demands. Workload encompasses the demands on the patient's time and energy, including demands of treatment, self-care, and life in general. Capacity concerns ability to handle work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, the Do-all and Do-across techniques cannot be applied to parallel processing of the simulation, since data dependencies exist from the end of one iteration to the beginning of the next and, furthermore, data input and output are required every sampling period. Therefore, the parallelism inside the calculation required for a single time step, a large basic block consisting of arithmetic assignment statements, must be exploited. In the proposed method, near-fine-grain tasks, each consisting of one or more floating-point operations, are generated to extract this parallelism and are assigned to processors by optimal static scheduling at compile time, in order to avoid the large run-time overhead that near-fine-grain tasks would otherwise incur. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which was developed to exploit the advantages of static scheduling algorithms to the maximum extent.
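The compile-time scheduling idea can be illustrated with a minimal list scheduler: tasks with known costs and precedence constraints are assigned to processors before execution, so no run-time scheduling decisions remain. This is a generic sketch under simplified assumptions (communication costs ignored, greedy largest-cost-first priority), not the OSCAR compiler's actual algorithm.

```python
# Toy static list scheduler: assign precedence-constrained tasks to
# processors at "compile time" (illustrative only; not OSCAR's algorithm).

def static_schedule(tasks, deps, cost, n_procs):
    """tasks: list of ids; deps: {task: set of predecessors}; cost: {task: float}."""
    finish = {}                      # task -> finish time
    proc_free = [0.0] * n_procs      # next free time per processor
    assignment = {}                  # task -> processor index
    done = set()
    ready = [t for t in tasks if not deps.get(t)]
    while ready:
        # greedy priority: schedule the ready task with the largest cost first
        t = max(ready, key=lambda x: cost[x])
        ready.remove(t)
        earliest = max((finish[p] for p in deps.get(t, ())), default=0.0)
        p = min(range(n_procs), key=lambda i: max(proc_free[i], earliest))
        start = max(proc_free[p], earliest)
        finish[t] = start + cost[t]
        proc_free[p] = finish[t]
        assignment[t] = p
        done.add(t)
        for u in tasks:              # release tasks whose predecessors are done
            if u not in done and u not in ready and deps.get(u, set()) <= done:
                ready.append(u)
    return assignment, max(finish.values())
```

For a four-task dependence graph on two processors, the function returns a fixed task-to-processor assignment and the schedule length (makespan), both determined entirely before "execution".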
NASA's Hyperwall Revealing the Big Picture
NASA Technical Reports Server (NTRS)
Sellers, Piers
2011-01-01
NASA's hyperwall is a sophisticated visualization tool used to display large datasets. The hyperwall, or video wall, can display multiple high-definition data visualizations and/or images simultaneously across an arrangement of screens. Functioning as a key component at many NASA exhibits, the hyperwall is used to help explain phenomena, ideas, or examples of world change. The traveling version of the hyperwall typically comprises nine 42-50" flat-screen monitors arranged in a 3x3 array. However, it is not limited in monitor size or number; screens can be as large as 52" and the arrangement can include more than nine monitors. Generally, NASA satellite and model data are used to highlight particular themes in atmospheric, land, and ocean science. Many of the existing hyperwall stories reveal change across space and time, while others display large-scale still images accompanied by descriptive, story-telling captions. Hyperwall content on a variety of Earth Science topics already exists and is made available to the public at eospso.gsfc.nasa.gov/hyperwall. Keynote and PowerPoint presentations as well as Summary of Story files are available for download on each existing topic. New hyperwall content and accompanying files will continue to be developed to promote scientific literacy across a diverse group of audience members. NASA invites the use of content accessible through this website but requests that users acknowledge any and all data sources referenced in the content being used.
Highly-stretchable 3D-architected Mechanical Metamaterials
NASA Astrophysics Data System (ADS)
Jiang, Yanhui; Wang, Qiming
2016-09-01
Soft materials featuring both 3D free-form architectures and high stretchability are highly desirable for a number of engineering applications, ranging from cushion modulators and soft robots to stretchable electronics; however, both the manufacturing and the fundamental mechanics remain largely elusive. Here, we overcome the manufacturing difficulties and report a class of mechanical metamaterials that not only features 3D free-form lattice architectures but also possesses ultrahigh reversible stretchability (strain > 414%), four times higher than that of existing counterparts with similar architectural complexity. The microarchitected metamaterials, made of highly stretchable elastomers, are realized through an additive manufacturing technique, projection microstereolithography, and its postprocessing. With the fabricated metamaterials, we reveal their exotic mechanical behaviors: under large-strain tension, their moduli follow a linear scaling relationship with their densities regardless of architecture type, in sharp contrast to the architecture-dependent modulus power laws of existing engineering materials; under large-strain compression, they present tunable negative stiffness that enables ultrahigh energy-absorption efficiencies. To harness their extraordinary stretchability and microstructures, we demonstrate that the metamaterials open a number of application avenues in lightweight and flexible structure connectors, ultraefficient dampers, 3D meshed rehabilitation structures, and stretchable electronics with designed 3D anisotropic conductivity.
NASA Astrophysics Data System (ADS)
Shearer, Christine; West, Mick; Caldeira, Ken; Davis, Steven J.
2016-08-01
Nearly 17% of people in an international survey said they believed the existence of a secret large-scale atmospheric program (SLAP) to be true or partly true. SLAP is commonly referred to as ‘chemtrails’ or ‘covert geoengineering’, and has led to a number of websites purporting to show evidence of widespread chemical spraying linked to negative impacts on human health and the environment. To address these claims, we surveyed two groups of experts—atmospheric chemists with expertise in condensation trails and geochemists working on atmospheric deposition of dust and pollution—to scientifically evaluate for the first time the claims of SLAP theorists. Results show that 76 of the 77 scientists (98.7%) who took part in this study said they had not encountered evidence of a SLAP, and that the data cited as evidence could be explained through other factors, including well-understood physics and chemistry associated with aircraft contrails and atmospheric aerosols. Our goal is not to sway those already convinced that there is a secret, large-scale spraying program—who often reject counter-evidence as further proof of their theories—but rather to establish a source of objective science that can inform public discourse.
Analyzing large-scale spiking neural data with HRLAnalysis™
Thibeault, Corey M.; O'Brien, Michael J.; Srinivasa, Narayan
2014-01-01
The additional capabilities provided by high-performance neural simulation environments and modern computing hardware have allowed for the modeling of increasingly large spiking neural networks. This is important for exploring more anatomically detailed networks, but the corresponding accumulation of data can make analyzing the results of these simulations difficult. This is further compounded by the fact that many existing analysis packages were not developed with large spiking data sets in mind. Presented here is a software suite developed not only to process the increased amount of spike-train data in a reasonable amount of time, but also to provide a user-friendly Python interface. We describe the design considerations, implementation, and features of the HRLAnalysis™ suite. In addition, performance benchmarks demonstrating the speedup of this design compared to a published Python implementation are presented. The result is a high-performance analysis toolkit that is not only usable and readily extensible, but also straightforward to interface with existing Python modules. PMID:24634655
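The kind of basic reduction such a package performs can be sketched in a few lines of NumPy. The flat list of (neuron_id, spike_time) pairs is a hypothetical data layout chosen for illustration, not the HRLAnalysis™ API.

```python
import numpy as np

# Sketch of basic spike-train reductions (illustrative; not the
# HRLAnalysis API): per-neuron mean firing rates and a population
# spike-count histogram from (neuron_id, spike_time_s) pairs.

def firing_rates(spikes, n_neurons, duration_s):
    """Mean firing rate (Hz) per neuron over the whole recording."""
    ids = np.array([i for i, _ in spikes])
    counts = np.bincount(ids, minlength=n_neurons)
    return counts / duration_s

def population_histogram(spikes, duration_s, bin_s=0.01):
    """Population spike counts in fixed time bins (a simple PSTH)."""
    times = np.array([t for _, t in spikes])
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    hist, _ = np.histogram(times, bins=edges)
    return hist, edges
```

Vectorized reductions like these are the reason NumPy-backed tooling scales to large spike sets far better than per-spike Python loops.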
The Effects of Very Light Jet Air Taxi Operations on Commercial Air Traffic
NASA Technical Reports Server (NTRS)
Smith, Jeremy C.; Dollyhigh, Samuel M.
2006-01-01
This study investigates the potential effects of Very Light Jet (VLJ) air taxi operations adding to delays experienced by commercial passenger air transportation in the year 2025. The affordable cost relative to existing business jets, and the ability to use many of the existing small, minimally equipped, but conveniently located airports, are projected to stimulate a large demand for the aircraft. The resulting increase in air traffic operations will mainly be at smaller airports, but this study indicates that VLJs have the potential to further increase the pressure of demand at some medium and large airports, some of which are already operating at or near capacity at peak times. The additional delays to commercial passenger air transportation due to VLJ air taxi operations are obtained from simulation results using the Airspace Concepts Evaluation System (ACES) simulator, and the direct increase in operating cost due to these delays is estimated. VLJs will also increase traffic density, and this study shows increased potential for conflicts due to VLJ operations.
African humid periods triggered the reactivation of a large river system in Western Sahara
Skonieczny, C.; Paillou, P.; Bory, A.; Bayon, G.; Biscara, L.; Crosta, X.; Eynaud, F.; Malaizé, B.; Revel, M.; Aleman, N.; Barusseau, J. -P.; Vernet, R.; Lopez, S.; Grousset, F.
2015-01-01
The Sahara experienced several humid episodes during the late Quaternary, associated with the development of vast fluvial networks and enhanced freshwater delivery to the surrounding ocean margins. In particular, marine sediment records off Western Sahara indicate deposition of river-borne material at those times, implying sustained fluvial discharges along the West African margin. Today, however, no major river exists in this area; therefore, the origin of these sediments remains unclear. Here, using orbital radar satellite imagery, we present geomorphological data that reveal the existence of a large buried paleodrainage network on the Mauritanian coast. On the basis of evidence from the literature, we propose that reactivation of this major paleoriver during past humid periods contributed to the delivery of sediments to the Tropical Atlantic margin. This finding provides new insights for the interpretation of terrigenous sediment records off Western Africa, with important implications for our understanding of the paleohydrological history of the Sahara. PMID:26556052
Sleep patterns and match performance in elite Australian basketball athletes.
Staunton, Craig; Gordon, Brett; Custovic, Edhem; Stanger, Jonathan; Kingsley, Michael
2017-08-01
To assess sleep patterns and associations between sleep and match performance in elite Australian female basketball players. Prospective cohort study. Seventeen elite female basketball players were monitored across two consecutive in-season competitions (30 weeks). Total sleep time and sleep efficiency were determined using triaxial accelerometers for Baseline, Pre-match, Match-day and Post-match timings. Match performance was determined using the basketball efficiency statistic (EFF). The effects of match schedule (Regular versus Double-Header; Home versus Away) and sleep on EFF were assessed. The Double-Header condition changed the pattern of sleep when compared with the Regular condition (F (3,48) =3.763, P=0.017), where total sleep time Post-match was 11% less for Double-Header (mean±SD; 7.2±1.4h) compared with Regular (8.0±1.3h; P=0.007). Total sleep time for Double-Header was greater Pre-match (8.2±1.7h) compared with Baseline (7.1±1.6h; P=0.022) and Match-day (7.3±1.5h; P=0.007). Small correlations existed between sleep metrics at Pre-match and EFF for pooled data (r=-0.39 to -0.22; P≥0.238). Relationships between total sleep time and EFF ranged from moderate negative to large positive correlations for individual players (r=-0.37 to 0.62) and reached significance for one player (r=0.60; P=0.025). Match schedule can affect the sleep patterns of elite female basketball players. A large degree of inter-individual variability existed in the relationship between sleep and match performance; nevertheless, sleep monitoring might assist in the optimisation of performance for some athletes. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Surface settling in partially filled containers upon step reduction in gravity
NASA Technical Reports Server (NTRS)
Weislogel, Marl M.; Ross, Howard D.
1990-01-01
A large literature exists concerning the equilibrium configurations of free liquid/gas surfaces in reduced-gravity environments. Such conditions generally yield surfaces of constant curvature meeting the container wall at a particular (contact) angle. The time required to reach and stabilize about this configuration is less studied for the case of sudden changes in gravity level, e.g. from normal to low gravity, as can occur in many drop tower experiments. The particular interest here was to determine the total reorientation time for such surfaces, mainly in cylinders, primarily as a function of contact angle and kinematic viscosity, in order to aid in the design of drop tower experiments. A large parametric range of tests was performed and, based on an accompanying scale analysis, the complete data set was correlated. The results of other investigations are included for comparison.
Energy Conservation: Heating Navy Hangars
1984-07-01
temperature, °F; Tf: inside air temperature 1 foot above the floor, °F; T.: inside design temperature, °F; To: hot water temperature setpoint, °F; TON: chiller ... systems capable of optimizing energy usage base-wide. An add-on to an existing large-scale EMCS is probably the first preference, followed by single... the building comfort conditions are met during hours of building occupancy. 2. Optimized Start/Stop turns on equipment at the latest possible time and
15 KW Small Turboelectric Power Generation System
2006-08-18
...pressure rise is consistent with data from the baseline compressor and a large body of published diffuser data. Table 1 LTS22 Compressor Preliminary... data on designs of 150 HP, 60 HP, and 5 HP engine size class, and in subsequent engine testing. The design methodology encompasses basic sizing
ERIC Educational Resources Information Center
Goldhaber, Dan
2010-01-01
The details of school reform in Washington State continue to evolve, but the unprecedented performance demands that it and NCLB place on schools are unlikely to disappear any time soon. The same is true of the large gap that exists between today's performance and tomorrow's aspirations. By any measure, significant improvements in performance now…
Scalable Matrix Algorithms for Interactive Analytics of Very Large Informatics Graphs
2017-06-14
information networks. Depending on the situation, these larger networks may not fit on a single machine. Although we considered traditional matrix and graph...
2012-06-01
per capita and gross domestic product. Literacy is a function of educational opportunity and quality such that people may communicate and acquire... Private capitalists do, however, face the challenges of a largely agricultural and/or informal labor force with lower education, skills and literacy rates...
ERIC Educational Resources Information Center
Zakharova, Larisa M.; Zakharova, Victoria
2018-01-01
The article is devoted to the problem of the influence of physical activity on the development of voluntary attention in children aged 5-7, which largely determines the success of schooling. The studies conducted in different countries prove the existence of such a correlation, which, at the same time, is not always unambiguous. We studied the…
Modelling short pulse, high intensity laser plasma interactions
NASA Astrophysics Data System (ADS)
Evans, R. G.
2006-06-01
Modelling the interaction of ultra-intense laser pulses with solid targets is made difficult by the large range of length and time scales involved in the transport of relativistic electrons. An implicit hybrid PIC-fluid model using the commercial code LSP (marketed by MRC, Albuquerque, New Mexico, USA) reveals a variety of complex phenomena which seem to be borne out in experiments and some existing theories.
NASA Astrophysics Data System (ADS)
Cocco, M.
2001-12-01
Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability may represent the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and stimulating perspectives, there remain problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and if, the induced stress perturbations modify the ratio of small to large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history.
The interaction probability model is based on the response to a static step; however, we know that other processes contribute to the stressing history perturbing the faults (such as dynamic stress changes, or post-seismic stress changes caused by viscoelastic relaxation or fluid flow). If, for instance, we believe that dynamic stress changes can trigger aftershocks or earthquakes years after the passage of the seismic waves through the fault, the prospect of calculating interaction probability is untenable. It is therefore clear that we have learned a great deal about earthquake interaction by incorporating fault constitutive properties, allowing us to resolve existing controversies while leaving open questions for future research.
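For concreteness, the seismicity-rate response to a static Coulomb stress step that underlies such interaction-probability models can be sketched with the rate-and-state formulation of Dieterich (1994). The parameter values below are illustrative, not taken from the abstract.

```python
import math

# Seismicity-rate response to a static Coulomb stress step, after
# Dieterich (1994): R(t) = r / [exp(-dtau/(A*sigma)) * exp(-t/ta)
#                              + (1 - exp(-t/ta))],
# where r is the background rate, dtau the stress step, A*sigma a
# constitutive parameter, and ta the aftershock decay time.
# Parameter values are illustrative only.

def seismicity_rate(t, r=1.0, dtau=0.1, a_sigma=0.04, ta=10.0):
    gamma = math.exp(-dtau / a_sigma)   # instantaneous rate-change factor
    return r / (gamma * math.exp(-t / ta) + (1.0 - math.exp(-t / ta)))
```

A positive stress step (dtau > 0) raises the rate immediately by the factor exp(dtau/(A*sigma)), after which the rate decays back to the background value r over the timescale ta; a negative step produces the corresponding "stress shadow".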
Modeling and estimating the jump risk of exchange rates: Applications to RMB
NASA Astrophysics Data System (ADS)
Wang, Yiming; Tong, Hanfei
2008-11-01
In this paper we propose a new type of continuous-time stochastic volatility model, SVDJ, for the spot exchange rate of RMB and other foreign currencies. In the model, we assume that the change of the exchange rate can be decomposed into two components: one is the normally small-scale innovation driven by the diffusion motion; the other is a large drop or rise engendered by a Poisson counting process. Furthermore, we develop an MCMC method to estimate our model. Empirical results indicate the significant existence of jumps in the exchange rate. Jump components explain a large proportion of the exchange rate change.
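The diffusion-plus-jump decomposition can be illustrated with a Merton-style Euler simulation. This is a sketch of the model class with made-up parameters, not the authors' SVDJ specification or their MCMC estimator.

```python
import numpy as np

# Euler simulation of a jump-diffusion log-exchange-rate:
#   dX = mu dt + sigma dW + J dN,
# with dN a Poisson counting process and jump sizes
# J ~ Normal(jump_mu, jump_sigma). All parameters are illustrative.

def simulate_jump_diffusion(x0=0.0, mu=0.0, sigma=0.005, lam=0.1,
                            jump_mu=0.0, jump_sigma=0.02,
                            n_steps=1000, dt=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        diffusion = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(lam * dt)       # usually 0, occasionally >= 1
        jumps = rng.normal(jump_mu, jump_sigma, n_jumps).sum()
        x[i + 1] = x[i] + diffusion + jumps
    return x
```

With lam = 0.1 per step, roughly one step in ten carries a jump, producing the occasional large moves that a pure diffusion cannot reproduce.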
Ida, K; Funaba, H; Kado, S; Narihara, K; Tanaka, K; Takeiri, Y; Nakamura, Y; Ohyabu, N; Yamazaki, K; Yokoyama, M; Murakami, S; Ashikawa, N; deVries, P C; Emoto, M; Goto, M; Idei, H; Ikeda, K; Inagaki, S; Inoue, N; Isobe, M; Itoh, K; Kaneko, O; Kawahata, K; Khlopenkov, K; Komori, A; Kubo, S; Kumazawa, R; Liang, Y; Masuzaki, S; Minami, T; Miyazawa, J; Morisaki, T; Morita, S; Mutoh, T; Muto, S; Nagayama, Y; Nakanishi, H; Nishimura, K; Noda, N; Notake, T; Kobuchi, T; Ohdachi, S; Ohkubo, K; Oka, Y; Osakabe, M; Ozaki, T; Pavlichenko, R O; Peterson, B J; Sagara, A; Saito, K; Sakakibara, S; Sakamoto, R; Sanuki, H; Sasao, H; Sasao, M; Sato, K; Sato, M; Seki, T; Shimozuma, T; Shoji, M; Suzuki, H; Sudo, S; Tamura, N; Toi, K; Tokuzawa, T; Torii, Y; Tsumori, K; Yamamoto, T; Yamada, H; Yamada, I; Yamaguchi, S; Yamamoto, S; Yoshimura, Y; Watanabe, K Y; Watari, T; Hamada, Y; Motojima, O; Fujiwara, M
2001-06-04
Recent large helical device experiments revealed that the transition from ion root to electron root occurred for the first time in neutral-beam-heated discharges, where no nonthermal electrons exist. The measured values of the radial electric field were found to be in qualitative agreement with those estimated by neoclassical theory. A clear reduction of ion thermal diffusivity was observed after the mode transition from ion root to electron root as predicted by neoclassical theory when the neoclassical ion loss is more dominant than the anomalous ion loss.
Software environment for implementing engineering applications on MIMD computers
NASA Technical Reports Server (NTRS)
Lopez, L. A.; Valimohamed, K. A.; Schiff, S.
1990-01-01
In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.
A high-resolution European dataset for hydrologic modeling
NASA Astrophysics Data System (ADS)
Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta
2013-04-01
There is an increasing demand for large-scale hydrological models, not only for modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large-scale models need to be calibrated and verified against large amounts of observations in order to judge their capability to predict the future. However, the creation of large-scale datasets is challenging, for it requires the collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim of driving a large-scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the time period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated with a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, as well as evapotranspiration (potential, bare-soil and open-water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years.
The dataset variables are used as inputs to the hydrological calibration and validation of EFAS, as well as for establishing long-term discharge "proxy" climatologies which can in turn be used for statistical analysis to derive return periods or other time-series derivatives. In addition, this dataset will be used to assess climatological trends in Europe. Unfortunately, to date no baseline dataset at the European scale exists against which to test the quality of the data presented here. Hence, a comparison against other existing datasets can only be an indication of data quality. Due to availability, a comparison was made for precipitation and temperature only, arguably the most important meteorological drivers for hydrologic models. A variety of analyses was undertaken at country scale against data reported to EUROSTAT and the E-OBS dataset. The comparison revealed that while the datasets showed overall similar temporal and spatial patterns, there were some differences in magnitude, especially for precipitation. It is not straightforward to pinpoint the specific cause of these differences; however, in most cases the comparatively low observation-station density appears to be the principal reason.
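The potential-evapotranspiration calculation mentioned above can be sketched with the FAO-56 form of the Penman-Monteith equation. This is the standard reference-crop formulation; the abstract does not specify which Penman-Monteith variant EFAS-Meteo uses, so treat this as illustrative.

```python
import math

# FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
# Standard textbook form; constants per FAO Irrigation and Drainage
# Paper 56. Inputs: t_mean (degC), rn (net radiation, MJ m-2 day-1),
# g (soil heat flux, MJ m-2 day-1), u2 (wind speed at 2 m, m/s),
# ea (actual vapour pressure, kPa).

def sat_vapour_pressure(t):
    """Saturation vapour pressure in kPa (Tetens form used in FAO-56)."""
    return 0.6108 * math.exp(17.27 * t / (t + 237.3))

def penman_monteith_et0(t_mean, rn, g, u2, ea, gamma=0.0665):
    es = sat_vapour_pressure(t_mean)
    delta = 4098.0 * es / (t_mean + 237.3) ** 2   # slope of es curve, kPa/degC
    num = (0.408 * delta * (rn - g)
           + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea))
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den
```

The psychrometric constant gamma is taken at its sea-level default here; in a gridded implementation it would be derived from surface pressure, and the radiation term rn would come from the staggered radiation calculation described in the abstract.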
A software solution for recording circadian oscillator features in time-lapse live cell microscopy.
Sage, Daniel; Unser, Michael; Salmon, Patrick; Dibner, Charna
2010-07-06
Fluorescent and bioluminescent time-lapse microscopy approaches have been successfully used to investigate molecular mechanisms underlying the mammalian circadian oscillator at the single-cell level. However, most of the available software and common methods based on intensity-threshold segmentation and frame-to-frame tracking are not applicable in these experiments. This is due to cell movement and dramatic changes in the fluorescent/bioluminescent reporter protein during the circadian cycle, with the lowest expression level very close to the background intensity. At present, the standard approach to analyzing data sets obtained from time-lapse microscopy is either manual tracking or application of generic image-processing software or dedicated tracking software. To our knowledge, these existing software solutions for manual and automatic tracking have strong limitations in tracking individual cells if their plane shifts. In an attempt to improve the existing methodology for time-lapse tracking of a large number of moving cells, we have developed a semi-automatic software package. It extracts the trajectory of the cells by tracking their displacements, delineates the cell nucleus or the whole cell, and finally yields measurements of various features, such as reporter protein expression level or cell displacement. As an example, we present here single-cell circadian pattern and motility analysis of NIH3T3 mouse fibroblasts expressing a fluorescent circadian reporter protein. Using the Circadian Gene Express plugin, we performed fast and unbiased analysis of large fluorescent time-lapse microscopy datasets. Our software solution, Circadian Gene Express (CGE), is easy to use and allows precise and semi-automatic tracking of moving cells over longer periods of time.
In spite of significant circadian variations in protein expression, with extremely low expression levels at the valley phase, CGE allows accurate and efficient recording of a large number of cell parameters, including the level of reporter protein expression, velocity, direction of movement, and others. CGE proves to be useful for the analysis of widefield fluorescent microscopy datasets, as well as for bioluminescence imaging. Moreover, it might be easily adapted for confocal image analysis by manually choosing one of the focal planes of each z-stack at the various time points of a time series. CGE is a Java plugin for ImageJ; it is freely available at: http://bigwww.epfl.ch/sage/soft/circadian/.
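As a baseline for comparison, the frame-to-frame linking that the authors note is insufficient can be sketched as a nearest-neighbour centroid linker. CGE's semi-automatic approach exists precisely because this naive strategy fails when reporter intensity approaches background; the function name and distance threshold here are invented for illustration.

```python
import math

# Minimal nearest-neighbour linking of detected cell centroids between
# consecutive frames (baseline illustration only; not CGE's algorithm).

def link_frames(frames, max_dist=20.0):
    """frames: list of per-frame lists of (x, y) centroids. Returns tracks
    as lists of (frame_index, centroid); unmatched detections start new
    tracks, and tracks that miss a frame are not extended further."""
    tracks = [[(0, c)] for c in frames[0]]
    for fi in range(1, len(frames)):
        unmatched = list(frames[fi])
        for track in tracks:
            last_fi, (lx, ly) = track[-1]
            if last_fi != fi - 1 or not unmatched:
                continue
            d, best = min(
                ((math.hypot(x - lx, y - ly), (x, y)) for x, y in unmatched),
                key=lambda p: p[0])
            if d <= max_dist:
                track.append((fi, best))
                unmatched.remove(best)
        tracks.extend([[(fi, c)] for c in unmatched])
    return tracks
```

When a cell's reporter signal drops below the detection threshold for even one frame, its track is cut and a spurious new track begins, which is exactly the failure mode that motivates trajectory-based, semi-automatic tools such as CGE.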
Wu, Jiangyu; Feng, Meimei; Yu, Bangyong; Han, Guansheng
2018-01-01
It is important to study the mechanical properties of cracked rock to understand the engineering behavior of cracked rock masses. Accordingly, the influence of the length of pre-existing fissures on the strength, deformation, acoustic emission (AE) and failure characteristics of cracked rock specimens was analyzed, and the optimal selection of the strength parameter in engineering design was discussed. The results show that the strength parameters (stress at dilatancy onset and uniaxial compressive strength) and deformation parameters (axial strain and circumferential strain at dilatancy onset and at the peak point) of cracked rock specimens decrease with an increasing number of pre-existing fissures, and these relations can be fitted with a negative exponential function. Compared with the intact rock specimens, stress drops of different degrees were produced in the cracked rock specimens once the stress exceeded the dilatancy onset. At this moment, the cracked rock specimens exhibiting a stress drop did not fail instantaneously; instead, the circumferential strain, volumetric strain and AE signals increased abruptly. A yield platform was present in cracked rock specimens with pre-existing fissures longer than 23 mm: yield failure proceeded gradually around the inner tip of the pre-existing fissure, and the original fissures and new cracks developed fully in the rock. However, the time of dilatancy onset is always ahead of the time of the stress drop. This indicates that the stress at dilatancy onset can serve as the strength-design parameter in rock engineering, which can effectively prevent large deformations of the rock. Copyright © 2017 Elsevier B.V. All rights reserved.
Theory of self-resonance after inflation. I. Adiabatic and isocurvature Goldstone modes
NASA Astrophysics Data System (ADS)
Hertzberg, Mark P.; Karouby, Johanna; Spitzer, William G.; Becerra, Juana C.; Li, Lanqing
2014-12-01
We develop a theory of self-resonance after inflation. We study a large class of models involving multiple scalar fields with an internal symmetry. For illustration, we often specialize to dimension-four potentials, but we derive results for general potentials. This is the first part of a two-part series of papers. Here in Part 1 we especially focus on the behavior of long-wavelength modes, which are found to govern most of the important physics. Since the inflaton background spontaneously breaks the time-translation symmetry and the internal symmetry, we obtain Goldstone modes; these are the adiabatic and isocurvature modes. We find general conditions on the potential for when a large instability band exists for these modes at long wavelengths. For the adiabatic mode, this is determined by a sound speed derived from the time-averaged potential, while for the isocurvature mode, this is determined by a speed derived from a time-averaged auxiliary potential. Interestingly, we find that this instability band usually exists for one of these classes of modes, rather than both simultaneously. We focus on backgrounds that evolve radially in field space, as set up by inflation, and also mention circular orbits, as relevant to Q-balls. In Part 2 [M. P. Hertzberg et al., Phys. Rev. D 90, 123529 (2014)] we derive the central behavior from the underlying description of many-particle quantum mechanics, and introduce a weak breaking of the symmetry to study corrections to particle-antiparticle production from preheating.
Accelerated Seismic Release and Related Aspects of Seismicity Patterns on Earthquake Faults
NASA Astrophysics Data System (ADS)
Ben-Zion, Y.; Lyakhovsky, V.
Observational studies indicate that large earthquakes are sometimes preceded by phases of accelerated seismic release (ASR) characterized by cumulative Benioff strain following a power law time-to-failure relation with a term (tf - t)^m, where tf is the failure time of the large event and observed values of m are close to 0.3. We discuss properties of ASR and related aspects of seismicity patterns associated with several theoretical frameworks. The subcritical crack growth approach developed to describe deformation on a crack prior to the occurrence of dynamic rupture predicts great variability and low asymptotic values of the exponent m that are not compatible with observed ASR phases. Statistical physics studies assuming that system-size failures in a deforming region correspond to critical phase transitions predict establishment of long-range correlations of dynamic variables and power-law statistics before large events. Using stress and earthquake histories simulated by the model of Ben-Zion (1996) for a discrete fault with quenched heterogeneities in a 3-D elastic half space, we show that large model earthquakes are associated with nonrepeating cyclical establishment and destruction of long-range stress correlations, accompanied by nonstationary cumulative Benioff strain release. We then analyze results associated with a regional lithospheric model consisting of a seismogenic upper crust governed by the damage rheology of Lyakhovsky et al. (1997) over a viscoelastic substrate. We demonstrate analytically for a simplified 1-D case that the employed damage rheology leads to a singular power-law equation for strain proportional to (tf - t)^(-1/3), and a nonsingular power-law relation for cumulative Benioff strain proportional to (tf - t)^(1/3). A simple approximate generalization of the latter for regional cumulative Benioff strain is obtained by adding to the result a linear function of time representing a stationary background release.
To go beyond the analytical expectations, we examine results generated by various realizations of the regional lithospheric model producing seismicity following the characteristic frequency-size statistics, Gutenberg-Richter power-law distribution, and mode switching activity. We find that phases of ASR exist only when the seismicity preceding a given large event has broad frequency-size statistics. In such cases the simulated ASR phases can be fitted well by the singular analytical relation with m = -1/3, the nonsingular equation with m = 0.2, and the generalized version of the latter including a linear term with m = 1/3. The obtained good fits with all three relations highlight the difficulty of deriving reliable information on functional forms and parameter values from such data sets. The activation process in the simulated ASR phases is found to be accommodated both by increasing rates of moderate events and increasing average event size, with the former starting a few years earlier than the latter. The lack of ASR in portions of the seismicity not having broad frequency-size statistics may explain why some large earthquakes are preceded by ASR and others are not. The results suggest that observations of moderate and large events contain two complementary end-member predictive signals on the time of future large earthquakes. In portions of seismicity following the characteristic earthquake distribution, such information exists directly in the associated quasi-periodic temporal distribution of large events. In portions of seismicity having broad frequency-size statistics with random or clustered temporal distribution of large events, the ASR phases have predictive information. The extent to which natural seismicity may be understood in terms of these end-member cases remains to be clarified.
Continuing studies of evolving stress and other dynamic variables in model calculations combined with advanced analyses of simulated and observed seismicity patterns may lead to improvements in existing forecasting strategies.
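The nonsingular relation above, cumulative Benioff strain c(t) = A + B(tf - t)^(1/3) with a fixed exponent, is linear in A and B, so it can be fitted by ordinary least squares. The sketch below recovers known parameters from a synthetic, noiseless series; all values are illustrative, not data from the cited models.

```python
import numpy as np

def fit_benioff(t, c, tf, m=1.0 / 3.0):
    """Least-squares fit of c(t) = A + B*(tf - t)**m with the exponent m held
    fixed; the model is linear in A and B, so ordinary least squares suffices."""
    X = np.column_stack([np.ones_like(t), (tf - t) ** m])
    (A, B), *_ = np.linalg.lstsq(X, c, rcond=None)
    return A, B

# Synthetic, noiseless accelerating-release series with known parameters.
t = np.linspace(0.0, 9.5, 200)
c_obs = 5.0 - 2.0 * (10.0 - t) ** (1.0 / 3.0)  # A = 5, B = -2, tf = 10
A_hat, B_hat = fit_benioff(t, c_obs, tf=10.0)
```

In practice tf is unknown and would itself be a fit parameter, which is part of why the abstract finds that several functional forms fit the same data comparably well.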
webpic: A flexible web application for collecting distance and count measurements from images
2018-01-01
Despite increasing ability to store and analyze large amounts of data for organismal and ecological studies, the process of collecting distance and count measurements from images has largely remained time consuming and error-prone, particularly for tasks for which automation is difficult or impossible. Improving the efficiency of these tasks, which allows for more high quality data to be collected in a shorter amount of time, is therefore a high priority. The open-source web application, webpic, implements common web languages and widely available libraries and productivity apps to streamline the process of collecting distance and count measurements from images. In this paper, I introduce the framework of webpic and demonstrate one readily available feature of this application, linear measurements, using fossil leaf specimens. This application fills the gap between workflows accomplishable by individuals through existing software and those accomplishable by large, unmoderated crowds. It demonstrates that flexible web languages can be used to streamline time-intensive research tasks without the use of specialized equipment or proprietary software and highlights the potential for web resources to facilitate data collection in research tasks and outreach activities with improved efficiency. PMID:29608592
Performance analysis of a large-grain dataflow scheduling paradigm
NASA Technical Reports Server (NTRS)
Young, Steven D.; Wills, Robert W.
1993-01-01
A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while they maintain maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.
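The run-time scheduling idea described above, dispatching a computation as soon as all of its inputs in the static flow graph are available, can be sketched in a few lines. This is an illustrative toy, not the paper's simulator; the example graph, worker count, and `work` function are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# Static flow graph: node -> set of predecessor nodes (hypothetical example).
graph = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}, "e": {"c"}}

def run_dataflow(graph, work, workers=4):
    """Dispatch each node as soon as all of its predecessors have completed."""
    done, order = set(), []
    remaining = dict(graph)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {}

        def launch_ready():
            # Submit every node whose predecessors are all done.
            for node, preds in list(remaining.items()):
                if preds <= done:
                    del remaining[node]
                    futures[pool.submit(work, node)] = node

        launch_ready()
        while futures:
            finished, _ = wait(futures, return_when=FIRST_COMPLETED)
            for f in finished:
                node = futures.pop(f)
                done.add(node)
                order.append(node)
            launch_ready()  # completions may have unblocked new nodes
    return order

order = run_dataflow(graph, work=lambda n: n)
```

The schedule is dynamic (completion order decides dispatch order) even though the graph is static, which mirrors the macro-dataflow paradigm the abstract analyzes.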
Subsampled Hessian Newton Methods for Supervised Learning.
Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen
2015-08-01
Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
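The subsampled-Hessian idea the abstract builds on can be sketched for L2-regularized logistic regression: the gradient uses all rows, but the Hessian is formed from a random subset. This is a minimal sketch under assumed conventions (labels in {0, 1}, a unit step, toy data), not the authors' two-dimensional-subproblem refinement.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, X, y, lam=1.0):
    """Mean logistic loss plus L2 penalty, for labels y in {0, 1}."""
    z = X @ w
    return float(np.mean(y * np.logaddexp(0.0, -z)
                         + (1 - y) * np.logaddexp(0.0, z)) + 0.5 * lam * w @ w)

def subsampled_newton_step(w, X, y, sample_frac=0.3, lam=1.0, rng=None):
    """One Newton step: full gradient, Hessian approximated on a row subsample."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = X.shape[0]
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n + lam * w  # full gradient
    idx = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)
    D = p[idx] * (1 - p[idx])           # per-row curvature weights
    H = (X[idx] * D[:, None]).T @ X[idx] / len(idx) + lam * np.eye(len(w))
    return w - np.linalg.solve(H, grad)  # unit step along the Newton direction

# Toy data (illustrative): two classes in 2-D.
X_demo = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -0.5]])
y_demo = np.array([1.0, 1.0, 0.0, 0.0])
w0 = np.zeros(2)
w1 = subsampled_newton_step(w0, X_demo, y_demo)
```

Because the subsampled Hessian plus the L2 term is positive definite, the step is always a descent direction, even though it is cheaper and less accurate than the full Newton direction; that trade-off is exactly what the abstract's techniques aim to improve.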
Multiple choices of time in quantum cosmology
NASA Astrophysics Data System (ADS)
Małkiewicz, Przemysław
2015-07-01
It is often conjectured that a choice of time function merely sets up a frame for the quantum evolution of the gravitational field, meaning that all choices should be in some sense compatible. In order to explore this conjecture (and the meaning of compatibility), we develop suitable tools for determining the relation between quantum theories based on different time functions. First, we discuss how a time function fixes a canonical structure on the constraint surface. The presentation includes both the kinematical and the reduced perspective, and the relation between them. Second, we formulate twin theorems about the existence of two inequivalent maps between any two deparameterizations, a formal canonical and a coordinate one. They are used to separate the effects induced by choice of clock and other factors. We show, in an example, how the spectra of quantum observables are transformed under the change of clock and prove, via a general argument, the existence of choice-of-time-induced semiclassical effects. Finally, we study an example, in which we find that the semiclassical discrepancies can in fact be arbitrarily large for dynamical observables. We conclude that the values of critical energy density or critical volume in the bouncing scenarios of quantum cosmology cannot in general be at the Planck scale, and always need to be given with reference to a specific time function.
Testing for a cosmological influence on local physics using atomic and gravitational clocks
NASA Technical Reports Server (NTRS)
Adams, P. J.; Hellings, R. W.; Canuto, V. M.; Goldman, I.
1983-01-01
The existence of a possible influence of the large-scale structure of the universe on local physics is discussed. A particular realization of such an influence is discussed in terms of the behavior in time of atomic and gravitational clocks. Two natural categories of metric theories embodying a cosmic influence exist. The first category has geodesic equations of motion in atomic units, while the second category has geodesic equations of motion in gravitational units. Equations of motion for test bodies are derived for both categories of theories in the appropriate parametrized post-Newtonian limit and are applied to the Solar System. Ranging data to the Viking lander on Mars are of sufficient precision to reveal (1) if such a cosmological influence exists at the level of Hubble's constant, and (2) which category of theories is appropriate for a description of the phenomenon.
Fast half-sibling population reconstruction: theory and algorithms.
Dexter, Daniel; Brown, Daniel G
2013-07-12
Kinship inference is the task of identifying genealogically related individuals. Kinship information is important for determining mating structures, notably in endangered populations. Although many solutions exist for reconstructing full sibling relationships, few exist for half-siblings. We consider the problem of determining whether a proposed half-sibling population reconstruction is valid under Mendelian inheritance assumptions. We show that this problem is NP-complete and provide a 0/1 integer program that identifies the minimum number of individuals that must be removed from a population in order for the reconstruction to become valid. We also present SibJoin, a heuristic-based clustering approach based on Mendelian genetics, which is strikingly fast. The software is available at http://github.com/ddexter/SibJoin.git+. Our SibJoin algorithm is reasonably accurate and thousands of times faster than existing algorithms. The heuristic is used to infer a half-sibling structure for a population which was, until recently, too large to evaluate.
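The Mendelian-validity question can be illustrated at a single locus: a proposed half-sibling group is consistent only if some pair of alleles (the unknown shared parent's genotype) contributes one allele to every child. The brute-force single-locus check below is a simplification of the paper's multi-locus, NP-complete formulation; the integer allele encoding is hypothetical.

```python
from itertools import combinations_with_replacement

def valid_half_sib_group(genotypes):
    """One locus: does some parent genotype {a, b} pass an allele to every
    child? genotypes is a list of (allele, allele) pairs."""
    alleles = sorted({a for g in genotypes for a in g})
    for a, b in combinations_with_replacement(alleles, 2):
        if all(a in g or b in g for g in genotypes):
            return True
    return False

ok = valid_half_sib_group([(1, 2), (1, 3), (2, 4)])   # parent could be {1, 2}
bad = valid_half_sib_group([(1, 2), (3, 4), (5, 6)])  # no single parent works
```

Over many loci these constraints must hold simultaneously for one consistent parent, which is where the combinatorial hardness shown in the paper arises.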
Spatiotemporal property and predictability of large-scale human mobility
NASA Astrophysics Data System (ADS)
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential for human behavior prediction and recommendation, and for control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile-tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and high predictability. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
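A power-law dwelling-time distribution like the one reported can be checked with the standard continuous maximum-likelihood estimator for the exponent. The sketch below fits a single power-law segment to synthetic Pareto-distributed dwell times; the paper's two-segment form would additionally need a breakpoint model, and all parameters here are illustrative.

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Continuous power-law exponent MLE (Clauset-style):
    alpha = 1 + n / sum(ln(x_i / xmin)) over samples x_i >= xmin."""
    x = np.asarray([v for v in x if v >= xmin], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

rng = np.random.default_rng(1)
# Synthetic dwell times from p(x) ~ x^(-2.5), x >= 1, via inverse-CDF sampling.
u = rng.random(50_000)
dwell = (1.0 - u) ** (-1.0 / 1.5)  # survival (x)^-1.5 corresponds to alpha = 2.5
alpha_hat = powerlaw_mle(dwell, xmin=1.0)
```

The estimator recovers the generating exponent closely at this sample size, which is the kind of check needed before claiming Lévy-flight-like behavior from empirical dwell times.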
SLIDE - a web-based tool for interactive visualization of large-scale -omics data.
Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon
2018-06-28
Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow user interaction based real-time customization of graphics. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data in multiple resolutions in a single screen. SLIDE is publicly available under BSD license both as an online version as well as a stand-alone version at https://github.com/soumitag/SLIDE. Supplementary Information is available at Bioinformatics online.
Derivation of large-scale cellular regulatory networks from biological time series data.
de Bivort, Benjamin L
2010-01-01
Pharmacological agents and other perturbants of cellular homeostasis appear to nearly universally affect the activity of many genes, proteins, and signaling pathways. While this is due in part to nonspecificity of action of the drug or cellular stress, the large-scale self-regulatory behavior of the cell may also be responsible, as this typically means that when a cell switches states, dozens or hundreds of genes will respond in concert. If many genes act collectively in the cell during state transitions, rather than every gene acting independently, models of the cell can be created that are comprehensive of the action of all genes, using existing data, provided that the functional units in the model are collections of genes. Techniques to develop these large-scale cellular-level models are provided in detail, along with methods of analyzing them, and a brief summary of major conclusions about large-scale cellular networks to date.
Projection-free approximate balanced truncation of large unstable systems
NASA Astrophysics Data System (ADS)
Flinois, Thibault L. B.; Morgans, Aimee S.; Schmid, Peter J.
2015-08-01
In this article, we show that the projection-free, snapshot-based, balanced truncation method can be applied directly to unstable systems. We prove that even for unstable systems, the unmodified balanced proper orthogonal decomposition algorithm theoretically yields a converged transformation that balances the Gramians (including the unstable subspace). We then apply the method to a spatially developing unstable system and show that it results in reduced-order models of similar quality to the ones obtained with existing methods. Due to the unbounded growth of unstable modes, a practical restriction on the final impulse response simulation time appears, which can be adjusted depending on the desired order of the reduced-order model. Recommendations are given to further reduce the cost of the method if the system is large and to improve the performance of the method if it does not yield acceptable results in its unmodified form. Finally, the method is applied to the linearized flow around a cylinder at Re = 100 to show that it actually is able to accurately reproduce impulse responses for more realistic unstable large-scale systems in practice. The well-established approximate balanced truncation numerical framework therefore can be safely applied to unstable systems without any modifications. Additionally, balanced reduced-order models can readily be obtained even for large systems, where the computational cost of existing methods is prohibitive.
Discrete charge diagnostics on Pre-DIRECT COURSE
NASA Astrophysics Data System (ADS)
Guice, R. L.; Bryant, C.
1984-02-01
The Air Force Weapons Laboratory attempted to make 100 time-of-arrival measurements on Pre-DIRECT COURSE. With an 88 percent success rate, the detonation wave propagation within the charge was measured. The top and bottom hemispheres detonated at two different rates. However, the detonation velocities were well within the existing data base for Ammonium-Nitrate Fuel Oil charges. One large jet was observed on the charge but its location should not have caused any problems for ground level measurements. Twenty experimental time-of-arrival crystals were also fielded; however, the results are questionable due to the grounding system of the support structure.
Implementing a Reliability Centered Maintenance Program at NASA's Kennedy Space Center
NASA Technical Reports Server (NTRS)
Tuttle, Raymond E.; Pete, Robert R.
1998-01-01
Maintenance practices have long focused on time-based "preventive maintenance" techniques. Components were changed out and parts replaced based on how long they had been in place instead of what condition they were in. A reliability centered maintenance (RCM) program seeks to offer equal or greater reliability at decreased cost by ensuring only applicable, effective maintenance is performed and by largely replacing time-based maintenance with condition-based maintenance. A significant portion of this program involved introducing non-intrusive technologies, such as vibration analysis, oil analysis and I/R cameras, to an existing labor force and management team.
Entropy and time: A search for Denning's resting place
NASA Astrophysics Data System (ADS)
Beech, Martin
2013-04-01
The interminable scientific literature reveals William Frederick Denning (1848-1931) as one of the great practitioners of meteor astronomy: he wrote widely on the subject and dedicated innumerable hours to his observations. But who was Denning? What can we learn of his life, living and death? Glimpses of Denning the man do exist, but he is largely a man of translucency and unknowns. The journey recounted here reflects upon a recent search for Denning's final resting place, but, once again, it is found that time and circumstance have erased virtually all of the physical history.
Mechanical behaviour of cerclage material consisting of silicon rubber.
Hinrichsen, G; Eberhardt, A; Springer, H
1979-09-01
Silicon rubber specimens of circular or rectangular cross-section (cross-section area between ca. 2 and 7 mm2) are used as cerclage bands. A series of commercial cerclage elements was investigated for mechanical characteristics, such as stress-strain behaviour and modulus of elasticity, using a tensile-testing machine. Large differences in these properties exist among the various specimens. Moreover, time-dependent effects, such as stress-relaxation, retardation, and creep, were analysed by the present investigations. One has to take into consideration that the initial length and stress of the cerclage band vary significantly with time after the operation.
Long-lived oscillons from asymmetric bubbles: Existence and stability
NASA Astrophysics Data System (ADS)
Adib, Artur B.; Gleiser, Marcelo; Almeida, Carlos A.
2002-10-01
The possibility that extremely long-lived, time-dependent, and localized field configurations ("oscillons") arise during the collapse of asymmetrical bubbles in (2+1)-dimensional φ4 models is investigated. It is found that oscillons can develop from a large spectrum of elliptically deformed bubbles. Moreover, we provide numerical evidence that such oscillons are (a) circularly symmetric and (b) linearly stable against small arbitrary radial and angular perturbations. The latter is based on a dynamical approach designed to investigate the stability of nonintegrable time-dependent configurations that is capable of probing slowly growing instabilities not seen through the usual "spectral" method.
miBLAST: scalable evaluation of a batch of nucleotide sequence queries with BLAST
Kim, You Jung; Boyd, Andrew; Athey, Brian D.; Patel, Jignesh M.
2005-01-01
A common task in many modern bioinformatics applications is to match a set of nucleotide query sequences against a large sequence dataset. Existing tools, such as BLAST, are designed to evaluate a single query at a time and can be unacceptably slow when the number of sequences in the query set is large. In this paper, we present a new algorithm, called miBLAST, that evaluates such batch workloads efficiently. At the core, miBLAST employs a q-gram filtering and an index join for efficiently detecting similarity between the query sequences and database sequences. This set-oriented technique, which indexes both the query and the database sets, results in substantial performance improvements over existing methods. Our results show that miBLAST is significantly faster than BLAST in many cases. For example, miBLAST aligned 247 965 oligonucleotide sequences in the Affymetrix probe set against the Human UniGene in 1.26 days, compared with 27.27 days with BLAST (an improvement by a factor of 22). The relative performance of miBLAST increases for larger word sizes; however, it decreases for longer queries. miBLAST employs the familiar BLAST statistical model and output format, guaranteeing the same accuracy as BLAST and facilitating a seamless transition for existing BLAST users. PMID:16061938
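The q-gram filtering stage can be sketched as follows: index the database once, then shortlist candidates for each query by counting shared q-grams, leaving expensive alignment for the survivors only. The parameters (`q`, `min_shared`) and toy sequences below are illustrative; miBLAST's actual index join over both query and database sets differs.

```python
from collections import defaultdict

def qgrams(seq, q):
    """All length-q substrings of seq, as a set."""
    return {seq[i:i + q] for i in range(len(seq) - q + 1)}

def build_index(db, q=4):
    """Map each q-gram to the set of database sequence ids containing it."""
    index = defaultdict(set)
    for sid, seq in db.items():
        for g in qgrams(seq, q):
            index[g].add(sid)
    return index

def candidates(query, index, q=4, min_shared=2):
    """Filter: keep database sequences sharing >= min_shared q-grams."""
    counts = defaultdict(int)
    for g in qgrams(query, q):
        for sid in index.get(g, ()):
            counts[sid] += 1
    return {sid for sid, c in counts.items() if c >= min_shared}

db = {"s1": "ACGTACGTAC", "s2": "TTTTGGGGCC"}  # toy database, hypothetical ids
hits = candidates("ACGTACG", build_index(db))
```

Because the index is built once and shared across the whole query batch, the per-query cost is dominated by cheap hash lookups, which is the set-oriented advantage the abstract describes.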
Coleman, C Norman; Blumenthal, Daniel J; Casto, Charles A; Alfant, Michael; Simon, Steven L; Remick, Alan L; Gepford, Heather J; Bowman, Thomas; Telfer, Jana L; Blumenthal, Pamela M; Noska, Michael A
2013-04-01
Resilience after a nuclear power plant or other radiation emergency requires response and recovery activities that are appropriately safe, timely, effective, and well organized. Timely informed decisions must be made, and the logic behind them communicated during the evolution of the incident before the final outcome is known. Based on our experiences in Tokyo responding to the Fukushima Daiichi nuclear power plant crisis, we propose a real-time, medical decision model by which to make key health-related decisions that are central drivers to the overall incident management. Using this approach, on-site decision makers empowered to make interim decisions can act without undue delay using readily available and high-level scientific, medical, communication, and policy expertise. Ongoing assessment, consultation, and adaptation to the changing conditions and additional information are additional key features. Given the central role of health and medical issues in all disasters, we propose that this medical decision model, which is compatible with the existing US National Response Framework structure, be considered for effective management of complex, large-scale, and large-consequence incidents.
Morrison, James J; Hostetter, Jason; Wang, Kenneth; Siegel, Eliot L
2015-02-01
Real-time mining of large research trial datasets enables development of case-based clinical decision support tools. Several applicable research datasets exist including the National Lung Screening Trial (NLST), a dataset unparalleled in size and scope for studying population-based lung cancer screening. Using these data, a clinical decision support tool was developed which matches patient demographics and lung nodule characteristics to a cohort of similar patients. The NLST dataset was converted into Structured Query Language (SQL) tables hosted on a web server, and a web-based JavaScript application was developed which performs real-time queries. JavaScript is used for both the server-side and client-side language, allowing for rapid development of a robust client interface and server-side data layer. Real-time data mining of user-specified patient cohorts achieved a rapid return of cohort cancer statistics and lung nodule distribution information. This system demonstrates the potential of individualized real-time data mining using large high-quality clinical trial datasets to drive evidence-based clinical decision-making.
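The described architecture, trial data in SQL tables queried in real time for a matched cohort, can be sketched with an in-memory SQLite stand-in. The schema and rows below are hypothetical and do not reflect the actual NLST data dictionary; they only show the shape of a cohort-matching query.

```python
import sqlite3

# In-memory stand-in for the trial tables (schema and rows are hypothetical).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE participants (
    id INTEGER PRIMARY KEY, age INTEGER, pack_years REAL,
    nodule_mm REAL, cancer INTEGER)""")
rows = [(1, 62, 40, 8.0, 0), (2, 66, 55, 12.5, 1),
        (3, 59, 35, 4.0, 0), (4, 70, 60, 15.0, 1), (5, 64, 45, 9.5, 0)]
con.executemany("INSERT INTO participants VALUES (?,?,?,?,?)", rows)

def cohort_cancer_rate(con, age_lo, age_hi, nodule_lo, nodule_hi):
    """Cancer frequency in the cohort matching the given demographics
    and nodule-size range; parameterized to keep the query injection-safe."""
    n, k = con.execute(
        """SELECT COUNT(*), COALESCE(SUM(cancer), 0) FROM participants
           WHERE age BETWEEN ? AND ? AND nodule_mm BETWEEN ? AND ?""",
        (age_lo, age_hi, nodule_lo, nodule_hi)).fetchone()
    return k / n if n else None

rate = cohort_cancer_rate(con, 60, 70, 8.0, 16.0)
```

With an index on the filter columns, the same pattern scales to the full trial table, which is what lets the web application return cohort statistics interactively.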
Multi-messenger studies of compact binary mergers in the ngVLA era
NASA Astrophysics Data System (ADS)
Corsi, Alessandra
2018-01-01
We explore some of the scientific opportunities that the next generation Very Large Array (ngVLA) will open in the field of multi-messenger time-domain astronomy. We focus on compact binary mergers, golden astrophysical targets of ground-based gravitational wave (GW) detectors such as advanced LIGO. A decade from now, a large number of these mergers is likely to be discovered by a world-wide network of GW detectors. We discuss how a radio array with 10 times the sensitivity of the current Karl G. Jansky VLA and 10 times the resolution, would enable resolved radio continuum studies of binary merger hosts, probing regions of the galaxy undergoing star formation (which can be heavily obscured by dust and gas), AGN components, and mapping the offset distribution of the mergers with respect to the host galaxy light. For compact binary mergers containing at least one neutron star (NS), from which electromagnetic counterparts are expected to exist, we show how the ngVLA would enable direct size measurements of the relativistic merger ejecta and probe, for the first time directly, their dynamics.
Bell's theorem and the problem of decidability between the views of Einstein and Bohr.
Hess, K; Philipp, W
2001-12-04
Einstein, Podolsky, and Rosen (EPR) have designed a gedanken experiment that suggested a theory that was more complete than quantum mechanics. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete nor on the existence of hidden parameters. However, the well known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that the mathematical model of Bell excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.
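The gap between local hidden-variable models and quantum predictions that the Bell inequalities formalize can be made concrete with the CHSH combination: enumerating every deterministic local strategy gives the classical bound of 2, while singlet-state correlations reach 2√2. This is the textbook illustration only, not the authors' extended hidden-variable construction with time-like correlated parameters.

```python
import itertools
import math

def chsh_classical_max():
    """Enumerate all deterministic local strategies a(x), b(y) in {-1, +1}
    for settings x, y in {0, 1} and maximize the CHSH combination."""
    best = -4
    for a0, a1, b0, b1 in itertools.product((-1, 1), repeat=4):
        s = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
        best = max(best, s)
    return best

def chsh_quantum(a0=0.0, a1=math.pi / 2, b0=math.pi / 4, b1=-math.pi / 4):
    """Singlet-state correlation E(a, b) = -cos(a - b) at the standard
    optimal measurement angles, reaching Tsirelson's bound |S| = 2*sqrt(2)."""
    E = lambda a, b: -math.cos(a - b)
    return E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
```

The enumeration makes the inequality's assumption explicit: each strategy assigns outcomes for all settings at once, which is exactly the kind of constraint the abstract argues excludes a large class of hidden-variable models.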
Geographic variation in marine invasions among large estuaries: effects of ships and time.
Ruiz, Gregory M; Fofonoff, Paul W; Ashton, Gail; Minton, Mark S; Miller, A Whitman
2013-03-01
Coastal regions exhibit strong geographic patterns of nonnative species richness. Most invasions in marine ecosystems are known from bays and estuaries, where ship-mediated transfers (on hulls or in ballasted materials) have been a dominant vector of species introductions. Conspicuous spatial differences in nonnative species richness exist among bays, but the quantitative relationship between invasion magnitude and shipping activity across sites is largely unexplored. Using data on marine invasions (for invertebrates and algae) and commercial shipping across 16 large bays in the United States, we estimated (1) geographic variation in nonnative species richness attributed to ships, controlling for effects of salinity and other vectors, (2) changes through time in geographic variation of these ship-mediated invasions, and (3) effects of commercial ship traffic and ballast water discharge magnitude on nonnative species richness. For all nonnative species together (regardless of vector, salinity, or time period), species richness differed among U.S. coasts, being significantly greater for Pacific Coast bays than Atlantic or Gulf Coast bays. This difference also existed when considering only species attributed to shipping (or ballast water), controlling for time and salinity. Variation in nonnative species richness among Pacific Coast bays was strongly affected by these same criteria. San Francisco Bay, California, had over 200 documented nonnative species, more than twice that reported for other bays, but many species were associated with other (non-shipping) vectors or the extensive low-salinity habitats (unavailable in some bays). When considering only ship- or ballast-mediated introductions in high-salinity waters, the rate of newly detected invasions in San Francisco Bay has converged increasingly through time on that for other Pacific Coast bays, appearing no different since 1982. 
Considering all 16 bays together, there was no relationship between either (1) number of ship arrivals (from foreign ports) and number of introductions attributed to ships since 1982 or (2) volume of foreign ballast water discharge and number of species attributed to ballast water since 1982. These shipping measures are likely poor proxies for propagule supply, although they are sometimes used as such, highlighting a fundamental gap in data needed to evaluate invasion dynamics and management strategies.
Only time will tell: the changing relationships between LMX, job performance, and justice.
Park, Sanghee; Sturman, Michael C; Vanderpool, Chelsea; Chan, Elisa
2015-05-01
Although it has been argued that leader-member exchange (LMX) is a phenomenon that develops over time, the existing LMX literature is largely cross-sectional in nature. Yet, there is a great need for unraveling how LMX develops over time. To address this issue in the LMX literature, we examine the relationships of LMX with 2 variables known for changing over time: job performance and justice perceptions. On the basis of current empirical findings, a simulation deductively shows that LMX develops over time, but differently in early stages versus more mature stages. Our findings also indicate that performance and justice trends affect LMX. Implications for LMX theory and for longitudinal research on LMX, performance, and justice are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Experimental quantification of nonlinear time scales in inertial wave rotating turbulence
NASA Astrophysics Data System (ADS)
Yarom, Ehud; Salhov, Alon; Sharon, Eran
2017-12-01
We study nonlinearities of inertial waves in rotating turbulence. At small Rossby numbers the kinetic energy in the system is contained in helical inertial waves with time-dependent amplitudes. In this regime the time scales of amplitude variation are slow compared to wave periods, and the spectrum is concentrated along the dispersion relation of the waves. A nonlinear time scale was extracted from the width of the spectrum, which reflects the intensity of nonlinear wave interactions. This nonlinear time scale is found to be proportional to (U·k)^(-1), where k is the wave vector and U is the root-mean-square horizontal velocity, which is dominated by large scales. This correlation, which indicates the existence of turbulence in which inertial waves undergo weak nonlinear interactions, persists only for small Rossby numbers.
Chaotic trajectories in the standard map. The concept of anti-integrability
NASA Astrophysics Data System (ADS)
Aubry, Serge; Abramovici, Gilles
1990-07-01
A rigorous proof is given in the standard map (associated with a Frenkel-Kontorova model) for the existence of chaotic trajectories with unbounded momenta for large enough coupling constant k > k0. These chaotic trajectories (with finite entropy per site) are coded by integer sequences {m_i} such that the sequence b_i = |m_{i+1} + m_{i-1} - 2m_i| is bounded by some integer b. The bound k0 on k depends on b and can be lowered for coding sequences {m_i} fulfilling more restrictive conditions. The obtained chaotic trajectories correspond to stationary configurations of the Frenkel-Kontorova model with a finite (non-zero) phonon gap (called the gap parameter in dimensionless units). This property implies that the trajectory (or the configuration {u_i}) can be uniquely continued as a uniformly continuous function of the model parameter k in some neighborhood of the initial configuration. A non-zero gap parameter implies that the Lyapunov coefficient is strictly positive (when it is defined). In addition, the existence of dilating and contracting manifolds is proven for these chaotic trajectories. “Exotic” trajectories, such as ballistic trajectories, are also proven to exist as a consequence of these theorems. The concept of anti-integrability emerges from these theorems. In the anti-integrable limit, which can only be defined for a discrete-time dynamical system, the coordinates of the trajectory at time i do not depend on the coordinates at time i - 1. Thus, at this singular limit, the existence of chaotic trajectories is trivial and the dynamical system reduces to a Bernoulli shift. It is well known that the KAM tori of symplectic dynamical systems originate by continuity from the invariant tori which exist in the integrable limit (under certain conditions). In a similar way, it appears that the chaotic trajectories of dynamical systems originate by continuity from those which exist at the anti-integrable limits (also under certain conditions).
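The dynamics discussed in this abstract can be illustrated numerically. The sketch below iterates the Chirikov standard map at strong coupling and shows the sensitive dependence on initial conditions that accompanies chaotic trajectories; the coupling value and initial conditions are illustrative choices, not those used in the paper's proofs.

```python
import math

def standard_map(x, p, k, n):
    """Iterate the Chirikov standard map n times:
        p_{i+1} = p_i + k*sin(x_i);  x_{i+1} = (x_i + p_{i+1}) mod 2*pi
    The momentum p is left unwrapped so that unbounded growth remains
    visible. Returns the trajectory as a list of (x, p) pairs."""
    traj = [(x, p)]
    for _ in range(n):
        p = p + k * math.sin(x)
        x = (x + p) % (2.0 * math.pi)
        traj.append((x, p))
    return traj

# Two nearby initial conditions at strong coupling (k well above the
# chaotic threshold): their separation in x grows rapidly, consistent
# with a positive Lyapunov coefficient.
k = 7.0
a = standard_map(1.0, 0.5, k, 30)
b = standard_map(1.0 + 1e-9, 0.5, k, 30)
max_sep = max(abs(xa - xb) for (xa, _), (xb, _) in zip(a, b))
```

At k = 0 the map is integrable (p is conserved), which makes the contrast with the strongly coupled, effectively anti-integrable regime easy to check numerically.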
2014-09-30
...relatively small for quantitative comparisons and some of the deployed tags are still transmitting, their overall performance appears to have improved.
Advanced Software Development Workstation Project, phase 3
NASA Technical Reports Server (NTRS)
1991-01-01
ACCESS provides a generic capability to develop software information system applications which are explicitly intended to facilitate software reuse. In addition, it provides the capability to retrofit existing large applications with a user-friendly front end for preparation of input streams in a way that will reduce required training time, improve the productivity of even experienced users, and increase accuracy. Current and past work shows that ACCESS will be scalable to much larger object bases.
Trinity Bay Study: Dye tracing experiments
NASA Technical Reports Server (NTRS)
Ward, G. H., Jr.
1972-01-01
An analysis of the heat balance and temperature distribution within Trinity Bay near Galveston, Texas is presented. The effects of tidal currents, wind driven circulations, and large volume inflows are examined. Emphasis is placed on the effects of turbulent diffusion and local shears in currents. The technique of dye tracing to determine the parameters characterizing dispersion is described. Aerial photographs and maps are provided to show the flow conditions existing at different times and seasons.
General Recommendations on Fatigue Risk Management for the Canadian Forces
2010-04-01
missions performed in aviation require an individual(s) to process large amounts of information in a short period of time and to do this on a continuous...information processing required during sustained operations can deteriorate an individual’s ability to perform a task. Given the high operational tempo...memory, which, in turn, is utilized to perform human thought processes (Baddeley, 2003). While various versions of this theory exist, they all share
Unification of small and large time scales for biological evolution: deviations from power law.
Chowdhury, Debashish; Stauffer, Dietrich; Kunwar, Ambarish
2003-02-14
We develop a unified model that describes both "micro" and "macro" evolutions within a single theoretical framework. The ecosystem is described as a dynamic network; the population dynamics at each node of this network describes the "microevolution" over ecological time scales (i.e., birth, ageing, and natural death of individual organisms), while the appearance of new nodes, the slow changes of the links, and the disappearance of existing nodes account for the "macroevolution" over geological time scales (i.e., the origination, evolution, and extinction of species). In contrast to several earlier claims in the literature, we observe strong deviations from power law in the regime of long lifetimes.
Monitoring of changes in cluster structures in water under AC magnetic field
NASA Astrophysics Data System (ADS)
Usanov, A. D.; Ulyanov, S. S.; Ilyukhina, N. S.; Usanov, D. A.
2016-01-01
A fundamental possibility of visualizing cluster structures formed in distilled water by an optical method based on the analysis of dynamic speckle structures is demonstrated. It is shown for the first time that, in contrast to the existing concepts, water clusters can be rather large (up to 200 μm in size), and their lifetime is several tens of seconds. These clusters are found to have an internal spatially inhomogeneous structure, constantly changing in time. The properties of magnetized and non-magnetized water are found to differ significantly. In particular, the number of clusters formed in magnetized water is several times larger than that formed in the same volume of non-magnetized water.
Barnett, Elizabeth; Spruijt-Metz, Donna; Unger, Jennifer B.; Rohrbach, Louise Ann; Sun, Ping; Sussman, Steve
2014-01-01
We examined whether a bidirectional, longitudinal relationship exists between future time perspective (FTP), measured with the Zimbardo Time Perspective Inventory, and any past 30-day use of alcohol, tobacco, marijuana, or hard drugs among continuation high school students (N = 1,310, mean age 16.8 years) in a large urban area. We found increased FTP to be protective against drug use for all substances except alcohol, while any baseline use of substances did not predict changes in FTP 1 year later. The discussion explores why alcohol findings may differ from other substances. Future consideration of FTP as a mediator of program effects is explored. PMID:23750661
NASA Astrophysics Data System (ADS)
Chu, Y. X.; Liang, X. Y.; Yu, L. H.; Xu, L.; Lu, X. M.; Liu, Y. Q.; Leng, Y. X.; Li, R. X.; Xu, Z. Z.
2013-05-01
Theoretical and experimental investigations are carried out to determine the influence of the time delay between the input seed pulse and pump pulses on transverse parasitic lasing in a Ti:sapphire amplifier with a diameter of 80 mm, which is clad by a refractive index-matched liquid doped with an absorber. When the time delay is optimized, a maximum output energy of 50.8 J is achieved at a pump energy of 105 J, which corresponds to a conversion efficiency of 47.5%. Based on the existing compressor, the laser system achieves a peak power of 1.26 PW with a 29.0 fs pulse duration.
Soil Water Content Sensors as a Method of Measuring Ice Depth
NASA Astrophysics Data System (ADS)
Whitaker, E.; Reed, D. E.; Desai, A. R.
2015-12-01
Lake ice depth provides important information about local and regional climate change, weather patterns, and recreational safety, as well as impacting in situ ecology and carbon cycling. However, it is challenging to measure ice depth continuously from a remote location, as existing methods are too large, expensive, and/or time-intensive. Therefore, we present a novel application that reduces the size and cost issues by using soil water content reflectometer sensors. Analysis of sensors deployed in an environmental chamber using a scale model of a lake demonstrated their value as accurate measures of the change in ice depth over any time period, through measurement of the liquid-to-solid phase change. A robust correlation exists between volumetric water content in time as a function of environmental temperature. This relationship allows us to convert volumetric water content into ice depth. An array of these sensors will be placed in Lake Mendota, Madison, Wisconsin in winter 2015-2016, to create a temporally high-resolution ice depth record, which will be used for ecological or climatological studies while also being transmitted to the public to increase recreational safety.
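The conversion step the abstract describes (volumetric water content into ice depth) can be sketched as follows. This is a hypothetical reduction assuming a vertical array of reflectometers at known depths and a fixed VWC threshold marking the liquid-to-solid phase change; the actual calibration used in the study is not given here.

```python
def ice_depth_from_vwc(depths_cm, vwc, frozen_vwc=0.10):
    """Estimate ice depth from a vertical array of water-content
    reflectometers. Liquid water dominates the dielectric signal, so a
    sensor embedded in ice reports a low volumetric water content (VWC).
    Assuming ice grows downward from the surface, the estimate is the
    midpoint between the deepest 'frozen' sensor and the shallowest
    'liquid' one. The threshold and geometry are illustrative
    assumptions, not the calibration used in the study."""
    frozen = [d for d, v in zip(depths_cm, vwc) if v < frozen_vwc]
    liquid = [d for d, v in zip(depths_cm, vwc) if v >= frozen_vwc]
    if not frozen:
        return 0.0                      # no sensor frozen: no ice yet
    if not liquid:
        return max(depths_cm)           # ice at least as deep as array
    return (max(frozen) + min(liquid)) / 2.0

# Example: sensors at 5, 15, 25, 35 cm; the top two read frozen.
depth = ice_depth_from_vwc([5, 15, 25, 35], [0.02, 0.05, 0.30, 0.32])
```

Repeating this estimate over a winter-long VWC time series would yield the temporally high-resolution ice depth record the deployment aims for.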
Bounded energy states in homogeneous turbulent shear flow: An alternative view
NASA Technical Reports Server (NTRS)
Bernard, Peter S.; Speziale, Charles G.
1990-01-01
The equilibrium structure of homogeneous turbulent shear flow is investigated from a theoretical standpoint. Existing turbulence models, in apparent agreement with physical and numerical experiments, predict an unbounded exponential time growth of the turbulent kinetic energy and dissipation rate; only the anisotropy tensor and turbulent time scale reach a structural equilibrium. It is shown that if vortex stretching is accounted for in the dissipation rate transport equation, then there can exist equilibrium solutions, with bounded energy states, where the turbulence production is balanced by its dissipation. Illustrative calculations are presented for a k-epsilon model modified to account for vortex stretching. The calculations indicate an initial exponential time growth of the turbulent kinetic energy and dissipation rate for elapsed times that are as large as those considered in any of the previously conducted physical or numerical experiments on homogeneous shear flow. However, vortex stretching eventually takes over and forces a production-equals-dissipation equilibrium with bounded energy states. The validity of this result is further supported by an independent theoretical argument. It is concluded that the generally accepted structural equilibrium for homogeneous shear flow with unbounded component energies is in need of re-examination.
An empirical method for estimating travel times for wet volcanic mass flows
Pierson, Thomas C.
1998-01-01
Travel times for wet volcanic mass flows (debris avalanches and lahars) can be forecast as a function of distance from source when the approximate flow rate (peak discharge near the source) can be estimated beforehand. The near-source flow rate is primarily a function of initial flow volume, which should be possible to estimate to an order of magnitude on the basis of geologic, geomorphic, and hydrologic factors at a particular volcano. Least-squares best fits to plots of flow-front travel time as a function of distance from source provide predictive second-degree polynomial equations with high coefficients of determination for four broad size classes of flow based on near-source flow rate: extremely large flows (>1 000 000 m3/s), very large flows (10 000–1 000 000 m3/s), large flows (1000–10 000 m3/s), and moderate flows (100–1000 m3/s). A strong nonlinear correlation that exists between initial total flow volume and flow rate for "instantaneously" generated debris flows can be used to estimate near-source flow rates in advance. Differences in geomorphic controlling factors among different flows in the data sets have relatively little effect on the strong nonlinear correlations between travel time and distance from source. Differences in flow type may be important, especially for extremely large flows, but this could not be evaluated here. At a given distance away from a volcano, travel times can vary by approximately an order of magnitude depending on flow rate. The method can provide emergency-management officials a means for estimating time windows for evacuation of communities located in hazard zones downstream from potentially hazardous volcanoes.
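The core fitting step of this empirical method, a least-squares second-degree polynomial of flow-front travel time against distance from source, can be sketched with NumPy. The distance/travel-time pairs below are invented for illustration, not Pierson's data, and stand in for one flow-size class.

```python
import numpy as np

# Hypothetical (distance km, travel-time min) pairs for one flow-size
# class -- illustrative values only, not the published data sets.
distance = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0])
travel_time = np.array([4.0, 9.0, 21.0, 50.0, 85.0, 125.0])

# Least-squares second-degree polynomial, as in the empirical method:
#   t(d) = a*d**2 + b*d + c
coeffs = np.polyfit(distance, travel_time, deg=2)
predict = np.poly1d(coeffs)

# Coefficient of determination R^2 for the fit
resid = travel_time - predict(distance)
r2 = 1.0 - resid.var() / travel_time.var()
```

In practice one such polynomial would be fit per flow-rate class (moderate through extremely large), and `predict(d)` would give the forecast travel time to a community located distance `d` downstream.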
Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim
2017-06-15
Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, where a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale, as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms, existing methods, even for arbitrary locations.
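The left-censored normal likelihood at the heart of such a model can be sketched as follows. The synthetic square-root precipitation data, the censoring threshold at zero, and the coarse grid search standing in for a proper optimizer are all illustrative assumptions, not the study's actual estimation setup.

```python
import math
import numpy as np

def censored_normal_nll(mu, sigma, y, threshold=0.0):
    """Negative log-likelihood of a normal distribution left-censored at
    `threshold`: observations at/below the threshold (dry days, after a
    square-root transform) contribute the CDF mass below it, positive
    observations contribute the usual density."""
    censored = y <= threshold
    z = (threshold - mu) / sigma
    log_cdf = math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
    ll = log_cdf * censored.sum()
    resid = (y[~censored] - mu) / sigma
    ll += (-0.5 * resid**2 - math.log(sigma)
           - 0.5 * math.log(2.0 * math.pi)).sum()
    return -ll

# Synthetic sqrt-precipitation: latent normal, clipped at zero so that
# roughly a third of the "days" are dry (censored).
rng = np.random.default_rng(1)
y = np.clip(rng.normal(0.5, 1.0, size=5000), 0.0, None)

# Coarse grid search over (mu, sigma) stands in for a real optimizer.
best = min((censored_normal_nll(m, s, y), m, s)
           for m in np.arange(0.0, 1.01, 0.05)
           for s in np.arange(0.5, 1.51, 0.05))
_, mu_hat, sigma_hat = best
```

The fitted `(mu_hat, sigma_hat)` recover the latent parameters despite the large fraction of exact zeros, which is precisely what an uncensored normal fit would get wrong.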
Turbulent Superstructures in Rayleigh-Bénard convection at different Prandtl number
NASA Astrophysics Data System (ADS)
Schumacher, Jörg; Pandey, Ambrish; Ender, Martin; Westermann, Rüdiger; Scheel, Janet D.
2017-11-01
Large-scale patterns of the temperature and velocity field in horizontally extended cells can be considered as turbulent superstructures in Rayleigh-Bénard convection (RBC). These structures are obtained once the turbulent fluctuations are removed by a finite-time average. Their existence has been reported, for example, in Bailon-Cuba et al. This large-scale order bears a strong similarity with the well-studied patterns from the weakly nonlinear regime at lower Rayleigh number in RBC. In the present work we analyze the superstructures of RBC at Prandtl numbers between Pr = 0.005 for liquid sodium and 7 for water. The characteristic evolution time scales, the typical spatial extension of the rolls, and the properties of the defects of the resulting superstructure patterns are analyzed. Data are obtained from well-resolved spectral element direct numerical simulations. The work is supported by the Priority Programme SPP 1881 of the Deutsche Forschungsgemeinschaft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, B. R.; Millan, R. M.; Reeves, G. D.
We report that past studies of radiation belt relativistic electrons have favored active storm time periods, while the effects of small geomagnetic storms (Dst > -50 nT) have not been statistically characterized. In this timely study, given the current weak solar cycle, we identify 342 small storms from 1989 through 2000 and quantify the corresponding change in relativistic electron flux at geosynchronous orbit. Surprisingly, small storms can be equally as effective as large storms at enhancing and depleting fluxes. Slight differences exist, as small storms are 10% less likely to result in flux enhancement and 10% more likely to result in flux depletion than large storms. Nevertheless, it is clear that neither acceleration nor loss mechanisms scale with storm drivers as would be expected. Small geomagnetic storms play a significant role in radiation belt relativistic electron dynamics and provide opportunities to gain new insights into the complex balance of acceleration and loss processes.
Acceleration and loss of relativistic electrons during small geomagnetic storms.
Anderson, B R; Millan, R M; Reeves, G D; Friedel, R H W
2015-12-16
Past studies of radiation belt relativistic electrons have favored active storm time periods, while the effects of small geomagnetic storms (Dst > -50 nT) have not been statistically characterized. In this timely study, given the current weak solar cycle, we identify 342 small storms from 1989 through 2000 and quantify the corresponding change in relativistic electron flux at geosynchronous orbit. Surprisingly, small storms can be equally as effective as large storms at enhancing and depleting fluxes. Slight differences exist, as small storms are 10% less likely to result in flux enhancement and 10% more likely to result in flux depletion than large storms. Nevertheless, it is clear that neither acceleration nor loss mechanisms scale with storm drivers as would be expected. Small geomagnetic storms play a significant role in radiation belt relativistic electron dynamics and provide opportunities to gain new insights into the complex balance of acceleration and loss processes.
Extreme reaction times determine fluctuation scaling in human color vision
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-11-01
In modern mental chronometry, human reaction time defines the time elapsed from stimulus presentation until a response occurs and represents a reference paradigm for investigating stochastic latency mechanisms in color vision. Here we examine the statistical properties of extreme reaction times and whether they support fluctuation scaling in the skewness-kurtosis plane. Reaction times were measured for visual stimuli across the cardinal directions of the color space. For all subjects, the results show that very large reaction times deviate from the right tail of reaction time distributions, suggesting the existence of dragon-king events. The results also indicate that extreme reaction times are correlated and shape fluctuation scaling over a wide range of stimulus conditions. The scaling exponent was higher for achromatic than for isoluminant stimuli, suggesting distinct generative mechanisms. Our findings open a new perspective for studying failure modes in sensory-motor communications and in complex networks.
NASA Astrophysics Data System (ADS)
Rak, Rafał; Drożdż, Stanisław; Kwapień, Jarosław; Oświȩcimka, Paweł
2015-11-01
We consider a few quantities that characterize trading on a stock market in a fixed time interval: logarithmic returns, volatility, trading activity (i.e., the number of transactions), and volume traded. We search for the power-law cross-correlations among these quantities aggregated over different time units from 1 min to 10 min. Our study is based on empirical data from the American stock market consisting of tick-by-tick recordings of 31 stocks listed in Dow Jones Industrial Average during the years 2008-2011. Since all the considered quantities except the returns show strong daily patterns related to the variable trading activity in different parts of a day, which are the most evident in the autocorrelation function, we remove these patterns by detrending before we proceed further with our study. We apply the multifractal detrended cross-correlation analysis with sign preserving (MFCCA) and show that the strongest power-law cross-correlations exist between trading activity and volume traded, while the weakest ones exist (or even do not exist) between the returns and the remaining quantities. We also show that the strongest cross-correlations are carried by those parts of the signals that are characterized by large and medium variance. Our observation that the most convincing power-law cross-correlations occur between trading activity and volume traded reveals the existence of strong fractal-like coupling between these quantities.
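The detrended cross-correlation idea underlying MFCCA can be sketched in a simplified, sign-free form: plain detrended cross-correlation analysis (DCCA) on synthetic cross-correlated noise. The window sizes, first-order detrending, and test signals below are illustrative choices, not the study's configuration, and the sign-preserving q-th-moment generalization of MFCCA is omitted.

```python
import numpy as np

def dcca_fluctuation(x, y, scales, order=1):
    """Detrended cross-correlation fluctuation function F(s): both
    series are integrated into profiles, split into windows of size s,
    polynomially detrended in each window, and the covariance of the
    residuals is averaged over windows.  A power law F(s) ~ s**lam
    signals power-law cross-correlations between the two series."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    F = []
    for s in scales:
        n = len(X) // s
        t = np.arange(s)
        cov = 0.0
        for i in range(n):
            seg = slice(i * s, (i + 1) * s)
            rx = X[seg] - np.polyval(np.polyfit(t, X[seg], order), t)
            ry = Y[seg] - np.polyval(np.polyfit(t, Y[seg], order), t)
            cov += np.mean(rx * ry)
        F.append(abs(cov / n) ** 0.5)
    return np.array(F)

# Two white-noise series sharing a common component: cross-correlated
# at lag zero but memoryless in time, so lam should be close to 0.5.
rng = np.random.default_rng(0)
common = rng.standard_normal(4096)
x = common + 0.5 * rng.standard_normal(4096)
y = common + 0.5 * rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
F = dcca_fluctuation(x, y, scales)
lam = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Long-memory signals such as volatility or trading activity would instead give `lam` well above 0.5, which is the kind of power-law coupling the study detects between trading activity and volume traded.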
An integrated approach for updating cadastral maps in Pakistan using satellite remote sensing data
NASA Astrophysics Data System (ADS)
Ali, Zahir; Tuladhar, Arbind; Zevenbergen, Jaap
2012-08-01
Updating cadastral information is crucial for recording land ownership and property division changes in a timely manner. In most cases, the existing cadastral maps do not provide up-to-date information on land parcel boundaries. Such a situation demands that all the cadastral data and parcel boundary information in these maps be updated in a timely fashion. The existing techniques for acquiring cadastral information are discipline-oriented, rooted in fields such as geodesy, surveying, and photogrammetry. All these techniques require substantial manpower, time, and cost when they are carried out separately. There is a need to integrate these techniques to update the existing cadastral data and (re)produce cadastral maps in an efficient manner. To reduce the time and cost involved in cadastral data acquisition, this study develops an integrated approach combining global positioning system (GPS) data, remote sensing (RS) imagery, and existing cadastral maps. For this purpose, a panchromatic image with 0.6 m spatial resolution and the corresponding multi-spectral image with 2.4 m spatial resolution and 3 spectral bands from the QuickBird satellite were used. A digital elevation model (DEM) was extracted from SPOT-5 stereopairs, and some ground control points (GCPs) were also used for ortho-rectifying the QuickBird images. After ortho-rectifying these images and registering the multi-spectral image to the panchromatic image, the two were fused to obtain good-quality multi-spectral images of the two study areas with 0.6 m spatial resolution. Cadastral parcel boundaries were then identified on the QuickBird images of the two study areas via visual interpretation using a participatory-GIS (PGIS) technique. The study regions are the urban and rural areas of the Peshawar and Swabi districts in the Khyber Pakhtunkhwa province of Pakistan.
The result is a set of updated cadastral maps rich in cadastral information, which can be used to update the existing cadastral data with less time and cost.
Potentiation Effects of Half-Squats Performed in a Ballistic or Nonballistic Manner.
Suchomel, Timothy J; Sato, Kimitake; DeWeese, Brad H; Ebben, William P; Stone, Michael H
2016-06-01
This study examined and compared the acute effects of ballistic and nonballistic concentric-only half-squats (COHSs) on squat jump performance. Fifteen resistance-trained men performed a squat jump 2 minutes after a control protocol or 2 COHSs at 90% of their 1 repetition maximum (1RM) COHS performed in a ballistic or nonballistic manner. Jump height (JH), peak power (PP), and allometrically scaled peak power (PPa) were compared using three 3 × 2 repeated-measures analyses of variance. Statistically significant condition × time interaction effects existed for JH (p = 0.037), PP (p = 0.041), and PPa (p = 0.031). Post hoc analysis revealed that the ballistic condition produced statistically greater JH (p = 0.017 and p = 0.036), PP (p = 0.031 and p = 0.026), and PPa (p = 0.024 and p = 0.023) than the control and nonballistic conditions, respectively. Small effect sizes for JH, PP, and PPa existed during the ballistic condition (d = 0.28-0.44), whereas trivial effect sizes existed during the control (d = 0.0-0.18) and nonballistic (d = 0.0-0.17) conditions. Large statistically significant relationships existed between the JH potentiation response and the subject's relative back squat 1RM (r = 0.520; p = 0.047) and relative COHS 1RM (r = 0.569; p = 0.027) during the ballistic condition. In addition, a large statistically significant relationship existed between the JH potentiation response and the subject's relative back squat strength (r = 0.633; p = 0.011), whereas the moderate relationship with the subject's relative COHS strength trended toward significance (r = 0.483; p = 0.068). Ballistic COHS produced superior potentiation effects compared with COHS performed in a nonballistic manner. Relative strength may contribute to the elicited potentiation response after ballistic and nonballistic COHS.
Lee, Lian N; Bolinger, Beatrice; Banki, Zoltan; de Lara, Catherine; Highton, Andrew J; Colston, Julia M; Hutchings, Claire; Klenerman, Paul
2017-12-01
The efficacies of many new T cell vaccines rely on generating large populations of long-lived pathogen-specific effector memory CD8 T cells. However, it is now increasingly recognized that prior infection history impacts on the host immune response. Additionally, the order in which these infections are acquired could have a major effect. Exploiting the ability to generate large sustained effector memory (i.e. inflationary) T cell populations from murine cytomegalovirus (MCMV) and a human adenovirus subtype 5 (AdHu5) beta-galactosidase (Ad-lacZ) vector, the impact of new infections on pre-existing memory and the capacity of the host's memory compartment to accommodate multiple inflationary populations from unrelated pathogens was investigated in a murine model. Simultaneous and sequential infections, first with MCMV followed by Ad-lacZ, generated inflationary populations towards both viruses with similar kinetics and magnitude to mono-infected groups. However, in Ad-lacZ immune mice, subsequent acute MCMV infection led to a rapid decline of the pre-existing Ad-LacZ-specific inflating population, associated with bystander activation of Fas-dependent apoptotic pathways. However, responses were maintained long-term and boosting with Ad-lacZ led to rapid re-expansion of the inflating population. These data indicate firstly that multiple specificities of inflating memory cells can be acquired at different times and stably co-exist. Some acute infections may also deplete pre-existing memory populations, thus revealing the importance of the order of infection acquisition. Importantly, immunization with an AdHu5 vector did not alter the size of the pre-existing memory. These phenomena are relevant to the development of adenoviral vectors as novel vaccination strategies for diverse infections and cancers.
Bolinger, Beatrice; de Lara, Catherine; Hutchings, Claire
2017-01-01
The efficacies of many new T cell vaccines rely on generating large populations of long-lived pathogen-specific effector memory CD8 T cells. However, it is now increasingly recognized that prior infection history impacts on the host immune response. Additionally, the order in which these infections are acquired could have a major effect. Exploiting the ability to generate large sustained effector memory (i.e. inflationary) T cell populations from murine cytomegalovirus (MCMV) and a human adenovirus subtype 5 (AdHu5) beta-galactosidase (Ad-lacZ) vector, the impact of new infections on pre-existing memory and the capacity of the host’s memory compartment to accommodate multiple inflationary populations from unrelated pathogens was investigated in a murine model. Simultaneous and sequential infections, first with MCMV followed by Ad-lacZ, generated inflationary populations towards both viruses with similar kinetics and magnitude to mono-infected groups. However, in Ad-lacZ immune mice, subsequent acute MCMV infection led to a rapid decline of the pre-existing Ad-LacZ-specific inflating population, associated with bystander activation of Fas-dependent apoptotic pathways. However, responses were maintained long-term and boosting with Ad-lacZ led to rapid re-expansion of the inflating population. These data indicate firstly that multiple specificities of inflating memory cells can be acquired at different times and stably co-exist. Some acute infections may also deplete pre-existing memory populations, thus revealing the importance of the order of infection acquisition. Importantly, immunization with an AdHu5 vector did not alter the size of the pre-existing memory. These phenomena are relevant to the development of adenoviral vectors as novel vaccination strategies for diverse infections and cancers. PMID:29281733
An assessment of forest cover trends in South and North Korea, from 1980 to 2010.
Engler, Robin; Teplyakov, Victor; Adams, Jonathan M
2014-01-01
It is generally believed that forest cover in North Korea has undergone a substantial decrease since 1980, while in South Korea, forest cover has remained relatively static during that same period of time. The United Nations Food and Agriculture Organization (FAO) Forest Resources Assessments--based on the reported forest inventories from North and South Korea--suggest a major forest cover decrease in North Korea, but only a slight decrease in South Korea during the last 30 years. In this study, we seek to check and validate those assessments by comparing them to independently derived forest cover maps compiled for three time intervals between 1990 and 2010, as well as to provide a spatially explicit view of forest cover change in the Korean Peninsula since the 1990s. We extracted tree cover data for the Korean Peninsula from existing global datasets derived from satellite imagery. Our estimates, while qualitatively supporting the FAO results, show that North Korea has lost a large number of densely forested areas, and thus in this sense has suffered heavier forest loss than the FAO assessment suggests. Given the limited time interval studied in our assessment, the overall forest loss from North Korea during the whole span of time since 1980 may have been even heavier than in our estimate. For South Korea, our results indicate that the forest cover has remained relatively stable at the national level, but that important variability in forest cover evolution exists at the regional level: While the northern and western provinces show an overall decrease in forested areas, large areas in the southeastern part of the country have increased their forest cover.
Ishii; Tromp
1999-08-20
With the use of a large collection of free-oscillation data and additional constraints imposed by the free-air gravity anomaly, lateral variations in shear velocity, compressional velocity, and density within the mantle; dynamic topography on the free surface; and topography on the 660-km discontinuity and the core-mantle boundary were determined. The velocity models are consistent with existing models based on travel-time and waveform inversions. In the lowermost mantle, near the core-mantle boundary, denser than average material is found beneath regions of upwellings centered on the Pacific Ocean and Africa that are characterized by slow shear velocities. These anomalies suggest the existence of compositional heterogeneity near the core-mantle boundary.
NASA Astrophysics Data System (ADS)
Fan, Jishan; Li, Fucai; Nakamura, Gen
2018-06-01
In this paper we continue our study on the establishment of uniform estimates of strong solutions, with respect to the Mach number and the dielectric constant, for the full compressible Navier-Stokes-Maxwell system in a bounded domain Ω ⊂ R^3. In Fan et al. (Kinet Relat Models 9:443-453, 2016), the uniform estimates were obtained for large initial data in a short time interval. Here we show that the uniform estimates hold globally if the initial data are small. Based on these uniform estimates, we obtain the convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations for well-prepared initial data.
Developing Science Operations Concepts for the Future of Planetary Surface Exploration
NASA Technical Reports Server (NTRS)
Young, K. E.; Bleacher, J. E.; Rogers, A. D.; McAdam, A.; Evans, C. A.; Graff, T. G.; Garry, W. B.; Whelley; Scheidt, S.; Carter, L.
2017-01-01
Through fly-by, orbiter, rover, and even crewed missions, the National Aeronautics and Space Administration (NASA) has been extremely successful in exploring planetary bodies throughout our Solar System. The focus on increasingly complex Mars orbiter and rover missions has helped us understand how Mars has evolved over time and whether life has ever existed on the red planet. However, large strategic knowledge gaps (SKGs) still exist in our understanding of the evolution of the Solar System, as identified by, e.g., the Lunar Exploration Analysis Group, Small Bodies Analysis Group, and Mars Exploration Program Analysis Group. Sending humans to these bodies is a critical part of addressing these SKGs in order to transition to a new era of planetary exploration by 2050.
Spatial ecology and movement of reintroduced Canada lynx
Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya
2017-01-01
Understanding movement behavior and identifying areas of landscape connectivity is critical for the conservation of many species. However, collecting fine‐scale movement data can be prohibitively time consuming and costly, especially for rare or endangered species, whereas existing data sets may provide the best available information on animal movement. Contemporary movement models may not be an option for modeling existing data due to low temporal resolution and large or unusual error structures, but inference can still be obtained using a functional movement modeling approach. We use a functional movement model to perform a population‐level analysis of telemetry data collected during the reintroduction of Canada lynx to Colorado. Little is known about southern lynx populations compared to those in Canada and Alaska, and inference is often limited to a few individuals due to their low densities. Our analysis of a population of Canada lynx fills significant gaps in the knowledge of Canada lynx behavior at the southern edge of its historical range. We analyzed functions of individual‐level movement paths, such as speed, residence time, and tortuosity, and identified a region of connectivity that extended north from the San Juan Mountains, along the continental divide, and terminated in Wyoming at the northern edge of the Southern Rocky Mountains. Individuals were able to traverse large distances across non‐boreal habitat, including exploratory movements to the Greater Yellowstone area and beyond. We found evidence for an effect of seasonality and breeding status on many of the movement quantities and documented a potential reintroduction effect. Our findings provide the first analysis of Canada lynx movement in Colorado and substantially augment the information available for conservation and management decisions. 
The functional movement framework can be extended to other species and demonstrates that information on movement behavior can be obtained using existing data sets.
Constraining Alternative Theories of Gravity Using Pulsar Timing Arrays
NASA Astrophysics Data System (ADS)
Cornish, Neil J.; O'Beirne, Logan; Taylor, Stephen R.; Yunes, Nicolás
2018-05-01
The opening of the gravitational wave window by ground-based laser interferometers has made possible many new tests of gravity, including the first constraints on polarization. It is hoped that, within the next decade, pulsar timing will extend the window by making the first detections in the nanohertz frequency regime. Pulsar timing offers several advantages over ground-based interferometers for constraining the polarization of gravitational waves due to the many projections of the polarization pattern provided by the different lines of sight to the pulsars, and the enhanced response to longitudinal polarizations. Here, we show that existing results from pulsar timing arrays can be used to place stringent limits on the energy density of longitudinal stochastic gravitational waves. However, unambiguously distinguishing these modes from noise will be very difficult due to the large variances in the pulsar-pulsar correlation patterns. Existing upper limits on the power spectrum of pulsar timing residuals imply that the amplitude of vector longitudinal (VL) and scalar longitudinal (SL) modes at frequencies of 1/year are constrained, AVL<4 ×10-16 and ASL<4 ×10-17, while the bounds on the energy density for a scale invariant cosmological background are ΩVLh2<4 ×10-11 and ΩSLh2<3 ×10-13.
Relation of morphology of electrodeposited zinc to ion concentration profile
NASA Technical Reports Server (NTRS)
May, C. E.; Kautz, H. E.; Sabo, B. B.
1977-01-01
The morphology of electrodeposited zinc was studied with special attention to the ion concentration profile. The initial concentrations were 9M hydroxide ion and 1.21M zincate. Current densities were 6.4 to 64 mA/sq cm. Experiments were run with a horizontal cathode which was observed in situ using a microscope. The morphology of the zinc deposit was found to be a function of time as well as current density; roughly, the log of the transition time from mossy to large crystalline deposit is inversely proportional to current density. Probe electrodes indicated that the electrolyte in the cathode chamber was mixed by self-induced convection. However, relatively large concentration gradients of the involved species existed across the boundary layer of the cathode. Analysis of the data suggests that the morphology converts from mossy to large crystalline when the hydroxide activity at the cathode surface exceeds about 12 M. Other experiments showed that the pulse discharge technique had no effect on the morphology in a system where the bulk concentration of the electrolyte was kept homogeneous via self-induced convection.
On the verge of an astronomy CubeSat revolution
NASA Astrophysics Data System (ADS)
Shkolnik, Evgenya L.
2018-05-01
CubeSats are small satellites built in standard sizes and form factors, which have been growing in popularity but have thus far been largely ignored within the field of astronomy. When deployed as space-based telescopes, they enable science experiments not possible with existing or planned large space missions, filling several key gaps in astronomical research. Unlike expensive and highly sought after space telescopes such as the Hubble Space Telescope, whose time must be shared among many instruments and science programs, CubeSats can monitor sources for weeks or months at a time, and at wavelengths not accessible from the ground, such as the ultraviolet, far-infrared and low-frequency radio. Science cases for CubeSats being developed now include a wide variety of astrophysical experiments, including exoplanets, stars, black holes and radio transients. Achieving high-impact astronomical research with CubeSats is becoming increasingly feasible with advances in technologies such as precision pointing, compact sensitive detectors and the miniaturization of propulsion systems. CubeSats may also pair with large space- and ground-based telescopes to provide complementary data to better explain the physical processes observed.
Plasmodium vivax malaria: a re-emerging threat for temperate climate zones?
Petersen, Eskild; Severini, Carlo; Picot, Stephane
2013-01-01
Plasmodium vivax was endemic in temperate areas in historic times, up to the middle of the last century. Temperate-climate P. vivax has a long incubation time of up to 8-10 months, which partly explains how it can be endemic in temperate areas with a cold winter. P. vivax disappeared from Europe within the last 40-60 years, and this change was not related to climatic changes. The surge of P. vivax in Northern Europe after the Second World War was related to the displacement of refugees and large movements of military personnel exposed to malaria. Lately, P. vivax has been seen along the demilitarized zone in South Korea, reflecting high endemicity in North Korea. The potential for transmission of P. vivax still exists in temperate zones, but large-scale reintroduction of P. vivax to areas without present transmission would require large population movements of P. vivax-infected people. The highest threat at present is refugees from P. vivax-endemic North Korea entering China and South Korea in large numbers. Copyright © 2013 Elsevier Ltd. All rights reserved.
Diurnal variation of eye movement and heart rate variability in the human fetus at term.
Morokuma, S; Horimoto, N; Satoh, S; Nakano, H
2001-07-01
To elucidate diurnal variations in eye movement and fetal heart rate (FHR) variability in the term fetus, we observed these two parameters continuously for 24 h using real-time ultrasound and Doppler cardiotocography, respectively. Five uncomplicated fetuses at term were studied. The time series of the presence and absence of eye movement and of the mean FHR value for each 1-min interval were analyzed using the maximum entropy method (MEM) and subsequent nonlinear least-squares fitting. According to the power value of eye movement, the five cases were classified into two groups: three cases in the large-power group and two in the small-power group. The acrophases of eye movement and FHR variability in the large-power group were close, implying the existence of a diurnal rhythm in both parameters and that the two are synchronized. In the small-power group, the acrophases were separated. The synchronization of eye movement and FHR variability in the large-power group suggests that these phenomena are governed by a common central mechanism related to diurnal rhythm generation.
Highly-stretchable 3D-architected Mechanical Metamaterials
Jiang, Yanhui; Wang, Qiming
2016-01-01
Soft materials featuring both 3D free-form architectures and high stretchability are highly desirable for a number of engineering applications ranging from cushion modulators and soft robots to stretchable electronics; however, both the manufacturing and the fundamental mechanics are largely elusive. Here, we overcome the manufacturing difficulties and report a class of mechanical metamaterials that not only features 3D free-form lattice architectures but also possesses ultrahigh reversible stretchability (strain > 414%), 4 times higher than that of existing counterparts with similar complexity of 3D architectures. The microarchitected metamaterials, made of highly stretchable elastomers, are realized through an additive manufacturing technique, projection microstereolithography, and its postprocessing. With the fabricated metamaterials, we reveal their exotic mechanical behaviors: under large-strain tension, their moduli follow a linear scaling relationship with their densities regardless of architecture type, in sharp contrast to the architecture-dependent modulus power law of existing engineering materials; under large-strain compression, they present tunable negative stiffness that enables ultrahigh energy absorption efficiencies. To harness their extraordinary stretchability and microstructures, we demonstrate that the metamaterials open a number of application avenues in lightweight and flexible structure connectors, ultraefficient dampers, 3D meshed rehabilitation structures and stretchable electronics with designed 3D anisotropic conductivity. PMID:27667638
Statistical mechanics of soft-boson phase transitions
NASA Technical Reports Server (NTRS)
Gupta, Arun K.; Hill, Christopher T.; Holman, Richard; Kolb, Edward W.
1991-01-01
The existence of structure on large (100 Mpc) scales, and limits to anisotropies in the cosmic microwave background radiation (CMBR), have imperiled models of structure formation based solely upon the standard cold dark matter scenario. Novel scenarios, which may be compatible with large scale structure and small CMBR anisotropies, invoke nonlinear fluctuations in the density appearing after recombination, accomplished via late time phase transitions involving ultralow mass scalar bosons. Herein, the statistical mechanics of such phase transitions are studied in several models involving naturally ultralow mass pseudo-Nambu-Goldstone bosons (pNGB's). These models can exhibit several interesting effects at high temperature, and they are believed to be the most general possibilities for pNGB's.
Antenna grout replacement system
NASA Technical Reports Server (NTRS)
Mcclung, C. E. (Inventor)
1983-01-01
An epoxy grout suitable for use in mounting and positioning bearing runner plates used in hydrostatic bearing assemblies for rotatably mounting large radio telescope structures to stationary support pedestals is described. The epoxy grout may be used in original mountings or as part of a replacement system for repairing cavities in existing grout resulting from grout deterioration. The epoxy grout has a relatively short work life and cure time, even in the presence of hydraulic oil, and cures without shrinking or sagging to form a grout sufficiently strong and durable for use under the high-pressure loading and close-tolerance requirements of large hydrostatic bearing assemblies.
New trends in logic synthesis for both digital designing and data processing
NASA Astrophysics Data System (ADS)
Borowik, Grzegorz; Łuba, Tadeusz; Poźniak, Krzysztof
2016-09-01
FPGA devices are equipped with memory-based structures. These memories act as very large logic cells where the number of inputs equals the number of address lines. At the same time, there is huge demand in the Internet of Things market for devices implementing virtual routers, intrusion detection systems, etc., where such memories are crucial for realizing pattern-matching circuits, IP address tables, and others. Unfortunately, existing CAD tools are not well suited to utilizing the capabilities that such large memory blocks offer, due to the lack of appropriate synthesis procedures. This paper presents methods which are useful for memory-based implementations: minimization of the number of input variables and functional decomposition.
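The first of these methods, minimizing the number of input variables so a function fits in fewer address lines, can be illustrated with a brute-force support check: an input is essential only if flipping it ever changes the output. This is a generic sketch (the function name and example are hypothetical), not the synthesis procedure the paper develops:

```python
from itertools import product

def essential_inputs(f, n):
    """Return the indices of inputs the Boolean function f (mapping an
    n-tuple of 0/1 values to 0/1) actually depends on: input i is essential
    if flipping it changes the output for some assignment."""
    support = set()
    for i in range(n):
        for bits in product((0, 1), repeat=n):
            flipped = list(bits)
            flipped[i] ^= 1
            if f(tuple(bits)) != f(tuple(flipped)):
                support.add(i)
                break
    return support

# This f ignores its middle input, so only address lines 0 and 2 are needed.
print(essential_inputs(lambda b: b[0] & b[2], 3))  # → {0, 2}
```

Dropping the inessential address line halves the memory block needed to store the truth table; real tools avoid the exponential enumeration used here.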
Pre-Global Surveyor evidence for Martian ground water
Donahue, Thomas M.
2001-01-01
A time-dependent theory for the evolution of water on Mars is presented. Using this theory and invoking a large number of observational constraints, I argue that these constraints require that a large reservoir of water exists in the Martian crust at depths shallow enough to interact strongly with the atmosphere. The constraints include the abundance of atmospheric water vapor, escape fluxes of hydrogen and deuterium, D/H ratios in the atmosphere and in hydrous minerals found in one Martian meteorite, alteration of minerals in other meteorites, and fluvial features on the Martian surface. These results are consonant with visual evidence for recent groundwater seepage obtained by the Mars Global Surveyor satellite. PMID:11158555
Code of Federal Regulations, 2013 CFR
2013-07-01
... Existing Affected Sources Classified as Large Iron and Steel Foundries 4 Table 4 to Subpart ZZZZZ of Part... Emission Standards for Hazardous Air Pollutants for Iron and Steel Foundries Area Sources Pt. 63, Subpt... Affected Sources Classified as Large Iron and Steel Foundries As required by § 63.10900(b), your...
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
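The relationship described here between sequential code fraction and achievable speedup is Amdahl's law: if a fraction s of the runtime is inherently sequential, speedup on n processors is bounded by 1/(s + (1 - s)/n). A minimal sketch (the function name is illustrative; the numbers are not measurements from CSTEM or METCAN):

```python
def amdahl_speedup(seq_fraction, n_procs):
    """Amdahl's law: upper bound on speedup when seq_fraction of the
    runtime is inherently sequential and the rest parallelizes perfectly."""
    return 1.0 / (seq_fraction + (1.0 - seq_fraction) / n_procs)

# With 40% sequential code, even 128 processors give under 2.5x speedup.
print(round(amdahl_speedup(0.4, 128), 2))  # → 2.47
```

The bound explains the observation above: no amount of added hardware overcomes a large sequential fraction, since the limit as n_procs grows is 1/seq_fraction.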
Sidereal variations deep underground in Tasmania
NASA Technical Reports Server (NTRS)
Humble, J. E.; Fenton, A. G.; Fenton, K. B.
1985-01-01
Data from the deep underground vertically directed muon telescopes at Poatina, Tasmania, have been used since 1972 for a number of investigations, including the daily intensity variations, atmospheric influences, and checking for possible effects due to the interplanetary magnetic field. These telescopes have a total sensitive area of only 3 square meters, with the result that the counting rate is low (about 1680 events per hour) and the statistical errors on the results are rather large. Consequently, it was decided several years ago to construct larger detectors for this station. The first of these telescopes has been in operation for two complete years, and the results from it are presented. Results from the new, more stable equipment at Poatina appear to confirm the existence of a first harmonic in the daily variations in sidereal time reported earlier, and are consistent with small or non-existent first harmonics in solar and anti-sidereal time. All the second harmonics appear to be small, if not zero, at these energies.
Wind-assist irrigation and electrical-power generation
NASA Astrophysics Data System (ADS)
Nelson, V.; Starcher, K.
1982-07-01
A wind turbine is mechanically connected to an existing irrigation well. The system can be operated in three modes: (1) the electric motor alone driving the water turbine pump; (2) wind-assist mode, where the wind turbine supplements power from the utility line to drive the water turbine pump; at wind speeds of 12 m/s and greater, the wind turbine can pump water (15 kW) and feed power (10 kW) back into the utility grid at the same time; and (3) electrical-generation mode, where the water pump is disconnected and all power is fed back to the utility grid. The concept is technically viable, as the mechanical connection allows for a smooth transfer of power in parallel with an existing power source. Minor problems caused delays, and two major rotor failures precluded enough operation time to obtain a good estimate of the economics. Because reliability and maintenance are difficult problems with prototype or limited-production wind energy conversion systems, the expense of the demonstration project exceeded the estimated cost by a large amount.
Transforming GIS data into functional road models for large-scale traffic simulation.
Wilkie, David; Sewall, Jason; Lin, Ming C
2012-06-01
There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization, tools which will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodologies. We test the 3D models of road networks generated by our algorithm on real-time traffic simulation using both macroscopic and microscopic techniques.
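The topology-recovery step, stitching attribute-tagged polylines into a routable graph, can be sketched in miniature. This toy endpoint-matching illustration is a simplification under stated assumptions (exact coordinate matches, no tolerance), not the authors' method, which additionally handles ramps, overpasses, and merge zones:

```python
from collections import defaultdict

def build_road_graph(polylines):
    """Connect polylines that share endpoints into a topological road graph.
    Nodes are endpoint coordinates; each edge carries the polyline geometry,
    stored once per direction of travel."""
    graph = defaultdict(list)
    for line in polylines:
        a, b = line[0], line[-1]
        graph[a].append((b, line))
        graph[b].append((a, list(reversed(line))))
    return graph

# Two road segments meeting at (1, 0):
roads = [[(0, 0), (1, 0)], [(1, 0), (1, 1), (2, 1)]]
g = build_road_graph(roads)
print(len(g[(1, 0)]))  # → 2  (the shared endpoint becomes a degree-2 node)
```

Real GIS data would also need snapping of nearly-coincident endpoints and attribute checks (e.g., grade separation) before two polylines are declared connected.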
A framework to preserve the privacy of electronic health data streams.
Kim, Soohyung; Sung, Min Kyoung; Chung, Yon Dohn
2014-08-01
The anonymization of health data streams is important to protect these data against potential privacy breaches. A large number of research studies aiming at offering privacy in the context of data streams have recently been conducted. However, the techniques proposed in these studies introduce a significant delay during the anonymization process, since they concentrate on applying existing privacy models (e.g., k-anonymity and l-diversity) to batches of data extracted from data streams over a period of time. In this paper, we present delay-free anonymization, a framework for preserving the privacy of electronic health data streams. Unlike existing works, our method does not generate an accumulation delay, since input streams are anonymized immediately with counterfeit values. We further devise late validation for increasing the data utility of the anonymization results and managing the counterfeit values. Through experiments, we show the efficiency and effectiveness of the proposed method for the real-time release of data streams. Copyright © 2014 Elsevier Inc. All rights reserved.
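A greatly simplified sketch of the delay-free idea, releasing every record immediately and substituting a counterfeit value until its quasi-identifier group is large enough, is shown below. The class name, the counting rule, and the counterfeit token are all illustrative assumptions; the paper's actual mechanism includes late validation, which is omitted here:

```python
from collections import Counter

class DelayFreeAnonymizer:
    """Toy sketch: release each record immediately; if its quasi-identifier
    has been seen fewer than k times so far, substitute a counterfeit value
    instead of buffering the record (which would add delay)."""
    def __init__(self, k, counterfeit="*"):
        self.k = k
        self.counterfeit = counterfeit
        self.seen = Counter()

    def release(self, qid, payload):
        self.seen[qid] += 1
        out_qid = qid if self.seen[qid] >= self.k else self.counterfeit
        return (out_qid, payload)

anon = DelayFreeAnonymizer(k=2)
print(anon.release("zip=12345", "flu"))   # → ('*', 'flu')
print(anon.release("zip=12345", "cold"))  # → ('zip=12345', 'cold')
```

The key design point is that output is produced per record with no batch accumulation; utility is later recovered by validating or replacing the counterfeits.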
Effects of heterogeneous convergence rate on consensus in opinion dynamics
NASA Astrophysics Data System (ADS)
Huang, Changwei; Dai, Qionglin; Han, Wenchen; Feng, Yuee; Cheng, Hongyan; Li, Haihong
2018-06-01
The Deffuant model has attracted much attention in the study of opinion dynamics. Here, we propose a modified version by introducing into the model a heterogeneous convergence rate which depends on the opinion difference between interacting agents and a tunable parameter κ. We study the effects of the heterogeneous convergence rate on consensus by investigating the probability of complete consensus, the size of the largest opinion cluster, the number of opinion clusters, and the relaxation time. We find that decreasing the convergence rate is favorable for lowering the confidence threshold above which the population always reaches complete consensus, and there exists an optimal κ resulting in the minimal bounded-confidence threshold. Moreover, we find that there exists a window below the threshold of confidence in which complete consensus may be reached with a nonzero probability when κ is not too large. We also find that, within a certain confidence range, decreasing the convergence rate will reduce the relaxation time, which is somewhat counterintuitive.
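A minimal sketch of a Deffuant-type update with an opinion-difference-dependent convergence rate is shown below. The specific power-law form of the rate in κ is an illustrative assumption, not necessarily the form used in the paper:

```python
import random

def deffuant_step(opinions, eps, kappa):
    """One pairwise Deffuant interaction with a heterogeneous convergence
    rate: mu shrinks with the opinion difference, controlled by kappa.
    (The power-law form of mu below is an illustrative assumption.)"""
    i, j = random.sample(range(len(opinions)), 2)
    diff = opinions[j] - opinions[i]
    if abs(diff) < eps:                          # bounded confidence
        mu = 0.5 * (abs(diff) / eps) ** kappa    # heterogeneous rate, mu <= 0.5
        opinions[i] += mu * diff
        opinions[j] -= mu * diff

random.seed(1)
ops = [random.random() for _ in range(200)]
mean_before = sum(ops) / len(ops)
for _ in range(20000):
    deffuant_step(ops, eps=0.5, kappa=1.0)
mean_after = sum(ops) / len(ops)
# Pairwise updates are symmetric, so the population mean opinion is conserved
# while agents within the confidence bound drift toward one another.
```

Quantities such as the number of opinion clusters or the relaxation time can then be read off by histogramming `ops` over repeated runs while sweeping eps and kappa.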
NASA Astrophysics Data System (ADS)
Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi
2016-08-01
Processing astronomical data to science readiness was and remains a challenge, in particular for multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users in a typical university setup. Our approach to address this challenge is a flexible framework combining the best of both high-performance (large number of nodes, internal communication) and high-throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and reassigns workers to different tasks. This provides the flexibility of optimizing the worker pool for the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today and using existing, commodity shared-use hardware, we can process data with throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
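The Server-Manager-Worker flow can be illustrated with an in-process stand-in built from Python's standard library. The real framework communicates over AMQP across machines; this toy version only shows the task/result flow and poison-pill shutdown:

```python
import queue
import threading

def worker(task_q, result_q):
    """Worker: pull tasks until a poison pill (None) arrives, push results."""
    while True:
        task = task_q.get()
        if task is None:
            break
        result_q.put(task * task)  # stand-in for a real data-reduction step

task_q, result_q = queue.Queue(), queue.Queue()
pool = [threading.Thread(target=worker, args=(task_q, result_q))
        for _ in range(4)]
for t in pool:
    t.start()
for task in range(10):        # the server enqueues work units
    task_q.put(task)
for _ in pool:                # the manager retires each worker with a pill
    task_q.put(None)
for t in pool:
    t.join()
results = sorted(result_q.get() for _ in range(10))
print(results)  # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because workers pull tasks rather than being assigned them, load balancing falls out naturally, and the pool can grow or shrink without changing the server logic.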
Flocking particles in a non-Newtonian shear thickening fluid
NASA Astrophysics Data System (ADS)
Mucha, Piotr B.; Peszek, Jan; Pokorný, Milan
2018-06-01
We prove the existence of strong solutions to the Cucker–Smale flocking model coupled with an incompressible viscous non-Newtonian fluid with a stress tensor of power-law structure. The fluid part of the system admits strong solutions while the solutions to the CS part are weak. The coupling is performed through a drag force on a periodic spatial domain. Additionally, we construct a Lyapunov functional determining the large-time behavior of solutions to the system.
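For reference, the particle form of the Cucker–Smale model underlying the CS part is the velocity-alignment dynamics below. This explicit-Euler sketch of the standard 1D CS model is purely illustrative and omits the fluid coupling and drag force that the paper analyzes:

```python
def cucker_smale_step(x, v, dt, K=1.0, beta=0.5):
    """One explicit-Euler step of the 1D Cucker-Smale particle model:
    dv_i/dt = (K/N) * sum_j psi(|x_j - x_i|) * (v_j - v_i),
    with communication weight psi(r) = (1 + r^2)^(-beta)."""
    n = len(x)
    new_v = []
    for i in range(n):
        acc = sum((1.0 + (x[j] - x[i]) ** 2) ** (-beta) * (v[j] - v[i])
                  for j in range(n))
        new_v.append(v[i] + dt * K / n * acc)
    new_x = [x[i] + dt * v[i] for i in range(n)]
    return new_x, new_v

x, v = [0.0, 1.0, 2.0], [1.0, 0.0, -1.0]
for _ in range(2000):
    x, v = cucker_smale_step(x, v, dt=0.01)
# The symmetric weights conserve total momentum, and velocities align
# toward the common mean: the flocking behavior the analysis quantifies.
```

The Lyapunov functional mentioned in the abstract plays the continuous-level role of the velocity spread that visibly decays in this discrete simulation.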
Kouyoumdjian, Fiona G; McIsaac, Kathryn E
2015-10-03
About one in nine Canadians who are infected with hepatitis C spends time in a correctional facility each year. With high rates of current injection drug use and needle sharing, this population may account for a large proportion of new infections. Any national strategy to address hepatitis C should include a focus on persons in correctional facilities and should build on existing evidence regarding primary, secondary and tertiary prevention.
An evaluation of the use of ERTS-1 satellite imagery for grizzly bear habitat analysis. [Montana
NASA Technical Reports Server (NTRS)
Varney, J. R.; Craighead, J. J.; Sumner, J. S.
1974-01-01
Improved classification and mapping of grizzly habitat will permit better estimates of population density and distribution, and allow accurate evaluation of the potential effects of changes in land use, hunting regulation, and management policies on existing populations. Methods of identifying favorable habitat from ERTS-1 multispectral scanner imagery were investigated and described. This technique could reduce the time and effort required to classify large wilderness areas in the Western United States.
Measurement-Based Linear Optics
NASA Astrophysics Data System (ADS)
Alexander, Rafael N.; Gabay, Natasha C.; Rohde, Peter P.; Menicucci, Nicolas C.
2017-03-01
A major challenge in optical quantum processing is implementing large, stable interferometers. We offer a novel approach: virtual, measurement-based interferometers that are programmed on the fly solely by the choice of homodyne measurement angles. The effects of finite squeezing are captured as uniform amplitude damping. We compare our proposal to existing (physical) interferometers and consider its performance for BosonSampling, which could demonstrate postclassical computational power in the near future. We prove its efficiency in time and squeezing (energy) in this setting.
Communication: Analysing kinetic transition networks for rare events.
Stevenson, Jacob D; Wales, David J
2014-07-28
The graph transformation approach is a recently proposed method for computing mean first passage times, rates, and committor probabilities for kinetic transition networks. Here we compare the performance to existing linear algebra methods, focusing on large, sparse networks. We show that graph transformation provides a much more robust framework, succeeding when numerical precision issues cause the other methods to fail completely. These are precisely the situations that correspond to rare event dynamics for which the graph transformation was introduced.
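The linear-algebra baseline that graph transformation is compared against amounts to solving (I - Q) t = 1, where Q is the transient block of the transition matrix and t is the vector of mean first passage times to absorption. A small dense sketch follows (real kinetic transition networks are large, sparse, and ill-conditioned, which is exactly where such solvers lose precision):

```python
def mfpt(Q):
    """Mean first passage times to absorption: solve (I - Q) t = 1, where Q
    is the transient-to-transient block of the transition matrix, via
    Gauss-Jordan elimination with partial pivoting."""
    n = len(Q)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Symmetric random walk on {0, 1, 2, 3}: reflecting at 0, absorbing at 3.
Q = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.0]]
t = mfpt(Q)
print(t)  # → [9.0, 8.0, 5.0]
```

For rare events the entries of I - Q become nearly linearly dependent, and this direct elimination loses significant digits; graph transformation instead removes states one at a time while renormalizing branching probabilities, avoiding the catastrophic cancellation.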
The application of mobile satellite services to emergency response communications
NASA Technical Reports Server (NTRS)
Freibaum, J.
1980-01-01
The application of an integrated satellite/terrestrial emergency response communications system in disaster relief operations is discussed. Large-area coverage, full-time availability, a high degree of mobility, and reliability are pointed out as criteria for an effective emergency communications system. Response time is seen as a major factor determining the possible survival and/or protection of property. These criteria cannot be met by existing communications systems, and complete blackouts were experienced during the past decades, caused by interruption or destruction of existing lines and by overload or inadequacy of remaining lines. Several emergencies caused by hurricanes, tornadoes, or floods, during which communication via satellite was instrumental in informing rescue and relief teams, are described in detail. Seismic risk maps and charts of major tectonic-plate earthquake epicenters are given, and it is noted that 35 percent of the U.S. population lives in critical areas. National and international agreements for the implementation of a satellite-aided global Search and Rescue Program are mentioned. Technological and economic breakthroughs are still needed in large multibeam antennas, switching circuits, and low-cost mobile ground terminals. A pending NASA plan to initiate a multiservice program in 1982/83, with a Land Mobile Satellite capability operating in the 806-890 MHz band as a major element, may help to accelerate the needed breakthroughs.
Big data analytics as a service infrastructure: challenges, desired properties and solutions
NASA Astrophysics Data System (ADS)
Martín-Márquez, Manuel
2015-12-01
CERN's accelerator complex generates a very large amount of data. A large volume of heterogeneous data is constantly generated from control equipment and monitoring agents. These data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has minimised the necessary collaboration and, more relevantly, cross-domain data analytics. These two factors are essential to unlock hidden insights and correlations between the underlying processes, which enable better and more efficient day-to-day accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs of CERN's research and engineering community; (3) deliver real-time and batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges (technological, collaborative, and educational); and potential solutions.
Refugia revisited: individualistic responses of species in space and time
Stewart, John R.; Lister, Adrian M.; Barnes, Ian; Dalén, Love
2010-01-01
Climate change in the past has led to significant changes in species' distributions. However, how individual species respond to climate change depends largely on their adaptations and environmental tolerances. In the Quaternary, temperate-adapted taxa are in general confined to refugia during glacials while cold-adapted taxa are in refugia during interglacials. In the Northern Hemisphere, evidence appears to be mounting that in addition to traditional southern refugia for temperate species, cryptic refugia existed in the North during glacials. Equivalent cryptic southern refugia, to the south of the more conventional high-latitude polar refugia, exist in montane areas during periods of warm climate, such as the current interglacial. There is also a continental/oceanic longitudinal gradient, which should be included in a more complete consideration of the interaction between species ranges and climates. Overall, it seems clear that there is large variation in both the size of refugia and the duration during which species are confined to them. This has implications for the role of refugia in the evolution of species and their genetic diversity. PMID:19864280
Shibuta, Yasushi; Sakane, Shinji; Miyoshi, Eisuke; Okita, Shin; Takaki, Tomohiro; Ohno, Munekazu
2017-04-05
Can completely homogeneous nucleation occur? Large-scale molecular dynamics simulations performed on a graphics-processing-unit-rich supercomputer can shed light on this long-standing issue. Here, a billion-atom molecular dynamics simulation of homogeneous nucleation from an undercooled iron melt reveals that, in the middle of the nucleation process, small satellite-like grains exist around previously formed large grains and are not distributed uniformly. At the same time, grains with a twin boundary are formed by heterogeneous nucleation from the surface of the previously formed grains. The local heterogeneity in the distribution of grains is caused by the local accumulation of the icosahedral structure in the undercooled melt near the previously formed grains. This insight is mainly attributable to multi-graphics-processing-unit parallel computation combined with the rapid progress in high-performance computational environments. Nucleation is a fundamental physical process; however, it is a long-standing issue whether completely homogeneous nucleation can occur. Here the authors reveal, via a billion-atom molecular dynamics simulation, that local heterogeneity exists during homogeneous nucleation in an undercooled iron melt.
Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carleton, James Brian; Parks, Michael L.
Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.
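The trade-off the abstract describes, an iterative solver plus a good preconditioner versus direct factorization, can be illustrated with a minimal NumPy sketch. This is not the LoRaSp H2-matrix solver: it assembles a small 2-D Poisson matrix densely and runs conjugate gradients with a simple Jacobi (diagonal) preconditioner standing in for the hierarchical one.

```python
import numpy as np

def poisson2d(n):
    # 5-point Laplacian on an n x n grid, assembled densely for simplicity
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = -1.0
    return A

def pcg(A, b, M_inv_diag, tol=1e-10, maxiter=500):
    # Preconditioned conjugate gradients with a diagonal (Jacobi) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

A = poisson2d(16)                 # 256 x 256 symmetric positive definite system
b = np.ones(A.shape[0])
x, iters = pcg(A, b, 1.0 / np.diag(A))
print(iters, np.linalg.norm(A @ x - b))
```

A stronger preconditioner, such as the hierarchical one in the paper, aims to drive the iteration count toward a constant independent of problem size; the Jacobi preconditioner here only illustrates the mechanism.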
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@auecc.aichi-edu.ac.jp, E-mail: sano@ile.osaka-u.ac.jp
We report the first successful simulation of spontaneous formation of surface magnetic structures from a large-scale dynamo by strongly stratified thermal convection in Cartesian geometry. The large-scale dynamo observed in our strongly stratified model has physical properties similar to those in earlier weakly stratified convective dynamo simulations, indicating that the α²-type mechanism is responsible for the dynamo. In addition to the large-scale dynamo, we find that large-scale structures of the vertical magnetic field are spontaneously formed at the surface of the convection zone (CZ) only in cases with a strongly stratified atmosphere. The organization of the vertical magnetic field proceeds in the upper CZ within tens of convective turnover times, and band-like bipolar structures recurrently appear in the dynamo-saturated stage. We consider several candidates for the origin of the surface magnetic structure formation, and then suggest the existence of an as-yet-unknown mechanism for the self-organization of the large-scale magnetic structure, which should be inherent in the strongly stratified convective atmosphere.
Heterogeneous network epidemics: real-time growth, variance and extinction of infection.
Ball, Frank; House, Thomas
2017-09-01
Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies, since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that is fast to solve numerically compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution; in particular, we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
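The degree-distribution moments the authors highlight are cheap to compute. A small sketch with a hypothetical degree distribution follows; the distribution and the rates β and γ are made up, and the R0 formula shown is the standard one for Markovian SIR on a configuration model (per-edge transmission probability times mean excess degree), not a result taken from this paper.

```python
import numpy as np

# Hypothetical degree distribution: P(D = k) for k = 0..5
k = np.arange(6)
p = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])

m1 = (k * p).sum()        # E[D]   - first moment
m2 = (k**2 * p).sum()     # E[D^2] - second moment
m3 = (k**3 * p).sum()     # E[D^3] - third moment (enters the variance in prevalence)

# Mean excess degree: expected number of further contacts of a node
# reached by following a randomly chosen edge
kappa = (k * (k - 1) * p).sum() / m1

# Standard basic reproduction number for Markovian SIR on a configuration
# model: per-edge transmission probability beta/(beta+gamma) times kappa
beta, gamma = 0.5, 1.0
R0 = beta / (beta + gamma) * kappa
print(m1, m2, m3, kappa, R0)
```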
Possible relationship between Seismic Electric Signals (SES) lead time and earthquake stress drop
DOLOGLOU, Elizabeth
2008-01-01
Stress drop values are available for fourteen large earthquakes with MW ≥ 5.4 that occurred in Greece during the period 1983–2007. All of these earthquakes were preceded by Seismic Electric Signals (SES). An attempt has been made to investigate a possible correlation between their stress drop values and the corresponding SES lead times. For the stress drop, we considered the Brune stress drop, ΔσB, estimated from far-field body-wave displacement source spectra, and ΔσSB, derived from the strong-motion acceleration response spectra. The results show that a relation may exist between the Brune stress drop, ΔσB, and the lead time, which implies that earthquakes with higher stress drop values are preceded by SES with shorter lead times. PMID:18941291
Mynodbcsv: lightweight zero-config database solution for handling very large CSV files.
Adaszewski, Stanisław
2014-01-01
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the data format is often the first obstacle: the lack of standardized ways of exploring different data layouts requires an effort each time to solve the problem from scratch. The possibility of accessing data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) files are one of the most common data storage formats. Despite the format's simplicity, handling it becomes non-trivial with growing file size. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if their horizontal dimension reaches thousands of columns. Most databases are optimized for handling a large number of rows rather than columns; therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach (data stay mostly in the CSV files); "zero configuration" (no need to specify a database schema); being written in C++ with boost [1], SQLite [2] and Qt [3], so it doesn't require installation and has a very small size; query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files, which ensure efficient plan execution; effortless support for millions of columns; easy use of mixed text/number data thanks to per-value typing; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It doesn't need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
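The access pattern described, SQL queries over CSV content, can be sketched with the Python standard library alone. This is not Mynodbcsv's "no copy" engine, which leaves the data in the CSV files; the sketch simply loads a toy CSV into an in-memory SQLite table, and the CAST calls illustrate why the tool's per-value typing is convenient when everything arrives as text.

```python
import csv
import io
import sqlite3

# Hypothetical CSV data standing in for a large measurement file
csv_text = """id,name,value
1,alpha,0.5
2,beta,1.5
3,gamma,2.5
"""

rows = list(csv.reader(io.StringIO(csv_text)))
header, data = rows[0], rows[1:]

con = sqlite3.connect(":memory:")
cols = ", ".join(f'"{c}"' for c in header)
con.execute(f"CREATE TABLE t ({cols})")
con.executemany(f"INSERT INTO t VALUES ({','.join('?' * len(header))})", data)

# SQL over the CSV contents; all values were inserted as text, so cast
# before comparing or aggregating numerically
total, = con.execute(
    "SELECT SUM(CAST(value AS REAL)) FROM t WHERE CAST(id AS INTEGER) > 1"
).fetchone()
print(total)  # 4.0
```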
A rapid estimation of near field tsunami run-up
Riquelme, Sebastián; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime
2015-01-01
Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, e.g., Chile, Japan, Nicaragua and Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for the 1992 Mw 7.7 Nicaragua Earthquake, the 2001 Mw 8.4 Perú Earthquake, the 2003 Mw 8.3 Hokkaido Earthquake, the 2007 Mw 8.1 Perú Earthquake, the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake and the recent 2014 Mw 8.2 Iquique Earthquake. The maximum run-up estimates are consistent with measurements made inland after each event, with peaks of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Such calculations will thus provide faster run-up information than is available from existing uniform-slip seismic source databases or from pre-modeled seismic sources of past events.
A rapid estimation of tsunami run-up based on finite fault models
NASA Astrophysics Data System (ADS)
Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.
2014-12-01
Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, e.g., Chile, Japan and Alaska. The main idea of this work is to produce a tool for emergency response, trading off accuracy for speed. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake, and the recent 2014 Mw 8.2 Iquique Earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.
The “jaundice hotline” for the rapid assessment of patients with jaundice
Mitchell, Jonathan; Hussaini, Hyder; McGovern, Dermot; Farrow, Richard; Maskell, Giles; Dalton, Harry
2002-01-01
Problem: Patients with jaundice require rapid diagnosis and treatment, yet such patients are often subject to delay. Design: An open referral, rapid access jaundice clinic was established by reorganisation of existing services and without the need for significant extra resources. Background and setting: A large general hospital in a largely rural and geographically isolated area. Key measures for improvement: Waiting times for referral, consultation, diagnosis, and treatment; length of stay in hospital; and general practitioners' and patients' satisfaction with the service. Strategies for change: Referrals were made through a 24 hour telephone answering machine and fax line. Initial assessment of patients was carried out by junior staff as part of their working week. Dedicated ultrasonography appointments were made available. Effects of change: Of 107 patients seen in the first year of the service, 62 had biliary obstruction. The mean time between referral and consultation was 2.5 days. Patients who went on to endoscopic retrograde cholangiopancreatography waited 5.7 days on average. The mean length of stay in hospital for the 69 patients who were admitted was 6.1 days, compared with 11.5 days in 1996, as shown by audit data. Nearly all of the 36 general practices (95%) and the 30 consecutive patients (97%) that were surveyed rated the service as above average or excellent. Lessons learnt: An open referral, rapid access service for patients with jaundice can shorten time to diagnosis and treatment and length of stay in hospital. These improvements can occur through the reorganisation of existing services and with minimal extra cost. PMID:12142314
GPU-Q-J, a fast method for calculating root mean square deviation (RMSD) after optimal superposition
2011-01-01
Background: Calculation of the root mean square deviation (RMSD) between the atomic coordinates of two optimally superposed structures is a basic component of structural comparison techniques. We describe a quaternion-based method, GPU-Q-J, that is stable with single-precision calculations and suitable for graphics processing units (GPUs). The application was implemented on an ATI 4770 graphics card in C/C++ and Brook+ in Linux, where it was 260 to 760 times faster than existing unoptimized CPU methods. Source code is available from the Compbio website http://software.compbio.washington.edu/misc/downloads/st_gpu_fit/ or from the author LHH. Findings: The Nutritious Rice for the World Project (NRW) on World Community Grid predicted, de novo, the structures of over 62,000 small proteins and protein domains, returning a total of 10 billion candidate structures. Clustering ensembles of structures on this scale requires calculation of large similarity matrices consisting of RMSDs between each pair of structures in the set. As a real-world test, we calculated the matrices for 6 different ensembles from NRW. The GPU method was 260 times faster than the fastest existing CPU-based method and over 500 times faster than the method that had been previously used. Conclusions: GPU-Q-J is a significant advance over previous CPU methods. It relieves a major bottleneck in the clustering of large numbers of structures for NRW. It also has applications in structure comparison methods that involve multiple superposition and RMSD determination steps, particularly when such methods are applied on a proteome- and genome-wide scale. PMID:21453553
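GPU-Q-J itself uses a quaternion formulation on the GPU; a CPU-side NumPy sketch of the same quantity, the minimal RMSD after optimal superposition, can instead use the classical SVD-based Kabsch algorithm, which yields the same minimal RMSD. The point sets below are invented test data.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimal RMSD between point sets P, Q (n x 3 arrays) after optimal
    rigid-body superposition, via the Kabsch algorithm (SVD)."""
    P = P - P.mean(axis=0)                   # remove translations
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                              # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Check: a rotated and translated copy of a random structure has RMSD ~ 0
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 3.0])
print(kabsch_rmsd(P, Q))  # ~0 (machine precision)
```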
Deep learning with domain adaptation for accelerated projection-reconstruction MR.
Han, Yoseob; Yoo, Jaejun; Kim, Hak Hee; Shin, Hee Jung; Sung, Kyunghyun; Ye, Jong Chul
2018-09-01
The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines lengthens the acquisition time, making routine clinical use more difficult. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data. The proposed deep network removes the streaking artifacts from the artifact-corrupted images. To address the limited available data, we propose a domain adaptation scheme that employs a network pre-trained using a large number of X-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets. The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than the total variation and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR data from a similar organ is more important than pre-training using data from the same modality for a different organ. We demonstrate the possibility of domain adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of image quality and computation time. © 2018 International Society for Magnetic Resonance in Medicine.
Parallel-In-Time For Moving Meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falgout, R. D.; Manteuffel, T. A.; Southworth, B.
2016-02-04
With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High-performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
A note on windowing for the waveform relaxation
NASA Technical Reports Server (NTRS)
Zhang, Hong
1994-01-01
The technique of windowing has often been used in the implementation of waveform relaxation for solving ODEs or time-dependent PDEs. Its efficiency depends upon problem stiffness and operator splitting. Using model problems, estimates for the window length and convergence rate are derived. The effectiveness of windowing is then investigated for the non-stiff and stiff cases respectively. It concludes that for the former, windowing is highly recommended when a large discrepancy exists between the convergence rate on a time interval and the rates on its subintervals. For the latter, windowing does not provide any computational advantage if machine features are disregarded. The discussion is supported by experimental results.
On entanglement spreading in chaotic systems
Mezei, Márk; Stanford, Douglas
2017-05-11
We discuss the time dependence of subsystem entropies in interacting quantum systems. As a model for the time dependence, we suggest that the entropy is as large as possible given two constraints: one follows from the existence of an emergent light cone, and the other is a conjecture associated to the "entanglement velocity" v_E. We compare this model to new holographic and spin chain computations, and to an operator growth picture. Finally, we introduce a second way of computing the emergent light cone speed in holographic theories that provides a boundary dynamics explanation for a special case of entanglement wedge subregion duality in AdS/CFT.
Universality in chaos: Lyapunov spectrum and random matrix theory.
Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki
2018-02-01
We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
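The last case the abstract mentions, the Lyapunov spectrum of a product of random matrices, can be computed with the standard QR re-orthonormalization method. A minimal NumPy sketch follows; the matrix size, step count and 1/sqrt(n) scaling are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np

def lyapunov_spectrum(n, steps, rng):
    """Finite-time Lyapunov exponents of a product of random n x n Gaussian
    matrices, via repeated QR re-orthonormalization of a frame of vectors."""
    Q = np.eye(n)
    logs = np.zeros(n)
    for _ in range(steps):
        M = rng.standard_normal((n, n)) / np.sqrt(n)
        Q, R = np.linalg.qr(M @ Q)
        # the stretching along each orthonormal direction is |R_kk|
        logs += np.log(np.abs(np.diag(R)))
    return np.sort(logs / steps)[::-1]   # exponents, largest first

rng = np.random.default_rng(1)
spec = lyapunov_spectrum(8, 2000, rng)
print(spec)  # a decreasing sequence of finite-time exponents
```

The statistics of such finite-time exponents across realizations is the kind of data one would then compare against random matrix theory predictions, as the paper does for its matrix models.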
Beretta, E; Capasso, V; Rinaldi, F
1988-01-01
The paper contains an extension of the general ODE system proposed in previous papers by the same authors, to include distributed time delays in the interaction terms. The new system describes a large class of Lotka-Volterra like population models and epidemic models with continuous time delays. Sufficient conditions for the boundedness of solutions and for the global asymptotic stability of nontrivial equilibrium solutions are given. A detailed analysis of the epidemic system is given with respect to the conditions for global stability. For a relevant subclass of these systems an existence criterion for steady states is also given.
Using an SLR inversion to measure the mass balance of Greenland before and during GRACE
NASA Astrophysics Data System (ADS)
Bonin, Jennifer
2016-04-01
The GRACE mission has done an admirable job of measuring large-scale mass changes over Greenland since its launch in 2002. Before that time, however, measurements of large-scale ice mass balance were few and far between, leading to a lack of baseline knowledge. High-quality Satellite Laser Ranging (SLR) data existed a decade earlier, but such data normally have too low a spatial resolution to be used for this purpose. I demonstrate that a least squares inversion technique can reconstitute the SLR data and use it to measure ice loss over Greenland. To do so, I first simulate the problem by degrading today's GRACE data to a level comparable with SLR, and then demonstrate that the inversion can re-localize Greenland's contribution to the low-resolution signal, giving an accurate time series of mass change over all of Greenland that compares well with the full-resolution GRACE estimates. I then apply the method to the actual SLR data, resulting in an independent 1994-2014 time series of mass change over Greenland. I find favorable agreement between the pure-SLR inverted results and the 2012 Ice-sheet Mass Balance Inter-comparison Exercise (IMBIE) results, which are largely based on the "input-output" modeling method before GRACE's launch.
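The inversion idea, recovering regional signals from low-resolution observations by least squares, can be sketched on a toy 1-D problem. Everything below is invented for illustration (the basis patterns, noise level, and "regional" trends); the real method works on SLR-derived low-degree spherical harmonic fields, not on this toy geometry.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D "planet": 3 regions, each contributing a known smooth
# low-resolution pattern (a column of the design matrix G)
x_grid = np.linspace(0.0, 1.0, 40)
centers = [0.2, 0.5, 0.8]
G = np.stack([np.exp(-((x_grid - c) / 0.15) ** 2) for c in centers], axis=1)

# Hypothetical regional mass trends; observations are the summed
# smooth patterns plus noise (the "low spatial resolution" data)
true_mass = np.array([-3.0, 0.5, 1.0])
obs = G @ true_mass + 0.01 * rng.standard_normal(len(x_grid))

# Least squares inversion re-localizes the regional contributions
est, *_ = np.linalg.lstsq(G, obs, rcond=None)
print(est)  # close to true_mass
```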
Reinhold, Ann Marie; Poole, Geoffrey C; Bramblett, Robert G; Zale, Alexander V; Roberts, David W
2018-04-24
Determining the influences of anthropogenic perturbations on side channel dynamics in large rivers is important from both assessment and monitoring perspectives because side channels provide critical habitat to numerous aquatic species. Side channel extents are decreasing in large rivers worldwide. Although riprap and other linear structures have been shown to reduce side channel extents in large rivers, we hypothesized that small "anthropogenic plugs" (flow obstructions such as dikes or berms) across side channels modify whole-river geomorphology via accelerating side channel senescence. To test this hypothesis, we conducted a geospatial assessment, comparing digitized side channel areas from aerial photographs taken during the 1950s and 2001 along 512 km of the Yellowstone River floodplain. We identified longitudinal patterns of side channel recruitment (created/enlarged side channels) and side channel attrition (destroyed/senesced side channels) across n = 17 river sections within which channels were actively migrating. We related areal measures of recruitment and attrition to the density of anthropogenic side channel plugs across river sections. Consistent with our hypothesis, a positive spatial relationship existed between the density of anthropogenic plugs and side channel attrition, but no relationship existed between plug density and side channel recruitment. Our work highlights important linkages among side channel plugs and the persistence and restoration of side channels across floodplain landscapes. Specifically, management of small plugs represents a low-cost, high-benefit restoration opportunity to facilitate scouring flows in side channels to enable the persistence of these habitats over time.
[Detection of occupational hazards in a large shipbuilding factory].
Du, Weijia; Wang, Zhi; Zhang, Hai; Zhou, Liping; Huang, Minzhi; Liu, Yimin
2014-03-01
To provide evidence for the prevention and treatment of occupational diseases, the major existing occupational hazards and the health conditions of workers in a large shipbuilding factory were analyzed. A field investigation of occupational conditions was conducted to examine the occupational hazards present from 2009 to 2012 in a large shipbuilding factory, and the results of physical examinations of its workers were then analyzed. Other than metal dust (total dust), the levels of the other dusts and of manganese dioxide exceeded the national standard to various degrees; sampling-point detection found that the levels of manganese dioxide exceeded the standard by 42.8%. The maximum individual time-weighted average concentration was 27.927 mg/m(3), much higher than the national standard limit. In individual detection of harmful gases, xylene was 38.4% above the standard level (the highest concentration reached 1447.7 mg/m(3)); moreover, both toluene and ethylbenzene exceeded the national standard at different levels. Among the noise-exposed workers, 71% worked in environments where the daily noise was above the national standard limit (85 dB). Physical examinations in 2010 and 2012 showed that the rate of abnormal audiometry among workers was higher than 15%. Dust (total dust), manganese dioxide, benzene, and noise are the main occupational hazards for the workers in this large shipbuilding factory, and strict protection and control measures for these hazards should be implemented.
Michalareas, George; Schoffelen, Jan-Mathijs; Paterson, Gavin; Gross, Joachim
2013-01-01
In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive (MAR) models fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate that, if a very large number of brain voxels is considered as potential activation sources, PDC becomes a less accurate measure for reconstructing causal interactions; in that case the MAR model coefficients alone contain meaningful causality information. The proposed method overcomes the problems of model non-robustness and long computation times encountered during causality analysis by existing methods, which first project MEG sensor time series onto a large number of brain locations and then build the MAR model on this large number of source-level time series. Instead, we demonstrate that by building the MAR model on the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that with this methodology entire-brain causality maps can be efficiently derived without any a priori selection of regions of interest. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc. PMID:22328419
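The PDC measure referred to above has a compact closed form: given fitted MAR lag matrices A_r, the frequency-domain coefficient matrix is Ā(f) = I − Σ_r A_r e^(−i2πfr), and the PDC from channel j to channel i is |Ā_ij(f)| normalized by the Euclidean norm of column j. A minimal numpy sketch (the function name and interface are ours, not the paper's):

```python
import numpy as np

def pdc(ar_coeffs, freqs):
    """Partial directed coherence (PDC) from MAR coefficients.

    ar_coeffs: (p, n, n) array, lag matrices A_1 .. A_p
    freqs: iterable of normalized frequencies in [0, 0.5]
    Returns a (len(freqs), n, n) array; entry [f, i, j] is the
    PDC from channel j to channel i at frequency f.
    """
    p, n, _ = ar_coeffs.shape
    freqs = list(freqs)
    out = np.empty((len(freqs), n, n))
    for k, f in enumerate(freqs):
        # A_bar(f) = I - sum_r A_r * exp(-i * 2*pi * f * r)
        a_bar = np.eye(n, dtype=complex)
        for r in range(p):
            a_bar -= ar_coeffs[r] * np.exp(-2j * np.pi * f * (r + 1))
        # normalize each column j by its Euclidean norm
        out[k] = np.abs(a_bar) / np.linalg.norm(a_bar, axis=0)
    return out
```

By construction each column of the PDC matrix has unit Euclidean norm, so the measure quantifies the relative outflow from each channel.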
NASA Astrophysics Data System (ADS)
Beckwith, A. W.
2008-01-01
Sean Carroll's pre-inflation state of low temperature and low entropy provides a bridge between two models with different predictions. The Wheeler-DeWitt equation provides thermal input into today's universe for graviton production. Also, brane-world models by Sundrum allow low-entropy conditions, as given by Carroll & Chen (2005). Moreover, this paper answers the question of how to go from a brane-world model to the 10³² Kelvin conditions stated by Weinberg in 1972 as necessary for the initiation of quantum gravity processes. This is a way of getting around the fact that the CMBR is cut off at a redshift of z = 1100. This paper discusses the difference in the upper bound of the cosmological constant between the large upper bound predicated on a temperature-dependent vacuum energy predicted by Park (2002) and the much lower bound predicted by Barvinsky (2006), with the difference in vacuum energy values contributing to relic graviton production. This paper claims that this large thermal influx, with a high initial cosmological constant and a large region of space for relic gravitons interacting with space-time up to the z = 1100 CMBR observational limit, are interlinked processes delineated in the Lloyd (2002) analogy of the universe as a quantum computing system. Finally, the paper claims that linking a shrinking prior universe via a wormhole solution of a pseudo-time-dependent Wheeler-DeWitt equation permits graviton generation as thermal input from the prior universe, transferred instantaneously to relic inflationary conditions today. The existence of a wormhole is presented as a necessary condition for relic gravitons; proving its sufficiency is a future project.
Estimates of the maximum time required to originate life
NASA Technical Reports Server (NTRS)
Oberbeck, Verne R.; Fogleman, Guy
1989-01-01
Fossils of the oldest microorganisms exist in 3.5-billion-year-old rocks, and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planets, and interfered with the origination of life, life must have originated in the time interval between these impacts, which increased with geologic time. Therefore, the maximum time required for the origination of life is the time between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on Earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 × 10³⁴ and 2 × 10³⁵ were able to vaporize the oceans, then, using the most probable impact flux, the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate of the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that life may have originated more than once.
Wang, Xiaojing; Chen, Ming-Hui; Yan, Jun
2013-07-01
Cox models with time-varying coefficients offer great flexibility in capturing the temporal dynamics of covariate effects on event times, which could be hidden from a Cox proportional hazards model. Methodology development for varying coefficient Cox models, however, has been largely limited to right censored data; only limited work on interval censored data has been done. In most existing methods for varying coefficient models, analysts need to specify which covariate coefficients are time-varying and which are not at the time of fitting. We propose a dynamic Cox regression model for interval censored data in a Bayesian framework, where the coefficient curves are piecewise constant but the number of pieces and the jump points are covariate specific and estimated from the data. The model automatically determines the extent to which the temporal dynamics is needed for each covariate, resulting in smoother and more stable curve estimates. The posterior computation is carried out via an efficient reversible jump Markov chain Monte Carlo algorithm. Inference for each coefficient is based on an average of models with different numbers of pieces and jump points. A simulation study with three covariates, each with a coefficient of different degree in temporal dynamics, confirmed that the dynamic model is preferred to the existing time-varying model in terms of model comparison criteria based on the conditional predictive ordinate. When applied to dental health data from children aged between 7 and 12 years, the dynamic model reveals that the relative risk of emergence of permanent tooth 24 between children with and without an infected primary predecessor is highest at around age 7.5, and that it gradually reduces to one after age 11. These findings were not seen in the existing studies with Cox proportional hazards models.
Wang, Chun-Yong; Chan, W.W.; Mooney, W.D.
2003-01-01
Using P and S arrival times from 4625 local and regional earthquakes recorded at 174 seismic stations and associated geophysical investigations, this paper presents a three-dimensional crustal and upper mantle velocity structure of southwestern China (21°-34°N, 97°-105°E). Southwestern China lies in the transition zone between the uplifted Tibetan plateau to the west and the Yangtze continental platform to the east. In the upper crust a positive velocity anomaly exists in the Sichuan Basin, whereas a large-scale negative velocity anomaly exists in the western Sichuan Plateau, consistent with the upper crustal structure under the southern Tibetan plateau. The boundary between these two anomaly zones is the Longmen Shan Fault. The negative velocity anomalies at 50-km depth in the Tengchong volcanic area and the Panxi tectonic zone appear to be associated with temperature and composition variations in the upper mantle. The Red River Fault is the boundary between the positive and negative velocity anomalies at 50-km depth. The overall features of the crustal and the upper mantle structures in southwestern China are a low average velocity, large crustal thickness variations, the existence of a high-conductivity layer in the crust or/and upper mantle, and a high heat flow value. All these features are closely related to the collision between the Indian and the Asian plates.
ERIC Educational Resources Information Center
Nolin, Anna P.
2014-01-01
This study explored the role of professional learning communities for district leadership implementing large-scale technology initiatives such as 1:1 implementations (one computing device for every student). The existing literature regarding technology leadership is limited, as is literature on how districts use existing collaborative structures…
An Iterative Time Windowed Signature Algorithm for Time Dependent Transcription Module Discovery
Meng, Jia; Gao, Shou-Jiang; Huang, Yufei
2010-01-01
An algorithm for the discovery of time-varying modules using genome-wide expression data is presented here. When applied to large-scale time series data, our method is designed to discover not only the transcription modules but also their timing information, which is rarely annotated by existing approaches. Rather than assuming the commonly defined time-constant transcription modules, a module is depicted as a set of genes that are co-regulated during a specific period of time, i.e., a time-dependent transcription module (TDTM). A rigorous mathematical definition of TDTM is provided, which serves as an objective function for retrieving modules. Based on the definition, an effective signature algorithm is proposed that iteratively searches for transcription modules in the time series data. The proposed method was tested on simulated systems and applied to human time series microarray data collected during Kaposi's sarcoma-associated herpesvirus (KSHV) infection. The results were verified by Expression Analysis Systematic Explorer. PMID:21552463
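The abstract does not reproduce the paper's objective function, but the flavor of an iterative, time-windowed signature search can be sketched as follows. The scoring rule and thresholds here are our own illustrative assumptions, not the published TDTM definition:

```python
import numpy as np

def time_windowed_signature(expr, seed_genes, window, thresh=1.5, max_iter=50):
    """Iteratively refine one time-dependent transcription module (TDTM).

    expr: (n_genes, n_times) expression matrix
    seed_genes: indices of an initial candidate gene set
    window: (start, end) time indices for the module's active period
    thresh: genes scoring above thresh * std(scores) are kept (an assumption)
    """
    genes = set(seed_genes)
    start, end = window
    for _ in range(max_iter):
        # module profile: mean expression of the current genes in the window
        profile = expr[list(genes), start:end].mean(axis=0)
        # score every gene by its projection onto the module profile
        scores = expr[:, start:end] @ profile / np.linalg.norm(profile)
        new_genes = set(np.flatnonzero(scores > thresh * scores.std()))
        if new_genes == genes or not new_genes:
            break
        genes = new_genes
    return sorted(genes)
```

Starting from a few seed genes, the loop alternates between computing the window-restricted module profile and reselecting genes against it, stopping at a fixed point.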
NASA Astrophysics Data System (ADS)
Nazari, B.; Seo, D.; Cannon, A.
2013-12-01
With many diverse features such as channels, pipes, culverts, buildings, etc., hydraulic modeling in urban areas for inundation mapping poses significant challenges. Identifying the practical extent of the details to be modeled in order to obtain sufficiently accurate results in a timely manner for effective emergency management is one of them. In this study we assess the tradeoffs between model complexity and information content for decision making when applying high-resolution hydrologic and hydraulic models for real-time flash flood forecasting and inundation mapping in urban areas. In a large urban area such as the Dallas-Fort Worth Metroplex (DFW), there exists very large spatial variability in imperviousness depending on the area of interest. As such, one may expect significant sensitivity of hydraulic model results to the resolution and accuracy of hydrologic models. In this work, we present the initial results from coupling of high-resolution hydrologic and hydraulic models for two 'hot spots' within the City of Fort Worth for real-time inundation mapping.
Calibration and testing of selected portable flowmeters for use on large irrigation systems
Luckey, Richard R.; Heimes, Frederick J.; Gaggiani, Neville G.
1980-01-01
Existing methods for measuring discharge of irrigation systems in the High Plains region are not suitable to provide the pumpage data required by the High Plains Regional Aquifer System Analysis. Three portable flowmeters that might be suitable for obtaining fast and accurate discharge measurements on large irrigation systems were tested during 1979 under both laboratory and field conditions: a propeller-type gated-pipe meter, a Doppler meter, and a transient-time meter. The gated-pipe meter was found to be difficult to use and sensitive to particulate matter in the fluid. The Doppler meter, while easy to use, would not function suitably on steel pipe 6 inches or larger in diameter, or on aluminum pipe larger than 8 inches in diameter. The transient-time meter was more difficult to use than the other two meters; however, this instrument provided a high degree of accuracy and reliability under a variety of conditions. Of the three meters tested, only the transient-time meter was found to be suitable for providing reliable discharge measurements on the variety of irrigation systems used in the High Plains region.
Short apsidal period of three eccentric eclipsing binaries discovered in the Large Magellanic Cloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Kyeongsoo; Lee, Chung-Uk; Kim, Seung-Lee
2014-06-01
We present new elements of apsidal motion in three eccentric eclipsing binaries located in the Large Magellanic Cloud. The apsidal motions of the systems were analyzed using both light curves and eclipse timings. The OGLE-III data obtained during the long period of 8 yr (2002-2009) allowed us to determine the apsidal motion period from their analyses. The existence of third light in all selected systems was investigated by light curve analysis. The O – C diagrams of EROS 1018, EROS 1041, and EROS 1054 were analyzed using the 30, 44, and 26 new times of minimum light, respectively, determined from full light curves constructed from EROS, MACHO, OGLE-II, OGLE-III, and our own observations. This enabled a detailed study of the apsidal motion in these systems for the first time. All of the systems have a significant apsidal motion below 100 yr. In particular, EROS 1018 shows a very fast apsidal period of 19.9 ± 2.2 yr in a detached system.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
Young organic matter as a source of carbon dioxide outgassing from Amazonian rivers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayorga, E; Aufdenkampe, A K; Masiello, C A
2005-06-23
Rivers are generally supersaturated with respect to carbon dioxide, resulting in large gas evasion fluxes that can be a significant component of regional net carbon budgets. Amazonian rivers were recently shown to outgas more than ten times the amount of carbon exported to the ocean in the form of total organic carbon or dissolved inorganic carbon. High carbon dioxide concentrations in rivers originate largely from in situ respiration of organic carbon, but little agreement exists about the sources or turnover times of this carbon. Here we present results of an extensive survey of the carbon isotope composition (¹³C and ¹⁴C) of dissolved inorganic carbon and three size-fractions of organic carbon across the Amazonian river system. We find that respiration of contemporary organic matter (less than 5 years old) originating on land and near rivers is the dominant source of excess carbon dioxide that drives outgassing in mid-size to large rivers, although we find that bulk organic carbon fractions transported by these rivers range from tens to thousands of years in age. We therefore suggest that a small, rapidly cycling pool of organic carbon is responsible for the large carbon fluxes from land to water to atmosphere in the humid tropics.
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability in many materials, particularly brittle and quasibrittle materials; the coefficient of variation can reach 100% or more. Its distribution shape is highly skewed toward zero lifetime, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
Menstrual Cycle Maintenance and Quality of Life After Breast Cancer Treatment: A Prospective Study.
1997-10-01
quality of life of these young patients may be compromised by premature menopause with symptoms such as hot flashes, sleep disturbances, decreased libido, and vaginal dryness. Very little is known about the incidence, onset, time course, and symptomatology of premature menopause induced by breast cancer therapy, and virtually nothing is known about its impact on the young survivor's quality of life. No prospective study heretofore exists. A comprehensive analysis of a large prospective study cohort as proposed herein will
2007-03-01
westerly surface winds, the existence of a dry-adiabatic lapse rate, and often the appearance of wave cloud features (Oard, 1993). For a long time...indicate that a large-scale mountain wave feature was present across almost the entire western United States. The GFS indicates this was a standing 31... wave and not a propagating feature since it persisted with very little movement from about 0600 UTC 6 Mar until about 0000 UTC 7 Mar. A cross
Design and Development of a Prototype Organizational Effectiveness Information System
1984-11-01
information from a large number of people. The existing survey support process for the GOQ is not satisfactory. Most OESOs elect not to use it, because...reporting process uses screen queries and menus to simplify data entry, it is estimated that only 4-6 hours of data entry time would be required for ...description for the file named EVEDIR. The Resource System allows users of the Event Directory to select from the following processing options. o Add a new
Fault-tolerant clock synchronization in distributed systems
NASA Technical Reports Server (NTRS)
Ramanathan, Parameswaran; Shin, Kang G.; Butler, Ricky W.
1990-01-01
Existing fault-tolerant clock synchronization algorithms are compared and contrasted. These include the following: software synchronization algorithms, such as convergence-averaging, convergence-nonaveraging, and consistency algorithms, as well as probabilistic synchronization; hardware synchronization algorithms; and hybrid synchronization. The worst-case clock skews guaranteed by representative algorithms are compared, along with other important aspects such as time, message, and cost overhead imposed by the algorithms. More recent developments such as hardware-assisted software synchronization and algorithms for synchronizing large, partially connected distributed systems are especially emphasized.
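The convergence-averaging family surveyed above shares one core step: each node collects clock readings, discards the most extreme values that could come from faulty clocks, and averages the rest, so that up to f arbitrarily faulty clocks cannot drag the correction outside the range of the good ones. A sketch of that single correction step (a Welch-Lynch style trimmed mean; the exact trimming and rounds vary by algorithm):

```python
def fault_tolerant_average(readings, f):
    """One convergence-averaging step: drop the f lowest and f highest
    clock readings (suspected faulty) and average the survivors.

    Requires len(readings) > 2*f so at least one reading survives.
    """
    if len(readings) <= 2 * f:
        raise ValueError("need more than 2*f readings")
    ordered = sorted(readings)
    kept = ordered[f:len(ordered) - f] if f > 0 else ordered
    return sum(kept) / len(kept)
```

Because a faulty clock can contribute at most one of the f discarded extremes on each side, the returned value always lies within the interval spanned by the non-faulty readings.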
2009-12-01
tall coconut groves tower over cassava plants. Livestock is minimal and used mainly to support individual households. Dense forests still exist at...may also be “borrowed” for cultivating cassava, but in this case it is not customary to pay for the right. To cultivate coconuts, it is not...shallow waters. Large woven nets are used by full-time fishermen, while traditional boats may now sport gas motors. While discouraged and even
QU at TREC-2015: Building Real-Time Systems for Tweet Filtering and Question Answering
2015-11-20
from Yahoo! Answers. We adopted a very simple approach that searched an archived Yahoo! Answers QA dataset for similar questions to the asked ones and...users to post and answer questions. Yahoo! Answers is by far one of the largest sQA platforms. Questions and answers on such platforms share some...multiple domains [5]. However, the existence of large social question answering websites, such as Yahoo! Answers specifically, makes the development of
A therapy inactivating the tumor angiogenic factors.
Morales-Rodrigo, Cristian
2013-02-01
This paper is devoted to a nonlinear system of partial differential equations modeling the effect of an anti-angiogenic therapy based on an agent that binds to the tumor angiogenic factors. The main feature of the model under consideration is a nonlinear flux production of tumor angiogenic factors at the boundary of the tumor. Global existence is proved for the nonlinear system, as is the effect on the large-time behavior of the system of high doses of the therapeutic agent.
Colloidal Assemblies Effect on Chemical Reactions
1988-12-01
phenoxyacetic acid (2,4,5-T) and 2,4,5-trichlorophenol on TiO₂ led to complete mineralization into CO₂ and HCl. In the degradation of 2,4,5-T several...for the reasons already given. Of course this effect is much more evident in the acidic medium than in the alkaline medium where a very large amount...nearly similar times (see Fig.42). The results of Table 4 also indicate that an HClO₄ effect exists, and at higher concentrations of this acid the
Uncertainties in climate data sets
NASA Technical Reports Server (NTRS)
Mcguirk, James P.
1992-01-01
Climate diagnostics are constructed from either analyzed fields or from observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections there exist errors and uncertainties which are generally ignored because of the consistency of usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational data sets (OLR, TOVS, and SMMR). It is suggested that these errors can be large, systematic, and not negligible in climate analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cary, J.R.
During the most recent funding period the authors obtained results important for helical confinement systems and in the use of modern computational methods for modeling of fusion systems. The most recent results include showing that the set of magnetic field functions that are omnigenous (i.e., the bounce-average drift lies within the flux surface) and, therefore, have good transport properties, is much larger than the set of quasihelical systems. This is important as quasihelical systems exist only for large aspect ratio. The authors have also carried out extensive earlier work on developing integrable three-dimensional magnetic fields, on trajectories in three-dimensional configurations, and on the existence of three-dimensional MHD equilibria close to vacuum integrable fields. At the same time they have been investigating the use of object oriented methods for scientific computing.
Stability analysis of the Euler discretization for SIR epidemic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanto, Agus
2014-06-19
In this paper we consider a discrete SIR epidemic model obtained by the Euler method. For that discrete model, the existence of a disease-free equilibrium and an endemic equilibrium is established. Sufficient conditions for the local asymptotic stability of both the disease-free equilibrium and the endemic equilibrium are also derived. It is found that the local asymptotic stability of the existing equilibria is achieved only for a small time step size h. If h is further increased and passes the critical value, then both equilibria will lose their stability. Our numerical simulations show that complex dynamical behavior such as bifurcation or chaos phenomena will appear for relatively large h. Both analytical and numerical results show that the discrete SIR model has a richer dynamical behavior than its continuous counterpart.
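The step-size sensitivity described above is easy to reproduce. A minimal sketch of the forward-Euler discretization of the classic SIR system (parameter values are illustrative choices of ours, not the paper's):

```python
def euler_sir(beta, gamma, h, steps, s0=0.9, i0=0.1, r0=0.0):
    """Forward-Euler discretization of the classic SIR model:
    S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I.
    Returns the trajectory as a list of (S, I, R) tuples.
    """
    s, i, r = s0, i0, r0
    traj = [(s, i, r)]
    for _ in range(steps):
        # all right-hand sides use the current (old) state
        s, i, r = (s - h * beta * s * i,
                   i + h * (beta * s * i - gamma * i),
                   r + h * gamma * i)
        traj.append((s, i, r))
    return traj
```

The scheme conserves S + I + R exactly for any h, since the three increments sum to zero; yet with beta = 0.5 and gamma = 0.2 a small step h = 0.1 gives a well-behaved epidemic curve, while h = 12 drives the state out of [0, 1] within a few steps, illustrating the loss of stability for large h.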
Estimating clinical chemistry reference values based on an existing data set of unselected animals.
Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe
2008-11-01
In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, such a high number of samples and laboratory analysis is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and is used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method used was based on the detection and removal of outliers to obtain a large sample of animals likely to be healthy from the existing data set. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. This method may also be useful for the determination of reference intervals for different species, ages and gender.
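The a-posteriori idea above (trim outliers from an unselected data set until a presumably healthy core remains, then read off the central 95%) can be sketched as follows. Tukey's fences stand in for the paper's actual outlier test, and all thresholds are illustrative assumptions:

```python
import random
import statistics

def reference_interval(values, k=1.5, max_iter=20):
    """A-posteriori reference interval from an unselected data set.

    Outliers are removed iteratively with Tukey's fences (our stand-in
    for the paper's outlier-detection step), and the interval is the
    central 95% of the values that remain.
    """
    data = sorted(values)
    for _ in range(max_iter):
        q1, _, q3 = statistics.quantiles(data, n=4)
        iqr = q3 - q1
        kept = [v for v in data if q1 - k * iqr <= v <= q3 + k * iqr]
        if len(kept) == len(data):
            break  # no more outliers: fixed point reached
        data = kept
    cuts = statistics.quantiles(data, n=40)  # 2.5%, 5%, ..., 97.5%
    return cuts[0], cuts[-1]
```

Because the fences are recomputed on the trimmed sample each pass, a few grossly abnormal values (e.g., samples from sick animals) cannot inflate the final interval.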
Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i
Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.
2013-01-01
Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible mechanism for generating rare moderate-magnitude VTs at Kīlauea: reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.
Methodology for Augmenting Existing Paths with Additional Parallel Transects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, John E.
2013-09-30
Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths. These gap areas represent areas on the site that are more than a specified distance from a primary path. These added parallel paths needed to optionally be connected together into a single path, the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications.
Physics reach of MoEDAL at LHC: magnetic monopoles, supersymmetry and beyond
NASA Astrophysics Data System (ADS)
Mavromatos, Nick E.; Mitsou, Vasiliki A.
2017-12-01
MoEDAL is a pioneering experiment designed to search for highly ionising messengers of new physics such as magnetic monopoles or massive (pseudo-)stable charged particles, that are predicted to exist in a plethora of models beyond the Standard Model. Its ground-breaking physics program defines a number of scenarios that yield potentially revolutionary insights into such foundational questions as, are there extra dimensions or new symmetries, what is the mechanism for the generation of mass, does magnetic charge exist, what is the nature of dark matter, and, how did the big-bang develop at the earliest times. MoEDAL's purpose is to meet such far-reaching challenges at the frontier of the field. The physics reach of the existing MoEDAL detector is discussed, giving emphasis on searches for magnetic monopoles, supersymmetric (semi)stable partners, doubly charged Higgs bosons, and exotic structures such as black-hole remnants in models with large extra spatial dimensions and D-matter in some brane theories.
Correlates of mobile screen media use among children aged 0-8: protocol for a systematic review.
Paudel, Susan; Leavy, Justine; Jancey, Jonine
2016-06-03
Childhood is a crucial period for shaping healthy behaviours; however, it currently appears to be dominated by screen time. A large proportion of young children do not adhere to the screen time recommendations, with the use of mobile screen devices becoming more common than fixed screens. Existing systematic reviews on correlates of screen time have focused largely on traditional fixed screen devices such as television. Reviews specifically focused on mobile screen media are almost non-existent. This paper describes the protocol for conducting a systematic review of papers published between 2009 and 2015 to identify the correlates of mobile screen media use among children aged 0-8 years. A systematic literature search of electronic databases will be carried out using different combinations of keywords for papers published in English between January 2009 and December 2015. Additionally, a manual search of reference lists and citations will also be conducted. Papers that have examined correlates of screen time among children aged 0-8 will be included in the review. Studies must include at least one type of mobile screen media (mobile phones, electronic tablets or handheld computers) to be eligible for inclusion. This study will identify correlates of mobile screen-viewing among children in five categories: (i) child biological and demographic correlates, (ii) behavioural correlates, (iii) family biological and demographic correlates, (iv) family structure-related correlates and (v) socio-cultural and environmental correlates. The PRISMA statement will be used to ensure transparent and scientific reporting of the results. This study will identify the correlates associated with increased mobile screen media use among young children through a systematic review of published peer-reviewed papers, contributing to addressing the knowledge gap in this area.
The results will provide an evidence base to better understand correlates of mobile screen media use and potentially inform the development of recommendations to reduce screen time among those aged 0-8 years. PROSPERO CRD42015028028.
Indurkhya, Sagar; Beal, Jacob
2010-01-01
ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires only linear storage, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048
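For context on what LOLCAT Method optimizes, here is a minimal direct-method Gillespie sketch in Python. The reaction system and rate constant are illustrative only; none of the LOLCAT data structures (factored propensities, bipartite dependency graph) are shown.

```python
import math
import random

def gillespie(x, reactions, t_max, seed=1):
    """Direct-method Gillespie SSA.
    x: dict of species counts; reactions: list of (rate, reactants, products)."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, dict(x))]
    while t < t_max:
        # Propensity of each reaction: rate * product of reactant counts
        props = []
        for rate, reactants, _ in reactions:
            a = rate
            for s in reactants:
                a *= x[s]
            props.append(a)
        a0 = sum(props)
        if a0 == 0:
            break                                  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        r = rng.random() * a0                      # pick which reaction fires
        k = 0
        while r > props[k]:
            r -= props[k]
            k += 1
        _, reactants, products = reactions[k]
        for s in reactants:
            x[s] -= 1
        for s in products:
            x[s] += 1
        trajectory.append((t, dict(x)))
    return trajectory

# Simple isomerization A -> B with 100 initial molecules of A
traj = gillespie({"A": 100, "B": 0}, [(0.5, ["A"], ["B"])], t_max=50.0)
print(traj[-1][1]["A"] + traj[-1][1]["B"])  # mass conserved: prints 100
```

A naive implementation like this recomputes every propensity after each event; the paper's point is that factoring propensities and storing reaction-species dependencies lets only the affected propensities be updated.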
Modifications to the Conduit Flow Process Mode 2 for MODFLOW-2005
Reimann, T.; Birk, S.; Rehrl, C.; Shoemaker, W.B.
2012-01-01
As a result of rock dissolution processes, karst aquifers exhibit highly conductive features such as caves and conduits. Within these structures, groundwater flow can become turbulent and therefore be described by nonlinear gradient functions. Some numerical groundwater flow models explicitly account for pipe hydraulics by coupling the continuum model with a pipe network that represents the conduit system. In contrast, the Conduit Flow Process Mode 2 (CFPM2) for MODFLOW-2005 approximates turbulent flow by reducing the hydraulic conductivity within the existing linear head gradient of the MODFLOW continuum model. This approach reduces the practical as well as numerical efforts for simulating turbulence. The original formulation was for large pore aquifers where the onset of turbulence is at low Reynolds numbers (1 to 100) and not for conduits or pipes. In addition, the existing code requires multiple time steps for convergence due to iterative adjustment of the hydraulic conductivity. Modifications to the existing CFPM2 were made by implementing a generalized power function with a user-defined exponent. This allows for matching turbulence in porous media or pipes and eliminates the time steps required for iterative adjustment of hydraulic conductivity. The modified CFPM2 successfully replicated simple benchmark test problems. © 2011 The Author(s). Ground Water © 2011, National Ground Water Association.
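The idea of reducing hydraulic conductivity beyond a turbulence threshold via a power function with a user-defined exponent can be sketched as follows. The functional form, the critical Reynolds number, and the exponent value here are illustrative assumptions, not the actual CFPM2 equations.

```python
def effective_conductivity(K_laminar, Re, Re_crit, m):
    """Illustrative reduction of hydraulic conductivity for turbulent flow.
    Below the critical Reynolds number the flow is laminar and K is unchanged;
    above it, K is scaled down by a power law with user-defined exponent m.
    (Hypothetical form for illustration -- not the actual CFPM2 formulation.)"""
    if Re <= Re_crit:
        return K_laminar
    return K_laminar * (Re_crit / Re) ** m

print(effective_conductivity(1e-3, Re=50, Re_crit=100, m=0.5))   # laminar: 0.001
print(effective_conductivity(1e-3, Re=400, Re_crit=100, m=0.5))  # reduced: 0.0005
```

Because the reduction is an explicit function of the flow state, it can be evaluated directly within a head-gradient solve rather than converged by iterative adjustment over multiple time steps, which is the efficiency gain the abstract describes.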
NASA Astrophysics Data System (ADS)
Davis, Nikolaos; Rybicki, Andrzej; Szczurek, Antoni
2017-12-01
We review our studies of spectator-induced electromagnetic (EM) effects on charged pion emission in ultrarelativistic heavy ion collisions. These effects are found to consist of the electromagnetic charge splitting of pion directed flow as well as very large distortions in spectra and ratios of produced charged particles. As it emerges from our analysis, they offer sensitivity to the actual distance, dE, between the pion formation zone at freeze-out and the spectator matter. As a result, this offers a new possibility of studying the space-time evolution of dense and hot matter created in the course of the collision. Having established that dE traces the longitudinal evolution of the system and therefore rapidly decreases as a function of pion rapidity, we investigate the latter finding in view of pion feed-over from intermediate resonance production. As a result, we obtain a first estimate of the pion decoupling time from EM effects, which we compare to existing HBT data. We conclude that spectator-induced EM interactions can serve as a new tool for studying the space-time characteristics and longitudinal evolution of the system. We discuss the future perspectives for this activity on the basis of existing and future data from the NA61/SHINE experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-11-01
The National Power Corporation (NAPCOR) of the Philippines has requested the Trade and Development Program (TDP) to fund a study to evaluate the technical and economic feasibility of converting its existing oil- and coal-fired power plants to natural gas. The decision to undertake the study resulted from preliminary information on a large gas find off the coast of Palawan Island. However, a second exploration well has come up dry, so the conversion of the existing power plants to natural gas now seems very questionable. Even if the proven gas reserves prove to be commercially viable, the gas will not be available until 1998 or later. By that time, several of NAPCOR's plants would have aged further; the political and economic situation in the Philippines could have altered significantly, possibly improved; and private power companies might be able to use the gas more efficiently by building state-of-the-art combined-cycle power plants, which would make more economic sense than converting existing old boilers to natural gas. In addition, most of the existing power equipment was manufactured by Japanese and/or European firms, so it makes sense for NAPCOR to solicit services from these firms if it decides to go ahead with the implementation of the power plant conversion project. The potential for any follow-on work for U.S. businesses in the thermal conversion project is minimal to zero. Therefore, at this time, TDP funding for the feasibility study would be premature and is not recommended.
Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa
2002-08-01
Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligrams per cubic meter (mg/m³) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure level of the cohorts ranged between 0.04 and 0.59 mg/m³ respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Fauza, G.; Prasetyo, H.; Amanto, B. S.
2018-05-01
Studies on integrated production-inventory models for deteriorating items have been done extensively. Most of these studies define deterioration as physical depletion of some inventories over time. This definition may not represent the deterioration characteristics of food products, whose quality decreases over time while the quantity remains the same. Further, in the existing models, the raw material is replenished several times (or at least once) within one production cycle. In food industries, however, a food company will sometimes benefit more from ordering raw materials in a large quantity, for several reasons (e.g., seasonal raw materials, discounted prices, etc.). Considering this fact, this research is therefore aimed at developing a more representative inventory model by (i) considering the quality losses in food and (ii) adopting a general raw material procurement policy. A mathematical model is established to represent the proposed policy, with the total profit of the system as the objective function. To evaluate the performance of the model, a numerical test was conducted. The numerical test indicates that the developed model has better performance, i.e., the total profit is 2.3% higher compared to the existing model.
How Travel Demand Affects Detection of Non-Recurrent Traffic Congestion on Urban Road Networks
NASA Astrophysics Data System (ADS)
Anbaroglu, B.; Heydecker, B.; Cheng, T.
2016-06-01
Occurrence of non-recurrent traffic congestion hinders the economic activity of a city, as travellers could miss appointments or be late for work or important meetings. Similarly, for shippers, unexpected delays may disrupt just-in-time delivery and manufacturing processes, which could cause them to lose payment. Consequently, research on non-recurrent congestion detection on urban road networks has recently gained attention. By analysing large amounts of traffic data collected on a daily basis, traffic operation centres can improve their methods to detect non-recurrent congestion rapidly and then revise their existing plans to mitigate its effects. Space-time clusters of high link journey time estimates correspond to non-recurrent congestion events. Existing research, however, has not considered the effect of travel demand on the effectiveness of non-recurrent congestion detection methods. Therefore, this paper investigates how travel demand affects detection of non-recurrent traffic congestion on urban road networks. Travel demand has been classified into three categories: low, normal and high. The experiments are carried out on London's urban road network, and the results demonstrate the necessity to adjust the relative importance of the component evaluation criteria depending on the travel demand level.
Uncovering hidden nodes in complex networks in the presence of noise
Su, Ri-Qi; Lai, Ying-Cheng; Wang, Xiao; Do, Younghae
2014-01-01
Ascertaining the existence of hidden objects in a complex system, objects that cannot be observed from the external world, not only is curiosity-driven but also has significant practical applications. Generally, uncovering a hidden node in a complex network requires successful identification of its neighboring nodes, but a challenge is to differentiate its effects from those of noise. We develop a completely data-driven, compressive-sensing based method to address this issue by utilizing complex weighted networks with continuous-time oscillatory or discrete-time evolutionary-game dynamics. For any node, compressive sensing enables accurate reconstruction of the dynamical equations and coupling functions, provided that time series from this node and all its neighbors are available. For a neighboring node of the hidden node, this condition cannot be met, resulting in abnormally large prediction errors that, counterintuitively, can be used to infer the existence of the hidden node. Based on the principle of differential signal, we demonstrate that, when strong noise is present, insofar as at least two neighboring nodes of the hidden node are subject to weak background noise only, unequivocal identification of the hidden node can be achieved. PMID:24487720
Large-scale transport across narrow gaps in rod bundles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guellouz, M.S.; Tavoularis, S.
1995-09-01
Flow visualization and hot-wire anemometry were used to investigate the velocity field in a rectangular channel containing a single cylindrical rod, which could be traversed on the centreplane to form gaps of different widths with the plane wall. The presence of large-scale, quasi-periodic structures in the vicinity of the gap has been demonstrated through flow visualization, spectral analysis and space-time correlation measurements. These structures are seen to exist even for relatively large gaps, at least up to W/D=1.350 (W is the sum of the rod diameter, D, and the gap width). The above measurements appear to be compatible with the field of a street of three-dimensional, counter-rotating vortices, whose detailed structure, however, remains to be determined. The convection speed and the streamwise spacing of these vortices have been determined as functions of the gap size.
SAR correlation technique - An algorithm for processing data with large range walk
NASA Technical Reports Server (NTRS)
Jin, M.; Wu, C.
1983-01-01
This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with an extremely large range migration effect that cannot be accommodated by the existing frequency-domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This processing algorithm performs azimuth correlation in two steps. The first step is a secondary range compression to handle the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency-domain range migration correction approach for the azimuth compression. The secondary range compression can be processed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.
Tether Impact Rate Simulation and Prediction with Orbiting Satellites
NASA Technical Reports Server (NTRS)
Harrison, Jim
2002-01-01
Space elevators and other large space structures have been studied and proposed as worthwhile by futuristic space planners for at least a couple of decades. In June 1999 the Marshall Space Flight Center sponsored a Space Elevator workshop in Huntsville, Alabama, to bring together technical experts and advanced planners to discuss the current status and to define the magnitude of the technical and programmatic problems connected with the development of these massive space systems. One obvious problem that was identified, although not for the first time, was the collision probability between space elevators and orbital debris. Debate and uncertainty presently exist about the extent of the threat to these large structures, one of which in this study is as large as a space elevator. We have tentatively concluded that orbital debris, although a major concern, is not sufficient justification to curtail the study and development of futuristic new-millennium concepts like the space elevator.
BGFit: management and automated fitting of biological growth curves.
Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana
2013-09-25
Existing tools to model cell growth curves do not offer a flexible, integrative approach to manage large datasets and automatically estimate parameters. Due to the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing users to efficiently manage the results in a structured and hierarchical way. The data managing system allows users to organize projects, experiments and measurement data, and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current ones. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing users to compare and evaluate different data modeling techniques. The application is described in the context of bacterial and tumor cell growth data fitting, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, being fully scalable to a high number of projects, data and model complexity.
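As an illustration of the kind of fitting BGFit automates, here is a minimal sketch that fits the lag parameter of a Zwietering-parameterized Gompertz model to synthetic data. A coarse grid search stands in for BGFit's actual estimation machinery, and all parameter values are invented for the example.

```python
import math

def gompertz(t, A, mu, lam):
    """Gompertz growth model, Zwietering parameterization:
    A = asymptote, mu = maximum growth rate, lam = lag time."""
    return A * math.exp(-math.exp(mu * math.e / A * (lam - t) + 1))

# Synthetic growth data generated from known parameters (A, mu, lam)
true = (2.0, 0.5, 3.0)
data = [(t, gompertz(t, *true)) for t in range(0, 20)]

# Coarse grid search over the lag parameter, with A and mu held at their
# known values -- a stand-in for the automated estimation BGFit performs.
def sse(lam):
    return sum((y - gompertz(t, 2.0, 0.5, lam)) ** 2 for t, y in data)

best = min((sse(l / 10), l / 10) for l in range(0, 100))
print(best[1])  # recovers lam = 3.0
```

A real fitter would optimize all three parameters jointly (e.g. by nonlinear least squares), but the structure, a model function plus a misfit minimized over parameters, is the same.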
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Jack, Robert L.; Lecomte, Vivien
2017-03-01
We analyze large deviations of the time-averaged activity in the one-dimensional Fredrickson-Andersen model, both numerically and analytically. The model exhibits a dynamical phase transition, which appears as a singularity in the large deviation function. We analyze the finite-size scaling of this phase transition numerically, by generalizing an existing cloning algorithm to include a multicanonical feedback control: this significantly improves the computational efficiency. Motivated by these numerical results, we formulate an effective theory for the model in the vicinity of the phase transition, which accounts quantitatively for the observed behavior. We discuss potential applications of the numerical method and the effective theory in a range of more general contexts.
Fluctuations, ghosts, and the cosmological constant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, T.; Holdom, B.
2004-12-15
For a large region of parameter space involving the cosmological constant and mass parameters, we discuss fluctuating spacetime solutions that are effectively Minkowskian on large time and distance scales. Rapid, small amplitude oscillations in the scale factor have a frequency determined by the size of a negative cosmological constant. A field with modes of negative energy is required. If it is gravity that induces a coupling between the ghostlike and normal fields, we find that this results in stochastic rather than unstable behavior. The negative energy modes may also permit the existence of Lorentz invariant fluctuating solutions of finite energy density. Finally we consider higher derivative gravity theories and find oscillating metric solutions in these theories without the addition of other fields.
Travelling wave effects in large space structures
NASA Technical Reports Server (NTRS)
Vonflotow, A.
1983-01-01
Several aspects of travelling waves in Large Space Structures (LSS) are discussed. The dynamic similarity among LSSs, electric power systems, microwave circuits and communications networks is noted. The existence of a time lag between actuation and response is illuminated with the aid of simple examples, and its prediction is demonstrated. To prevent echoes, communications lines have matched terminations; this idea is applied to the design of dampers for one-dimensional structures. Periodic structures act as mechanical band-pass filters. Implications of this behavior are examined on a simple example. It is noted that the implication is twofold: continuum models of periodic lattice structures may err considerably; on the other hand, it is possible to design favorable transmission (and resonance) characteristics into the structure.
Computational solutions to large-scale data management and analysis
Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.
2011-01-01
Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155
A fast image simulation algorithm for scanning transmission electron microscopy.
Ophus, Colin
2017-01-01
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f^4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
A fast image simulation algorithm for scanning transmission electron microscopy
Ophus, Colin
2017-05-10
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. Here, we present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f^4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
Energy dependence of SEP electron and proton onset times
NASA Astrophysics Data System (ADS)
Xie, H.; Mäkelä, P.; Gopalswamy, N.; St. Cyr, O. C.
2016-07-01
We study the large solar energetic particle (SEP) events that were detected by GOES in the >10 MeV energy channel during December 2006 to March 2014. We derive and compare solar particle release (SPR) times for the 0.25-10.4 MeV electrons and 10-100 MeV protons for the 28 SEP events. In the study, the electron SPR times are derived with the time-shifting analysis (TSA) and the proton SPR times are derived using both the TSA and the velocity dispersion analysis (VDA). Electron anisotropies are computed to evaluate the amount of scattering for the events under study. Our main results include (1) near-relativistic electrons and high-energy protons are released at the same time within 8 min for most (16 of 23) SEP events. (2) There exists a good correlation between electron and proton acceleration, peak intensity, and intensity time profiles. (3) The TSA SPR times for 90.5 MeV and 57.4 MeV protons have maximum errors of 6 min and 10 min compared to the proton VDA release times, respectively, while the maximum error for 15.4 MeV protons can reach 32 min. (4) For 7 low-intensity events of the 23, large delays occurred for 6.5 MeV electrons and 90.5 MeV protons relative to 0.5 MeV electrons. Whether these delays are due to the time needed for the evolving shock to strengthen or due to particle transport effects remains unresolved.
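The velocity dispersion analysis (VDA) used above rests on onset times being linear in inverse speed, t_onset = t_release + L/v: a line fit to onset time versus 1/v yields the release time (intercept) and path length (slope). A minimal least-squares sketch on synthetic events follows; the speeds, path length, and release time are invented for illustration.

```python
AU_KM = 1.496e8  # one astronomical unit in km

def vda_fit(onsets):
    """onsets: list of (speed_km_s, onset_time_s) pairs.
    Least-squares line fit of onset time against inverse speed.
    Returns (release_time_s, path_length_AU)."""
    xs = [1.0 / v for v, _ in onsets]
    ys = [t for _, t in onsets]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope / AU_KM

# Synthetic events: release at t = 0 s along a 1.2 AU path
events = [(v, 1.2 * AU_KM / v) for v in (5e4, 1e5, 2e5)]
t_rel, path = vda_fit(events)
print(round(t_rel), round(path, 3))  # recovers release time 0 and path 1.2 AU
```

In practice scattering makes real onset times deviate from this idealized linearity, which is why the paper cross-checks VDA against TSA and computes anisotropies.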
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture major features of large earthquake rupture processes, and provide information for more detailed rupture history analysis.
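The Bayesian sampling underlying the MHS uncertainty analysis can be illustrated with a generic random-walk Metropolis sampler on a toy one-dimensional posterior. This is a standard textbook sketch, not the authors' sampler or parameterization.

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=7):
    """Random-walk Metropolis sampler for a 1-D log-posterior.
    Stands in for the MCMC over sub-event parameters described in the
    abstract (illustrative only)."""
    rng = random.Random(seed)
    x, samples = x0, []
    lp = log_post(x)
    for _ in range(n):
        prop = x + rng.gauss(0, step)       # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp))
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: standard normal, started away from the mode
s = metropolis(lambda x: -0.5 * x * x, x0=3.0, step=1.0, n=5000)
print(round(sum(s) / len(s), 2))  # sample mean, should be near 0
```

In the MHS setting the state would be the vector of sub-event locations, timings and directivities, and the log-posterior would combine a waveform misfit with the priors; the acceptance logic is unchanged.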
Real-Time Fourier Synthesis of Ensembles with Timbral Interpolation
NASA Astrophysics Data System (ADS)
Haken, Lippold
1990-01-01
In Fourier synthesis, natural musical sounds are produced by summing time-varying sinusoids. Sounds are analyzed to find the amplitude and frequency characteristics for their sinusoids; interpolation between the characteristics of several sounds is used to produce intermediate timbres. An ensemble can be synthesized by summing all the sinusoids for several sounds, but in practice it is difficult to perform such computations in real time. To solve this problem on inexpensive hardware, it is useful to take advantage of the masking effects of the auditory system. By avoiding the computations for perceptually unimportant sinusoids, and by employing other computation reduction techniques, a large ensemble may be synthesized in real time on the Platypus signal processor. Unlike existing computation reduction techniques, the techniques described in this thesis do not sacrifice independent fine control over the amplitude and frequency characteristics of each sinusoid.
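A bare-bones additive (Fourier) synthesis sketch follows: summing sinusoids with time-varying amplitudes, as the thesis describes, without any of the masking-based computation reductions. The frequencies, envelopes, and sample rate are invented for the example.

```python
import math

def additive_synth(partials, duration, sr=8000):
    """Sum time-varying sinusoids into one signal.
    partials: list of (freq_hz, amp_fn) where amp_fn maps time in seconds
    to an amplitude; frequencies are held fixed here for simplicity."""
    n = int(duration * sr)
    return [sum(a(t / sr) * math.sin(2 * math.pi * f * t / sr)
                for f, a in partials)
            for t in range(n)]

# Two exponentially decaying partials standing in for an analyzed tone
tone = additive_synth([(440.0, lambda s: math.exp(-3 * s)),
                       (880.0, lambda s: 0.5 * math.exp(-5 * s))],
                      duration=0.5)
print(len(tone))  # 4000 samples
```

Timbral interpolation, in this framing, amounts to blending the amplitude and frequency envelopes of two analyzed sounds before summing; the cost of the naive sum over every partial is what motivates the perceptual pruning described in the abstract.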
Near shot-noise limited time-resolved circular dichroism pump-probe spectrometer
NASA Astrophysics Data System (ADS)
Stadnytskyi, Valentyn; Orf, Gregory S.; Blankenship, Robert E.; Savikhin, Sergei
2018-03-01
We describe an optical near shot-noise limited time-resolved circular dichroism (TRCD) pump-probe spectrometer capable of reliably measuring circular dichroism signals on the order of μdeg with nanosecond time resolution. Such sensitivity is achieved through a modification of existing TRCD designs and introduction of a new data processing protocol that eliminates approximations that have caused substantial nonlinearities in past measurements and allows the measurement of absorption and circular dichroism transients simultaneously with a single pump pulse. The exceptional signal-to-noise ratio of the described setup makes the TRCD technique applicable to a large range of non-biological and biological systems. The spectrometer was used to record, for the first time, weak TRCD kinetics associated with the triplet state energy transfer in the photosynthetic Fenna-Matthews-Olson antenna pigment-protein complex.
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
Statistical time-dependent model for the interstellar gas
NASA Technical Reports Server (NTRS)
Gerola, H.; Kafatos, M.; Mccray, R.
1974-01-01
We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.
Prethermal time crystals in a one-dimensional periodically driven Floquet system
NASA Astrophysics Data System (ADS)
Zeng, Tian-Sheng; Sheng, D. N.
2017-09-01
Motivated by experimental observations of time-symmetry breaking behavior in a periodically driven (Floquet) system, we study a one-dimensional spin model to explore the stability of such Floquet discrete time crystals (DTCs) under the interplay between interaction and the microwave driving. For intermediate interactions and high drivings, from the time evolution of both stroboscopic spin polarization and mutual information between two ends, we show that Floquet DTCs can exist in a prethermal time regime without the tuning of strong disorder. For much weaker interactions the system is in a symmetry-unbroken phase, while for strong interactions it gives way to a thermal phase. Through analyzing the entanglement dynamics, we show that large driving fields protect the prethermal DTCs from many-body localization and thermalization. Our results suggest that by increasing the spin interaction, one can drive the experimental system into the optimal regime for observing a robust prethermal DTC phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starr, D. L.; Wozniak, P. R.; Vestrand, W. T.
2002-01-01
SkyDOT (Sky Database for Objects in Time-Domain) is a Virtual Observatory currently composed of data from the RAPTOR, ROTSE I, and OGLE II survey projects. This makes it a very large time-domain database. In addition, the RAPTOR project provides SkyDOT with real-time variability data as well as stereoscopic information. With its web interface, we believe SkyDOT will be a very useful tool for both astronomers and the public. Our main task has been to construct an efficient relational database containing all existing data while handling a real-time inflow of data. We also provide a useful web interface allowing easy access to both astronomers and the public. Initially, this server will allow common searches, specific queries, and access to light curves. In the future we will include machine learning classification tools and access to spectral information.
Statistical Properties of Lorenz-like Flows, Recent Developments and Perspectives
NASA Astrophysics Data System (ADS)
Araujo, Vitor; Galatolo, Stefano; Pacifico, Maria José
We comment on the mathematical results about the statistical behavior of the Lorenz equations and their attractor, and more generally on the class of singular hyperbolic systems. The mathematical theory of such systems turned out to be surprisingly difficult. It is remarkable that a rigorous proof of the existence of the Lorenz attractor was presented only around the year 2000, with a computer-assisted proof together with an extension of the hyperbolic theory developed to encompass attractors robustly containing equilibria. We present some of the main results on the statistical behavior of such systems. We show that for attractors of three-dimensional flows, robust chaotic behavior is equivalent to the existence of certain hyperbolic structures, known as singular-hyperbolicity. These structures, in turn, are associated with the existence of physical measures: in low dimensions, robust chaotic behavior for flows ensures the existence of a physical measure. We then give more details on recent results on the dynamics of singular-hyperbolic (Lorenz-like) attractors: (1) there exists an invariant foliation whose leaves are forward contracted by the flow (and further properties which are useful to understand the statistical properties of the dynamics); (2) there exists a positive Lyapunov exponent at every orbit; (3) there is a unique physical measure whose support is the whole attractor and which is the equilibrium state with respect to the center-unstable Jacobian; (4) this measure is exact dimensional; (5) the induced measure on a suitable family of cross-sections has exponential decay of correlations for Lipschitz observables with respect to a suitable Poincaré return time map; (6) the hitting time associated with Lorenz-like attractors satisfies a logarithm law; (7) the geometric Lorenz flow satisfies the Almost Sure Invariance Principle (ASIP) and the Central Limit Theorem (CLT); (8) the rate of decay of large deviations for the volume measure on the ergodic basin of
a geometric Lorenz attractor is exponential; (9) a class of geometric Lorenz flows exhibits robust exponential decay of correlations; (10) all geometric Lorenz flows are rapidly mixing and their time-1 map satisfies both ASIP and CLT.
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed
2011-02-20
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if {alpha}, the IMC time-discretization parameter, satisfies 0.5 < {alpha} {<=} 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
NASA Astrophysics Data System (ADS)
Wollaber, Allan B.; Larsen, Edward W.
2011-02-01
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
Building large area CZT imaging detectors for a wide-field hard X-ray telescope—ProtoEXIST1
NASA Astrophysics Data System (ADS)
Hong, J.; Allen, B.; Grindlay, J.; Chammas, N.; Barthelemy, S.; Baker, R.; Gehrels, N.; Nelson, K. E.; Labov, S.; Collins, J.; Cook, W. R.; McLean, R.; Harrison, F.
2009-07-01
We have constructed a moderately large area (32 cm²), fine pixel (2.5 mm pixel, 5 mm thick) CZT imaging detector which constitutes the first section of a detector module (256 cm²) developed for a balloon-borne wide-field hard X-ray telescope, ProtoEXIST1. ProtoEXIST1 is a prototype for the High Energy Telescope (HET) in the Energetic X-ray imaging Survey Telescope (EXIST), a next generation space-borne multi-wavelength telescope. We have constructed a large (nearly gapless) detector plane through a modularization scheme by tiling a large number of 2 cm × 2 cm CZT crystals. Our innovative packaging method is ideal for many applications such as coded-aperture imaging, where a large, continuous detector plane is desirable for optimal performance. Currently we have been able to achieve an energy resolution of 3.2 keV (FWHM) at 59.6 keV on average, which is exceptional considering the moderate pixel size and the number of detectors in simultaneous operation. We expect to complete two modules (512 cm²) within the next few months as more CZT becomes available. We plan to test the performance of these detectors in a near space environment in a series of high altitude balloon flights, the first of which is scheduled for Fall 2009. These detector modules are the first in a series of progressively more sophisticated detector units and packaging schemes planned for ProtoEXIST2 & 3, which will demonstrate the technology required for the advanced CZT imaging detectors (0.6 mm pixel, 4.5 m² area) required in EXIST/HET.
Field Observations of Precursors to Large Earthquakes: Interpreting and Verifying Their Causes
NASA Astrophysics Data System (ADS)
Suyehiro, K.; Sacks, S. I.; Rydelek, P. A.; Smith, D. E.; Takanami, T.
2017-12-01
Many reports of precursory anomalies before large earthquakes exist. However, it has proven elusive to identify these signals before their actual occurrences; they often only become evident in retrospect. A probabilistic cellular automaton model (Sacks and Rydelek, 1995) explains many of the statistical and dynamic features of earthquakes, including the observed b-value decrease towards a large earthquake and the effect of a small stress perturbation on the earthquake occurrence pattern. It also reproduces the dynamic character of each earthquake rupture. This model is useful in gaining insight into the causal relationships behind such complexities. For example, some reported cases of background seismicity quiescence before a main shock, seen only for events larger than M = 3-4 at a time scale of years, can be reproduced by this model if only a small fraction (~2%) of the component cells are strengthened by a small amount. Such an enhancement may physically occur if a tiny and scattered portion of the seismogenic crust undergoes dilatancy hardening. Whether this process occurs will depend on fluid migration and microcrack development under tectonic loading. Eventual large earthquake faulting will be promoted by the intrusion of excess water from surrounding rocks into a zone capable of cascading slips over a large area. We propose that this process manifests itself on the surface as hydrologic, geochemical, or macroscopic anomalies, for which so many reports exist. We infer from seismicity that the eastern Nankai Trough (Tokai) area of central Japan is already in the stage of M-dependent seismic quiescence. Therefore, we advocate that new observations sensitive to detecting water migration in Tokai should be implemented; in particular, vertical-component strain, gravity, and/or electrical conductivity should be observed for verification.
Market-based control strategy for long-span structures considering the multi-time delay issue
NASA Astrophysics Data System (ADS)
Li, Hongnan; Song, Jianzhu; Li, Gang
2017-01-01
To address the different time delays that exist in the control devices installed on spatial structures, in this study discrete analysis using a 2N precise algorithm was selected to solve the multi-time-delay issue for long-span structures based on the market-based control (MBC) method. The concept of interval mixed energy was introduced from the computational structural mechanics and optimal control research areas; it translates the design of the MBC multi-time-delay controller into a solution for the segment matrix. This approach transforms the serial algorithm in time into parallel computing in space, greatly improving solving efficiency and numerical stability. The designed controller is able to handle time delay with a linear control-force combination and is especially effective under large time-delay conditions. A numerical example of a long-span structure was selected to demonstrate the effectiveness of the presented controller, and the time delay was found to have a significant impact on the results.
Value of information of repair times for offshore wind farm maintenance planning
NASA Astrophysics Data System (ADS)
Seyr, Helene; Muskulus, Michael
2016-09-01
A large contribution to the total cost of energy in offshore wind farms is due to maintenance costs. In recent years, research has therefore focused on lowering maintenance costs using different approaches. Decision support models for scheduling maintenance already exist, dealing with different factors that influence the scheduling. Our contribution deals with the uncertainty in repair times. Given the mean repair times for different turbine components, we make some assumptions regarding the underlying repair time distribution. We compare the results of a decision support model for the mean times to repair with those for the repair time distributions. Additionally, distributions with the same mean but different variances are compared under the same conditions. The value of lowering the uncertainty in the repair time is calculated, and we find that using distributions significantly decreases the availability when scheduling maintenance for multiple turbines in a wind park. Having detailed information about the repair time distribution may influence the results of maintenance modeling and might help identify cost factors.
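The finding that repair-time variance, and not just the mean, matters once several turbines compete for the same maintenance resources can be illustrated with a standard queueing argument that is independent of the paper's own model. A minimal sketch using the Pollaczek-Khinchine formula for an M/G/1 queue; the failure arrival rate and mean repair time below are invented for illustration:

```python
# Pollaczek-Khinchine mean wait at a single repair crew (M/G/1 queue):
#   W = lam * E[S^2] / (2 * (1 - rho)),  with rho = lam * E[S].
lam = 0.01          # turbine failures per hour arriving at the crew (assumed)
mean_s = 50.0       # mean repair time in hours (assumed)
rho = lam * mean_s  # crew utilisation, here 0.5

def mean_wait(second_moment):
    """Expected queueing delay before a repair starts."""
    return lam * second_moment / (2.0 * (1.0 - rho))

w_fixed = mean_wait(mean_s**2)       # deterministic repairs: E[S^2] = m^2
w_exp = mean_wait(2.0 * mean_s**2)   # exponential repairs:   E[S^2] = 2 m^2
print(w_fixed, w_exp)  # 25.0 50.0
```

With the same mean repair time, exponentially distributed repairs double the expected queueing delay relative to deterministic repairs, because the wait scales with the second moment E[S²]; this is the mechanism by which repair-time uncertainty can reduce fleet availability.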
A real-time coherent dedispersion pipeline for the giant metrewave radio telescope
NASA Astrophysics Data System (ADS)
De, Kishalay; Gupta, Yashwant
2016-02-01
A fully real-time coherent dedispersion system has been developed for the pulsar back-end at the Giant Metrewave Radio Telescope (GMRT). The dedispersion pipeline uses the single phased-array voltage beam produced by the existing GMRT software back-end (GSB) to produce coherently dedispersed intensity output in real time, for the currently operational bandwidths of 16 MHz and 32 MHz. Provision has also been made to coherently dedisperse voltage beam data from observations recorded on disk. We discuss the design and implementation of the real-time coherent dedispersion system, describing the steps carried out to optimise the performance of the pipeline. Presently functioning on an Intel Xeon X5550 CPU equipped with an NVIDIA Tesla C2075 GPU, the pipeline allows dispersion-free, high time resolution data to be obtained in real time. We illustrate the significant improvements over the existing incoherent dedispersion system at the GMRT, and present some preliminary results obtained from studies of pulsars using this system, demonstrating its potential as a useful tool for low frequency pulsar observations. We describe the salient features of our implementation, comparing it with other recently developed real-time coherent dedispersion systems. This implementation of a real-time coherent dedispersion pipeline for a large, low frequency array instrument like the GMRT will enable long-term observing programs using coherent dedispersion to be carried out routinely at the observatory. We also outline possible improvements for such a pipeline, including prospects for the upgraded GMRT, which will have bandwidths about ten times larger than at present.
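The need for dedispersion can be made concrete with the standard cold-plasma delay formula from textbook pulsar astronomy; this is not the paper's pipeline, and the dispersion measure and band edges below are illustrative assumptions (a 32 MHz band near 325 MHz, roughly a GMRT low-frequency band):

```python
# Dispersive delay across a band: dt ~ 4.15 ms * DM * (f_lo^-2 - f_hi^-2),
# with frequencies in GHz and DM in pc cm^-3.
def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time difference between band edges for a given DM."""
    return 4.15 * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Illustrative values (assumed): DM = 50 pc cm^-3, 325-357 MHz band.
delay = dispersion_delay_ms(50.0, 0.325, 0.357)
print(delay)  # about 336 ms of smearing across the band
```

A delay of this size would smear a pulse over many periods if left uncorrected; incoherent dedispersion removes the delay between channels, while coherent dedispersion also removes the smearing within each channel.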
NASA Astrophysics Data System (ADS)
Agatova, A. R.; Nepop, R. K.
2017-07-01
The complexity of the age dating of the Pleistocene ice-dammed paleolakes in the Altai Mountains is a reason why geologists consider the Early Paleolithic archaeological sites an independent age marker for dating geological objects. However, in order to use these sites for paleogeographic reconstructions, their locations, the character of stratification, and the age of the stone artifacts need to be comprehensively studied. We investigate 20 Late Paleolithic archaeological sites discovered in the Chuya depression of the Russian Altai (Altai Mountains) with the aim of assessing their possible use for reconstructing the development of the Kurai-Chuya glacio-limnosystem in the Late Neopleistocene. The results of our investigation show that, at the current stage of their study, it is improper to use the Paleolithic archaeological sites for dating the existence period and the draining time of the ice-dammed lakes of the Chuya Depression, owing to a lack of quantitative age estimates, a wide age range of possible existence of these sites, possible redeposition of the majority of artifacts, and their surface occurrence. It is established that all stratified sites where cultural layers are expected to be dated in the future lie above the uppermost and well-expressed paleolake level (2100 m a.s.l.). Accordingly, there are no grounds to determine the existence time of shallower paleolakes. Since all of the stone material collected below 2100 m a.s.l. consists of surface finds, it is problematic to use these artifacts for absolute geochronology. The Late Paleolithic Bigdon and Chechketerek sites are of great interest for paleogeographic reconstructions of ice-dammed lakes. The use of iceberg rafting products as cores is evidence that these sites appeared after the draining of a paleolake (2000 m a.s.l.).
At the same time, the location of these archaeological sites on the slope of the Chuya Depression suggests the existence of a large lake up to 250 m deep, synchronous with the above paleolake or later. The location of the lowermost archaeological sites is evidence that a paleolake could have existed at an altitude below 1770 m a.s.l. in the Late Neopleistocene-Early Holocene. The absolute geochronology of the archaeological sites (cultural layers in multilayered sites, split surfaces on dropstones, etc.) can be useful for further reconstructions of the existence time, depths, and number of ice-dammed lakes in the Kurai-Chuya system of depressions.
A pulse of mid-Pleistocene rift volcanism in Ethiopia at the dawn of modern humans.
Hutchison, William; Fusillo, Raffaella; Pyle, David M; Mather, Tamsin A; Blundy, Jon D; Biggs, Juliet; Yirgu, Gezahegn; Cohen, Benjamin E; Brooker, Richard A; Barfod, Dan N; Calvert, Andrew T
2016-10-18
The Ethiopian Rift Valley hosts the longest record of human co-existence with volcanoes on Earth; however, current understanding of the magnitude and timing of large explosive eruptions in this region is poor. Detailed records of volcanism are essential for interpreting the palaeoenvironments occupied by our hominin ancestors, and also for evaluating the volcanic hazards posed to the 10 million people currently living within this active rift zone. Here we use new geochronological evidence to suggest that a 200 km-long segment of the rift experienced a major pulse of explosive volcanic activity between 320 and 170 ka. During this period, at least four distinct volcanic centres underwent large-volume (>10 km³) caldera-forming eruptions, and eruptive fluxes were elevated five times above the average eruption rate for the past 700 ka. We propose that such pulses of episodic silicic volcanism would have drastically remodelled landscapes and ecosystems occupied by early hominin populations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-21
Recent advancements in technology scaling have shown a trend towards greater integration, with large-scale chips containing thousands of processors connected to memories and other I/O devices using non-trivial network topologies. Software simulation proves insufficient to study the tradeoffs in such complex systems due to slow execution time, whereas hardware RTL development is too time-consuming. We present OpenSoC Fabric, an on-chip network generation infrastructure which aims to provide a parameterizable and powerful on-chip network generator for evaluating future high performance computing architectures based on SoC technology. OpenSoC Fabric leverages a new hardware DSL, Chisel, which contains powerful abstractions provided by its base language, Scala, and generates both software (C++) and hardware (Verilog) models from a single code base. The OpenSoC Fabric infrastructure is modeled after existing state-of-the-art simulators, offers a large and powerful collection of configuration options, and follows object-oriented design and functional programming to make functionality extension as easy as possible.
Acceleration and loss of relativistic electrons during small geomagnetic storms
Anderson, B. R.; Millan, R. M.; Reeves, G. D.; ...
2015-12-02
We report that past studies of radiation belt relativistic electrons have favored active storm time periods, while the effects of small geomagnetic storms (Dst > −50 nT) have not been statistically characterized. In this timely study, given the current weak solar cycle, we identify 342 small storms from 1989 through 2000 and quantify the corresponding change in relativistic electron flux at geosynchronous orbit. Surprisingly, small storms can be equally as effective as large storms at enhancing and depleting fluxes. Slight differences exist, as small storms are 10% less likely to result in flux enhancement and 10% more likely to result in flux depletion than large storms. Nevertheless, it is clear that neither acceleration nor loss mechanisms scale with storm drivers as would be expected. Small geomagnetic storms play a significant role in radiation belt relativistic electron dynamics and provide opportunities to gain new insights into the complex balance of acceleration and loss processes.
A pulse of mid-Pleistocene rift volcanism in Ethiopia at the dawn of modern humans
Hutchison, William; Fusillo, Raffaella; Pyle, David M.; Mather, Tamsin A.; Blundy, Jon D.; Biggs, Juliet; Yirgu, Gezahegn; Cohen, Benjamin E.; Brooker, Richard A.; Barfod, Dan N.; Calvert, Andrew T.
2016-01-01
The Ethiopian Rift Valley hosts the longest record of human co-existence with volcanoes on Earth; however, current understanding of the magnitude and timing of large explosive eruptions in this region is poor. Detailed records of volcanism are essential for interpreting the palaeoenvironments occupied by our hominin ancestors, and also for evaluating the volcanic hazards posed to the 10 million people currently living within this active rift zone. Here we use new geochronological evidence to suggest that a 200 km-long segment of the rift experienced a major pulse of explosive volcanic activity between 320 and 170 ka. During this period, at least four distinct volcanic centres underwent large-volume (>10 km³) caldera-forming eruptions, and eruptive fluxes were elevated five times above the average eruption rate for the past 700 ka. We propose that such pulses of episodic silicic volcanism would have drastically remodelled landscapes and ecosystems occupied by early hominin populations. PMID:27754479
GDC 2: Compression of large collections of genomes
Deorowicz, Sebastian; Danek, Agnieszka; Niemiec, Marcin
2015-01-01
The falling prices of high-throughput genome sequencing are changing the landscape of modern genomics. A number of large-scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One of the significant side effects of this change is the necessity of storing and transferring huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress a collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by the other existing compressors. Moreover, our algorithm is very fast, as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows storing complete genomic collections at low cost; e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, which can be compared to about 6.7 TB of uncompressed FASTA files. The source code is available at http://sun.aei.polsl.pl/REFRESH/index.php?page=projects&project=gdc&subpage=about. PMID:26108279
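The quoted figures are mutually consistent, as a quick arithmetic check shows (sizes taken from the abstract and treated as approximate, in decimal units):

```python
# Consistency check of the compression figures quoted in the abstract.
uncompressed_bytes = 6.7e12   # ~6.7 TB of uncompressed FASTA
compressed_bytes = 700e6      # ~700 MB compressed archive
ratio = uncompressed_bytes / compressed_bytes
print(round(ratio))  # 9571, consistent with "about 9,500 times"
```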
General-relativistic Large-eddy Simulations of Binary Neutron Star Mergers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radice, David, E-mail: dradice@astro.princeton.edu
The flow inside remnants of binary neutron star (NS) mergers is expected to be turbulent, because of magnetohydrodynamic instabilities activated at scales too small to be resolved in simulations. To study the large-scale impact of these instabilities, we develop a new formalism, based on the large-eddy simulation technique, for the modeling of subgrid-scale turbulent transport in general relativity. We apply it, for the first time, to the simulation of the late-inspiral and merger of two NSs. We find that turbulence can significantly affect the structure and survival time of the merger remnant, as well as its gravitational-wave (GW) and neutrino emissions. The former will be relevant for GW observation of merging NSs. The latter will affect the composition of the outflow driven by the merger and might influence its nucleosynthetic yields. The accretion rate after black hole formation is also affected. Nevertheless, we find that, for the most likely values of the turbulence mixing efficiency, these effects are relatively small and the GW signal will be affected only weakly by the turbulence. Thus, our simulations provide a first validation of all existing post-merger GW models.
GDC 2: Compression of large collections of genomes.
Deorowicz, Sebastian; Danek, Agnieszka; Niemiec, Marcin
2015-06-25
The falling prices of high-throughput genome sequencing are changing the landscape of modern genomics. A number of large-scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One of the significant side effects of this change is the necessity of storing and transferring huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress a collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by the other existing compressors. Moreover, our algorithm is very fast, as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows storing complete genomic collections at low cost; e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, which can be compared to about 6.7 TB of uncompressed FASTA files. The source code is available at http://sun.aei.polsl.pl/REFRESH/index.php?page=projects&project=gdc&subpage=about.
High-resolution, large dynamic range fiber-optic thermometer with cascaded Fabry-Perot cavities.
Liu, Guigen; Sheng, Qiwen; Hou, Weilin; Han, Ming
2016-11-01
The paradox between a large dynamic range and a high resolution commonly exists in nearly all kinds of sensors. Here, we propose a fiber-optic thermometer based on dual Fabry-Perot interferometers (FPIs) made from the same material (silicon), but with different cavity lengths, which enables unambiguous recognition of the dense fringes associated with the thick FPI over the free-spectral range determined by the thin FPI. Therefore, the sensor combines the large dynamic range of the thin FPI and the high resolution of the thick FPI. To verify this new concept, a sensor with one 200 μm thick silicon FPI cascaded by another 10 μm thick silicon FPI was fabricated. A temperature range of −50°C to 130°C and a resolution of 6.8×10⁻³°C were demonstrated using a simple average wavelength tracking demodulation. Compared to a sensor with only the thick silicon FPI, the dynamic range of the hybrid sensor was more than 10 times larger. Compared to a sensor with only the thin silicon FPI, the resolution of the hybrid sensor was more than 18 times higher.
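The unambiguous-range argument can be sketched with the standard free spectral range relation FSR ≈ λ²/(2nL) for a cavity of optical length nL; the operating wavelength and silicon refractive index below are assumptions for illustration, not values from the paper:

```python
# Free spectral range of a Fabry-Perot cavity: FSR ~ lambda^2 / (2 n L).
lam = 1550e-9   # operating wavelength in m (assumed)
n_si = 3.48     # refractive index of silicon near 1550 nm (assumed)

def fsr_nm(cavity_len_m):
    """Fringe spacing in nm for a silicon cavity of the given length."""
    return lam**2 / (2.0 * n_si * cavity_len_m) * 1e9

thin, thick = 10e-6, 200e-6          # cavity lengths from the abstract
print(fsr_nm(thin))                  # ~34.5 nm: wide unambiguous range
print(fsr_nm(thick))                 # ~1.7 nm: dense fringes, fine resolution
print(fsr_nm(thin) / fsr_nm(thick))  # 20.0: fringe-density contrast
```

The 20:1 length ratio means one fringe period of the thin FPI spans 20 fringes of the thick FPI, which is what lets the hybrid sensor resolve the thick cavity's fringe order unambiguously and keep its resolution over the thin cavity's range.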
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Agueros, M. A.; Fournier, A.; Street, R.; Ofek, E.; Levitan, D. B.; PTF Collaboration
2013-01-01
Many current photometric, time-domain surveys are driven by specific goals such as searches for supernovae or transiting exoplanets, or studies of stellar variability. These goals in turn set the cadence with which individual fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several such sub-surveys are being conducted in parallel, leading to extremely non-uniform sampling over the survey's nearly 20,000 sq. deg. footprint. While the typical 7.26 sq. deg. PTF field has been imaged 20 times in R-band, ~2300 sq. deg. have been observed more than 100 times. We use the existing PTF data (6.4×10⁷ light curves) to study the trade-off that occurs when searching for microlensing events when one has access to a large survey footprint with irregular sampling. To examine the probability that microlensing events can be recovered in these data, we also test previous statistics used on uniformly sampled data to identify variables and transients. We find that one such statistic, the von Neumann ratio, performs best for identifying simulated microlensing events. We develop a selection method using this statistic and apply it to data from all PTF fields with >100 observations to uncover a number of interesting candidate events. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large datasets, both of which will be useful to future wide-field, time-domain surveys such as the LSST.
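The von Neumann ratio mentioned above is a simple, well-known statistic: the mean squared successive difference of a light curve divided by its variance. A minimal sketch on synthetic light curves (the signal shapes and noise level here are invented for illustration, not PTF selection criteria):

```python
import numpy as np

def von_neumann_ratio(flux):
    """Mean squared successive difference over variance.
    Close to 2 for uncorrelated white noise; well below 2 when the
    light curve is dominated by a smooth trend such as a lensing bump."""
    flux = np.asarray(flux, dtype=float)
    return np.mean(np.diff(flux) ** 2) / np.var(flux)

rng = np.random.default_rng(0)
t = np.linspace(-5, 5, 200)               # observation epochs (arbitrary units)
noise = rng.normal(0.0, 0.01, t.size)
flat = 1.0 + noise                        # quiet star
bump = 1.0 + 0.5 * np.exp(-t**2) + noise  # smooth magnification event

print(von_neumann_ratio(flat))  # near 2
print(von_neumann_ratio(bump))  # far below 2
```

Ranking light curves by this ratio flags smooth, correlated variations while remaining cheap enough to run over tens of millions of light curves.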
Tuning the presence of dynamical phase transitions in a generalized XY spin chain.
Divakaran, Uma; Sharma, Shraddha; Dutta, Amit
2016-05-01
We study an integrable spin chain with three-spin interactions and a staggered field (λ), where the latter is quenched either slowly [in a linear fashion in time (t) as t/τ, where t goes from a large negative value to a large positive value and τ is the inverse rate of quenching] or suddenly. In the process, the system crosses quantum critical points and gapless phases. We address the question of whether there exist nonanalyticities [known as dynamical phase transitions (DPTs)] in the subsequent real-time evolution of the state (reached following the quench) governed by the final time-independent Hamiltonian. In the case of sufficiently slow quenching (when τ exceeds a critical value τ_{1}), we show that DPTs, of a form similar to those occurring for quenching across an isolated critical point, can occur even when the system is slowly driven across more than one critical point and gapless phases. More interestingly, in the anisotropic situation we show that DPTs can completely disappear for some values of the anisotropy term (γ) and τ, thereby establishing the existence of boundaries in the (γ-τ) plane between the DPT and no-DPT regions in both isotropic and anisotropic cases. Our study therefore leads to a unique situation in which DPTs may not occur even when an integrable model is slowly ramped across a QCP. On the other hand, considering sudden quenches from an initial value λ_{i} to a final value λ_{f}, we show that the condition for the presence of DPTs is governed by relations involving λ_{i}, λ_{f}, and γ, and the spin chain must be swept across λ=0 for DPTs to occur.
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2015-01-01
Time to stabilization (TTS) is the time it takes for an individual to return to a baseline or stable state following a jump or hop landing. A large variety of methods exists to calculate the TTS. These methods can be described based on four aspects: (1) the input signal used (vertical, anteroposterior, or mediolateral ground reaction force); (2) the signal processing (smoothing by sequential averaging, a moving root-mean-square window, or fitting an unbounded third-order polynomial); (3) the stable state (threshold); and (4) the definition of when the (processed) signal is considered stable. Furthermore, differences exist with regard to the sample rate, filter settings, and trial length. Twenty-five healthy volunteers performed ten 'single leg drop jump landing' trials. For each trial, TTS was calculated according to 18 previously reported methods. Additionally, the effects of sample rate (1000, 500, 200 and 100 samples/s), filter settings (no filter, 40, 15 and 10 Hz), and trial length (20, 14, 10, 7, 5 and 3 s) were assessed. The TTS values varied considerably across the calculation methods. The maximum effects of alterations in the processing settings, averaged over calculation methods, were 2.8% (SD 3.3%) for sample rate, 8.8% (SD 7.7%) for filter settings, and 100.5% (SD 100.9%) for trial length. Differences in TTS calculation methods are affected differently by sample rate, filter settings and trial length. The effects of differences in sample rate and filter settings are generally small, while trial length has a large effect on TTS values.
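One common family of TTS definitions, a moving root-mean-square window on the demeaned vertical ground reaction force with a baseline-relative threshold, can be sketched as follows. The window length, threshold, and synthetic landing signal are illustrative assumptions, not any of the 18 published methods:

```python
import numpy as np

def time_to_stabilization(grf, fs=1000, window=0.25, threshold=0.05):
    """Hypothetical TTS: time after which the moving-RMS of the demeaned
    vertical GRF stays below threshold * baseline for the rest of the
    trial. The baseline is estimated from the final second of data."""
    grf = np.asarray(grf, dtype=float)
    baseline = grf[-int(fs):].mean()
    resid = grf - baseline
    n = int(window * fs)  # moving-RMS window length in samples
    rms = np.sqrt(np.convolve(resid**2, np.ones(n) / n, mode="valid"))
    unstable = np.nonzero(rms >= threshold * abs(baseline))[0]
    return 0.0 if unstable.size == 0 else (unstable[-1] + 1) / fs

# Synthetic landing: decaying 8 Hz oscillation around a 700 N body weight.
fs = 1000
t = np.arange(0.0, 5.0, 1.0 / fs)
grf = 700.0 + 400.0 * np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 8.0 * t)
print(time_to_stabilization(grf, fs))  # roughly 0.6 s for this signal
```

Truncating the trial, or changing the window and threshold, shifts both the baseline estimate and the last unstable sample, which is consistent with the paper's finding that trial length dominates the between-method differences.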
Caldas, Victor E A; Punter, Christiaan M; Ghodke, Harshad; Robinson, Andrew; van Oijen, Antoine M
2015-10-01
Recent technical advances have made it possible to visualize single molecules inside live cells. Microscopes with single-molecule sensitivity enable the imaging of low-abundance proteins, allowing for a quantitative characterization of molecular properties. Such data sets contain information on a wide spectrum of important molecular properties, with different aspects highlighted in different imaging strategies. The time-lapse acquisition of images provides information on protein dynamics over long time scales, giving insight into expression dynamics and localization properties. Rapid burst imaging reveals properties of individual molecules in real time, informing on their diffusion characteristics, binding dynamics and stoichiometries within complexes. This richness of information, however, adds significant complexity to analysis protocols. In general, large datasets of images must be collected and processed in order to produce statistically robust results and identify rare events. More importantly, as live-cell single-molecule measurements remain on the cutting edge of imaging, few protocols for analysis have been established, and analysis strategies often need to be explored for each individual scenario. Existing analysis packages are geared towards either single-cell imaging data or in vitro single-molecule data and typically operate with highly specific algorithms developed for particular situations. Our tool, iSBatch, instead allows users to exploit the inherent flexibility of the popular open-source package ImageJ, providing a hierarchical framework in which existing plugins or custom macros may be executed over entire datasets or portions thereof. This strategy affords users freedom to explore new analysis protocols within large imaging datasets, while maintaining hierarchical relationships between experiments, samples, fields of view, cells, and individual molecules.
Time-Efficient High-Rate Data Flooding in One-Dimensional Acoustic Underwater Sensor Networks
Kwon, Jae Kyun; Seo, Bo-Min; Yun, Kyungsu; Cho, Ho-Shin
2015-01-01
Because underwater communication environments suffer from severe attenuation, large propagation delays, and narrow bandwidths, data are normally transmitted at low rates through acoustic waves. On the other hand, as high traffic has recently been required in diverse areas, high-rate transmission has become necessary. In this paper, transmission/reception timing schemes are proposed that maximize the efficiency of time-axis use and thereby improve the resource efficiency of high-rate transmission. The merits of the proposed scheme are demonstrated by examining the power distributions by node, rate bounds, power levels depending on the rates and number of nodes, and network split gains through mathematical analysis and numerical results. In addition, simulation results show that the proposed scheme outperforms the existing packet-train method. PMID:26528983
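The paper's actual timing scheme is not reproduced in the abstract. The following is a hypothetical sketch of the underlying idea of maximizing time-axis use: when the sink knows each node's propagation delay in a one-dimensional network, it can stagger transmission start times so that packets arrive back-to-back at the receiver, with no idle time and no collisions. The 1500 m/s sound speed and the single-receiver, interference-free setting are simplifying assumptions.

```python
SOUND_SPEED = 1500.0  # m/s, nominal speed of sound in seawater (assumed)

def schedule_transmissions(distances_m, packet_s):
    """Compute start times so that packets from nodes at the given distances
    arrive at the sink back-to-back (no gaps, no collisions at the receiver).
    Returns a list of (start_time, arrival_time) pairs, one per node."""
    delays = [d / SOUND_SPEED for d in distances_m]
    schedule = []
    # Earliest instant by which every node's packet could reach the sink:
    next_arrival = max(delays)
    for delay in delays:
        start = next_arrival - delay   # launch early enough to hit the slot
        schedule.append((start, next_arrival))
        next_arrival += packet_s       # next packet lands immediately after
    return schedule
```

On the time axis at the sink, the scheduled packets occupy one contiguous interval, which is the sense in which the time-axis use efficiency is maximized in this toy setting.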
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Azizan; Lasternas, Bertrand; Alschuler, Elena
The American Recovery and Reinvestment Act stimulus funding of 2009 for smart grid projects resulted in the tripling of smart meter deployment. In 2012, the Green Button initiative provided utility customers with access to their real-time energy usage. The availability of finely granular data provides an enormous potential for energy data analytics and energy benchmarking. The sheer volume of time-series utility data from a large number of buildings also poses challenges in data collection, quality control, and database management for rigorous and meaningful analyses. In this paper, we describe a building portfolio-level data analytics tool for operational optimization, business investment, and policy assessment using utility data at intervals ranging from 15 minutes to monthly. The analytics tool is developed on top of the U.S. Department of Energy's Standard Energy Efficiency Data (SEED) platform, an open source software application that manages energy performance data of large groups of buildings. To support the significantly large volume of granular interval data, we integrated a parallel time-series database with the existing relational database. The time-series database improves on the current utility data input, focusing on real-time data collection, storage, analytics and data quality control. The fully integrated data platform supports APIs for utility app development by third-party software developers. These apps will provide actionable intelligence for building owners and facilities managers. Unlike a commercial system, this platform is an open source platform funded by the U.S. Government, accessible to the public, researchers and other developers, to support initiatives in reducing building energy consumption.
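As a hedged illustration of the kind of interval-data quality control such a platform performs (this is not the SEED or time-series database API), the sketch below aggregates 15-minute kWh readings into daily totals and flags days with missing intervals so they can be held out of benchmarking:

```python
from collections import defaultdict
from datetime import datetime

def daily_totals(readings, expected_per_day=96):
    """Aggregate 15-minute kWh readings into daily totals and flag days
    with missing intervals. `readings` is a list of (iso_timestamp, kwh)
    pairs; 96 intervals = 24 h / 15 min."""
    by_day = defaultdict(list)
    for ts, kwh in readings:
        day = datetime.fromisoformat(ts).date()
        by_day[day].append(kwh)
    totals, gaps = {}, []
    for day, vals in sorted(by_day.items()):
        totals[day] = round(sum(vals), 6)
        if len(vals) < expected_per_day:
            gaps.append(day)  # incomplete day: exclude from analytics
    return totals, gaps
```

A real deployment would push this aggregation into the time-series database itself; the point here is only the gap-flagging logic.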
The Innisfree meteorite: Dynamical history of the orbit - Possible family of meteor bodies
NASA Astrophysics Data System (ADS)
Galibina, I. V.; Terent'eva, A. K.
1987-09-01
Evolution of the Innisfree meteorite orbit under secular perturbations is studied over a time interval of 500,000 yr (backwards from the current epoch). Calculations are made by the Gauss-Halphen-Gorjatschew method, taking into account perturbations from the four outer planets: Jupiter, Saturn, Uranus, and Neptune. Over this interval the meteorite orbit has undergone no essential transformation. The Innisfree orbit intersected the Earth's orbit in 91 cases and the orbit of Mars in 94. A system of small and large meteor bodies (producing ordinary meteors and fireballs) that may be genetically related to the Innisfree meteorite has been found; there probably exists an Innisfree family of meteor bodies.
NASA Astrophysics Data System (ADS)
Wan, Ling; Wang, Tao
2017-06-01
We consider the Navier-Stokes equations for compressible heat-conducting ideal polytropic gases in a bounded annular domain when the viscosity and thermal conductivity coefficients are general smooth functions of temperature. A global-in-time, spherically or cylindrically symmetric, classical solution to the initial boundary value problem is shown to exist uniquely and converge exponentially to the constant state as the time tends to infinity under certain assumptions on the initial data and the adiabatic exponent γ. The initial data can be large if γ is sufficiently close to 1. These results are of Nishida-Smoller type and extend the work (Liu et al. (2014) [16]) restricted to the one-dimensional flows.
North polar region of Mars: imaging results from Viking 2.
Cutts, J A; Blasius, K R; Briggs, G A; Carr, M H; Greeley, R; Masursky, H
1976-12-11
During October 1976, the Viking 2 orbiter acquired approximately 700 high-resolution images of the north polar region of Mars. These images confirm the existence at the north pole of extensive layered deposits largely covered over with deposits of perennial ice. An unconformity within the layered deposits suggests a complex history of climate change during their time of deposition. A pole-girdling accumulation of dunes composed of very dark materials is revealed for the first time by the Viking cameras. The entire region is devoid of fresh impact craters. Rapid rates of erosion or deposition are implied. A scenario for polar geological evolution, involving two types of climate change, is proposed.
Improved disturbance rejection for predictor-based control of MIMO linear systems with input delay
NASA Astrophysics Data System (ADS)
Shi, Shang; Liu, Wenhui; Lu, Junwei; Chu, Yuming
2018-02-01
In this paper, we are concerned with predictor-based control of multi-input multi-output (MIMO) linear systems with input delay and disturbances. By taking the future values of disturbances into consideration, a new improved predictive scheme is proposed. Compared with existing predictive schemes, our scheme achieves finite-time exact state prediction for some smooth disturbances, including constant disturbances, and better disturbance attenuation for a large class of other time-varying disturbances. The attenuation of mismatched disturbances for second-order linear systems with input delay is also investigated using the proposed predictor-based controller.
The discrete hungry Lotka Volterra system and a new algorithm for computing matrix eigenvalues
NASA Astrophysics Data System (ADS)
Fukuda, Akiko; Ishiwata, Emiko; Iwasaki, Masashi; Nakamura, Yoshimasa
2009-01-01
The discrete hungry Lotka-Volterra (dhLV) system is a generalization of the discrete Lotka-Volterra (dLV) system, which stands for a prey-predator model in mathematical biology. In this paper, we show that (1) certain invariants exist which are expressed in terms of the dhLV variables and are independent of the discrete time, and (2) each dhLV variable converges to a positive constant or to zero as the discrete time becomes sufficiently large. A certain characteristic polynomial is then factorized with the help of the dhLV system. The asymptotic behaviour of the dhLV system enables us to design an algorithm for computing complex eigenvalues of a certain band matrix.
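The dhLV recurrence for a general "hungry" parameter is not given in the abstract. As a hedged sketch, the code below iterates the ordinary dLV special case with a constant step δ, using the recurrence u_k^(n+1) = u_k^(n) (1 + δ u_{k+1}^(n)) / (1 + δ u_{k-1}^(n+1)) with boundary values u_0 = u_{K+1} = 0. It illustrates claim (2): for positive initial data, the variables at odd (1-based) positions converge to positive constants while those at even positions converge to zero.

```python
def dlv_iterate(u, delta=1.0, steps=500):
    """Iterate the discrete Lotka-Volterra (dLV) system
        u_k^(n+1) = u_k^(n) * (1 + delta*u_{k+1}^(n)) / (1 + delta*u_{k-1}^(n+1))
    with boundary values u_0 = u_{K+1} = 0, where K = len(u).
    For positive initial data the variables at odd 1-based positions
    (list indices 0, 2, ...) converge to positive constants, while those
    at even 1-based positions (list indices 1, 3, ...) converge to zero."""
    u = list(u)
    K = len(u)
    for _ in range(steps):
        new = []
        for k in range(K):
            left = new[k - 1] if k > 0 else 0.0      # u_{k-1}^(n+1), already updated
            right = u[k + 1] if k + 1 < K else 0.0   # u_{k+1}^(n), old value
            new.append(u[k] * (1.0 + delta * right) / (1.0 + delta * left))
        u = new
    return u
```

In the singular-value/eigenvalue algorithms built on these systems, the surviving odd-position limits encode the spectrum of an associated band matrix; that connection is not reproduced here.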
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ianakiev, Kiril Dimitrov; Iliev, Metodi; Swinhoe, Martyn Thomas
The KM200 device is a versatile, configurable front-end electronics board that can be used as a functional replacement for Canberra's JAB-01 boards based on the Amptek A-111 hybrid chip, which continues to be the preferred choice of electronics for a large number of the boards in junction boxes of multiplicity counters that process the signal from an array of 3He detectors. Unlike the A-111 chip's fixed time constants and sensitivity range, the shaping time and sensitivity of the new KM200 can be optimized for demanding applications such as spent fuel, and thus could improve the safeguards measurements of existing systems where the A-111 or PDT electronics does not perform well.
Relativistic electron plasma oscillations in an inhomogeneous ion background
NASA Astrophysics Data System (ADS)
Karmakar, Mithun; Maity, Chandan; Chakrabarti, Nikhil
2018-06-01
The combined effect of relativistic electron mass variation and background ion inhomogeneity on the phase mixing process of large-amplitude electron oscillations in cold plasmas has been analyzed by using Lagrangian coordinates. An inhomogeneity in the ion density is assumed to be time-independent but spatially periodic, and a periodic perturbation in the electron density is considered as well. An approximate space-time-dependent solution is obtained in the weakly relativistic limit by employing the Bogolyubov and Krylov method of averaging. It is shown that the phase mixing process of relativistically corrected electron oscillations is strongly influenced by the presence of a pre-existing ion density ripple in the plasma background.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marekova, Elisaveta
Series of relatively large earthquakes in different regions of the Earth are studied. The regions chosen have high seismic activity and good contemporary networks for recording seismic events. The main purpose of this investigation is to describe the seismic process analytically in space and time. We consider the statistical distributions of the distances and times between consecutive earthquakes (so-called pair analysis). Studies on approximating the statistical distributions of the parameters of consecutive seismic events indicate the existence of characteristic functions that describe them best. Such a mathematical description allows the distributions of the examined parameters to be compared to other model distributions.
Coherent Coupled Qubits for Quantum Annealing
NASA Astrophysics Data System (ADS)
Weber, Steven J.; Samach, Gabriel O.; Hover, David; Gustavsson, Simon; Kim, David K.; Melville, Alexander; Rosenberg, Danna; Sears, Adam P.; Yan, Fei; Yoder, Jonilyn L.; Oliver, William D.; Kerman, Andrew J.
2017-07-01
Quantum annealing is an optimization technique which potentially leverages quantum tunneling to enhance computational performance. Existing quantum annealers use superconducting flux qubits with short coherence times limited primarily by the use of large persistent currents Ip. Here, we examine an alternative approach using qubits with smaller Ip and longer coherence times. We demonstrate tunable coupling, a basic building block for quantum annealing, between two flux qubits with small (approximately 50-nA) persistent currents. Furthermore, we characterize qubit coherence as a function of coupler setting and investigate the effect of flux noise in the coupler loop on qubit coherence. Our results provide insight into the available design space for next-generation quantum annealers with improved coherence.
Using the EXIST Active Shields for Earth Occultation Observations of X-Ray Sources
NASA Technical Reports Server (NTRS)
Wilson, Colleen A.; Fishman, Gerald; Hong, Jae-Sub; Gridlay, Jonathan; Krawczynski, Henric
2005-01-01
The EXIST active shields, now being planned for the main detectors of the coded aperture telescope, will have approximately 15 times the area of the BATSE detectors, and they will have a good geometry on the spacecraft for viewing both the leading and trailing Earth's limb for occultation observations. These occultation observations will complement the imaging observations of EXIST and can extend them to higher energies. Earth occultation observations of the hard X-ray sky with BATSE on the Compton Gamma Ray Observatory developed and demonstrated the capabilities of large, flat, uncollimated detectors for this method. With BATSE, a catalog of 179 X-ray sources was monitored twice every spacecraft orbit for 9 years at energies above about 25 keV, resulting in 83 definite detections and 36 possible detections with 5-sigma detection sensitivities of 3.5-20 mcrab (20-430 keV) depending on the sky location. This catalog included four transients discovered with this technique and many variable objects (galactic and extragalactic). This poster will describe the Earth occultation technique, summarize the BATSE occultation observations, and compare the basic observational parameters of the occultation detector elements of BATSE and EXIST.
Inferring epidemiological parameters from phylogenies using regression-ABC: A comparative study
Gascuel, Olivier
2017-01-01
Inferring epidemiological parameters such as the R0 from time-scaled phylogenies is a timely challenge. Most current approaches rely on likelihood functions, which raise specific issues that range from computing these functions to finding their maxima numerically. Here, we present a new regression-based Approximate Bayesian Computation (ABC) approach, which we base on a large variety of summary statistics intended to capture the information contained in the phylogeny and its corresponding lineage-through-time plot. The regression step involves the Least Absolute Shrinkage and Selection Operator (LASSO) method, which is a robust machine learning technique. It allows us to readily deal with the large number of summary statistics, while avoiding resorting to Markov Chain Monte Carlo (MCMC) techniques. To compare our approach to existing ones, we simulated target trees under a variety of epidemiological models and settings, and inferred parameters of interest using the same priors. We found that, for large phylogenies, the accuracy of our regression-ABC is comparable to that of likelihood-based approaches involving birth-death processes implemented in BEAST2. Our approach even outperformed these when inferring the host population size with a Susceptible-Infected-Removed epidemiological model. It also clearly outperformed a recent kernel-ABC approach when assuming a Susceptible-Infected epidemiological model with two host types. Lastly, by re-analyzing data from the early stages of the recent Ebola epidemic in Sierra Leone, we showed that regression-ABC provides more realistic estimates for the duration parameters (latency and infectiousness) than the likelihood-based method. Overall, ABC based on a large variety of summary statistics and a regression method able to perform variable selection and avoid overfitting is a promising approach to analyze large phylogenies. PMID:28263987
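As a hedged sketch of regression-ABC (not the authors' implementation): draw parameters from the prior, simulate summary statistics, keep the draws closest to the observed statistic, and regression-adjust the accepted parameters toward the observed value. The paper's LASSO step over many summary statistics is replaced here by ordinary least squares on a single statistic to keep the example dependency-free, and the toy model (s = 2θ + noise) is invented for illustration.

```python
import random

def regression_abc(simulate, prior_draw, s_obs, n_sims=5000, accept_frac=0.1, seed=1):
    """Rejection ABC with a Beaumont-style local-linear regression adjustment.
    `simulate(theta, rng)` returns a summary statistic; `prior_draw(rng)`
    samples theta from the prior. Returns a posterior-mean estimate."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_sims):
        theta = prior_draw(rng)
        s = simulate(theta, rng)
        draws.append((abs(s - s_obs), theta, s))
    draws.sort()                                   # nearest summaries first
    kept = draws[: int(n_sims * accept_frac)]
    # OLS slope of theta on s among the accepted draws
    ms = sum(s for _, _, s in kept) / len(kept)
    mt = sum(t for _, t, _ in kept) / len(kept)
    cov = sum((s - ms) * (t - mt) for _, t, s in kept)
    var = sum((s - ms) ** 2 for _, _, s in kept)
    b = cov / var if var > 0 else 0.0
    # Project each accepted theta onto the observed summary value
    adjusted = [t - b * (s - s_obs) for _, t, s in kept]
    return sum(adjusted) / len(adjusted)
```

With many summary statistics, replacing the OLS slope by a LASSO fit performs the variable selection and overfitting control the abstract emphasizes.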
EMERALD: A Flexible Framework for Managing Seismic Data
NASA Astrophysics Data System (ADS)
West, J. D.; Fouch, M. J.; Arrowsmith, R.
2010-12-01
The seismological community is challenged by the vast quantity of new broadband seismic data provided by large-scale seismic arrays such as EarthScope’s USArray. While this bonanza of new data enables transformative scientific studies of the Earth’s interior, it also illuminates limitations in the methods used to prepare and preprocess those data. At a recent seismic data processing focus group workshop, many participants expressed the need for better systems to minimize the time and tedium spent on data preparation in order to increase the efficiency of scientific research. Another challenge related to data from all large-scale transportable seismic experiments is that there currently exists no system for discovering and tracking changes in station metadata. This critical information, such as station location, sensor orientation, instrument response, and clock timing data, may change over the life of an experiment and/or be subject to post-experiment correction. Yet nearly all researchers utilize metadata acquired with the downloaded data, even though subsequent metadata updates might alter or invalidate results produced with older metadata. A third long-standing issue for the seismic community is the lack of easily exchangeable seismic processing codes. This problem stems directly from the storage of seismic data as individual time series files, and the history of each researcher developing his or her preferred data file naming convention and directory organization. Because most processing codes rely on the underlying data organization structure, such codes are not easily exchanged between investigators. To address these issues, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The goal of the EMERALD project is to provide seismic researchers with a unified, user-friendly, extensible system for managing seismic event data, thereby increasing the efficiency of scientific enquiry. 
EMERALD stores seismic data and metadata in a state-of-the-art open source relational database (PostgreSQL), and can, on a timed basis or on demand, download the most recent metadata, compare it with previously acquired values, and alert the user to changes. The backend relational database is capable of easily storing and managing many millions of records. The extensible, plug-in architecture of the EMERALD system allows any researcher to contribute new visualization and processing methods written in any of 12 programming languages, and a central Internet-enabled repository for such methods provides users with the opportunity to download, use, and modify new processing methods on demand. EMERALD includes data acquisition tools allowing direct importation of seismic data, and also imports data from a number of existing seismic file formats. Pre-processed clean sets of data can be exported as standard sac files with user-defined file naming and directory organization, for use with existing processing codes. The EMERALD system incorporates existing acquisition and processing tools, including SOD, TauP, GMT, and FISSURES/DHI, making much of the functionality of those tools available in a unified system with a user-friendly web browser interface. EMERALD is now in beta test. See emerald.asu.edu or contact john.d.west@asu.edu for more details.
Detection of anomalous signals in temporally correlated data (Invited)
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2010-12-01
Detection of transient tectonic signals in data obtained from large geodetic networks requires the ability to detect signals that are both temporally and spatially coherent. In this report I describe a modification to an existing method that estimates both the coefficients of a temporally correlated noise model and an efficient filter based on that model. This filter, when applied to the original time-series, effectively whitens (or flattens) the power spectrum. The filtered data provide the means to calculate running averages, which are then used to detect deviations from the background trends. For large networks, time-series of signal-to-noise ratio (SNR) can be easily constructed since, by filtering, each of the original time-series has been transformed into one that is closer to having a Gaussian distribution with a variance of 1.0. Anomalous intervals may be identified by counting the number of GPS sites for which the SNR exceeds a specified value. For example, if during one time interval there were 5 out of 20 time-series with SNR>2, this would be considered anomalous; typically, one would expect only about 1 out of 20 time-series to exceed SNR>2 by chance. For time intervals with an anomalously large number of high-SNR series, the spatial distribution of the SNR is mapped to identify the location of the anomalous signal(s) and their degree of spatial clustering. Estimating the filter used to whiten the data requires modification of the existing methods that employ maximum likelihood estimation to determine the temporal covariance of the data. In these methods, it is assumed that the noise components in the data are a combination of white, flicker and random-walk processes and that they are derived from three different and independent sources.
Instead, in this new method, the covariance matrix is constructed assuming that only one source is responsible for the noise and that this source can be represented as a white-noise random-number generator convolved with a filter whose spectral properties are frequency (f) independent at the highest frequencies, 1/f at the middle frequencies, and 1/f² at the lowest frequencies. For data sets with no gaps in their time-series, construction of the covariance and inverse covariance matrices is extremely efficient. Application of the above algorithm to real data potentially involves several iterations, since small tectonic signals of interest are often indistinguishable from background noise. Consequently, the time-series of each GPS site are simply plotted to identify the largest outliers and signals, independent of their cause. Any analysis of the background noise levels must factor in these other signals, while the gross outliers need to be removed.
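The site-counting step described above can be sketched as follows (a minimal illustration, not the author's code): given whitened, roughly unit-variance SNR time-series for each GPS site, flag the epochs in which the number of sites exceeding the SNR threshold is larger than chance would predict.

```python
def anomalous_epochs(snr_by_site, threshold=2.0, max_expected=1):
    """Given per-site SNR time-series (a list of equal-length lists),
    return the epoch indices at which more than `max_expected` sites
    exceed `threshold` in absolute value -- candidate epochs for a
    spatially coherent transient signal."""
    n_epochs = len(snr_by_site[0])
    flagged = []
    for t in range(n_epochs):
        count = sum(1 for site in snr_by_site if abs(site[t]) > threshold)
        if count > max_expected:
            flagged.append(t)
    return flagged
```

In practice `max_expected` would be set from the network size and the tail probability of the whitened noise (roughly 5% of sites beyond |SNR|=2 for Gaussian residuals), and flagged epochs would then be mapped spatially as the abstract describes.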
Towards Noise Tomography and Passive Monitoring Using Distributed Acoustic Sensing
NASA Astrophysics Data System (ADS)
Paitz, P.; Fichtner, A.
2017-12-01
Distributed Acoustic Sensing (DAS) has the potential to revolutionize the field of seismic data acquisition. Thanks to their cost-effectiveness, fiber-optic cables may be capable of complementing conventional geophones and seismometers by filling a niche of applications utilizing large amounts of data. DAS may therefore serve as an additional tool to investigate the internal structure of the Earth and its changes over time, on scales ranging from hydrocarbon or geothermal reservoirs to the entire globe. Additional potential lies in the large fibre networks already deployed for telecommunication purposes, which could serve as distributed seismic antennas. We investigate theoretically how ambient noise tomography may be used with DAS data. For this we extend the theory of seismic interferometry to the measurement of strain. With numerical 2D finite-difference examples we investigate the impact of source and receiver effects. We study the effect of heterogeneous source distributions and of cable orientation by assessing similarities and differences with respect to the Green's function. We also compare the interferometric waveforms obtained from strain interferometry to the displacement interferometric wave fields obtained with existing methods. Intermediate results show that the obtained interferometric waveforms can be connected to the Green's functions and provide consistent information about the propagation medium. These simulations will be extended to reservoir-scale subsurface structures. Future work will include the application of the theory to real-data examples. The presented research depicts the early stage of a combination of theoretical investigations, numerical simulations and real-world data applications.
We will therefore evaluate the current potential and shortcomings of DAS in reservoir monitoring and seismology, with a long-term vision of global seismic tomography utilizing DAS data from existing fiber-optic cable networks.
Reinhold, Ann Marie; Poole, Geoffrey C.; Bramblett, Robert G.; Zale, Alexander V.; Roberts, David W.
2018-01-01
Determining the influences of anthropogenic perturbations on side channel dynamics in large rivers is important from both assessment and monitoring perspectives because side channels provide critical habitat to numerous aquatic species. Side channel extents are decreasing in large rivers worldwide. Although riprap and other linear structures have been shown to reduce side channel extents in large rivers, we hypothesized that small “anthropogenic plugs” (flow obstructions such as dikes or berms) across side channels modify whole-river geomorphology via accelerating side channel senescence. To test this hypothesis, we conducted a geospatial assessment, comparing digitized side channel areas from aerial photographs taken during the 1950s and 2001 along 512 km of the Yellowstone River floodplain. We identified longitudinal patterns of side channel recruitment (created/enlarged side channels) and side channel attrition (destroyed/senesced side channels) across n = 17 river sections within which channels were actively migrating. We related areal measures of recruitment and attrition to the density of anthropogenic side channel plugs across river sections. Consistent with our hypothesis, a positive spatial relationship existed between the density of anthropogenic plugs and side channel attrition, but no relationship existed between plug density and side channel recruitment. Our work highlights important linkages among side channel plugs and the persistence and restoration of side channels across floodplain landscapes. Specifically, management of small plugs represents a low-cost, high-benefit restoration opportunity to facilitate scouring flows in side channels to enable the persistence of these habitats over time.
NASA Astrophysics Data System (ADS)
Blackwell, B. A. B.; Skinner, A. R.; Smith, J. R.; Hill, C. L.; Churcher, C. S.; Kieniewicz, J. M.; Adelsberger, K. A.; Blickstein, J. I. B.; Florentin, J. A.; Deely, A. E.; Spillar, K. V.
2017-12-01
Today, Bir Tarfawi, Kharga, and Dakhleh Oases all sit in Egypt's hyperarid Western Desert. A dearth of naturally occurring surface water, coupled with ≤ 0.1 mm/y of precipitation and evaporation rates > 2 m/y, makes Bir Tarfawi uninhabitable today, while Dakhleh and Kharga depend on borehole water to support human habitation. Yet in scattered locations dotting the Quaternary surfaces and deposits near each oasis, Paleolithic artefacts, fossil ungulate teeth, and snails record times when surface water did exist in wetlands, small ponds, and even large lakes. At Bir Tarfawi in Marine Isotope Stages (MIS) 5, 7, and 13, wetlands or small lakes supported freshwater snails, large herbivores, and hominins. Dakhleh Oasis hosted a large lake in MIS 6 that provided a deep, reliable water supply for many millennia subsequently. ESR dates on fossils and tufa dates show thriving lacustrine and terrestrial ecosystems at Dakhleh during MIS 5, 7, 9, 11, and 17, and in shorter episodes in MIS 1, 2, 3, 6, and 12. At Kharga Oasis, springs discharged along the edge of the Libyan Escarpment, but the water was ponded in small basins dammed within tufa deposits. These dated deposits and fossils attest that water existed there in MIS 2-11, with one spot dating to ∼ 2.3 Ma. This proxy evidence suggests that, thanks to higher rainfall and/or groundwater tables, sufficient water persisted for much of the Pleistocene, allowing food resources such as large herbivores and molluscs to thrive and enabling hominin habitation and activity in the Western Desert.
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
There have been problems in existing multiple-physiological-parameter real-time monitoring systems, such as insufficient server capacity for physiological data storage and analysis (so that data consistency cannot be guaranteed), poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new cloud-computing-based solution in which multiple physiological parameters are stored and processed by a clustered back end. Through our studies, a batch process for longitudinal analysis of patients' historical data was introduced. The work included the resource virtualization of the IaaS layer for the cloud platform, the construction of the real-time computing platform of the PaaS layer, the reception and analysis of the data stream of the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage, and analysis of a large amount of physiological information. The simulation test results showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved the problems of long turnaround time, poor real-time analysis performance, and lack of extensibility that exist in traditional remote medical services. Technical support was thus provided to facilitate a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode moving towards home health monitoring of multiple physiological parameters.
Toward Exposing Timing-Based Probing Attacks in Web Applications †
Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai
2017-01-01
Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users’ browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach. PMID:28245610
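The paper's Chrome instrumentation is not reproduced here. As a hypothetical sketch of the detection idea, the code below flags a script whose high-resolution timer queries arrive at an anomalously high rate within a sliding window, one simple signature of timing-based probing; the window length and call-count threshold are invented parameters.

```python
def flag_probing(timer_calls, window_s=1.0, max_calls=100):
    """`timer_calls` is a sorted list of timestamps (in seconds) at which
    a script queried a high-resolution timer. Return True if any sliding
    window of length `window_s` contains more than `max_calls` queries --
    the kind of anomalous timing behavior a probing script exhibits."""
    lo = 0
    for hi in range(len(timer_calls)):
        # Shrink the window from the left until it spans at most window_s
        while timer_calls[hi] - timer_calls[lo] > window_s:
            lo += 1
        if hi - lo + 1 > max_calls:
            return True
    return False
```

A real detector, as in the paper, would monitor browser behaviors far richer than raw timer-call rates, but the sliding-window anomaly test conveys the flavor of the approach.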
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution while underestimating the emissions in most cases, because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed-profile-based technique (i.e., using the trajectory of each vehicle) provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link driving schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find LDSs for EPA-MOVES that produce emission estimates better than the average-speed-based technique, with execution time faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. The method is highly appropriate for transportation networks with high variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
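The core of HC-DTW can be sketched as follows: a dynamic-time-warping distance between speed profiles, fed into a naive agglomerative (single-linkage) clustering loop. This is a toy illustration, not the authors' implementation; the speed profiles and cluster count are invented.

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two speed profiles (sequences of speed samples)."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def cluster_profiles(profiles, n_clusters):
    """Naive agglomerative clustering (single linkage) under DTW."""
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > n_clusters:
        best = None
        for x in range(len(clusters)):
            for y in range(x + 1, len(clusters)):
                d = min(dtw_distance(profiles[i], profiles[j])
                        for i in clusters[x] for j in clusters[y])
                if best is None or d < best[0]:
                    best = (d, x, y)
        _, x, y = best
        clusters[x] += clusters.pop(y)   # merge the closest pair
    return clusters

# Toy speed profiles (m/s): two slow, two fast
profiles = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [5.0, 5.0, 5.0], [5.0, 5.0, 4.9]]
clusters = cluster_profiles(profiles, 2)
```

A representative profile from each cluster would then serve as the link driving schedule passed to EPA-MOVES.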
2017-06-29
Non-destructive evaluation of coating thickness using guided waves
NASA Astrophysics Data System (ADS)
Ostiguy, Pierre-Claude; Quaegebeur, Nicolas; Masson, Patrice
2015-04-01
Among existing strategies for non-destructive evaluation of coating thickness, ultrasonic methods based on the measurement of the Time-of-Flight (ToF) of high frequency bulk waves propagating through the thickness of a structure are widespread. However, these methods only provide a very localized measurement of the coating thickness, and the precision of the results is largely affected by the surface roughness, porosity or multi-layered nature of the host structure. Moreover, since the measurement is very local, inspection of large surfaces can be time consuming. This article presents a robust methodology for coating thickness estimation based on the generation and measurement of guided waves. Guided waves have the advantage over ultrasonic bulk waves of being less sensitive to surface roughness and of measuring an average thickness over a wider area, thus reducing the time required to inspect large surfaces. The approach is based on an analytical multi-layer model and intercorrelation of reference and measured signals. The method is first assessed numerically for an aluminum plate, where it is demonstrated that coating thickness can be measured within a precision of 5 micrometers using the S0 mode at frequencies below 500 kHz. Then, an experimental validation is conducted, and results show that coating thicknesses in the range of 10 to 200 micrometers can be estimated to within 10 micrometers of the exact coating thickness on this type of structure.
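The intercorrelation step can be illustrated with a minimal sketch: the delay between a reference signal and a measured signal is estimated from the peak of their cross-correlation. The waveform, sampling rate and delay below are synthetic stand-ins, not the article's experimental data.

```python
import numpy as np

fs = 1_000_000                              # assumed 1 MHz sampling rate
t = np.arange(0, 200e-6, 1 / fs)            # 200 microsecond record
# Gaussian-windowed 100 kHz tone burst as a toy reference signal
ref = np.sin(2 * np.pi * 100e3 * t) * np.exp(-((t - 50e-6) ** 2) / (2 * (10e-6) ** 2))

delay_samples = 37                          # synthetic propagation delay
meas = np.concatenate([np.zeros(delay_samples), ref])[: len(ref)]

# Cross-correlate and convert the peak index to a lag in samples
xcorr = np.correlate(meas, ref, mode="full")
lag = int(np.argmax(xcorr)) - (len(ref) - 1)
tof_seconds = lag / fs                      # estimated Time-of-Flight
```

In the actual method, this estimated delay would be compared against the analytical multi-layer model to invert for coating thickness.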
Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.
Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît
2011-01-01
Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution shape description of large macromolecule surfaces possible. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
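The surface-restricted computation reduces to a multi-source Dijkstra over the mesh graph of the SES, seeded at the vertices that touch the convex hull (depth 0). The sketch below assumes a toy adjacency structure; a real mesh would supply geodesic edge lengths between surface vertices.

```python
import heapq

def travel_depth(adjacency, hull_vertices):
    """Multi-source Dijkstra over a surface mesh graph.

    adjacency: {vertex: [(neighbor, edge_length), ...]}
    hull_vertices: vertices lying on the convex hull (depth 0).
    Returns {vertex: estimated depth along the surface graph}."""
    depth = {v: float("inf") for v in adjacency}
    for v in hull_vertices:
        depth[v] = 0.0
    heap = [(0.0, v) for v in hull_vertices]
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > depth[v]:
            continue                      # stale queue entry
        for w, length in adjacency[v]:
            nd = d + length
            if nd < depth[w]:
                depth[w] = nd
                heapq.heappush(heap, (nd, w))
    return depth
```

Because only surface vertices enter the queue, the cost scales with the mesh size rather than with a volumetric grid.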
Invited review: gender issues related to spaceflight: a NASA perspective.
Harm, D L; Jennings, R T; Meck, J V; Powell, M R; Putcha, L; Sams, C P; Schneider, S M; Shackelford, L C; Smith, S M; Whitson, P A
2001-11-01
This minireview provides an overview of known and potential gender differences in physiological responses to spaceflight. The paper covers cardiovascular and exercise physiology, barophysiology and decompression sickness, renal stone risk, immunology, neurovestibular and sensorimotor function, nutrition, pharmacotherapeutics, and reproduction. Potential health and functional impacts associated with the various physiological changes during spaceflight are discussed, and areas needing additional research are highlighted. Historically, studies of physiological responses to microgravity have not been aimed at examining gender-specific differences in the astronaut population. Insufficient data exist in most of the discipline areas at this time to draw valid conclusions about gender-specific differences in astronauts, in part due to the small ratio of women to men. The only astronaut health issue for which a large enough data set exists to allow valid conclusions to be drawn about gender differences is orthostatic intolerance following shuttle missions, in which women have a significantly higher incidence of presyncope during stand tests than do men. The most common observation across disciplines is that individual differences in physiological responses within genders are usually as large as, or larger than, differences between genders. Individual characteristics usually outweigh gender differences per se.
On small beams with large topological charge: II. Photons, electrons and gravitational waves
NASA Astrophysics Data System (ADS)
Krenn, Mario; Zeilinger, Anton
2018-06-01
Beams of light with a large topological charge significantly change their spatial structure when they are focused strongly. Physically, this can be explained by an emerging electromagnetic field component in the direction of propagation, which is neglected in the simplified scalar wave picture in optics. Here we ask: is this a specifically photonic behavior, or can similar phenomena also be predicted for other species of particles? We show that the same modification of the spatial structure exists for relativistic electrons as well as for focused gravitational waves, although for different physical reasons: for electrons, which are described by the Dirac equation, the spatial structure changes due to a spin–orbit coupling in the relativistic regime. In gravitational waves described with linearized general relativity, the curvature of space–time between the transverse and propagation directions leads to the modification of the spatial structure. Thus, this universal phenomenon exists for both massive and massless elementary particles with spin 1/2, 1 and 2. It would be very interesting to know whether other types of particles, such as composite systems (neutrons or C60) or neutrinos, show a similar behavior, and how this phenomenon can be explained in a unified physical way.
Long-Period Tidal Variations in the Length of Day
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Erofeeva, Svetlana Y.
2014-01-01
A new model of long-period tidal variations in length of day is developed. The model comprises 80 spectral lines with periods between 18.6 years and 4.7 days, and it consistently includes effects of mantle anelasticity and dynamic ocean tides for all lines. The anelastic properties follow Wahr and Bergen; experimental confirmation of their results now exists at the fortnightly period, but there remains uncertainty when extrapolating to the longest periods. The ocean modeling builds on recent work with the fortnightly constituent, which suggests that oceanic tidal angular momentum can be reliably predicted at these periods without data assimilation. This is a critical property when modeling most long-period tides, for which little observational data exist. Dynamic ocean effects are quite pronounced at the shortest periods, as out-of-phase rotation components become nearly as large as in-phase components. The model is tested against a 20 year time series of space geodetic measurements of length of day. The current international standard model is shown to leave significant residual tidal energy, and the new model is found to mostly eliminate that energy, with especially large variance reduction for constituents Sa, Ssa, Mf, and Mt.
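A line-by-line tidal model of this kind is evaluated as a sum of spectral lines, each contributing an in-phase (cosine) and out-of-phase (sine) term at its own period. The sketch below is a hypothetical two-line illustration; the periods echo fortnightly (Mf-like) and monthly (Mm-like) constituents, but the amplitudes are made up and are not the paper's coefficients.

```python
import math

# (period_days, in_phase_amp, out_of_phase_amp) -- illustrative values only
lines = [
    (13.66, 200.0, 30.0),   # Mf-like constituent
    (27.55, 120.0, 10.0),   # Mm-like constituent
]

def delta_lod(t_days):
    """Tidal length-of-day variation as a sum of spectral lines (toy units)."""
    total = 0.0
    for period, a_in, a_out in lines:
        arg = 2 * math.pi * t_days / period
        total += a_in * math.cos(arg) + a_out * math.sin(arg)
    return total
```

The out-of-phase amplitudes are where the dynamic ocean response enters; in a purely equilibrium model they would vanish.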
JIT: A Strategic Tool of Inventory Management
NASA Astrophysics Data System (ADS)
Singh, D. K.; Singh, Satyendra
2012-03-01
Investment in inventory absorbs a large portion of the working capital of a company and often represents a large portion of the total assets of a business. By improving return on investment through an increased rate of inventory turnover, management often seeks to ensure economic efficiency. Effective inventory management enables a firm to provide lower costs, rapid response and flexibility for its customers. The just-in-time (JIT) philosophy has been widely adopted and practiced worldwide in recent years. It aims at reducing total production costs by producing only what is immediately needed and eliminating waste. It is based on a radically different concept, deviating substantially from existing manufacturing practices in many respects, and is a very effective tool for reducing the wastage of inventory and managing it effectively. It has the potential to bring substantial changes to the existing setup of a company; it can give the company a new face, broaden its acceptability and ensure a longer life. It can strategically change the atmosphere needed for longer survival. JIT is radically different from MRP and goes beyond materials management. The new outlook acquired by the company can meet the global expectations of its customers.
Code Parallelization with CAPO: A User Manual
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)
2001-01-01
A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code into an equivalent parallel version of the software, in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.
Wind farm density and harvested power in very large wind farms: A low-order model
NASA Astrophysics Data System (ADS)
Cortina, G.; Sharma, V.; Calaf, M.
2017-07-01
In this work we develop new understanding of the wind-turbine wake recovery process as a function of wind farm density, using large-eddy simulations of an atmospheric boundary layer diurnal cycle. Simulations are forced with a constant geostrophic wind and a time varying surface temperature extracted from a selected period of the Cooperative Atmospheric Surface Exchange Study field experiment. Wind turbines are represented using the actuator disk model with rotation and yaw alignment. A control volume analysis around each turbine has been used to evaluate wind turbine wake recovery and the corresponding harvested power. Results confirm the existence of two dominant recovery mechanisms, advection and the flux of mean kinetic energy, which are modulated by the background thermal stratification. For the low-density arrangements advection dominates, while for the highly loaded wind farms the mean kinetic energy recovers through fluxes of mean kinetic energy. For the cases in between, a smooth balance of both mechanisms exists. From the results, a low-order model for the wind farms' harvested power as a function of thermal stratification and wind farm density has been developed, which has the potential to be used as an order-of-magnitude assessment tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arsenin, V. V., E-mail: arsenin-vv@nrcki.ru; Skovoroda, A. A., E-mail: skovoroda-aa@nrcki.ru
2015-12-15
Using a cylindrical model, a relatively simple description is presented of how a magnetic field perturbation, stimulated by a small external helical current or a small helical distortion of the boundary and generating magnetic islands, penetrates into a plasma column with a magnetic surface q=m/n with which tearing instability is associated. Linear analysis of the classical instability with an aperiodic growth of the perturbation in time shows that the perturbation amplitude in the plasma increases in a resonant manner as the discharge parameters approach the threshold of tearing instability. In a stationary case, under the assumption of the helical character of equilibrium, which can be found from the two-dimensional nonlinear equation for the helical flux, there is no requirement for the small size of the island. Examples of calculations in which magnetic islands are large near the threshold of tearing instability are presented. The bifurcation of equilibrium near the threshold of tearing instability in plasma with a cylindrical boundary, i.e., the existence of helical equilibrium (along with cylindrical equilibrium) with large islands, is described. Moreover, helical equilibrium can also exist in the absence of instability.
Mission definition study for a VLBI station utilizing the Space Shuttle
NASA Technical Reports Server (NTRS)
Burke, B. F.
1982-01-01
The uses of the Space Shuttle transportation system for orbiting Very Long Baseline Interferometry (OVLBI) were examined, both with respect to technical feasibility and scientific possibilities. The study consisted of a critical look at the adaptability of current technology to an orbiting environment, the suitability of current data reduction facilities for the new technique, and a review of the new science that is made possible by using the Space Shuttle as a moving platform for a VLBI terminal in space. The conclusions are positive in all respects: no technological deficiencies exist that would need remedy, the data processing problem can be handled easily by straightforward adaptations of existing systems, and there is a significant new research frontier to be explored, with the Space Shuttle providing the first step. The VLBI technique utilizes the great frequency stability of modern atomic time standards, the power of integrated circuitry to perform real-time signal conditioning, and the ability of magnetic tape recorders to provide essentially error-free data recording, all of which combine to permit the realization of radio interferometry at arbitrarily large baselines.
NASA Technical Reports Server (NTRS)
Ryan, Deirdre A.; Luebbers, Raymond J.; Nguyen, Truong X.; Kunz, Karl S.; Steich, David J.
1992-01-01
Prediction of anechoic chamber performance is a difficult problem. Electromagnetic anechoic chambers exist for a wide range of frequencies but are typically very large when measured in wavelengths. Three dimensional finite difference time domain (FDTD) modeling of anechoic chambers is possible with current computers, but only at frequencies lower than most chamber design frequencies. However, two dimensional FDTD (2D-FDTD) modeling enables much greater detail at higher frequencies and offers significant insight into compact anechoic chamber design and performance. A major subsystem of an anechoic chamber for which computational electromagnetic analyses exist is the reflector. First, an analysis of the quiet zone fields of a low frequency anechoic chamber produced by a uniform source and a reflector in two dimensions using the FDTD method is presented. The 2D-FDTD results are compared with results from a three dimensional corrected physical optics calculation and show good agreement. Next, a directional source is substituted for the uniform radiator. Finally, a two dimensional anechoic chamber geometry, including absorbing materials, is considered, and the 2D-FDTD results for these geometries appear reasonable.
Guozhi, Jia; Peng, Wang; Yanbang, Zhang; Kai, Chang
2016-01-01
Localized surface plasmons (LSPs), the confined collective excitations of electrons in noble metal and doped semiconductor nanostructures, greatly enhance the local electric field near the surface of the nanostructures and result in a strong optical response. LSPs of ordinary massive electrons have been investigated for a long time and are used as a basic ingredient of plasmonics and metamaterials. LSPs of massless Dirac electrons, which could result in novel tunable plasmonic metamaterials in the terahertz and infrared frequency regime, are relatively unexplored. Here we report for the first time the observation of LSPs in Bi2Se3 topological insulator hierarchical nanoflowers, which consist of a large number of Bi2Se3 nanocrystals. The existence of LSPs is demonstrated by surface-enhanced Raman scattering and absorbance spectra ranging from the ultraviolet to the near-infrared. LSPs produce an enhanced photothermal effect stimulated by a near-infrared laser. The excellent photothermal conversion effect can be ascribed to the existence of topological surface states, and provides a new route to practical application of topological insulators in nanoscale heat sources and cancer therapy. PMID:27172827
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in reconstruction quality and reconstruction time.
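The SL0 family of algorithms can be sketched as follows: a smooth surrogate of the l0 norm is minimized for a decreasing sequence of smoothing widths, with each descent step projected back onto the measurement constraint. The sketch below uses the hyperbolic tangent surrogate mentioned in the abstract but plain gradient steps rather than Newton directions, and the matrix, data and parameters are illustrative assumptions.

```python
import numpy as np

def modified_sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, mu=1.0, inner=30):
    """SL0-style sparse recovery with a tanh surrogate for the l0 norm (sketch)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-l2 feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            # gradient of sum_i tanh(x_i^2 / (2 sigma^2)) with respect to x
            t = np.tanh(x**2 / (2 * sigma**2))
            grad = (1 - t**2) * x / sigma**2
            x = x - mu * sigma**2 * grad  # descent step on the surrogate
            x = x - A_pinv @ (A @ x - y)  # project back onto {x : A x = y}
        sigma *= sigma_decay              # tighten the l0 approximation
    return x

# Toy underdetermined system whose sparsest solution is [0, 0, 1]
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
y = np.array([1.0, 1.0])
x_hat = modified_sl0(A, y)
```

In the paper's setting, each range-compressed 2-D slice supplies its own `A` and `y`, and the Newton direction replaces the gradient step to accelerate convergence.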
Water and ice on Mars: Evidence from Valles Marineris
NASA Technical Reports Server (NTRS)
Lucchitta, B. K.
1987-01-01
An important contribution to the volatile history of Mars comes from a study of Valles Marineris, where stereoimages and a 3-D view of the upper Martian crust permit unusual insights. The evidence that ground water and ice existed until relatively recently or still exist in the equatorial area comes from observations of landslides, wall rock, and dark volcanic vents. Valles Marineris landslides are different in efficiency from large catastrophic landslides on Earth. One explanation for the difference might be that the Martian slides are lubricated by water. A comparison of landslide speeds also suggests that the Martian slides contain water. That Valles Marineris wall rock contained water or ice is further suggested by its difference from the interior layered deposits. Faults and fault zones in Valles Marineris also shed light on the problem of water content in the walls. Because the main evidence for water and ice in the wall rock comes from slides, their time of emplacement is important. The slides in Valles Marineris date from the time of late eruptions of the Tharsis volcanoes and thus were emplaced after the major activity of Martian outflow channels.
Cross-sensor iris recognition through kernel learning.
Pillai, Jaishanker K; Puertas, Maria; Chellappa, Rama
2014-01-01
Due to the increasing popularity of iris biometrics, new sensors are being developed for acquiring iris images and existing ones are being continuously upgraded. Re-enrolling users every time a new sensor is deployed is expensive and time-consuming, especially in applications with a large number of enrolled users. However, recent studies show that cross-sensor matching, where the test samples are verified using data enrolled with a different sensor, often leads to reduced performance. In this paper, we propose a machine learning technique to mitigate the cross-sensor performance degradation by adapting the iris samples from one sensor to another. We first present a novel optimization framework for learning transformations on iris biometrics. We then utilize this framework for sensor adaptation, by reducing the distance between samples of the same class, and increasing it between samples of different classes, irrespective of the sensors acquiring them. Extensive evaluations on iris data from multiple sensors demonstrate that the proposed method leads to improvement in cross-sensor recognition accuracy. Furthermore, since the proposed technique requires minimal changes to the iris recognition pipeline, it can easily be incorporated into existing iris recognition systems.
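The adaptation objective can be illustrated with a toy distance-based loss: under a learned linear map `W`, same-class pairs are pulled together and different-class pairs are pushed beyond a margin. This is only a sketch of the pull/push idea, not the paper's kernel-learning formulation; the data, labels and margin are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))      # toy iris feature vectors (4-D, made up)
labels = np.arange(20) % 4        # 4 subjects, 5 samples each

def pairwise_loss(W, X, labels, margin=4.0):
    """Contrastive-style objective to be minimized over the map W:
    same-class squared distances, plus hinge penalties for
    different-class pairs closer than the margin."""
    Z = X @ W.T                   # transformed samples
    loss = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d2 = np.sum((Z[i] - Z[j]) ** 2)
            if labels[i] == labels[j]:
                loss += d2                        # pull same class together
            else:
                loss += max(0.0, margin - d2)     # push different classes apart
    return loss

baseline = pairwise_loss(np.eye(4), X, labels)    # identity map as a reference
```

A training loop would minimize this loss over `W`, using features from both sensors so that the learned map is sensor-agnostic.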
NASA Astrophysics Data System (ADS)
Boyarshinov, B. F.
2018-01-01
Experimental data on the flow structure and mass transfer near the boundaries of the region of existence of laminar and turbulent boundary layers with combustion are considered. These data include the results of an investigation of reacting flow stability at mixed convection and of mass transfer during ethanol evaporation "on the floor" and "on the ceiling", when the flame surface curves to form large-scale cellular structures. It is shown with the help of PIV equipment that when Rayleigh-Taylor instability manifests, mushroom-like structures are formed, in which the motion from the flame front to the wall and back alternates. The cellular flame exists in a narrow range of velocities, from 0.55 to 0.65 m/s, and mass transfer is three times higher than its level in the standard laminar boundary layer.
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue
2018-01-01
A common objective of model predictive control (MPC) design is a large initial feasible region, low online computational burden and satisfactory control performance of the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, the existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. By replacing the fixed prediction scenarios with time-varying multi-step prediction scenarios, this paper provides a new insight into improving the existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.
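The combination of predetermined feedback laws can be illustrated with a scalar toy example: the applied control interpolates between two stabilizing gains, and any fixed weight in [0, 1] keeps the closed loop stable. The system, gains and weight below are invented for illustration; a real interpolation-based MPC would choose the weight online by solving a small optimization at each step.

```python
# Unstable open-loop scalar system x+ = A x + B u (toy values)
A, B = 1.1, 1.0
K1, K2 = -0.5, -0.3       # two predetermined stabilizing feedback gains
lam = 0.5                 # fixed interpolation weight for illustration

x = 1.0
for _ in range(20):
    u = (lam * K1 + (1 - lam) * K2) * x   # interpolated control law
    x = A * x + B * u                     # closed loop: x <- 0.7 x
```

Here the interpolated closed-loop factor is 1.1 + (0.5·(−0.5) + 0.5·(−0.3)) = 0.7, so the state contracts geometrically.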
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) it is hard to extract high-level features of aircraft; 2) locating objects within such large images is difficult and time consuming; and 3) satellite images come at multiple resolutions. In this paper, inspired by the biological visual mechanism, a fusion detection framework is proposed, which fuses the top-down visual mechanism (a deep CNN model) and the bottom-up visual mechanism (GBVS) to detect aircraft. Besides, we use a multi-scale training method for the deep CNN model to solve the problem of multiple resolutions. Experimental results demonstrate that our method achieves a better detection result than the other methods.
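One simple way to realize such a fusion is a weighted combination of the two score maps. The sketch below is a toy illustration with invented maps, weight and threshold; the paper's actual fusion rule may differ.

```python
import numpy as np

# Top-down detector confidence (CNN-like) and bottom-up saliency (GBVS-like),
# both normalized to [0, 1] on a tiny 2x2 grid of candidate regions (made up).
cnn_scores = np.array([[0.1, 0.9], [0.2, 0.3]])
saliency   = np.array([[0.2, 0.8], [0.9, 0.1]])

alpha = 0.7                                   # assumed fusion weight
fused = alpha * cnn_scores + (1 - alpha) * saliency
detections = fused > 0.5                      # assumed decision threshold
```

The bottom-up map cheaply prunes most of the large image, while the top-down score confirms aircraft-like regions; the fused map keeps only regions both mechanisms support.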
40 CFR 62.7455 - Identification of sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Air Emissions from Existing Commercial and Industrial Solid Waste Incineration Units § 62.7455 Identification of sources. (a) The plan applies to the following existing commercial and solid waste incineration...] Air Emissions From Existing Large and Small Municipal Waste Combustors ...
40 CFR 62.7455 - Identification of sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Air Emissions from Existing Commercial and Industrial Solid Waste Incineration Units § 62.7455 Identification of sources. (a) The plan applies to the following existing commercial and solid waste incineration...] Air Emissions From Existing Large and Small Municipal Waste Combustors ...
Gething, Peter W; Patil, Anand P; Hay, Simon I
2010-04-01
Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions, such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty (the fidelity of predictions at each mapped pixel) but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new, efficient approximating algorithm that can generate, for the first time, the joint simulation of prevalence values across the very large prediction spaces needed for global-scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum, allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
Coal resources, reserves and peak coal production in the United States
Milici, Robert C.; Flores, Romeo M.; Stricker, Gary D.
2013-01-01
In spite of the large endowment of coal resources in the United States, recent studies have indicated that its coal production is destined to reach a maximum and begin an irreversible decline sometime during the middle of the current century. However, studies and assessments of the coal reserve data essential for making accurate forecasts of United States coal production have not been compiled on a national basis. As a result, there is a great deal of uncertainty in the accuracy of the production forecasts. A very large percentage of the coal mined in the United States comes from a few large-scale mines (mega-mines) in the Powder River Basin of Wyoming and Montana. Reported reserves at these mines do not account for future potential reserves or for future developments in technology that may turn coal currently classified as resources into reserves. In order to maintain United States coal production at or near current levels for an extended period of time, existing mines will eventually have to increase their recoverable reserves and/or new large-scale mines will have to be opened elsewhere. Accordingly, to facilitate energy planning for the United States, this paper suggests that probabilistic assessments of the country's remaining coal reserves would improve long-range forecasts of coal production. As in United States coal assessment projects currently being conducted, a major priority of probabilistic assessments would be to identify the numbers and sizes of remaining large blocks of coal capable of supporting large-scale mining operations for extended periods of time, and to conduct economic evaluations of those resources.
NASA Astrophysics Data System (ADS)
Flemming, Burghard W.; Kudrass, Hermann-Rudolf
2018-02-01
The existence of a continuously flowing Mozambique Current, i.e. a western geostrophic boundary current flowing southwards along the shelf break of Mozambique, was until recently accepted by oceanographers studying ocean circulation in the south-western Indian Ocean. This concept was then cast into doubt by long-term current measurements from current-meter moorings deployed across the northern Mozambique Channel, which suggested that southward flow through the channel took place in the form of successive, southward-migrating and counter-clockwise-rotating eddies. Indeed, numerical modelling found that strong currents on the outer shelf occurred, if at all, for no more than 9 days per year. In the present study, the negation of the existence of a Mozambique Current is challenged by the discovery of a large (50 km long, 12 km wide) subaqueous dune field (with up to 10 m high dunes) on the outer shelf east of the modern Zambezi River delta at water depths between 50 and 100 m. Interpreted as the current-modified early Holocene Zambezi palaeo-delta, the dune field would have migrated southwards by at least 50 km from its former location since sea level recovered to its present-day position some 7 ka ago and the former delta was remoulded into a migrating dune field. Because a large dune field composed of actively migrating bedforms cannot be generated and maintained by currents restricted to only 9 days per year, the validity of those earlier modelling results is questioned for the western margin of the flow field. Indeed, satellite images extracted from NASA's Perpetual Ocean display, which show monthly time-integrated surface currents in the Mozambique Channel for the 5-month period June-October 2006, support the proposition that strong flow on the outer Mozambican shelf occurs much more frequently than postulated by those modelling results. This is consistent with more recent modelling studies comparing slippage and non-slippage approaches, which suggest that, when partial slippage is applied, a western boundary current can exist simultaneously with the southward-migrating eddies. Considering the evidence presented in this paper, it is concluded that a quasi-persistent, though seasonally variable, Mozambique Current does exist.
Maximum magnitude in the Lower Rhine Graben
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry
2014-05-01
Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predicts how strain will be released. As a result, quite different estimates can be made depending on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained, large earthquakes are rarer and variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone, where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants, which should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date. As a next step, we will compute hazard maps for different return periods based on the synthetic catalogs, in order to determine the influence of underestimating Mmax.
NASA Astrophysics Data System (ADS)
Qin, Zhenwei
1993-04-01
Although slow melting favors the generation of basaltic melt from a mantle matrix with large radioactive disequilibrium between two actinide nuclides (McKenzie, 1985a), it results in a long residence time in a magma chamber, during which the disequilibrium may be removed. An equilibrium melting model modified after McKenzie (1985a) is presented here which suggests that, for a given actinide parent-daughter pair, there exists a specific melting rate at which disequilibrium between these two nuclides reaches its maximum. This melting rate depends on the decay constant of the daughter nuclide concerned and the magma chamber volume scaled to that of its source. For a given scaled chamber size, large radioactive disequilibrium between two actinide nuclides in basalts will be observed if the melting rate is such that the residence time of the magma in the chamber is comparable to the mean life of the daughter nuclide. With a chamber size 1% in volume of the melting source, the melting rates at which maximum disequilibrium in basalts is obtained are 10^-7, 2 × 10^-7 and 3 × 10^-6 y^-1, respectively, for 238U-230Th, 235U-231Pa and 230Th-226Ra. This implies that, while large disequilibrium between 238U-230Th and between 235U-231Pa may occur together, large 230Th-226Ra disequilibrium will not coexist with large 238U-230Th disequilibrium, consistent with some observations. The active mantle melting zone which supplies melt to a ridge axis is inferred to be only about 10 km thick and 50 km wide. The fraction of melt present in such a mantle source at any time is about 0.01 and 0.04, respectively, if the melting rate is 10^-7 or 10^-6 y^-1. The corresponding residence time of the residual melt in the matrix is 10^5 and 4 × 10^4 y.
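The residence-time argument above can be checked to order of magnitude: the melting rate giving maximal disequilibrium should place the magma residence time (chamber fraction divided by melting rate) near the mean life of the daughter nuclide. A minimal sketch, using standard half-life values; the 1% chamber fraction follows the abstract, and all names are illustrative:

```python
import math

# Daughter-nuclide half-lives in years (standard values).
HALF_LIFE_YR = {"230Th": 7.54e4, "231Pa": 3.28e4, "226Ra": 1.60e3}
CHAMBER_FRACTION = 0.01  # chamber volume as a fraction of the melting source


def optimal_melting_rate(daughter):
    """Melting rate (per year) whose residence time equals the daughter's mean life."""
    mean_life = HALF_LIFE_YR[daughter] / math.log(2)  # mean life = t_half / ln 2
    return CHAMBER_FRACTION / mean_life


for d in HALF_LIFE_YR:
    print(d, f"{optimal_melting_rate(d):.1e} per year")
```

The resulting rates land within a factor of about 1.5 of the 10^-7, 2 × 10^-7 and 3 × 10^-6 y^-1 values quoted in the abstract, as expected for an order-of-magnitude check.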
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, good clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks that are processed sequentially. In addition, our algorithms use DTW to measure the distance between pairs of time series, encouraging higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to several existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy. PMID:29795600
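The DTW distance at the core of these algorithms is a short dynamic program. A minimal O(nm) sketch for univariate series (an illustration of the standard technique, not the authors' implementation):

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two numeric sequences.

    cost[i][j] holds the minimal accumulated cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three admissible predecessor moves.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]
```

Because segments may be stretched, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0 even though the series have different lengths, which is exactly why DTW suits time-shifted series better than a pointwise metric.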
A large-scale forest fragmentation experiment: the Stability of Altered Forest Ecosystems Project.
Ewers, Robert M; Didham, Raphael K; Fahrig, Lenore; Ferraz, Gonçalo; Hector, Andy; Holt, Robert D; Kapos, Valerie; Reynolds, Glen; Sinun, Waidi; Snaddon, Jake L; Turner, Edgar C
2011-11-27
Opportunities to conduct large-scale field experiments are rare, but provide a unique opportunity to reveal the complex processes that operate within natural ecosystems. Here, we review the design of existing, large-scale forest fragmentation experiments. Based on this review, we develop a design for the Stability of Altered Forest Ecosystems (SAFE) Project, a new forest fragmentation experiment to be located in the lowland tropical forests of Borneo (Sabah, Malaysia). The SAFE Project represents an advance on existing experiments in that it: (i) allows discrimination of the effects of landscape-level forest cover from patch-level processes; (ii) is designed to facilitate the unification of a wide range of data types on ecological patterns and processes that operate over a wide range of spatial scales; (iii) has greater replication than existing experiments; (iv) incorporates an experimental manipulation of riparian corridors; and (v) embeds the experimentally fragmented landscape within a wider gradient of land-use intensity than do existing projects. The SAFE Project represents an opportunity for ecologists across disciplines to participate in a large initiative designed to generate a broad understanding of the ecological impacts of tropical forest modification.
Conroy, M.J.; Senar, J.C.; Domenech, J.
2002-01-01
We developed models for the analysis of recapture data for 2678 serins (Serinus serinus) ringed in north-eastern Spain since 1985. We investigated several time- and individual-specific factors as potential predictors of overall mortality and dispersal patterns, and of gender and age differences in these patterns. Time-specific covariates included minimum daily temperature, days below freezing, and the abundance of a strong competitor, the siskin (Carduelis spinus), during winter, and maximum temperature and rainfall during summer. Individual covariates included body mass (i.e. body condition) and wing length (i.e. flying ability), and interactions between body mass and environmental factors. We found little support for a predictive relationship between environmental factors and survival, but good evidence of relationships between body mass and survival, especially for juveniles. Juvenile survival appears to vary in a curvilinear manner with increasing mass, suggesting that there may be an optimal mass beyond which further increases are detrimental. The mass-survival relationship does seem to be influenced by at least one environmental factor, namely the abundance of wintering siskins. When siskins are abundant, increases in body mass appear to relate strongly to increasing survival. When siskin numbers are average or low, the relationship is largely reversed, suggesting that the presence of strong competition mitigates the otherwise largely negative effects of greater body mass. Wing length in juveniles also appears to be positively related to survival, perhaps largely due to the influence of a few unusually large juveniles with adult-like survival. Further work is needed to test these relationships, ideally experimentally.
Meta-control of combustion performance with a data mining approach
NASA Astrophysics Data System (ADS)
Song, Zhe
Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermodynamics have limitations in finding optimal operational regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing combustion performance. However, the philosophy, methods and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, and obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem requires efficient heuristics. This dissertation sets out to solve these two challenges. The major contribution of this four-year research is a data-driven solution to optimize the combustion process, in which a process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2018-02-01
Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example application to the northern Chilean subduction zone highlights the potential of this method.
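An exponential-decay spatial covariance of this kind is straightforward to construct. A minimal pure-Python sketch; the parameter names and default values are illustrative, not taken from the paper:

```python
import math


def exp_covariance(coords, sigma2=1.0, corr_len=10.0):
    """Covariance matrix with C[i][j] = sigma2 * exp(-d_ij / corr_len),

    where d_ij is the Euclidean distance between pixel coordinates i and j.
    Nearby pixels covary strongly; covariance decays with separation.
    """
    n = len(coords)
    return [[sigma2 * math.exp(-math.dist(coords[i], coords[j]) / corr_len)
             for j in range(n)]
            for i in range(n)]
```

The matrix is symmetric with sigma2 on the diagonal, and entries shrink monotonically with pixel separation, which is the structure a joint (all-pixels-simultaneously) inversion exploits.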
Panel data analysis of cardiotocograph (CTG) data.
Horio, Hiroyuki; Kikuchi, Hitomi; Ikeda, Tomoaki
2013-01-01
Panel data analysis is a statistical method, widely used in econometrics, which deals with two-dimensional panel data collected over time and over individuals. The cardiotocograph (CTG), which monitors fetal heart rate (FHR) using Doppler ultrasound and uterine contraction by strain gauge, is commonly used in the intrapartum treatment of pregnant women. Although the relationship between FHR waveform patterns and outcomes such as umbilical blood gas data at delivery has long been analyzed, no accumulated set of FHR patterns from a large number of cases exists. As time-series economic fluctuations in econometrics, such as consumption trends, have been studied using panel data consisting of time-series and cross-sectional data, we tried to apply this method to CTG data. Panel data composed of symbolized segments of FHR patterns can be easily handled, and a perinatologist can obtain a whole-pattern view from the microscopic level of time-series FHR data.
Reviewing and piloting methods for decreasing discount rates; someone, somewhere in time.
Parouty, Mehraj B Y; Krooshof, Daan G M; Westra, Tjalke A; Pechlivanoglou, Petros; Postma, Maarten J
2013-08-01
There has been substantial debate on the need for decreasing discount rates for monetary and health gains in economic evaluations. Next to the discussion on differential discounting, one way to identify the need for such discounting strategies is to elicit time preferences for monetary and health outcomes. In this article, the authors investigate perceived time preferences for money and health gains through a pilot survey of Dutch university students, using previously suggested methods based on functional forms. The formal objectives of the study were to review such existing methods and to pilot them on a convenience sample using a questionnaire designed for this specific purpose. Indeed, a negative relation between the length of delay and the variance of the discount rate was observed for all models. This study was intended as a pilot for a large-scale population-based investigation, using the findings from this pilot on the wording of the questionnaire, interpretation, scope and analytic framework.
Changing climate shifts timing of European floods.
Blöschl, Günter; Hall, Julia; Parajka, Juraj; Perdigão, Rui A P; Merz, Bruno; Arheimer, Berit; Aronica, Giuseppe T; Bilibashi, Ardian; Bonacci, Ognjen; Borga, Marco; Čanjevac, Ivan; Castellarin, Attilio; Chirico, Giovanni B; Claps, Pierluigi; Fiala, Károly; Frolova, Natalia; Gorbachova, Liudmyla; Gül, Ali; Hannaford, Jamie; Harrigan, Shaun; Kireeva, Maria; Kiss, Andrea; Kjeldsen, Thomas R; Kohnová, Silvia; Koskela, Jarkko J; Ledvinka, Ondrej; Macdonald, Neil; Mavrova-Guirguinova, Maria; Mediero, Luis; Merz, Ralf; Molnar, Peter; Montanari, Alberto; Murphy, Conor; Osuch, Marzena; Ovcharuk, Valeryia; Radevski, Ivan; Rogger, Magdalena; Salinas, José L; Sauquet, Eric; Šraj, Mojca; Szolgay, Jan; Viglione, Alberto; Volpi, Elena; Wilson, Donna; Zaimi, Klodian; Živković, Nenad
2017-08-11
A warming climate is expected to have an impact on the magnitude and timing of river floods; however, no consistent large-scale climate change signal in observed flood magnitudes has been identified so far. We analyzed the timing of river floods in Europe over the past five decades, using a pan-European database from 4262 observational hydrometric stations, and found clear patterns of change in flood timing. Warmer temperatures have led to earlier spring snowmelt floods throughout northeastern Europe; delayed winter storms associated with polar warming have led to later winter floods around the North Sea and some sectors of the Mediterranean coast; and earlier soil moisture maxima have led to earlier winter floods in western Europe. Our results highlight the existence of a clear climate signal in flood observations at the continental scale. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Rotating Hele-Shaw cell with a time-dependent angular velocity
NASA Astrophysics Data System (ADS)
Anjos, Pedro H. A.; Alvarez, Victor M. M.; Dias, Eduardo O.; Miranda, José A.
2017-12-01
Despite the large number of existing studies of viscous flows in rotating Hele-Shaw cells, most investigations analyze rotational motion with a constant angular velocity, under vanishing Reynolds number conditions in which inertial effects can be neglected. In this work, we examine the linear and weakly nonlinear dynamics of the interface between two immiscible fluids in a rotating Hele-Shaw cell, considering the action of a time-dependent angular velocity, and taking into account the contribution of inertia. By using a generalized Darcy's law, we derive a second-order mode-coupling equation which describes the time evolution of the interfacial perturbation amplitudes. For arbitrary values of viscosity and density ratios, and for a range of values of a rotational Reynolds number, we investigate how the time-dependent angular velocity and inertia affect the important finger competition events that traditionally arise in rotating Hele-Shaw flows.
Dinstein, Ilan; Haar, Shlomi; Atsmon, Shir; Schtaerman, Hen
2017-01-01
Considerable controversy exists regarding the potential existence and clinical significance of larger brain volumes in toddlers who later develop autism. Assessing this relationship is important for determining the clinical utility of early head circumference (HC) measures and for assessing the validity of the early overgrowth hypothesis of autism, which suggests that early accelerated brain development may be a hallmark of the disorder. We performed a retrospective comparison of HC, height, and weight measurements between 66 toddlers who were later diagnosed with autism and 66 matched controls. These toddlers represent an unbiased regional sample from a single health service provider in the southern district of Israel. On average, participating toddlers had >8 measurements between birth and the age of two, which enabled us to characterize individual HC, height, and weight development with high precision and to fit a negative exponential growth model to each toddler's data with exceptional accuracy. The analyses revealed that HC sizes and growth rates were not significantly larger in toddlers with autism, even when stratifying the autism group by verbal capabilities at the time of diagnosis. In addition, there were no significant correlations between ADOS scores at the time of diagnosis and HC at any time point during the first 2 years of life. These negative results add to accumulating evidence suggesting that brain volume is not necessarily larger in toddlers who develop autism. We believe that the conflicting results reported in other studies are due to small sample sizes, use of misleading population norms, changes in the clinical definition of autism over time, and/or inclusion of individuals with syndromic autism. While abnormally large brains may be evident in some individuals with autism and more clearly visible in MRI scans, converging evidence from this and other studies suggests that enlarged HC is not characteristic of the autism population as a whole. Early HC measures therefore offer very limited clinical utility for assessing autism risk in the general population.
Long hard road from Nuna to Rodinia
NASA Astrophysics Data System (ADS)
Pisarevsky, Sergei
2014-05-01
The popular concept of supercontinental cycles suggests the existence of at least two Precambrian supercontinents, referred to as Nuna (or Columbia) and Rodinia. The times of their assembly and breakup are debated, as are their constituents and configurations. A recent compilation of paleomagnetic data, supported by geological evidence, suggests that Nuna broke up at ca. 1450-1380 Ma through separation of the Australia-Mawson continent from western Laurentia. A recent robust paleomagnetic pole from 1210 Ma mafic dykes in Western Australia provides additional evidence of the wide separation of these continents by the time of the dykes' emplacement. On the other hand, there is evidence that Laurentia and Baltica were rigidly connected, with present Scandinavia facing East Greenland, until after 1270 Ma, when they broke up. Baltica then moved c. 1000 km south and rotated clockwise 95° with respect to Laurentia by 1000 Ma, and the two continents recombined with the Scandinavian margin of Baltica facing the Scottish terranes of Laurentian affinity, Rockall Bank, and southeast Greenland. However, the published model of a simple fan-like opening of the Asgard Sea between Laurentia and Baltica is somewhat hampered by a recent 1120 Ma paleomagnetic pole from Finland, which suggests a more complicated drift of Baltica with respect to Laurentia. There are also reasons to suggest that a large part of Nuna, which included Laurentia and Siberia, was incorporated into Rodinia after 1000 Ma. The c. 1300-1000 Ma Apparent Polar Wander Paths for Laurentia, Baltica, Australia, Amazonia and India differ significantly in length and shape, suggesting relative movements of these continents with respect to each other. There are still not enough reliable published late Mesoproterozoic - early Neoproterozoic paleomagnetic data to make unequivocal paleogeographic reconstructions for this time interval. However, it is unlikely that a large supercontinent existed in the late Mesoproterozoic; this may have been a transitional time between the final breakup of Nuna and the assembly of Rodinia.
Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M
2017-12-06
While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. Compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or higher than that of memory-efficient algorithms that can process a large amount of RNA-Seq data, and comparable to or lower than that of memory-intensive algorithms that can only construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
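The merging step can be illustrated with a toy sketch: candidate transcripts from all chunk assemblies are pooled, and a transcript is kept only if it is not contained in a longer transcript already kept. The real merging algorithm selects by assembly quality; substring containment here is a deliberate simplification, and all names are illustrative:

```python
def merge_assemblies(chunk_assemblies):
    """Combine per-chunk transcript lists into one non-redundant set.

    Candidates are considered longest-first; a shorter transcript that is a
    substring of an already-kept transcript is treated as redundant.
    """
    candidates = sorted(
        {t for chunk in chunk_assemblies for t in chunk},
        key=len, reverse=True)
    merged = []
    for t in candidates:
        if not any(t in kept for kept in merged):
            merged.append(t)
    return merged
```

For example, merging the chunk outputs `[["ACGT", "CG"], ["ACG", "TTT"]]` keeps only `ACGT` and `TTT`, since `CG` and `ACG` are contained in `ACGT`.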
Cosmological structure formation from soft topological defects
NASA Technical Reports Server (NTRS)
Hill, Christopher T.; Schramm, David N.; Fry, J. N.
1988-01-01
Some models have extremely low-mass pseudo-Goldstone bosons that can lead to vacuum phase transitions at late times, after the decoupling of the microwave background. This can generate structure formation at redshifts z ≳ 10 on mass scales as large as M ≈ 10^18 solar masses. Such low-energy transitions can lead to large but phenomenologically acceptable density inhomogeneities in soft topological defects (e.g., domain walls) with minimal variations in the microwave anisotropy, as small as ΔT/T ≲ 10^-6. This mechanism is independent of the existence of hot, cold, or baryonic dark matter. It is a novel alternative to both cosmic strings and inflationary quantum fluctuations as the origin of structure in the Universe.
Predicting spatio-temporal failure in large scale observational and micro scale experimental systems
NASA Astrophysics Data System (ADS)
de las Heras, Alejandro; Hu, Yong
2006-10-01
Forecasting has become an essential part of modern thought, but its practical limitations are still manifold. We addressed future rates of change by comparing models that take time into account and models that focus more on space. Cox regression confirmed that linear change can be safely assumed in the short term. Spatially explicit Poisson regression provided a ceiling value for the number of deforestation spots. With several observed and estimated rates, it was decided to forecast using the more robust assumptions. A Markov-chain cellular automaton thus projected 5-year deforestation in the Amazonian Arc of Deforestation, showing that even a stable rate of change would largely deplete the forest area. More generally, the resolution and implementation of the existing models could explain many of the modelling difficulties still affecting forecasting.
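A Markov-chain cellular automaton of the kind used for the 5-year projection can be sketched as follows. The transition probabilities and 4-neighbor rule here are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

def project_deforestation(forest, p_base, p_neighbor, years, seed=0):
    """Markov-chain cellular automaton: each forested cell (1) converts
    to deforested (0) with a base probability per year, increased when
    4-neighbors are already deforested. Probabilities are illustrative."""
    rng = np.random.default_rng(seed)
    grid = forest.copy()
    for _ in range(years):
        cleared = (grid == 0)
        # count deforested 4-neighbors via padded shifts
        pad = np.pad(cleared, 1)
        nbrs = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                pad[1:-1, :-2] + pad[1:-1, 2:]).astype(float)
        p = np.clip(p_base + p_neighbor * nbrs, 0.0, 1.0)
        grid = np.where((grid == 1) & (rng.random(grid.shape) < p), 0, grid)
    return grid
```

The neighbor term reproduces the spatial contagion that makes deforestation spread outward from existing clearings, which is why a "stable" annual rate still depletes contiguous forest quickly.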
A Multiscale Survival Process for Modeling Human Activity Patterns.
Zhang, Tianyang; Cui, Peng; Song, Chaoming; Zhu, Wenwu; Yang, Shiqiang
2016-01-01
Human activity plays a central role in understanding large-scale social dynamics. It is well documented that individual activity patterns follow bursty dynamics characterized by heavy-tailed interevent time distributions. Here we study a large-scale online chatting dataset consisting of 5,549,570 users, finding that individual activity patterns vary with timescale, whereas existing models only approximate empirical observations within a limited timescale. We propose a novel approach that models the intensity rate at which an individual triggers an activity. We demonstrate that the model precisely captures the corresponding human dynamics across multiple timescales over five orders of magnitude. Our model also allows extracting the population heterogeneity of activity patterns, characterized by a set of individual-specific ingredients. Integrating our approach with social interactions leads to a wide range of implications.
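The contrast between bursty, heavy-tailed activity and Poisson-like activity can be illustrated with the Goh-Barabási burstiness coefficient. The Pareto exponent below is an arbitrary illustrative choice, not a parameter fitted by the authors:

```python
import numpy as np

def burstiness(intervals):
    """Goh-Barabasi burstiness B = (s - m)/(s + m): near 0 for a
    Poisson process, approaching 1 for heavy-tailed interevent times."""
    m, s = intervals.mean(), intervals.std()
    return (s - m) / (s + m)

rng = np.random.default_rng(42)
# Pareto-distributed interevent times (heavy tail, exponent alpha)
alpha, tau_min, n = 1.5, 1.0, 50_000
heavy = tau_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)
# Poisson-like reference with the same mean interevent time
poisson_like = rng.exponential(heavy.mean(), n)
```

Comparing the two coefficients shows why a single-timescale Poisson model cannot approximate the empirical bursty dynamics.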
NASA Astrophysics Data System (ADS)
Chiong, Chau-Ching; Chiang, Po-Han; Hwang, Yuh-Jing; Huang, Yau-De
2016-07-01
ALMA, covering 35-950 GHz, is the largest existing telescope array in the world. Among the 10 receiver bands, Band-1, which covers 35-50 GHz, is the lowest. Due to its small dimensions and its time-variant, frequency-dependent gain characteristics, the current solar filter located above the cryostat cannot be applied to Band-1 for solar observation. Here we thus adopt new strategies to fulfill the goals. Thanks to the flexible dc biasing scheme of the HEMT-based amplifier in the Band-1 front-end, bias adjustment of the cryogenic low-noise amplifier is investigated to accomplish solar observation without using a solar filter. A large power-handling range can be achieved by the de-tuning bias technique with little degradation in system performance.
Workshop Report on Ares V Solar System Science
NASA Technical Reports Server (NTRS)
Langhoff, Stephanie; Spilker, Tom; Martin, Gary; Sullivan, Greg
2008-01-01
The workshop blended three major themes: (1) How can elements of the Constellation program, and specifically, the planned Ares-V heavy-launch vehicle, benefit the planetary community by enabling the launch of large planetary payloads that cannot be launched on existing vehicles, and how can the capabilities of an Ares V allow the planetary community to redesign missions to achieve lower risk, and perhaps lower cost, on these missions? (2) What are some of the planetary missions that can be either significantly enhanced or enabled by an Ares-V launch vehicle? What constraints do these mission concepts place on the payload environment of the Ares V? (3) What technology challenges need to be addressed for launching large planetary payloads? Presentations varied in length from 15-40 minutes. Ample time was provided for discussion.
Bussewitz, Bradly; DeVries, J George; Dujela, Michael; McAlister, Jeffrey E; Hyer, Christopher F; Berlet, Gregory C
2014-07-01
Large bone defects present a difficult task for surgeons when performing single-stage, complex combined hindfoot and ankle reconstruction. Little case-series data exist in the literature to evaluate the use of frozen femoral head allograft during tibiotalocalcaneal arthrodesis in various populations. The authors evaluated 25 patients from 2003 to 2011 who required a femoral head allograft and an intramedullary nail. The average time to the final follow-up visit was 83 ± 63.6 weeks (range, 10-265). Twelve patients (48%) healed the fusion. Twenty-one patients (84%) retained a braceable limb. Four patients (16%) required major amputation. This series may allow surgeons to more accurately predict the success and clinical outcome of these challenging cases. Level IV, case series. © The Author(s) 2014.
Turbulent chimeras in large semiconductor laser arrays
Shena, J.; Hizanidis, J.; Kovanis, V.; Tsironis, G. P.
2017-01-01
Semiconductor laser arrays have been investigated experimentally and theoretically from the viewpoint of temporal and spatial coherence for the past forty years. In this work, we are focusing on a rather novel complex collective behavior, namely chimera states, where synchronized clusters of emitters coexist with unsynchronized ones. For the first time, we find such states exist in large diode arrays based on quantum well gain media with nearest-neighbor interactions. The crucial parameters are the evanescent coupling strength and the relative optical frequency detuning between the emitters of the array. By employing a recently proposed figure of merit for classifying chimera states, we provide quantitative and qualitative evidence for the observed dynamics. The corresponding chimeras are identified as turbulent according to the irregular temporal behavior of the classification measure. PMID:28165053
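The coexistence of synchronized and unsynchronized clusters can be illustrated with a local Kuramoto order parameter over a ring of emitters. This is a generic chimera diagnostic, not the specific figure of merit used in the paper:

```python
import numpy as np

def local_order(phases, window=5):
    """Local Kuramoto order parameter |<exp(i*phi)>| over a sliding
    window on a ring of emitters; values near 1 mark synchronized
    (coherent) regions, lower values mark the unsynchronized part
    of a chimera state."""
    n = len(phases)
    r = np.empty(n)
    for j in range(n):
        idx = np.arange(j - window, j + window + 1) % n  # ring topology
        r[j] = abs(np.exp(1j * phases[idx]).mean())
    return r
```

Applied to simulated emitter phases at each time step, the profile of this quantity separates the coherent cluster (plateau near 1) from the incoherent one, which is the qualitative signature of a chimera.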
NASA Astrophysics Data System (ADS)
Liu, Lu; Parkinson, Simon; Gidden, Matthew; Byers, Edward; Satoh, Yusuke; Riahi, Keywan; Forman, Barton
2018-04-01
Surface water reservoirs provide us with reliable water supply, hydropower generation, flood control and recreation services. Yet reservoirs also cause flow fragmentation in rivers and lead to flooding of upstream areas, thereby displacing existing land-use activities and ecosystems. Anticipated population growth and development coupled with climate change in many regions of the globe suggests a critical need to assess the potential for future reservoir capacity to help balance rising water demands with long-term water availability. Here, we assess the potential of large-scale reservoirs to provide reliable surface water yields while also considering environmental flows within 235 of the world’s largest river basins. Maps of existing cropland and habitat conservation zones are integrated with spatially-explicit population and urbanization projections from the Shared Socioeconomic Pathways to identify regions unsuitable for increasing water supply by exploiting new reservoir storage. Results show that even when maximizing the global reservoir storage to its potential limit (∼4.3–4.8 times the current capacity), firm yields would only increase by about 50% over current levels. However, there exist large disparities across different basins. The majority of river basins in North America are found to gain relatively little firm yield by increasing storage capacity, whereas basins in Southeast Asia display greater potential for expansion as well as proportional gains in firm yield under multiple uncertainties. Parts of Europe, the United States and South America show relatively low reliability of maintaining current firm yields under future climate change, whereas most of Asia and higher latitude regions display comparatively high reliability. 
Findings from this study highlight the importance of incorporating different factors, including human development, land-use activities, and climate change, over a time span of multiple decades and across a range of different scenarios when quantifying available surface water yields and the potential for reservoir expansion.
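The notion of firm yield used above can be made concrete with a minimal mass-balance simulation. The bisection search and the start-full, spill-above-capacity conventions are simplifying assumptions, not the study's hydrological model:

```python
import numpy as np

def firm_yield(inflows, capacity, tol=1e-6):
    """Largest constant release sustainable without emptying the
    reservoir, found by bisection over a mass-balance simulation
    (reservoir starts full; water above capacity spills). Units
    are arbitrary but must be consistent."""
    def feasible(demand):
        storage = capacity
        for q in inflows:
            storage = min(storage + q - demand, capacity)
            if storage < 0:
                return False
        return True
    lo, hi = 0.0, float(np.mean(inflows))  # yield cannot exceed mean inflow
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo
```

The toy model already shows the diminishing return reported in the abstract: once storage is large enough to buffer inflow variability, adding capacity no longer raises the firm yield, which is bounded by mean inflow.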
NASA Astrophysics Data System (ADS)
Appel, Marius; Lahn, Florian; Pebesma, Edzer; Buytaert, Wouter; Moulds, Simon
2016-04-01
Today's amount of freely available data requires scientists to spend large parts of their work on data management. This is especially true in environmental sciences when working with large remote sensing datasets, such as those obtained from earth-observation satellites like the Sentinel fleet. Many frameworks like SpatialHadoop or Apache Spark address scalability but target programmers rather than data analysts, and are not dedicated to imagery or array data. In this work, we use the open-source data management and analytics system SciDB to bring large earth-observation datasets closer to analysts. Its underlying data representation as multidimensional arrays fits earth-observation datasets naturally, distributes storage and computational load over multiple instances by multidimensional chunking, and also enables efficient time-series based analyses, which are usually difficult using file- or tile-based approaches. Existing interfaces to R and Python furthermore allow for scalable analytics with relatively little learning effort. However, interfacing SciDB and file-based earth-observation datasets that come as tiled temporal snapshots requires a lot of manual bookkeeping during ingestion, and SciDB natively only supports loading data from CSV-like and custom binary formatted files, which currently limits its practical use in earth-observation analytics. To make it easier to work with large multi-temporal datasets in SciDB, we developed software tools that enrich SciDB with earth observation metadata and allow working with commonly used file formats: (i) the SciDB extension library scidb4geo simplifies working with spatiotemporal arrays by adding relevant metadata to the database and (ii) the Geospatial Data Abstraction Library (GDAL) driver implementation scidb4gdal allows ingesting and exporting remote sensing imagery from and to a large number of file formats.
Using added metadata on temporal resolution and coverage, the GDAL driver supports time-based ingestion of imagery into existing multi-temporal SciDB arrays. While our SciDB plugin works directly in the database, the GDAL driver has been specifically developed using a minimal set of external dependencies (i.e., cURL). Source code for both tools is available from GitHub [1]. We present these tools in a case study that demonstrates the ingestion of multi-temporal tiled earth-observation data into SciDB, followed by a time-series analysis using R and SciDBR. Through the exclusive use of open-source software, our approach supports reproducibility in scalable large-scale earth-observation analytics. In the future, these tools can be used in an automated way to let scientists work only on ready-to-use SciDB arrays and thereby significantly reduce the data management workload for domain scientists. [1] https://github.com/mappl/scidb4geo and https://github.com/mappl/scidb4gdal
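The time-based ingestion bookkeeping such a driver performs can be sketched as mapping an image's acquisition date to an integer time index using the array's temporal metadata. The function and its alignment rule below are illustrative, not the actual scidb4gdal implementation:

```python
from datetime import datetime, timedelta

def time_index(acquired, t0, resolution):
    """Slot an image into an existing multi-temporal array: the array
    metadata gives a start date t0 and a temporal resolution, and the
    image's acquisition date maps to an integer index along the time
    dimension (an illustrative sketch of driver-side bookkeeping)."""
    delta = acquired - t0
    idx, rem = divmod(delta, resolution)
    if rem:
        raise ValueError("acquisition date not aligned with resolution")
    return idx
```

Keeping this mapping in database metadata is exactly what removes the manual bookkeeping the abstract describes: each newly ingested tile lands at a deterministic position in the space-time array.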
ROADNET: A Real-time Data Aware System for Earth, Oceanographic, and Environmental Applications
NASA Astrophysics Data System (ADS)
Vernon, F.; Hansen, T.; Lindquist, K.; Ludascher, B.; Orcutt, J.; Rajasekar, A.
2003-12-01
The Real-time Observatories, Application, and Data management Network (ROADNet) Program aims to develop an integrated, seamless, and transparent environmental information network that will deliver geophysical, oceanographic, hydrological, ecological, and physical data to a variety of users in real-time. ROADNet is a multidisciplinary, multinational partnership of researchers, policymakers, natural resource managers, educators, and students who aim to use the data to advance our understanding and management of coastal, ocean, riparian, and terrestrial Earth systems in Southern California, Mexico, and well off shore. To date, project activity and funding have focused on the design and deployment of network linkages and on the exploratory development of the real-time data management system. We are currently adapting powerful "Data Grid" technologies to the unique challenges associated with the management and manipulation of real-time data. Current "Grid" projects deal with static data files, and significant technical innovation is required to address fundamental problems of real-time data processing, integration, and distribution. The technologies developed through this research will create a system that dynamically adapts downstream processing, cataloging, and data access interfaces when sensors are added to or removed from the system; provides for real-time processing and monitoring of data streams--detecting events, and triggering computations, sensor and logger modifications, and other actions; integrates heterogeneous data from multiple (signal) domains; and provides for large-scale archival and querying of "consolidated" data. The software tools that must be developed do not yet exist, although limited prototype systems are available. 
This research has implications for the success of large-scale NSF initiatives in the Earth sciences (EarthScope), ocean sciences (OOI- Ocean Observatories Initiative), biological sciences (NEON - National Ecological Observatory Network) and civil engineering (NEES - Network for Earthquake Engineering Simulation). Each of these large scale initiatives aims to collect real-time data from thousands of sensors, and each will require new technologies to process, manage, and communicate real-time multidisciplinary environmental data on regional, national, and global scales.
NASA Astrophysics Data System (ADS)
Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.
2016-02-01
The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of cryogel materials of small size and volume. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large-volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale-up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.
New challenges for grizzly bear management in Yellowstone National Park
van Manen, Frank T.; Gunther, Kerry A.
2016-01-01
A key factor contributing to the success of grizzly bear Ursus arctos conservation in the Greater Yellowstone Ecosystem has been the existence of a large protected area, Yellowstone National Park. We provide an overview of recovery efforts, how demographic parameters changed as the population increased, and how the bear management program in Yellowstone National Park has evolved to address new management challenges over time. Finally, using the management experiences in Yellowstone National Park, we present comparisons and perspectives regarding brown bear management in Shiretoko National Park.
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Atwani, Osman; Hinks, Jonathan; Greaves, Graeme
Nanocrystalline metals are considered highly radiation-resistant materials due to their large grain boundary areas. Here, the existence of a grain size threshold for enhanced irradiation resistance in high-temperature helium-irradiated nanocrystalline and ultrafine tungsten is demonstrated. Average bubble density, projected bubble area and the corresponding change in volume were measured via transmission electron microscopy and plotted as a function of grain size for two ion fluences. Nanocrystalline grains smaller than 35 nm exhibit a ~10–20 times lower change in volume than ultrafine grains; this is discussed in terms of the defect sink efficiency of grain boundaries.
Principles and techniques in the design of ADMS+. [advanced data-base management system
NASA Technical Reports Server (NTRS)
Roussopoulos, Nick; Kang, Hyunchul
1986-01-01
'ADMS+/-' is an advanced data base management system whose architecture integrates the ADMS+ mainframe data base system with a large number of work station data base systems, designated ADMS-; no communications exist between these work stations. The use of this system radically decreases the response time of locally processed queries, since the work station runs in a single-user mode and no dynamic security checking is required for the downloaded portion of the data base. The deferred update strategy used reduces the overhead due to update synchronization in message traffic.
Flame balls dynamics in divergent channel
NASA Astrophysics Data System (ADS)
Fursenko, R.; Minaev, S.
2011-12-01
A three-dimensional reaction-diffusion model for lean, low-Lewis-number premixed flames with radiative heat losses propagating in a divergent channel is studied numerically. Effects of inlet gas velocity and heat-loss intensity on flame structure at low Lewis numbers are investigated. It is found that a continuous flame front exists at small heat losses, while at large heat losses separate flame balls settle within a restricted domain inside the divergent channel. It is shown that the time-averaged flame-ball coordinate may be considered an important characteristic, analogous to the coordinate of a continuous flame stabilized in a divergent channel.
Magnetic refrigeration using flux compression in superconductors
NASA Technical Reports Server (NTRS)
Israelsson, U. E.; Strayer, D. M.; Jackson, H. W.; Petrac, D.
1990-01-01
The feasibility of using flux compression in high-temperature superconductors to produce the large time-varying magnetic fields required in a field-cycled magnetic refrigerator operating between 20 K and 4 K is presently investigated. This paper describes the refrigerator concept and lists limitations and advantages in comparison with conventional refrigeration techniques. The maximum fields obtainable by flux compression in high-temperature superconductor materials, as presently prepared, are too low to serve in such a refrigerator. However, reports exist of critical current values that are near usable levels for flux pumps in refrigerator applications.
Dispersion of Sound in Dilute Suspensions with Nonlinear Particle Relaxation
NASA Technical Reports Server (NTRS)
Kandula, Max
2010-01-01
The theory accounting for nonlinear particle relaxation (viscous and thermal) has been applied to the prediction of the dispersion of sound in dilute suspensions. The results suggest that significant deviations in sound dispersion exist between the linear and nonlinear theories at large values of Ωτ_d, where Ω is the circular frequency and τ_d is the Stokesian particle relaxation time. It is revealed that the nonlinear effect on the dispersion coefficient due to the viscous contribution is larger than that due to thermal conduction.
Environmentally Safe SRM Strategies Using Liquefied Air
NASA Astrophysics Data System (ADS)
Massmann, M.; Layton, K.
2010-12-01
This presentation includes several SRM strategies to offset global warming using the large-scale release of liquefied air (Lair). Lair could be used to cool large atmospheric volumes as it expands from a liquid below minus 300 degrees F (-184 degrees C) into ambient air, which could trigger new clouds or brighten existing clouds. It is hoped that the potential feasibility and benefits of this concept will be found to warrant further development through funded research. A key trait of Lair is its enormous expansion ratio as it warms from a cold liquid into ambient air. At sea level, this expansion is about 900 times. At high altitudes such as 50,000 ft (15 km), the same amount of Lair would expand 5,000 times. One strategy for this concept would be to release Lair at 50,000 ft to super-cool existing water vapor into reflective droplets or ice particles. This could create very large clouds, thick enough to be highly reflective and high enough for long residence times. Another strategy to consider would be to release CCNs (cloud condensation nuclei, such as salt particulates) along with Lair. This might enable the formation of clouds where Lair alone is insufficient. Water vapor could also be added to assist in cloud development if necessary. The use of these elements would be non-polluting, enabling the concept to be safely scaled as large as necessary to achieve the desired results without harming the environment. This is extremely important, because it eliminates the risk of environmental damage that is a potential roadblock for most other SRM schemes. Further strategies of this concept would include formation of clouds near the equator to maximize reflected energy, creating clouds over ocean regions so as to minimize weather changes on land, and creating clouds over Arctic regions to minimize the melting of sea ice. 
Because this concept requires only existing technology to implement, research and implementation timelines could be minimized (unlike most proposed schemes that require new technologies). The energy required for this concept should be very reasonable. Each ton of Lair would require about 345 kW-hrs of energy or less to produce. Assuming power costs $0.10 per kW-hr, the energy cost per ton of Lair would be about $34.50 US. Each 100-ton payload of Lair would then cost $3,450 US, or just 12.4 cents per gallon (3.3 cents per liter). More extreme weather events are predicted as the planet warms. It should be noted that Lair might also be used to help limit the destructiveness of these events. The same aircraft and Lair tanks from this concept could be used to perform missions that cool the "heat engines" of severe weather, limiting hurricane strength, reducing the likelihood of tornadoes and limiting excessive rain that causes flooding. Also, by loading the Lair tanks with liquid nitrogen, it might be possible to help control large wildfires by using wind to blanket fire lines with gaseous nitrogen. Therefore this concept could have multiple uses and solve several problems related to global warming, as well as help to limit global warming itself.
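The cost arithmetic in the text can be checked directly. The electricity price is the text's own assumption; the per-gallon figure additionally depends on liquid-air density, which is not reproduced here:

```python
# Reproducing the cost estimate from the text: about 345 kW-hr of
# energy per ton of liquefied air, at an assumed $0.10 per kW-hr.
ENERGY_PER_TON_KWH = 345.0   # energy to liquefy one ton of air
PRICE_PER_KWH = 0.10         # assumed electricity price, $/kW-hr
PAYLOAD_TONS = 100           # one aircraft payload, tons of Lair

cost_per_ton = ENERGY_PER_TON_KWH * PRICE_PER_KWH  # $34.50 per ton
payload_cost = cost_per_ton * PAYLOAD_TONS         # $3,450 per payload
```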
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environmental monitoring and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved using MATLAB toolboxes. The stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. 
One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
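The Lyapunov/LMI machinery can be illustrated at its simplest with a stability certificate for a single stable node, computed here by Kronecker vectorization of the Lyapunov equation rather than an LMI solver. This is a toy stand-in for the paper's distributed design conditions, not its method:

```python
import numpy as np

def lyapunov_certificate(A, Q=None):
    """Solve A^T P + P A = -Q for P; a symmetric positive definite P
    certifies asymptotic stability of x' = A x (the per-node building
    block of Lyapunov-based distributed control design).

    The linear equation is solved by vectorization: the operator
    P -> A^T P + P A corresponds to K = kron(I, A^T) + kron(A^T, I),
    which is the same matrix in row- and column-major vec conventions."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)
    return P
```

In the LMI formulation the same condition becomes the feasibility problem P > 0, AᵀP + PA < 0, which dedicated semidefinite-programming toolboxes solve at scale.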
NASA Technical Reports Server (NTRS)
Steinolfson, Richard S.; Davila, Joseph M.
1993-01-01
Numerical simulations of the MHD equations for a fully compressible, low-beta, resistive plasma are used to study the resonance absorption process for the heating of coronal active region loops. Comparisons with more approximate analytic models show that the major predictions of the analytic theories are, to a large extent, confirmed by the numerical computations. The simulations demonstrate that the dissipation occurs primarily in a thin resonance layer. Some of the analytically predicted features verified by the simulations are (a) the position of the resonance layer within the initial inhomogeneity; (b) the importance of the global mode for a large range of loop densities; (c) the dependence of the resonance layer thickness and the steady-state heating rate on the dissipation coefficient; and (d) the time required for the resonance layer to form. In contrast with some previous analytic and simulation results, the time for the loop to reach a steady state is found to be the phase-mixing time rather than a dissipation time. This disagreement is shown to result from neglect of the existence of the global mode in some of the earlier analyses. The resonant absorption process is also shown to behave similarly to a classical driven harmonic oscillator.
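The driven-oscillator analogy can be made concrete with the textbook steady-state amplitude of a damped, driven harmonic oscillator; the parameter values below are illustrative and not tied to coronal loop quantities:

```python
import numpy as np

def steady_amplitude(omega, omega0=1.0, gamma=0.05, f0=1.0):
    """Steady-state amplitude of x'' + gamma*x' + omega0^2*x =
    f0*cos(omega*t): sharply peaked near the natural frequency,
    the classical analogue of a resonantly absorbing layer."""
    return f0 / np.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)
```

As in resonance absorption, the response is large only in a narrow band around the resonant frequency, with the damping coefficient playing the role of the dissipation coefficient that sets the layer width.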
A deformable surface model for real-time water drop animation.
Zhang, Yizhong; Wang, Huamin; Wang, Shuai; Tong, Yiying; Zhou, Kun
2012-08-01
A water drop behaves differently from a large water body because of its strong viscosity and surface tension at the small scale. Surface tension causes the motion of a water drop to be largely determined by its boundary surface. Meanwhile, viscosity makes the interior of a water drop less relevant to its motion, as the smooth velocity field can be well approximated by an interpolation of the velocity on the boundary. Consequently, we propose a fast deformable surface model to realistically animate water drops and their flowing behaviors on solid surfaces. Our system efficiently simulates water drop motions in a Lagrangian fashion, by reducing 3D fluid dynamics over the whole liquid volume to a deformable surface model. In each time step, the model uses an implicit mean curvature flow operator to produce surface tension effects, a contact angle operator to change droplet shapes on solid surfaces, and a set of mesh connectivity updates to handle topological changes and improve mesh quality over time. Our numerical experiments demonstrate a variety of physically plausible water drop phenomena at real-time rates, including capillary waves when water drops collide, pinch-off of water jets, and droplets flowing over solid materials. The whole system performs orders of magnitude faster than existing simulation approaches that generate comparable water drop effects.
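The implicit mean curvature flow operator can be illustrated in a 2D toy setting: one backward-Euler smoothing step on a closed polyline with a uniform graph Laplacian, a simplified analogue of the paper's surface-tension operator on a triangle mesh:

```python
import numpy as np

def implicit_curvature_step(pts, dt=0.1):
    """One implicit (backward-Euler) step of curve-shortening flow on
    a closed polyline: solve (I - dt*L) x_new = x_old, where L is the
    uniform graph Laplacian of the ring. The implicit form stays
    stable for large time steps, which is why the droplet model can
    run at interactive rates."""
    n = len(pts)
    L = -2.0 * np.eye(n)
    idx = np.arange(n)
    L[idx, (idx + 1) % n] = 1.0   # next vertex on the closed curve
    L[idx, (idx - 1) % n] = 1.0   # previous vertex
    A = np.eye(n) - dt * L
    return np.linalg.solve(A, pts)
```

Each step contracts the curve toward its centroid without moving the centroid itself, mimicking how surface tension rounds off and shrinks a free droplet boundary.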